| Column | Dtype | Values / lengths |
|---|---|---|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1 – 900k |
| metadata | stringlengths | 2 – 438k |
| id | stringlengths | 5 – 122 |
| last_modified | null | |
| tags | sequencelengths | 1 – 1.84k |
| sha | null | |
| created_at | stringlengths | 25 – 25 |
| arxiv | sequencelengths | 0 – 201 |
| languages | sequencelengths | 0 – 1.83k |
| tags_str | stringlengths | 17 – 9.34k |
| text_str | stringlengths | 0 – 389k |
| text_lists | sequencelengths | 0 – 722 |
| processed_texts | sequencelengths | 1 – 723 |
| tokens_length | sequencelengths | 1 – 723 |
| input_texts | sequencelengths | 1 – 1 |
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dpo_helpfulhelpful_human_subset20000_modelgpt2_maxsteps5000_bz8_lr5e-06 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 15 - training_steps: 5000 ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
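The card above lists only hyperparameters and framework versions, not the training script. As a rough illustration (not the author's actual code), a DPO run with these settings could be set up with TRL's `DPOTrainer` plus a LoRA adapter as sketched below; the toy preference dataset, LoRA settings, and `beta` are assumptions, and the argument names follow the TRL releases contemporary with the listed Transformers 4.38.2:

```python
# Hedged sketch of a gpt2 DPO run matching the hyperparameters listed above.
# The toy dataset, LoRA config, and beta are illustrative assumptions.
from datasets import Dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default

# Any preference dataset with prompt/chosen/rejected columns would do here.
train_dataset = Dataset.from_dict({
    "prompt": ["How do I stay focused while studying?"],
    "chosen": [" Break the work into short sessions and remove distractions."],
    "rejected": [" No idea."],
})

training_args = TrainingArguments(
    output_dir="dpo_gpt2_lr5e-06",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,   # effective batch size 8, as in the card
    learning_rate=5e-6,
    lr_scheduler_type="linear",
    warmup_steps=15,
    max_steps=5000,
    seed=42,
    logging_steps=50,
)

trainer = DPOTrainer(
    model,
    ref_model=None,   # with a PEFT config, TRL uses the adapter-free base as the reference
    beta=0.1,         # assumption: the card does not report beta
    args=training_args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    peft_config=LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32),  # assumption
    max_length=512,
    max_prompt_length=256,
)
trainer.train()
```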
{"license": "mit", "library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "gpt2", "model-index": [{"name": "dpo_helpfulhelpful_human_subset20000_modelgpt2_maxsteps5000_bz8_lr5e-06", "results": []}]}
Holarissun/dpo_helpfulhelpful_human_subset20000_modelgpt2_maxsteps5000_bz8_lr5e-06
null
[ "peft", "safetensors", "trl", "dpo", "generated_from_trainer", "base_model:gpt2", "license:mit", "region:us" ]
null
2024-05-01T05:18:23+00:00
[]
[]
TAGS #peft #safetensors #trl #dpo #generated_from_trainer #base_model-gpt2 #license-mit #region-us
# dpo_helpfulhelpful_human_subset20000_modelgpt2_maxsteps5000_bz8_lr5e-06 This model is a fine-tuned version of gpt2 on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 15 - training_steps: 5000 ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# dpo_helpfulhelpful_human_subset20000_modelgpt2_maxsteps5000_bz8_lr5e-06\n\nThis model is a fine-tuned version of gpt2 on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-06\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 15\n- training_steps: 5000", "### Training results", "### Framework versions\n\n- PEFT 0.9.0\n- Transformers 4.38.2\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #trl #dpo #generated_from_trainer #base_model-gpt2 #license-mit #region-us \n", "# dpo_helpfulhelpful_human_subset20000_modelgpt2_maxsteps5000_bz8_lr5e-06\n\nThis model is a fine-tuned version of gpt2 on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-06\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 15\n- training_steps: 5000", "### Training results", "### Framework versions\n\n- PEFT 0.9.0\n- Transformers 4.38.2\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ 36, 54, 7, 9, 9, 4, 122, 5, 48 ]
[ "TAGS\n#peft #safetensors #trl #dpo #generated_from_trainer #base_model-gpt2 #license-mit #region-us \n# dpo_helpfulhelpful_human_subset20000_modelgpt2_maxsteps5000_bz8_lr5e-06\n\nThis model is a fine-tuned version of gpt2 on an unknown dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-06\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 15\n- training_steps: 5000### Training results### Framework versions\n\n- PEFT 0.9.0\n- Transformers 4.38.2\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
null
transformers
# Uploaded model - **Developed by:** dmorrigan - **License:** apache-2.0 - **Finetuned from model :** yam-peleg/Hebrew-Mistral-7B This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
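The card stops at the Unsloth attribution and gives no usage snippet. A minimal inference sketch, assuming the repository holds weights loadable directly with `transformers` (the card does not say whether it is a merged model or only a LoRA adapter), might look like this; the prompt and sampling settings are illustrative:

```python
# Hedged sketch: plain transformers inference with this checkpoint.
# Assumes the repo contains directly loadable weights; prompt and
# generation settings are illustrative, not from the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dmorrigan/HebrewLyricsLoRA-HebrewMistral-23K-5Epoch"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "כתוב שיר קצר על הים"  # "write a short song about the sea"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```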
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "yam-peleg/Hebrew-Mistral-7B"}
dmorrigan/HebrewLyricsLoRA-HebrewMistral-23K-5Epoch
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:yam-peleg/Hebrew-Mistral-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T05:18:42+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-yam-peleg/Hebrew-Mistral-7B #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: dmorrigan - License: apache-2.0 - Finetuned from model : yam-peleg/Hebrew-Mistral-7B This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: dmorrigan\n- License: apache-2.0\n- Finetuned from model : yam-peleg/Hebrew-Mistral-7B\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-yam-peleg/Hebrew-Mistral-7B #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: dmorrigan\n- License: apache-2.0\n- Finetuned from model : yam-peleg/Hebrew-Mistral-7B\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ 60, 76 ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-yam-peleg/Hebrew-Mistral-7B #license-apache-2.0 #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: dmorrigan\n- License: apache-2.0\n- Finetuned from model : yam-peleg/Hebrew-Mistral-7B\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dpo_helpfulhelpful_human_subset20000_modelgpt2_maxsteps5000_bz8_lr1e-06 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 15 - training_steps: 5000 ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "mit", "library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "gpt2", "model-index": [{"name": "dpo_helpfulhelpful_human_subset20000_modelgpt2_maxsteps5000_bz8_lr1e-06", "results": []}]}
Holarissun/dpo_helpfulhelpful_human_subset20000_modelgpt2_maxsteps5000_bz8_lr1e-06
null
[ "peft", "safetensors", "trl", "dpo", "generated_from_trainer", "base_model:gpt2", "license:mit", "region:us" ]
null
2024-05-01T05:18:45+00:00
[]
[]
TAGS #peft #safetensors #trl #dpo #generated_from_trainer #base_model-gpt2 #license-mit #region-us
# dpo_helpfulhelpful_human_subset20000_modelgpt2_maxsteps5000_bz8_lr1e-06 This model is a fine-tuned version of gpt2 on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 15 - training_steps: 5000 ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# dpo_helpfulhelpful_human_subset20000_modelgpt2_maxsteps5000_bz8_lr1e-06\n\nThis model is a fine-tuned version of gpt2 on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-06\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 15\n- training_steps: 5000", "### Training results", "### Framework versions\n\n- PEFT 0.9.0\n- Transformers 4.38.2\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #trl #dpo #generated_from_trainer #base_model-gpt2 #license-mit #region-us \n", "# dpo_helpfulhelpful_human_subset20000_modelgpt2_maxsteps5000_bz8_lr1e-06\n\nThis model is a fine-tuned version of gpt2 on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-06\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 15\n- training_steps: 5000", "### Training results", "### Framework versions\n\n- PEFT 0.9.0\n- Transformers 4.38.2\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ 36, 54, 7, 9, 9, 4, 122, 5, 48 ]
[ "TAGS\n#peft #safetensors #trl #dpo #generated_from_trainer #base_model-gpt2 #license-mit #region-us \n# dpo_helpfulhelpful_human_subset20000_modelgpt2_maxsteps5000_bz8_lr1e-06\n\nThis model is a fine-tuned version of gpt2 on an unknown dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-06\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 15\n- training_steps: 5000### Training results### Framework versions\n\n- PEFT 0.9.0\n- Transformers 4.38.2\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text-to-audio
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Danish_TTS This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the common_voice_13_0 dataset. It achieves the following results on the evaluation set: - Loss: 0.4699 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-------:|:----:|:---------------:| | 0.531 | 12.9450 | 1000 | 0.4886 | | 0.4994 | 25.8900 | 2000 | 0.4748 | | 0.4882 | 38.8350 | 3000 | 0.4684 | | 0.4777 | 51.7799 | 4000 | 0.4699 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
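The card documents only training. A hedged inference sketch in the style of the standard SpeechT5 examples follows; the processor checkpoint, the x-vector speaker-embedding source, and the Danish test sentence are assumptions, since the card shows no usage:

```python
# Hedged sketch: synthesising Danish speech with the fine-tuned checkpoint.
# Processor, x-vector source, and the test sentence are assumptions.
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")  # base processor, assumed unchanged
model = SpeechT5ForTextToSpeech.from_pretrained("arham061/Danish_TTS")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Off-the-shelf x-vector speaker embeddings, as in the original SpeechT5 examples.
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Hej, hvordan har du det i dag?", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("danish_sample.wav", speech.numpy(), samplerate=16000)
```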
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["common_voice_13_0"], "base_model": "microsoft/speecht5_tts", "model-index": [{"name": "Danish_TTS", "results": []}]}
arham061/Danish_TTS
null
[ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "generated_from_trainer", "dataset:common_voice_13_0", "base_model:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-05-01T05:18:58+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #speecht5 #text-to-audio #generated_from_trainer #dataset-common_voice_13_0 #base_model-microsoft/speecht5_tts #license-mit #endpoints_compatible #region-us
Danish\_TTS =========== This model is a fine-tuned version of microsoft/speecht5\_tts on the common\_voice\_13\_0 dataset. It achieves the following results on the evaluation set: * Loss: 0.4699 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 4 * eval\_batch\_size: 2 * seed: 42 * gradient\_accumulation\_steps: 8 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * training\_steps: 4000 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #speecht5 #text-to-audio #generated_from_trainer #dataset-common_voice_13_0 #base_model-microsoft/speecht5_tts #license-mit #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ 63, 149, 5, 44 ]
[ "TAGS\n#transformers #tensorboard #safetensors #speecht5 #text-to-audio #generated_from_trainer #dataset-common_voice_13_0 #base_model-microsoft/speecht5_tts #license-mit #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["unsloth"]}
theGhoul21/srl-sft-010524
null
[ "transformers", "safetensors", "mistral", "text-generation", "unsloth", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T05:21:18+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #unsloth #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #unsloth #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 51, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #unsloth #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # outputs This model is a fine-tuned version of [nextab/athena-2b-v1.5](https://huggingface.co/nextab/athena-2b-v1.5) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 3407 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - training_steps: 100 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
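As with the other auto-generated cards, only hyperparameters are given. A rough sketch of an equivalent SFT run with TRL's `SFTTrainer` is shown below; the dataset, LoRA settings, and sequence length are assumptions, and the argument names follow the TRL releases contemporary with the listed Transformers 4.39.3:

```python
# Hedged sketch of an SFT run matching the listed hyperparameters.
# Toy dataset, LoRA config, and max_seq_length are illustrative assumptions.
from datasets import Dataset
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

train_dataset = Dataset.from_dict({
    "text": ["### Question: What is hypertension?\n### Answer: Persistently elevated blood pressure."],
})

training_args = TrainingArguments(
    output_dir="outputs",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,   # effective batch size 8, as in the card
    learning_rate=2e-4,
    lr_scheduler_type="linear",
    warmup_steps=5,
    max_steps=100,
    seed=3407,
    fp16=True,                       # "Native AMP" in the card
)

trainer = SFTTrainer(
    model="nextab/athena-2b-v1.5",
    args=training_args,
    train_dataset=train_dataset,
    dataset_text_field="text",
    max_seq_length=2048,             # assumption
    peft_config=LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32),  # assumption
)
trainer.train()
```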
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "unsloth", "generated_from_trainer"], "base_model": "nextab/athena-2b-v1.5", "model-index": [{"name": "outputs", "results": []}]}
nextab/Med-Athena-v1.5-adapter
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "unsloth", "generated_from_trainer", "base_model:nextab/athena-2b-v1.5", "license:apache-2.0", "region:us" ]
null
2024-05-01T05:22:03+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #unsloth #generated_from_trainer #base_model-nextab/athena-2b-v1.5 #license-apache-2.0 #region-us
# outputs This model is a fine-tuned version of nextab/athena-2b-v1.5 on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 3407 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - training_steps: 100 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# outputs\n\nThis model is a fine-tuned version of nextab/athena-2b-v1.5 on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 3407\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 5\n- training_steps: 100\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.1.0+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #unsloth #generated_from_trainer #base_model-nextab/athena-2b-v1.5 #license-apache-2.0 #region-us \n", "# outputs\n\nThis model is a fine-tuned version of nextab/athena-2b-v1.5 on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 3407\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 5\n- training_steps: 100\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.1.0+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ 56, 29, 7, 9, 9, 4, 132, 5, 52 ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #unsloth #generated_from_trainer #base_model-nextab/athena-2b-v1.5 #license-apache-2.0 #region-us \n# outputs\n\nThis model is a fine-tuned version of nextab/athena-2b-v1.5 on the None dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 3407\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 5\n- training_steps: 100\n- mixed_precision_training: Native AMP### Training results### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.1.0+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text-generation
transformers
Additional "helping" of Tiefighter to make it more "Tiefighterie" from orginal Orca2/Tiefighter merge. # D_AU-Orac-13B-Tiefighter-Plus-slerp D_AU-Orac-13B-Tiefighter-Plus-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [KoboldAI/LLaMA2-13B-Tiefighter](https://huggingface.co/KoboldAI/LLaMA2-13B-Tiefighter) * [DavidAU/D_AU-Orac-13B-Tiefighter-slerp](https://huggingface.co/DavidAU/D_AU-Orac-13B-Tiefighter-slerp) ## 🧩 Configuration ```yaml models: - model: KoboldAI/LLaMA2-13B-Tiefighter parameters: weight: 0.7 - model: DavidAU/D_AU-Orac-13B-Tiefighter-slerp parameters: weight: 0.3 merge_method: linear dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "DavidAU/D_AU-Orac-13B-Tiefighter-Plus-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
{"tags": ["merge", "mergekit", "lazymergekit", "KoboldAI/LLaMA2-13B-Tiefighter", "DavidAU/D_AU-Orac-13B-Tiefighter-slerp"], "base_model": ["KoboldAI/LLaMA2-13B-Tiefighter", "DavidAU/D_AU-Orac-13B-Tiefighter-slerp"]}
DavidAU/D_AU-Orac-13B-Tiefighter-Plus-slerp
null
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "KoboldAI/LLaMA2-13B-Tiefighter", "DavidAU/D_AU-Orac-13B-Tiefighter-slerp", "base_model:KoboldAI/LLaMA2-13B-Tiefighter", "base_model:DavidAU/D_AU-Orac-13B-Tiefighter-slerp", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T05:22:12+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #KoboldAI/LLaMA2-13B-Tiefighter #DavidAU/D_AU-Orac-13B-Tiefighter-slerp #base_model-KoboldAI/LLaMA2-13B-Tiefighter #base_model-DavidAU/D_AU-Orac-13B-Tiefighter-slerp #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
Additional "helping" of Tiefighter to make it more "Tiefighterie" from orginal Orca2/Tiefighter merge. # D_AU-Orac-13B-Tiefighter-Plus-slerp D_AU-Orac-13B-Tiefighter-Plus-slerp is a merge of the following models using LazyMergekit: * KoboldAI/LLaMA2-13B-Tiefighter * DavidAU/D_AU-Orac-13B-Tiefighter-slerp ## Configuration ## Usage
[ "# D_AU-Orac-13B-Tiefighter-Plus-slerp\n\nD_AU-Orac-13B-Tiefighter-Plus-slerp is a merge of the following models using LazyMergekit:\n* KoboldAI/LLaMA2-13B-Tiefighter\n* DavidAU/D_AU-Orac-13B-Tiefighter-slerp", "## Configuration", "## Usage" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #KoboldAI/LLaMA2-13B-Tiefighter #DavidAU/D_AU-Orac-13B-Tiefighter-slerp #base_model-KoboldAI/LLaMA2-13B-Tiefighter #base_model-DavidAU/D_AU-Orac-13B-Tiefighter-slerp #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# D_AU-Orac-13B-Tiefighter-Plus-slerp\n\nD_AU-Orac-13B-Tiefighter-Plus-slerp is a merge of the following models using LazyMergekit:\n* KoboldAI/LLaMA2-13B-Tiefighter\n* DavidAU/D_AU-Orac-13B-Tiefighter-slerp", "## Configuration", "## Usage" ]
[ 120, 84, 3, 3 ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #KoboldAI/LLaMA2-13B-Tiefighter #DavidAU/D_AU-Orac-13B-Tiefighter-slerp #base_model-KoboldAI/LLaMA2-13B-Tiefighter #base_model-DavidAU/D_AU-Orac-13B-Tiefighter-slerp #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# D_AU-Orac-13B-Tiefighter-Plus-slerp\n\nD_AU-Orac-13B-Tiefighter-Plus-slerp is a merge of the following models using LazyMergekit:\n* KoboldAI/LLaMA2-13B-Tiefighter\n* DavidAU/D_AU-Orac-13B-Tiefighter-slerp## Configuration## Usage" ]
null
null
At RWKV-Dev, we upload experimental models and such. Please use them at your own risk.
{"license": "apache-2.0"}
OpenMOSE/RWKV-Dev
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-01T05:25:42+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
At RWKV-Dev, we upload experimental models and such. Please use them at your own risk.
[]
[ "TAGS\n#license-apache-2.0 #region-us \n" ]
[ 13 ]
[ "TAGS\n#license-apache-2.0 #region-us \n" ]
text-generation
transformers
# Experimental Model Warning This model is an experimental prototype and should not be considered production-ready. Reasons for Experimental Status: **Potential for Bias**: Due to the experimental nature of the model, it may exhibit biases in its output, which could lead to incorrect or unfair results. ### Precautions to Take **Use with Caution**: Be aware that the model's output may contain factual inaccuracies or biases. **Verify Output**: Always verify the model's output with other sources to ensure its accuracy. **Report Issues**: If you encounter any issues or biases in the model's output, please report them so that they can be addressed in future updates. **Avoid Sensitive Applications**: Do not use the model for applications where accuracy and reliability are critical, such as medical or financial decision-making. By understanding the experimental nature of this model and taking the necessary precautions, you can help ensure that it is used responsibly and effectively. **License**: This model is strictly for non-commercial (cc-by-nc-4.0) use only. The "Model" is completely free (i.e. the base model, derivatives, and merges/mixes) to use for non-commercial purposes, as long as the included cc-by-nc-4.0 license in any parent repository and the non-commercial-use statute remain in place, regardless of other models' licenses. The license can be changed after a new model is released. If you want to use this model for commercial purposes, contact me. **Disclaimer**: By downloading and/or using the model, you fully agree to the license (**cc-by-nc-4.0**) and its commercial-use restrictions.
{"license": "cc-by-nc-4.0"}
0ai/0ai-7B-v4
null
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T05:27:08+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #conversational #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Experimental Model Warning This model is an experimental prototype and should not be considered production-ready. Reasons for Experimental Status: Potential for Bias: Due to the experimental nature of the model, it may exhibit biases in its output, which could lead to incorrect or unfair results. ### Precautions to Take Use with Caution: Be aware that the model's output may contain factual inaccuracies or biases. Verify Output: Always verify the model's output with other sources to ensure its accuracy. Report Issues: If you encounter any issues or biases in the model's output, please report them so that they can be addressed in future updates. Avoid Sensitive Applications: Do not use the model for applications where accuracy and reliability are critical, such as medical or financial decision-making. By understanding the experimental nature of this model and taking the necessary precautions, you can help ensure that it is used responsibly and effectively. License: This model is strictly for non-commercial (cc-by-nc-4.0) use only. The "Model" is completely free (i.e. the base model, derivatives, and merges/mixes) to use for non-commercial purposes, as long as the included cc-by-nc-4.0 license in any parent repository and the non-commercial-use statute remain in place, regardless of other models' licenses. The license can be changed after a new model is released. If you want to use this model for commercial purposes, contact me. Disclaimer: By downloading and/or using the model, you fully agree to the license (cc-by-nc-4.0) and its commercial-use restrictions.
[ "# Experimental Model Warning\nThis model is an experimental prototype and should not be considered production-ready.\nReasons for Experimental Status\nPotential for Bias: Due to the experimental nature of the model, it may exhibit biases in its output, which could lead to incorrect or unfair results.", "### Precautions to Take\nUse with Caution: Be aware that the model's output may contain factual inaccuracies or biases.\n\n\nVerify Output: Always verify the model's output with other sources to ensure its accuracy.\n\nReport Issues: If you encounter any issues or biases in the model's output, please report them so that they can be addressed in future updates.\n\n\nAvoid Sensitive Applications: Do not use the model for applications where accuracy and reliability are critical, such as medical or financial decision-making.\n\nBy understanding the experimental nature of this model and taking the necessary precautions, you can help ensure that it is used responsibly and effectively\n\nLicense:\nThis model is strictly non-commercial (cc-by-nc-4.0) use only. The \"Model\" is completely free (ie. base model, derivates, merges/mixes) to use for non-commercial purposes as long as the the included cc-by-nc-4.0 license in any parent repository, and the non-commercial use statute remains, regardless of other models' licences. The licence can be changed after new model released. If you are to use this model for commercial purpose, Contact me.\n\nDisclaimer: By Downloading And/Or using the model, you fully agree to the license (cc-by-nc-4.0) and its commercial-use restrictions." ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #conversational #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Experimental Model Warning\nThis model is an experimental prototype and should not be considered production-ready.\nReasons for Experimental Status\nPotential for Bias: Due to the experimental nature of the model, it may exhibit biases in its output, which could lead to incorrect or unfair results.", "### Precautions to Take\nUse with Caution: Be aware that the model's output may contain factual inaccuracies or biases.\n\n\nVerify Output: Always verify the model's output with other sources to ensure its accuracy.\n\nReport Issues: If you encounter any issues or biases in the model's output, please report them so that they can be addressed in future updates.\n\n\nAvoid Sensitive Applications: Do not use the model for applications where accuracy and reliability are critical, such as medical or financial decision-making.\n\nBy understanding the experimental nature of this model and taking the necessary precautions, you can help ensure that it is used responsibly and effectively\n\nLicense:\nThis model is strictly non-commercial (cc-by-nc-4.0) use only. The \"Model\" is completely free (ie. base model, derivates, merges/mixes) to use for non-commercial purposes as long as the the included cc-by-nc-4.0 license in any parent repository, and the non-commercial use statute remains, regardless of other models' licences. The licence can be changed after new model released. If you are to use this model for commercial purpose, Contact me.\n\nDisclaimer: By Downloading And/Or using the model, you fully agree to the license (cc-by-nc-4.0) and its commercial-use restrictions." ]
[ 49, 54, 285 ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #conversational #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Experimental Model Warning\nThis model is an experimental prototype and should not be considered production-ready.\nReasons for Experimental Status\nPotential for Bias: Due to the experimental nature of the model, it may exhibit biases in its output, which could lead to incorrect or unfair results.### Precautions to Take\nUse with Caution: Be aware that the model's output may contain factual inaccuracies or biases.\n\n\nVerify Output: Always verify the model's output with other sources to ensure its accuracy.\n\nReport Issues: If you encounter any issues or biases in the model's output, please report them so that they can be addressed in future updates.\n\n\nAvoid Sensitive Applications: Do not use the model for applications where accuracy and reliability are critical, such as medical or financial decision-making.\n\nBy understanding the experimental nature of this model and taking the necessary precautions, you can help ensure that it is used responsibly and effectively\n\nLicense:\nThis model is strictly non-commercial (cc-by-nc-4.0) use only. The \"Model\" is completely free (ie. base model, derivates, merges/mixes) to use for non-commercial purposes as long as the the included cc-by-nc-4.0 license in any parent repository, and the non-commercial use statute remains, regardless of other models' licences. The licence can be changed after new model released. If you are to use this model for commercial purpose, Contact me.\n\nDisclaimer: By Downloading And/Or using the model, you fully agree to the license (cc-by-nc-4.0) and its commercial-use restrictions." ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Mistral-7B-Instruct-v0.2-GPTQ_retrained_NF_TON_IOT This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2117 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.9088 | 1.0 | 1 | 1.3562 | | 0.9303 | 2.0 | 2 | 1.3320 | | 0.9044 | 3.0 | 3 | 1.2748 | | 0.8551 | 4.0 | 4 | 1.2338 | | 0.821 | 5.0 | 5 | 1.2117 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.1.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
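Nothing in the card shows how to use the resulting adapter. One way to load it for inference, assuming `optimum` and `auto-gptq` are installed for the GPTQ base (the prompt is illustrative), is sketched below:

```python
# Hedged sketch: attaching the trained adapter to the GPTQ base model.
# Requires optimum + auto-gptq for the quantized base; prompt is illustrative.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ"
adapter_id = "rnaveensrinivas/Mistral-7B-Instruct-v0.2-GPTQ_retrained_NF_TON_IOT"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "[INST] Describe the main traffic categories in the NF-ToN-IoT dataset. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```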
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "model-index": [{"name": "Mistral-7B-Instruct-v0.2-GPTQ_retrained_NF_TON_IOT", "results": []}]}
rnaveensrinivas/Mistral-7B-Instruct-v0.2-GPTQ_retrained_NF_TON_IOT
null
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "license:apache-2.0", "region:us" ]
null
2024-05-01T05:28:05+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.2-GPTQ #license-apache-2.0 #region-us
Mistral-7B-Instruct-v0.2-GPTQ\_retrained\_NF\_TON\_IOT ====================================================== This model is a fine-tuned version of TheBloke/Mistral-7B-Instruct-v0.2-GPTQ on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.2117 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0002 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2 * num\_epochs: 5 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.40.1 * Pytorch 2.1.0+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.1.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.2-GPTQ #license-apache-2.0 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.1.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ 55, 151, 5, 52 ]
[ "TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.2-GPTQ #license-apache-2.0 #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.1.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
{"library_name": "peft", "base_model": "mistralai/Mistral-7B-Instruct-v0.2"}
vu3/mistral_finetuned
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "region:us" ]
null
2024-05-01T05:30:37+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.7.1
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.7.1" ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.7.1" ]
[ 44, 6, 4, 50, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5, 13 ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact### Framework versions\n\n- PEFT 0.7.1" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Evidence_Retrieval_model_vit5_base_word This model is a fine-tuned version of [VietAI/vit5-base](https://huggingface.co/VietAI/vit5-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5457 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.4579 | 1.0 | 1047 | 0.4647 | | 0.306 | 2.0 | 2094 | 0.3441 | | 0.2501 | 3.0 | 3141 | 0.3864 | | 0.1737 | 4.0 | 4188 | 0.5220 | | 0.1354 | 5.0 | 5235 | 0.5457 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "VietAI/vit5-base", "model-index": [{"name": "Evidence_Retrieval_model_vit5_base_word", "results": []}]}
tringuyen-uit/Evidence_Retrieval_model_vit5_base_word
null
[ "transformers", "tensorboard", "safetensors", "t5", "question-answering", "generated_from_trainer", "base_model:VietAI/vit5-base", "license:mit", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T05:33:57+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #t5 #question-answering #generated_from_trainer #base_model-VietAI/vit5-base #license-mit #endpoints_compatible #text-generation-inference #region-us
Evidence\_Retrieval\_model\_vit5\_base\_word ============================================ This model is a fine-tuned version of VietAI/vit5-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.5457 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.1.2 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #question-answering #generated_from_trainer #base_model-VietAI/vit5-base #license-mit #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ 55, 101, 5, 40 ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #question-answering #generated_from_trainer #base_model-VietAI/vit5-base #license-mit #endpoints_compatible #text-generation-inference #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5### Training results### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) zephyr-7b-gemma-sft-v0.1 - bnb 4bits - Model creator: https://huggingface.co/HuggingFaceH4/ - Original model: https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-sft-v0.1/ Original model description: --- license: other license_name: gemma-terms-of-use license_link: https://ai.google.dev/gemma/terms base_model: google/gemma-7b tags: - alignment-handbook - trl - sft - generated_from_trainer - trl - sft - generated_from_trainer datasets: - HuggingFaceH4/deita-10k-v0-sft model-index: - name: zephyr-7b-gemma-sft results: [] language: - en --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zephyr-7b-gemma-sft This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on the HuggingFaceH4/deita-10k-v0-sft dataset. It achieves the following results on the evaluation set: - Loss: 0.9732 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 16 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.9482 | 1.0 | 299 | 0.9848 | | 0.8139 | 2.0 | 599 | 0.9610 | | 0.722 | 2.99 | 897 | 0.9732 | ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.1
{}
RichardErkhov/HuggingFaceH4_-_zephyr-7b-gemma-sft-v0.1-4bits
null
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-05-01T05:35:18+00:00
[]
[]
TAGS #transformers #safetensors #gemma #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models zephyr-7b-gemma-sft-v0.1 - bnb 4bits * Model creator: URL * Original model: URL Original model description: --------------------------- license: other license\_name: gemma-terms-of-use license\_link: URL base\_model: google/gemma-7b tags: * alignment-handbook * trl * sft * generated\_from\_trainer * trl * sft * generated\_from\_trainer datasets: * HuggingFaceH4/deita-10k-v0-sft model-index: * name: zephyr-7b-gemma-sft results: [] language: * en --- zephyr-7b-gemma-sft =================== This model is a fine-tuned version of google/gemma-7b on the HuggingFaceH4/deita-10k-v0-sft dataset. It achieves the following results on the evaluation set: * Loss: 0.9732 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * distributed\_type: multi-GPU * num\_devices: 16 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 128 * total\_eval\_batch\_size: 64 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.39.0.dev0 * Pytorch 2.1.2+cu121 * Datasets 2.14.6 * Tokenizers 0.15.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 16\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.1" ]
[ "TAGS\n#transformers #safetensors #gemma #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 16\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.1" ]
[ 40, 176, 5, 47 ]
[ "TAGS\n#transformers #safetensors #gemma #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 16\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3### Training results### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.1" ]
text-generation
transformers
# Dolphin 2.9 Mixtral 8x22b 🐬 Curated and trained by Eric Hartford, Lucas Atkins, and Fernando Fernandes, and Cognitive Computations Discord: https://discord.gg/8fbBeC7ZGx <img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" /> My appreciation for the sponsors of Dolphin 2.9: - [Crusoe Cloud](https://crusoe.ai/) - provided excellent on-demand 8xH100 node This model is based on Dolphin-2.9-Mixtral-8x22b, and is Apache-2.0 licensed. The base model has 64k context, and the full-weight fine-tuning was with 4k sequence length. It took 1 week on 8xH100 provided by Crusoe Cloud This model was trained FFT on 50% parameters (targeted with [Laser Scanner](https://github.com/cognitivecomputations/laserRMT/blob/main/laser_scanner.py) by Fernando Fernandes, David Golchinfar, Lucas Atkins, and Eric Hartford) , using ChatML prompt template format. example: ``` <|im_start|>system You are Dolphin, a helpful AI assistant.<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` Dolphin-2.9 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling. Dolphin is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models You are responsible for any content you create using this model. Enjoy responsibly. Dolphin is licensed Apache 2.0. I grant permission for any use, including commercial, that falls within accordance with Apache-2.0 license. Dolphin was trained on data generated from GPT4, among other models. 
## Evals ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/Nb6f_dS_M6fN_v2ACK98x.png) ## Training [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: mistral-community/Mixtral-8x22B-v0.1 model_type: AutoModelForCausalLM tokenizer_type: LlamaTokenizer trust_remote_code: true load_in_8bit: false load_in_4bit: false strict: false unfrozen_parameters: - ^lm_head.weight$ - ^model.embed_tokens.weight$ - model.layers.0.self_attn.q_proj - model.layers.1.self_attn.q_proj - model.layers.2.self_attn.q_proj - model.layers.22.self_attn.q_proj - model.layers.27.self_attn.q_proj - model.layers.28.self_attn.q_proj - model.layers.13.self_attn.q_proj - model.layers.21.self_attn.q_proj - model.layers.24.self_attn.q_proj - model.layers.14.self_attn.q_proj - model.layers.15.self_attn.q_proj - model.layers.11.self_attn.q_proj - model.layers.20.self_attn.q_proj - model.layers.23.self_attn.q_proj - model.layers.30.self_attn.k_proj - model.layers.31.self_attn.k_proj - model.layers.25.self_attn.k_proj - model.layers.23.self_attn.k_proj - model.layers.27.self_attn.k_proj - model.layers.26.self_attn.k_proj - model.layers.29.self_attn.k_proj - model.layers.28.self_attn.k_proj - model.layers.24.self_attn.k_proj - model.layers.16.self_attn.k_proj - model.layers.19.self_attn.k_proj - model.layers.22.self_attn.k_proj - model.layers.20.self_attn.k_proj - model.layers.6.self_attn.k_proj - model.layers.22.self_attn.v_proj - model.layers.29.self_attn.v_proj - model.layers.31.self_attn.v_proj - model.layers.5.self_attn.v_proj - model.layers.8.self_attn.v_proj - model.layers.4.self_attn.v_proj - model.layers.25.self_attn.v_proj - model.layers.30.self_attn.v_proj - model.layers.17.self_attn.v_proj - model.layers.23.self_attn.v_proj - model.layers.28.self_attn.v_proj - model.layers.9.self_attn.v_proj - model.layers.26.self_attn.v_proj - model.layers.27.self_attn.v_proj - model.layers.20.self_attn.o_proj - model.layers.19.self_attn.o_proj - model.layers.16.self_attn.o_proj - model.layers.13.self_attn.o_proj - model.layers.18.self_attn.o_proj - model.layers.17.self_attn.o_proj - model.layers.12.self_attn.o_proj - model.layers.15.self_attn.o_proj - model.layers.14.self_attn.o_proj - model.layers.22.self_attn.o_proj - model.layers.23.self_attn.o_proj - model.layers.21.self_attn.o_proj - model.layers.10.self_attn.o_proj - model.layers.0.self_attn.o_proj - model.layers.0.block_sparse_moe.experts.0.w1 - model.layers.1.block_sparse_moe.experts.0.w1 - model.layers.2.block_sparse_moe.experts.0.w1 - model.layers.3.block_sparse_moe.experts.0.w1 - model.layers.4.block_sparse_moe.experts.0.w1 - model.layers.5.block_sparse_moe.experts.0.w1 - model.layers.6.block_sparse_moe.experts.0.w1 - model.layers.7.block_sparse_moe.experts.0.w1 - model.layers.8.block_sparse_moe.experts.0.w1 - model.layers.9.block_sparse_moe.experts.0.w1 - model.layers.10.block_sparse_moe.experts.0.w1 - model.layers.11.block_sparse_moe.experts.0.w1 - model.layers.12.block_sparse_moe.experts.0.w1 - model.layers.13.block_sparse_moe.experts.0.w1 - model.layers.0.block_sparse_moe.experts.0.w2 - model.layers.1.block_sparse_moe.experts.0.w2 - model.layers.2.block_sparse_moe.experts.0.w2 - model.layers.3.block_sparse_moe.experts.0.w2 - model.layers.4.block_sparse_moe.experts.0.w2 - 
model.layers.5.block_sparse_moe.experts.0.w2 - model.layers.6.block_sparse_moe.experts.0.w2 - model.layers.7.block_sparse_moe.experts.0.w2 - model.layers.8.block_sparse_moe.experts.0.w2 - model.layers.9.block_sparse_moe.experts.0.w2 - model.layers.10.block_sparse_moe.experts.0.w2 - model.layers.11.block_sparse_moe.experts.0.w2 - model.layers.12.block_sparse_moe.experts.0.w2 - model.layers.13.block_sparse_moe.experts.0.w2 - model.layers.0.block_sparse_moe.experts.0.w3 - model.layers.1.block_sparse_moe.experts.0.w3 - model.layers.2.block_sparse_moe.experts.0.w3 - model.layers.3.block_sparse_moe.experts.0.w3 - model.layers.4.block_sparse_moe.experts.0.w3 - model.layers.5.block_sparse_moe.experts.0.w3 - model.layers.6.block_sparse_moe.experts.0.w3 - model.layers.7.block_sparse_moe.experts.0.w3 - model.layers.8.block_sparse_moe.experts.0.w3 - model.layers.9.block_sparse_moe.experts.0.w3 - model.layers.10.block_sparse_moe.experts.0.w3 - model.layers.11.block_sparse_moe.experts.0.w3 - model.layers.12.block_sparse_moe.experts.0.w3 - model.layers.13.block_sparse_moe.experts.0.w3 - model.layers.0.block_sparse_moe.experts.1.w1 - model.layers.1.block_sparse_moe.experts.1.w1 - model.layers.2.block_sparse_moe.experts.1.w1 - model.layers.3.block_sparse_moe.experts.1.w1 - model.layers.4.block_sparse_moe.experts.1.w1 - model.layers.5.block_sparse_moe.experts.1.w1 - model.layers.6.block_sparse_moe.experts.1.w1 - model.layers.7.block_sparse_moe.experts.1.w1 - model.layers.8.block_sparse_moe.experts.1.w1 - model.layers.9.block_sparse_moe.experts.1.w1 - model.layers.10.block_sparse_moe.experts.1.w1 - model.layers.11.block_sparse_moe.experts.1.w1 - model.layers.12.block_sparse_moe.experts.1.w1 - model.layers.13.block_sparse_moe.experts.1.w1 - model.layers.40.block_sparse_moe.experts.1.w2 - model.layers.0.block_sparse_moe.experts.1.w2 - model.layers.1.block_sparse_moe.experts.1.w2 - model.layers.2.block_sparse_moe.experts.1.w2 - model.layers.3.block_sparse_moe.experts.1.w2 - model.layers.4.block_sparse_moe.experts.1.w2 - model.layers.5.block_sparse_moe.experts.1.w2 - model.layers.6.block_sparse_moe.experts.1.w2 - model.layers.7.block_sparse_moe.experts.1.w2 - model.layers.8.block_sparse_moe.experts.1.w2 - model.layers.9.block_sparse_moe.experts.1.w2 - model.layers.10.block_sparse_moe.experts.1.w2 - model.layers.11.block_sparse_moe.experts.1.w2 - model.layers.12.block_sparse_moe.experts.1.w2 - model.layers.5.block_sparse_moe.experts.1.w3 - model.layers.0.block_sparse_moe.experts.1.w3 - model.layers.1.block_sparse_moe.experts.1.w3 - model.layers.2.block_sparse_moe.experts.1.w3 - model.layers.3.block_sparse_moe.experts.1.w3 - model.layers.4.block_sparse_moe.experts.1.w3 - model.layers.6.block_sparse_moe.experts.1.w3 - model.layers.7.block_sparse_moe.experts.1.w3 - model.layers.8.block_sparse_moe.experts.1.w3 - model.layers.9.block_sparse_moe.experts.1.w3 - model.layers.10.block_sparse_moe.experts.1.w3 - model.layers.11.block_sparse_moe.experts.1.w3 - model.layers.12.block_sparse_moe.experts.1.w3 - model.layers.13.block_sparse_moe.experts.1.w3 - model.layers.1.block_sparse_moe.experts.2.w1 - model.layers.0.block_sparse_moe.experts.2.w1 - model.layers.2.block_sparse_moe.experts.2.w1 - model.layers.3.block_sparse_moe.experts.2.w1 - model.layers.4.block_sparse_moe.experts.2.w1 - model.layers.5.block_sparse_moe.experts.2.w1 - model.layers.6.block_sparse_moe.experts.2.w1 - model.layers.7.block_sparse_moe.experts.2.w1 - model.layers.8.block_sparse_moe.experts.2.w1 - model.layers.9.block_sparse_moe.experts.2.w1 - 
model.layers.10.block_sparse_moe.experts.2.w1 - model.layers.11.block_sparse_moe.experts.2.w1 - model.layers.12.block_sparse_moe.experts.2.w1 - model.layers.13.block_sparse_moe.experts.2.w1 - model.layers.1.block_sparse_moe.experts.2.w2 - model.layers.0.block_sparse_moe.experts.2.w2 - model.layers.2.block_sparse_moe.experts.2.w2 - model.layers.3.block_sparse_moe.experts.2.w2 - model.layers.4.block_sparse_moe.experts.2.w2 - model.layers.5.block_sparse_moe.experts.2.w2 - model.layers.6.block_sparse_moe.experts.2.w2 - model.layers.7.block_sparse_moe.experts.2.w2 - model.layers.8.block_sparse_moe.experts.2.w2 - model.layers.9.block_sparse_moe.experts.2.w2 - model.layers.10.block_sparse_moe.experts.2.w2 - model.layers.11.block_sparse_moe.experts.2.w2 - model.layers.12.block_sparse_moe.experts.2.w2 - model.layers.13.block_sparse_moe.experts.2.w2 - model.layers.1.block_sparse_moe.experts.2.w3 - model.layers.0.block_sparse_moe.experts.2.w3 - model.layers.2.block_sparse_moe.experts.2.w3 - model.layers.3.block_sparse_moe.experts.2.w3 - model.layers.4.block_sparse_moe.experts.2.w3 - model.layers.5.block_sparse_moe.experts.2.w3 - model.layers.6.block_sparse_moe.experts.2.w3 - model.layers.7.block_sparse_moe.experts.2.w3 - model.layers.8.block_sparse_moe.experts.2.w3 - model.layers.9.block_sparse_moe.experts.2.w3 - model.layers.10.block_sparse_moe.experts.2.w3 - model.layers.11.block_sparse_moe.experts.2.w3 - model.layers.12.block_sparse_moe.experts.2.w3 - model.layers.13.block_sparse_moe.experts.2.w3 - model.layers.2.block_sparse_moe.experts.3.w1 - model.layers.1.block_sparse_moe.experts.3.w1 - model.layers.0.block_sparse_moe.experts.3.w1 - model.layers.3.block_sparse_moe.experts.3.w1 - model.layers.4.block_sparse_moe.experts.3.w1 - model.layers.5.block_sparse_moe.experts.3.w1 - model.layers.6.block_sparse_moe.experts.3.w1 - model.layers.7.block_sparse_moe.experts.3.w1 - model.layers.8.block_sparse_moe.experts.3.w1 - model.layers.9.block_sparse_moe.experts.3.w1 - model.layers.10.block_sparse_moe.experts.3.w1 - model.layers.11.block_sparse_moe.experts.3.w1 - model.layers.12.block_sparse_moe.experts.3.w1 - model.layers.13.block_sparse_moe.experts.3.w1 - model.layers.2.block_sparse_moe.experts.3.w2 - model.layers.1.block_sparse_moe.experts.3.w2 - model.layers.0.block_sparse_moe.experts.3.w2 - model.layers.3.block_sparse_moe.experts.3.w2 - model.layers.4.block_sparse_moe.experts.3.w2 - model.layers.5.block_sparse_moe.experts.3.w2 - model.layers.6.block_sparse_moe.experts.3.w2 - model.layers.7.block_sparse_moe.experts.3.w2 - model.layers.8.block_sparse_moe.experts.3.w2 - model.layers.9.block_sparse_moe.experts.3.w2 - model.layers.10.block_sparse_moe.experts.3.w2 - model.layers.11.block_sparse_moe.experts.3.w2 - model.layers.12.block_sparse_moe.experts.3.w2 - model.layers.13.block_sparse_moe.experts.3.w2 - model.layers.2.block_sparse_moe.experts.3.w3 - model.layers.1.block_sparse_moe.experts.3.w3 - model.layers.0.block_sparse_moe.experts.3.w3 - model.layers.3.block_sparse_moe.experts.3.w3 - model.layers.4.block_sparse_moe.experts.3.w3 - model.layers.5.block_sparse_moe.experts.3.w3 - model.layers.6.block_sparse_moe.experts.3.w3 - model.layers.7.block_sparse_moe.experts.3.w3 - model.layers.8.block_sparse_moe.experts.3.w3 - model.layers.9.block_sparse_moe.experts.3.w3 - model.layers.10.block_sparse_moe.experts.3.w3 - model.layers.11.block_sparse_moe.experts.3.w3 - model.layers.12.block_sparse_moe.experts.3.w3 - model.layers.13.block_sparse_moe.experts.3.w3 - model.layers.3.block_sparse_moe.experts.4.w1 - 
model.layers.2.block_sparse_moe.experts.4.w1 - model.layers.1.block_sparse_moe.experts.4.w1 - model.layers.0.block_sparse_moe.experts.4.w1 - model.layers.4.block_sparse_moe.experts.4.w1 - model.layers.5.block_sparse_moe.experts.4.w1 - model.layers.6.block_sparse_moe.experts.4.w1 - model.layers.7.block_sparse_moe.experts.4.w1 - model.layers.8.block_sparse_moe.experts.4.w1 - model.layers.9.block_sparse_moe.experts.4.w1 - model.layers.10.block_sparse_moe.experts.4.w1 - model.layers.11.block_sparse_moe.experts.4.w1 - model.layers.12.block_sparse_moe.experts.4.w1 - model.layers.13.block_sparse_moe.experts.4.w1 - model.layers.2.block_sparse_moe.experts.4.w2 - model.layers.3.block_sparse_moe.experts.4.w2 - model.layers.1.block_sparse_moe.experts.4.w2 - model.layers.20.block_sparse_moe.experts.4.w2 - model.layers.0.block_sparse_moe.experts.4.w2 - model.layers.4.block_sparse_moe.experts.4.w2 - model.layers.5.block_sparse_moe.experts.4.w2 - model.layers.6.block_sparse_moe.experts.4.w2 - model.layers.7.block_sparse_moe.experts.4.w2 - model.layers.8.block_sparse_moe.experts.4.w2 - model.layers.9.block_sparse_moe.experts.4.w2 - model.layers.10.block_sparse_moe.experts.4.w2 - model.layers.11.block_sparse_moe.experts.4.w2 - model.layers.12.block_sparse_moe.experts.4.w2 - model.layers.3.block_sparse_moe.experts.4.w3 - model.layers.2.block_sparse_moe.experts.4.w3 - model.layers.1.block_sparse_moe.experts.4.w3 - model.layers.0.block_sparse_moe.experts.4.w3 - model.layers.4.block_sparse_moe.experts.4.w3 - model.layers.5.block_sparse_moe.experts.4.w3 - model.layers.6.block_sparse_moe.experts.4.w3 - model.layers.7.block_sparse_moe.experts.4.w3 - model.layers.8.block_sparse_moe.experts.4.w3 - model.layers.9.block_sparse_moe.experts.4.w3 - model.layers.10.block_sparse_moe.experts.4.w3 - model.layers.11.block_sparse_moe.experts.4.w3 - model.layers.12.block_sparse_moe.experts.4.w3 - model.layers.13.block_sparse_moe.experts.4.w3 - model.layers.4.block_sparse_moe.experts.5.w1 - model.layers.3.block_sparse_moe.experts.5.w1 - model.layers.2.block_sparse_moe.experts.5.w1 - model.layers.1.block_sparse_moe.experts.5.w1 - model.layers.0.block_sparse_moe.experts.5.w1 - model.layers.5.block_sparse_moe.experts.5.w1 - model.layers.6.block_sparse_moe.experts.5.w1 - model.layers.7.block_sparse_moe.experts.5.w1 - model.layers.8.block_sparse_moe.experts.5.w1 - model.layers.9.block_sparse_moe.experts.5.w1 - model.layers.10.block_sparse_moe.experts.5.w1 - model.layers.11.block_sparse_moe.experts.5.w1 - model.layers.12.block_sparse_moe.experts.5.w1 - model.layers.13.block_sparse_moe.experts.5.w1 - model.layers.4.block_sparse_moe.experts.5.w2 - model.layers.2.block_sparse_moe.experts.5.w2 - model.layers.3.block_sparse_moe.experts.5.w2 - model.layers.1.block_sparse_moe.experts.5.w2 - model.layers.0.block_sparse_moe.experts.5.w2 - model.layers.5.block_sparse_moe.experts.5.w2 - model.layers.6.block_sparse_moe.experts.5.w2 - model.layers.7.block_sparse_moe.experts.5.w2 - model.layers.8.block_sparse_moe.experts.5.w2 - model.layers.9.block_sparse_moe.experts.5.w2 - model.layers.10.block_sparse_moe.experts.5.w2 - model.layers.11.block_sparse_moe.experts.5.w2 - model.layers.12.block_sparse_moe.experts.5.w2 - model.layers.13.block_sparse_moe.experts.5.w2 - model.layers.4.block_sparse_moe.experts.5.w3 - model.layers.3.block_sparse_moe.experts.5.w3 - model.layers.2.block_sparse_moe.experts.5.w3 - model.layers.1.block_sparse_moe.experts.5.w3 - model.layers.0.block_sparse_moe.experts.5.w3 - model.layers.5.block_sparse_moe.experts.5.w3 - 
model.layers.6.block_sparse_moe.experts.5.w3 - model.layers.7.block_sparse_moe.experts.5.w3 - model.layers.8.block_sparse_moe.experts.5.w3 - model.layers.9.block_sparse_moe.experts.5.w3 - model.layers.10.block_sparse_moe.experts.5.w3 - model.layers.11.block_sparse_moe.experts.5.w3 - model.layers.12.block_sparse_moe.experts.5.w3 - model.layers.13.block_sparse_moe.experts.5.w3 - model.layers.5.block_sparse_moe.experts.6.w1 - model.layers.4.block_sparse_moe.experts.6.w1 - model.layers.3.block_sparse_moe.experts.6.w1 - model.layers.2.block_sparse_moe.experts.6.w1 - model.layers.1.block_sparse_moe.experts.6.w1 - model.layers.0.block_sparse_moe.experts.6.w1 - model.layers.6.block_sparse_moe.experts.6.w1 - model.layers.7.block_sparse_moe.experts.6.w1 - model.layers.8.block_sparse_moe.experts.6.w1 - model.layers.9.block_sparse_moe.experts.6.w1 - model.layers.10.block_sparse_moe.experts.6.w1 - model.layers.11.block_sparse_moe.experts.6.w1 - model.layers.12.block_sparse_moe.experts.6.w1 - model.layers.13.block_sparse_moe.experts.6.w1 - model.layers.5.block_sparse_moe.experts.6.w2 - model.layers.4.block_sparse_moe.experts.6.w2 - model.layers.2.block_sparse_moe.experts.6.w2 - model.layers.3.block_sparse_moe.experts.6.w2 - model.layers.1.block_sparse_moe.experts.6.w2 - model.layers.0.block_sparse_moe.experts.6.w2 - model.layers.6.block_sparse_moe.experts.6.w2 - model.layers.7.block_sparse_moe.experts.6.w2 - model.layers.8.block_sparse_moe.experts.6.w2 - model.layers.9.block_sparse_moe.experts.6.w2 - model.layers.10.block_sparse_moe.experts.6.w2 - model.layers.11.block_sparse_moe.experts.6.w2 - model.layers.12.block_sparse_moe.experts.6.w2 - model.layers.13.block_sparse_moe.experts.6.w2 - model.layers.5.block_sparse_moe.experts.6.w3 - model.layers.4.block_sparse_moe.experts.6.w3 - model.layers.3.block_sparse_moe.experts.6.w3 - model.layers.2.block_sparse_moe.experts.6.w3 - model.layers.1.block_sparse_moe.experts.6.w3 - model.layers.0.block_sparse_moe.experts.6.w3 - model.layers.6.block_sparse_moe.experts.6.w3 - model.layers.7.block_sparse_moe.experts.6.w3 - model.layers.8.block_sparse_moe.experts.6.w3 - model.layers.9.block_sparse_moe.experts.6.w3 - model.layers.10.block_sparse_moe.experts.6.w3 - model.layers.11.block_sparse_moe.experts.6.w3 - model.layers.12.block_sparse_moe.experts.6.w3 - model.layers.13.block_sparse_moe.experts.6.w3 - model.layers.5.block_sparse_moe.experts.7.w1 - model.layers.6.block_sparse_moe.experts.7.w1 - model.layers.3.block_sparse_moe.experts.7.w1 - model.layers.4.block_sparse_moe.experts.7.w1 - model.layers.2.block_sparse_moe.experts.7.w1 - model.layers.0.block_sparse_moe.experts.7.w1 - model.layers.7.block_sparse_moe.experts.7.w1 - model.layers.8.block_sparse_moe.experts.7.w1 - model.layers.9.block_sparse_moe.experts.7.w1 - model.layers.10.block_sparse_moe.experts.7.w1 - model.layers.11.block_sparse_moe.experts.7.w1 - model.layers.12.block_sparse_moe.experts.7.w1 - model.layers.13.block_sparse_moe.experts.7.w1 - model.layers.14.block_sparse_moe.experts.7.w1 - model.layers.6.block_sparse_moe.experts.7.w2 - model.layers.5.block_sparse_moe.experts.7.w2 - model.layers.4.block_sparse_moe.experts.7.w2 - model.layers.2.block_sparse_moe.experts.7.w2 - model.layers.3.block_sparse_moe.experts.7.w2 - model.layers.1.block_sparse_moe.experts.7.w2 - model.layers.0.block_sparse_moe.experts.7.w2 - model.layers.7.block_sparse_moe.experts.7.w2 - model.layers.8.block_sparse_moe.experts.7.w2 - model.layers.9.block_sparse_moe.experts.7.w2 - model.layers.10.block_sparse_moe.experts.7.w2 - 
model.layers.11.block_sparse_moe.experts.7.w2 - model.layers.12.block_sparse_moe.experts.7.w2 - model.layers.13.block_sparse_moe.experts.7.w2 - model.layers.6.block_sparse_moe.experts.7.w3 - model.layers.5.block_sparse_moe.experts.7.w3 - model.layers.4.block_sparse_moe.experts.7.w3 - model.layers.3.block_sparse_moe.experts.7.w3 - model.layers.2.block_sparse_moe.experts.7.w3 - model.layers.0.block_sparse_moe.experts.7.w3 - model.layers.7.block_sparse_moe.experts.7.w3 - model.layers.8.block_sparse_moe.experts.7.w3 - model.layers.9.block_sparse_moe.experts.7.w3 - model.layers.10.block_sparse_moe.experts.7.w3 - model.layers.11.block_sparse_moe.experts.7.w3 - model.layers.12.block_sparse_moe.experts.7.w3 - model.layers.13.block_sparse_moe.experts.7.w3 - model.layers.14.block_sparse_moe.experts.7.w3 - model.layers.0.block_sparse_moe.gate - model.layers.1.block_sparse_moe.gate - model.layers.2.block_sparse_moe.gate - model.layers.3.block_sparse_moe.gate - model.layers.4.block_sparse_moe.gate - model.layers.5.block_sparse_moe.gate - model.layers.6.block_sparse_moe.gate - model.layers.7.block_sparse_moe.gate - model.layers.8.block_sparse_moe.gate - model.layers.9.block_sparse_moe.gate - model.layers.10.block_sparse_moe.gate - model.layers.11.block_sparse_moe.gate - model.layers.12.block_sparse_moe.gate - model.layers.13.block_sparse_moe.gate model_config: output_router_logits: true datasets: - path: /workspace/datasets/dolphin-2.9/dolphin201-sharegpt2.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/Ultrachat200kunfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/dolphin-coder-translate-sharegpt2.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/dolphin-coder-codegen-sharegpt2.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/m-a-p_Code-Feedback-sharegpt-unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/m-a-p_CodeFeedback-Filtered-Instruction-sharegpt-unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/not_samantha_norefusals.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/Orca-Math-resort-unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/agent_instruct_react_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_instruct_j1s1_3k_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_negative_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_react_10p_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_tflan_cot_30p_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/openhermes200k_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/SystemConversations.jsonl type: sharegpt conversation: chatml chat_template: chatml dataset_prepared_path: thingy val_set_size: 0.0002 output_dir: ./out sequence_len: 4096 sample_packing: true pad_to_sequence_len: true gradient_accumulation_steps: 8 micro_batch_size: 4 num_epochs: 3 logging_steps: 1 optimizer: paged_adamw_8bit lr_scheduler: cosine learning_rate: 2.7e-5 wandb_project: dolphin-2.9-mixtral-8x22b wandb_watch: wandb_run_id: wandb_log_model: train_on_inputs: false group_by_length: false 
bf16: auto fp16: tf32: true gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: false early_stopping_patience: # resume_from_checkpoint: /home/ehartford/axolotl/out/checkpoint-316 local_rank: logging_steps: 1 xformers_attention: flash_attention: true saves_per_epoch: 8 save_total_limit: 2 save_steps: evals_per_epoch: 4 eval_sample_packing: false debug: deepspeed: deepspeed_configs/zero3_bf16_cpuoffload_params.json weight_decay: 0.05 fsdp: fsdp_config: special_tokens: eos_token: "<|im_end|>" tokens: - "<|im_start|>" ``` </details><br> ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.7022 | 0.0 | 1 | 0.6989 | | 0.5344 | 0.25 | 238 | 0.5138 | | 0.5204 | 0.5 | 476 | 0.5018 | | 0.5059 | 0.75 | 714 | 0.4951 | | 0.5112 | 1.0 | 952 | 0.4911 | | 0.4561 | 1.24 | 1190 | 0.4978 | | 0.478 | 1.49 | 1428 | 0.4935 | | 0.4714 | 1.74 | 1666 | 0.4899 | | 0.4626 | 1.99 | 1904 | 0.4861 | | 0.3675 | 2.22 | 2142 | 0.5240 | | 0.3595 | 2.47 | 2380 | 0.5229 | | 0.3438 | 2.72 | 2618 | 0.5217 | ### Framework versions - Transformers 4.40.0.dev0 - Pytorch 2.2.2+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
{"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer", "axolotl"], "datasets": ["cognitivecomputations/Dolphin-2.9", "teknium/OpenHermes-2.5", "m-a-p/CodeFeedback-Filtered-Instruction", "cognitivecomputations/dolphin-coder", "cognitivecomputations/samantha-data", "HuggingFaceH4/ultrachat_200k", "microsoft/orca-math-word-problems-200k", "abacusai/SystemChat-1.1", "Locutusque/function-calling-chatml", "internlm/Agent-FLAN"], "base_model": "mistral-community/Mixtral-8x22B-v0.1", "model-index": [{"name": "out", "results": []}]}
blockblockblock/dolphin-2.9-mixtral-8x22b-bpw3-exl2
null
[ "transformers", "safetensors", "mixtral", "text-generation", "generated_from_trainer", "axolotl", "conversational", "en", "dataset:cognitivecomputations/Dolphin-2.9", "dataset:teknium/OpenHermes-2.5", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:cognitivecomputations/dolphin-coder", "dataset:cognitivecomputations/samantha-data", "dataset:HuggingFaceH4/ultrachat_200k", "dataset:microsoft/orca-math-word-problems-200k", "dataset:abacusai/SystemChat-1.1", "dataset:Locutusque/function-calling-chatml", "dataset:internlm/Agent-FLAN", "base_model:mistral-community/Mixtral-8x22B-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "3-bit", "region:us" ]
null
2024-05-01T05:36:32+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #mixtral #text-generation #generated_from_trainer #axolotl #conversational #en #dataset-cognitivecomputations/Dolphin-2.9 #dataset-teknium/OpenHermes-2.5 #dataset-m-a-p/CodeFeedback-Filtered-Instruction #dataset-cognitivecomputations/dolphin-coder #dataset-cognitivecomputations/samantha-data #dataset-HuggingFaceH4/ultrachat_200k #dataset-microsoft/orca-math-word-problems-200k #dataset-abacusai/SystemChat-1.1 #dataset-Locutusque/function-calling-chatml #dataset-internlm/Agent-FLAN #base_model-mistral-community/Mixtral-8x22B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #3-bit #region-us
Dolphin 2.9 Mixtral 8x22b ========================= Curated and trained by Eric Hartford, Lucas Atkins, and Fernando Fernandes, and Cognitive Computations Discord: URL <img src="URL width="600" /> My appreciation for the sponsors of Dolphin 2.9: * Crusoe Cloud - provided excellent on-demand 8xH100 node This model is based on Dolphin-2.9-Mixtral-8x22b, and is Apache-2.0 licensed. The base model has 64k context, and the full-weight fine-tuning was with 4k sequence length. It took 1 week on 8xH100 provided by Crusoe Cloud This model was trained FFT on 50% parameters (targeted with Laser Scanner by Fernando Fernandes, David Golchinfar, Lucas Atkins, and Eric Hartford) , using ChatML prompt template format. example: Dolphin-2.9 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling. Dolphin is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones. Please read my blog post about uncensored models. URL You are responsible for any content you create using this model. Enjoy responsibly. Dolphin is licensed Apache 2.0. I grant permission for any use, including commercial, that falls within accordance with Apache-2.0 license. Dolphin was trained on data generated from GPT4, among other models. Evals ----- !image/png Training -------- <img src="URL alt="Built with Axolotl" width="200" height="32"/> See axolotl config axolotl version: '0.4.0' ### Training results ### Framework versions * Transformers 4.40.0.dev0 * Pytorch 2.2.2+cu121 * Datasets 2.15.0 * Tokenizers 0.15.0
[ "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0.dev0\n* Pytorch 2.2.2+cu121\n* Datasets 2.15.0\n* Tokenizers 0.15.0" ]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #generated_from_trainer #axolotl #conversational #en #dataset-cognitivecomputations/Dolphin-2.9 #dataset-teknium/OpenHermes-2.5 #dataset-m-a-p/CodeFeedback-Filtered-Instruction #dataset-cognitivecomputations/dolphin-coder #dataset-cognitivecomputations/samantha-data #dataset-HuggingFaceH4/ultrachat_200k #dataset-microsoft/orca-math-word-problems-200k #dataset-abacusai/SystemChat-1.1 #dataset-Locutusque/function-calling-chatml #dataset-internlm/Agent-FLAN #base_model-mistral-community/Mixtral-8x22B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #3-bit #region-us \n", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0.dev0\n* Pytorch 2.2.2+cu121\n* Datasets 2.15.0\n* Tokenizers 0.15.0" ]
[ 228, 5, 47 ]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #generated_from_trainer #axolotl #conversational #en #dataset-cognitivecomputations/Dolphin-2.9 #dataset-teknium/OpenHermes-2.5 #dataset-m-a-p/CodeFeedback-Filtered-Instruction #dataset-cognitivecomputations/dolphin-coder #dataset-cognitivecomputations/samantha-data #dataset-HuggingFaceH4/ultrachat_200k #dataset-microsoft/orca-math-word-problems-200k #dataset-abacusai/SystemChat-1.1 #dataset-Locutusque/function-calling-chatml #dataset-internlm/Agent-FLAN #base_model-mistral-community/Mixtral-8x22B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #3-bit #region-us \n### Training results### Framework versions\n\n\n* Transformers 4.40.0.dev0\n* Pytorch 2.2.2+cu121\n* Datasets 2.15.0\n* Tokenizers 0.15.0" ]
null
null
The OSTtoPSTAPP PDF Split Application helps users split a single PDF file into two files. The PDF Split Tool is a standalone application that is simple to use, and Adobe Reader does not need to be installed to split a PDF file. Splitting a specific PDF file takes only a few steps, and the software is safe and up to date. PDF files can be selected in two ways: file mode and folder mode. Folder mode lets users load multiple PDF files at once, while file mode loads one particular PDF file at a time. The application divides a PDF file without changing the file format or data structure; all data, pictures, charts, graphs, and tables are preserved. The program works with every Windows operating system version, including Windows 11, 10, 8.1, 8, 7, and all earlier versions. Get the license for unlimited usage and download the trial version for testing. Read More:- https://www.osttopstapp.com/pdf-split.html

{}
olivernoah/OSTtoPSTAPP-PDF-Split-Application
null
[ "region:us" ]
null
2024-05-01T05:38:37+00:00
[]
[]
TAGS #region-us
The OSTtoPSTAPP PDF Split Application helps users split a single PDF file into two files. The PDF Split Tool is a standalone application that is simple to use, and Adobe Reader does not need to be installed to split a PDF file. Splitting a specific PDF file takes only a few steps, and the software is safe and up to date. PDF files can be selected in two ways: file mode and folder mode. Folder mode lets users load multiple PDF files at once, while file mode loads one particular PDF file at a time. The application divides a PDF file without changing the file format or data structure; all data, pictures, charts, graphs, and tables are preserved. The program works with every Windows operating system version, including Windows 11, 10, 8.1, 8, 7, and all earlier versions. Get the license for unlimited usage and download the trial version for testing. Read More:- URL
[]
[ "TAGS\n#region-us \n" ]
[ 5 ]
[ "TAGS\n#region-us \n" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small En - MrOli This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Trelis/llm-lingo dataset. It achieves the following results on the evaluation set: - Loss: 0.0000 - Wer: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 1000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:---:| | 0.0 | 1000.0 | 1000 | 0.0000 | 0.0 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["Trelis/llm-lingo"], "metrics": ["wer"], "base_model": "openai/whisper-small", "model-index": [{"name": "Whisper Small En - MrOli", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Trelis/llm-lingo", "type": "Trelis/llm-lingo", "args": "config: En, split: test"}, "metrics": [{"type": "wer", "value": 0.0, "name": "Wer"}]}]}]}
PuspaKamal/Speech_recognition
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "en", "dataset:Trelis/llm-lingo", "base_model:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2024-05-01T05:38:49+00:00
[]
[ "en" ]
TAGS #transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #en #dataset-Trelis/llm-lingo #base_model-openai/whisper-small #license-apache-2.0 #model-index #endpoints_compatible #region-us
Whisper Small En - MrOli ======================== This model is a fine-tuned version of openai/whisper-small on the Trelis/llm-lingo dataset. It achieves the following results on the evaluation set: * Loss: 0.0000 * Wer: 0.0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * training\_steps: 1000 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 1000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #en #dataset-Trelis/llm-lingo #base_model-openai/whisper-small #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 1000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ 70, 126, 5, 44 ]
[ "TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #en #dataset-Trelis/llm-lingo #base_model-openai/whisper-small #license-apache-2.0 #model-index #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 1000\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
{"license": "apache-2.0", "library_name": "peft"}
SF-Foundation/Ein2-70B
null
[ "peft", "safetensors", "arxiv:1910.09700", "license:apache-2.0", "region:us" ]
null
2024-05-01T05:38:54+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #arxiv-1910.09700 #license-apache-2.0 #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.8.2
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.8.2" ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #license-apache-2.0 #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.8.2" ]
[ 30, 6, 4, 50, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5, 13 ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #license-apache-2.0 #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact### Framework versions\n\n- PEFT 0.8.2" ]
null
transformers
# Uploaded model - **Developed by:** harrygens - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
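A loading sketch assuming the Unsloth API advertised in its README (`FastLanguageModel`); the sequence length and 4-bit loading below are illustrative choices rather than values taken from this card.

```python
# Sketch only: loading an Unsloth-trained Llama-3 checkpoint; values are illustrative.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="harrygens/llama3-8b-lora-unsloth",
    max_seq_length=2048,   # illustrative
    dtype=None,            # auto-detect
    load_in_4bit=True,     # illustrative; fits on a single consumer GPU
)
FastLanguageModel.for_inference(model)  # switch to the faster inference path

inputs = tokenizer("Write a haiku about llamas.", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```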
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b"}
harrygens/llama3-8b-lora-unsloth
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T05:40:53+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: harrygens - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: harrygens\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: harrygens\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ 58, 73 ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b #license-apache-2.0 #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: harrygens\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) zephyr-7b-gemma-sft-v0.1 - bnb 8bits - Model creator: https://huggingface.co/HuggingFaceH4/ - Original model: https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-sft-v0.1/ Original model description: --- license: other license_name: gemma-terms-of-use license_link: https://ai.google.dev/gemma/terms base_model: google/gemma-7b tags: - alignment-handbook - trl - sft - generated_from_trainer - trl - sft - generated_from_trainer datasets: - HuggingFaceH4/deita-10k-v0-sft model-index: - name: zephyr-7b-gemma-sft results: [] language: - en --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zephyr-7b-gemma-sft This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on the HuggingFaceH4/deita-10k-v0-sft dataset. It achieves the following results on the evaluation set: - Loss: 0.9732 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 16 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.9482 | 1.0 | 299 | 0.9848 | | 0.8139 | 2.0 | 599 | 0.9610 | | 0.722 | 2.99 | 897 | 0.9732 | ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.1
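A sketch of loading an 8-bit bitsandbytes export such as this one, assuming the quantization config saved with the checkpoint is picked up by transformers; the repo id comes from this record, and a CUDA GPU with bitsandbytes installed is assumed.

```python
# Sketch only: the repo id comes from this record; bitsandbytes + CUDA assumed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/HuggingFaceH4_-_zephyr-7b-gemma-sft-v0.1-8bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# A checkpoint exported with a bitsandbytes 8-bit config stores that config in
# config.json, so a plain from_pretrained call restores the 8-bit weights.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "In one sentence, what does supervised fine-tuning do?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```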
{}
RichardErkhov/HuggingFaceH4_-_zephyr-7b-gemma-sft-v0.1-8bits
null
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-05-01T05:42:30+00:00
[]
[]
TAGS #transformers #safetensors #gemma #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models zephyr-7b-gemma-sft-v0.1 - bnb 8bits * Model creator: URL * Original model: URL Original model description: --------------------------- license: other license\_name: gemma-terms-of-use license\_link: URL base\_model: google/gemma-7b tags: * alignment-handbook * trl * sft * generated\_from\_trainer * trl * sft * generated\_from\_trainer datasets: * HuggingFaceH4/deita-10k-v0-sft model-index: * name: zephyr-7b-gemma-sft results: [] language: * en --- zephyr-7b-gemma-sft =================== This model is a fine-tuned version of google/gemma-7b on the HuggingFaceH4/deita-10k-v0-sft dataset. It achieves the following results on the evaluation set: * Loss: 0.9732 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * distributed\_type: multi-GPU * num\_devices: 16 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 128 * total\_eval\_batch\_size: 64 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.39.0.dev0 * Pytorch 2.1.2+cu121 * Datasets 2.14.6 * Tokenizers 0.15.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 16\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.1" ]
[ "TAGS\n#transformers #safetensors #gemma #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 16\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.1" ]
[ 40, 176, 5, 47 ]
[ "TAGS\n#transformers #safetensors #gemma #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 16\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3### Training results### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.1" ]
text-classification
setfit
Readme
{"language": ["en"], "license": "apache-2.0", "library_name": "setfit", "tags": ["endpoints", "text-classification-inference"], "pipeline_tag": "text-classification"}
YaMaMa421/testpublic
null
[ "setfit", "bert", "endpoints", "text-classification-inference", "text-classification", "en", "license:apache-2.0", "region:us" ]
null
2024-05-01T05:42:55+00:00
[]
[ "en" ]
TAGS #setfit #bert #endpoints #text-classification-inference #text-classification #en #license-apache-2.0 #region-us
Readme
[]
[ "TAGS\n#setfit #bert #endpoints #text-classification-inference #text-classification #en #license-apache-2.0 #region-us \n" ]
[ 34 ]
[ "TAGS\n#setfit #bert #endpoints #text-classification-inference #text-classification #en #license-apache-2.0 #region-us \n" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) zephyr-7b-gemma-v0.1 - bnb 4bits - Model creator: https://huggingface.co/HuggingFaceH4/ - Original model: https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-v0.1/ Original model description: --- license: other tags: - alignment-handbook - trl - dpo - generated_from_trainer base_model: HuggingFaceH4/zephyr-7b-gemma-sft-v0.1 datasets: - argilla/dpo-mix-7k license_name: gemma-terms-of-use license_link: https://ai.google.dev/gemma/terms pipeline_tag: text-generation model-index: - name: zephyr-7b-gemma results: - task: type: text-generation name: Text Generation dataset: name: MT-Bench type: unknown metrics: - type: unknown value: 7.81 name: score source: url: https://huggingface.co/spaces/lmsys/mt-bench - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 58.45 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-gemma-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 83.48 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-gemma-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 60.68 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-gemma-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 52.07 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-gemma-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 74.19 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-gemma-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 45.56 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-gemma-v0.1 name: Open LLM Leaderboard --- <img src="https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-v0.1/resolve/main/thumbnail.png" alt="Zephyr 7B Gemma Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # Model Card for Zephyr 7B Gemma Zephyr is a series of language models that are trained to act as helpful assistants. 
Zephyr 7B Gemma is the third model in the series, and is a fine-tuned version of [`google/gemma-7b`](https://huggingface.co/google/gemma-7b) that was trained on on a mix of publicly available, synthetic datasets using Direct Preference Optimization (DPO). You can reproduce the training of this model via the recipe provided in the [Alignment Handbook](https://github.com/huggingface/alignment-handbook). ## Model description - **Model type:** A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets. - **Language(s) (NLP):** Primarily English - **License:** Gemma Terms of Use - **Finetuned from model:** [google/gemma-7b](https://huggingface.co/google/gemma-7b) ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/huggingface/alignment-handbook - **Demo:** https://huggingface.co/spaces/HuggingFaceH4/zephyr-7b-gemma-chat ## Performance | Model |MT Bench⬇️|IFEval| |-----------------------------------------------------------------------|------:|------:| |[zephyr-7b-gemma-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-v0.1)| 7.81 | 28.76| |[zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) | 7.34 | 43.81| |[google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it) | 6.38 | 38.01| | Model |AGIEval|GPT4All|TruthfulQA|BigBench|Average ⬇️| |-----------------------------------------------------------------------|------:|------:|---------:|-------:|------:| |[zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) | 37.52| 71.77| 55.26| 39.77| 51.08| |[zephyr-7b-gemma-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-v0.1)| 34.22| 66.37| 52.19| 37.10| 47.47| |[mlabonne/Gemmalpaca-7B](https://huggingface.co/mlabonne/Gemmalpaca-7B)| 21.6 | 40.87| 44.85 | 30.49| 34.45| |[google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it) | 21.33| 40.84| 41.70| 30.25| 33.53| <details><summary>Details of AGIEval, GPT4All, TruthfulQA, BigBench </summary> ### AGIEval | Task |Version| Metric |Value| |Stderr| |------------------------------|------:|--------|----:|---|-----:| |agieval_aqua_rat | 0|acc |21.65|± | 2.59| | | |acc_norm|25.20|± | 2.73| |agieval_logiqa_en | 0|acc |34.72|± | 1.87| | | |acc_norm|35.94|± | 1.88| |agieval_lsat_ar | 0|acc |19.57|± | 2.62| | | |acc_norm|21.74|± | 2.73| |agieval_lsat_lr | 0|acc |30.59|± | 2.04| | | |acc_norm|32.55|± | 2.08| |agieval_lsat_rc | 0|acc |49.07|± | 3.05| | | |acc_norm|42.75|± | 3.02| |agieval_sat_en | 0|acc |54.85|± | 3.48| | | |acc_norm|53.40|± | 3.48| |agieval_sat_en_without_passage| 0|acc |37.38|± | 3.38| | | |acc_norm|33.98|± | 3.31| |agieval_sat_math | 0|acc |30.91|± | 3.12| | | |acc_norm|28.18|± | 3.04| Average: 34.22% ### GPT4All | Task |Version| Metric |Value| |Stderr| |-------------|------:|--------|----:|---|-----:| |arc_challenge| 0|acc |49.15|± | 1.46| | | |acc_norm|52.47|± | 1.46| |arc_easy | 0|acc |77.44|± | 0.86| | | |acc_norm|74.75|± | 0.89| |boolq | 1|acc |79.69|± | 0.70| |hellaswag | 0|acc |60.59|± | 0.49| | | |acc_norm|78.00|± | 0.41| |openbookqa | 0|acc |29.20|± | 2.04| | | |acc_norm|37.80|± | 2.17| |piqa | 0|acc |76.82|± | 0.98| | | |acc_norm|77.80|± | 0.97| |winogrande | 0|acc |64.09|± | 1.35| Average: 66.37% ### TruthfulQA | Task |Version|Metric|Value| |Stderr| |-------------|------:|------|----:|---|-----:| |truthfulqa_mc| 1|mc1 |35.74|± | 1.68| | | |mc2 |52.19|± | 1.59| Average: 52.19% ### Bigbench | Task |Version| Metric |Value| |Stderr| 
|------------------------------------------------|------:|---------------------|----:|---|-----:| |bigbench_causal_judgement | 0|multiple_choice_grade|53.68|± | 3.63| |bigbench_date_understanding | 0|multiple_choice_grade|59.89|± | 2.55| |bigbench_disambiguation_qa | 0|multiple_choice_grade|30.23|± | 2.86| |bigbench_geometric_shapes | 0|multiple_choice_grade|11.42|± | 1.68| | | |exact_str_match | 0.00|± | 0.00| |bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|28.40|± | 2.02| |bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|19.14|± | 1.49| |bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|44.67|± | 2.88| |bigbench_movie_recommendation | 0|multiple_choice_grade|26.80|± | 1.98| |bigbench_navigate | 0|multiple_choice_grade|50.00|± | 1.58| |bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|52.75|± | 1.12| |bigbench_ruin_names | 0|multiple_choice_grade|33.04|± | 2.22| |bigbench_salient_translation_error_detection | 0|multiple_choice_grade|33.37|± | 1.49| |bigbench_snarks | 0|multiple_choice_grade|48.62|± | 3.73| |bigbench_sports_understanding | 0|multiple_choice_grade|58.11|± | 1.57| |bigbench_temporal_sequences | 0|multiple_choice_grade|37.20|± | 1.53| |bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|20.08|± | 1.13| |bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|15.77|± | 0.87| |bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|44.67|± | 2.88| Average: 37.1% </details> ## Intended uses & limitations The model was initially fine-tuned on the [DEITA 10K](https://huggingface.co/datasets/HuggingFaceH4/deita-10k-v0-sft) dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT. We then further aligned the model with [🤗 TRL's](https://github.com/huggingface/trl) `DPOTrainer` on the [argilla/dpo-mix-7k](https://huggingface.co/datasets/argilla/dpo-mix-7k) dataset, which contains 7k prompts and model completions that are ranked by GPT-4. As a result, the model can be used for chat and you can check out our [demo](https://huggingface.co/spaces/HuggingFaceH4/zephyr-chat) to test its capabilities. Here's how you can run the model using the `pipeline()` function from 🤗 Transformers: ```python # pip install transformers>=4.38.2 # pip install accelerate import torch from transformers import pipeline pipe = pipeline( "text-generation", model="HuggingFaceH4/zephyr-7b-gemma-v0.1", device_map="auto", torch_dtype=torch.bfloat16, ) messages = [ { "role": "system", "content": "", # Model not yet trained for follow this }, {"role": "user", "content": "How many helicopters can a human eat in one sitting?"}, ] outputs = pipe( messages, max_new_tokens=128, do_sample=True, temperature=0.7, top_k=50, top_p=0.95, stop_sequence="<|im_end|>", ) print(outputs[0]["generated_text"][-1]["content"]) # It is not possible for a human to eat a helicopter in one sitting, as a # helicopter is a large and inedible machine. Helicopters are made of metal, # plastic, and other materials that are not meant to be consumed by humans. # Eating a helicopter would be extremely dangerous and would likely cause # serious health problems, including choking, suffocation, and poisoning. It is # important to only eat food that is safe and intended for human consumption. ``` ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. 
--> Zephyr 7B Gemma has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what the size and composition of the corpus was used to train the base model (`google/gemma-7b`), however it is likely to have included a mix of Web data and technical sources like books and code. See the [StarCoder2 model card](https://huggingface.co/bigcode/starcoder2-15b) for an example of this. ## Training and evaluation data This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-gemma-sft-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-sft-v0.1) on the argilla/dpo-mix-7k dataset. It achieves the following results on the evaluation set: - Loss: 0.4695 - Rewards/chosen: -3.3746 - Rewards/rejected: -4.9715 - Rewards/accuracies: 0.7188 - Rewards/margins: 1.5970 - Logps/rejected: -459.4853 - Logps/chosen: -429.9115 - Logits/rejected: 86.4684 - Logits/chosen: 92.8200 ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 2 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.1923 | 1.9 | 100 | 0.4736 | -3.4575 | -4.9556 | 0.75 | 1.4980 | -459.1662 | -431.5707 | 86.3863 | 92.7360 | ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.1 ## Citation Information If you find this model useful in your work, please consider citing the Zephyr technical report: ``` @misc{tunstall2023zephyr, title={Zephyr: Direct Distillation of LM Alignment}, author={Lewis Tunstall and Edward Beeching and Nathan Lambert and Nazneen Rajani and Kashif Rasul and Younes Belkada and Shengyi Huang and Leandro von Werra and Clémentine Fourrier and Nathan Habib and Nathan Sarrazin and Omar Sanseviero and Alexander M. Rush and Thomas Wolf}, year={2023}, eprint={2310.16944}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` You may also wish to cite the creators of this model as well: ``` @misc{zephyr_7b_gemma, author = {Lewis Tunstall and Philipp Schmid}, title = {Zephyr 7B Gemma}, year = {2024}, publisher = {Hugging Face}, journal = {Hugging Face repository}, howpublished = {\url{https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-v0.1}} } ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_HuggingFaceH4__zephyr-7b-gemma-v0.1) | Metric |Value| |---------------------------------|----:| |Avg. |62.41| |AI2 Reasoning Challenge (25-Shot)|58.45| |HellaSwag (10-Shot) |83.48| |MMLU (5-Shot) |60.68| |TruthfulQA (0-shot) |52.07| |Winogrande (5-shot) |74.19| |GSM8k (5-shot) |45.56|
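As an alternative to a pre-quantized "bnb 4bits" export like this one, a sketch of applying 4-bit bitsandbytes quantization at load time to the original checkpoint; the NF4, double-quant, and bfloat16 compute settings are common defaults, not values taken from this record.

```python
# Sketch only: quantization settings here are common defaults, not from this record.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",          # assumed; NF4 is the usual choice
    bnb_4bit_use_double_quant=True,     # assumed
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model_id = "HuggingFaceH4/zephyr-7b-gemma-v0.1"  # the original, unquantized checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```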
{}
RichardErkhov/HuggingFaceH4_-_zephyr-7b-gemma-v0.1-4bits
null
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:2310.16944", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-05-01T05:45:22+00:00
[ "2310.16944" ]
[]
TAGS #transformers #safetensors #gemma #text-generation #conversational #arxiv-2310.16944 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models zephyr-7b-gemma-v0.1 - bnb 4bits * Model creator: URL * Original model: URL Original model description: --------------------------- license: other tags: * alignment-handbook * trl * dpo * generated\_from\_trainer base\_model: HuggingFaceH4/zephyr-7b-gemma-sft-v0.1 datasets: * argilla/dpo-mix-7k license\_name: gemma-terms-of-use license\_link: URL pipeline\_tag: text-generation model-index: * name: zephyr-7b-gemma results: + task: type: text-generation name: Text Generation dataset: name: MT-Bench type: unknown metrics: - type: unknown value: 7.81 name: score source: url: URL + task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2\_arc config: ARC-Challenge split: test args: num\_few\_shot: 25 metrics: - type: acc\_norm value: 58.45 name: normalized accuracy source: url: URL name: Open LLM Leaderboard + task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num\_few\_shot: 10 metrics: - type: acc\_norm value: 83.48 name: normalized accuracy source: url: URL name: Open LLM Leaderboard + task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num\_few\_shot: 5 metrics: - type: acc value: 60.68 name: accuracy source: url: URL name: Open LLM Leaderboard + task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful\_qa config: multiple\_choice split: validation args: num\_few\_shot: 0 metrics: - type: mc2 value: 52.07 source: url: URL name: Open LLM Leaderboard + task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande\_xl split: validation args: num\_few\_shot: 5 metrics: - type: acc value: 74.19 name: accuracy source: url: URL name: Open LLM Leaderboard + task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num\_few\_shot: 5 metrics: - type: acc value: 45.56 name: accuracy source: url: URL name: Open LLM Leaderboard --- <img src="URL alt="Zephyr 7B Gemma Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> Model Card for Zephyr 7B Gemma ============================== Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr 7B Gemma is the third model in the series, and is a fine-tuned version of 'google/gemma-7b' that was trained on on a mix of publicly available, synthetic datasets using Direct Preference Optimization (DPO). You can reproduce the training of this model via the recipe provided in the Alignment Handbook. Model description ----------------- * Model type: A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets. * Language(s) (NLP): Primarily English * License: Gemma Terms of Use * Finetuned from model: google/gemma-7b ### Model Sources * Repository: URL * Demo: URL Performance ----------- Details of AGIEval, GPT4All, TruthfulQA, BigBench ### AGIEval Average: 34.22% ### GPT4All Average: 66.37% ### TruthfulQA Average: 52.19% ### Bigbench Average: 37.1% Intended uses & limitations --------------------------- The model was initially fine-tuned on the DEITA 10K dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT. 
We then further aligned the model with TRL's 'DPOTrainer' on the argilla/dpo-mix-7k dataset, which contains 7k prompts and model completions that are ranked by GPT-4. As a result, the model can be used for chat and you can check out our demo to test its capabilities. Here's how you can run the model using the 'pipeline()' function from Transformers: Bias, Risks, and Limitations ---------------------------- Zephyr 7B Gemma has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what the size and composition of the corpus was used to train the base model ('google/gemma-7b'), however it is likely to have included a mix of Web data and technical sources like books and code. See the StarCoder2 model card for an example of this. Training and evaluation data ---------------------------- This model is a fine-tuned version of HuggingFaceH4/zephyr-7b-gemma-sft-v0.1 on the argilla/dpo-mix-7k dataset. It achieves the following results on the evaluation set: * Loss: 0.4695 * Rewards/chosen: -3.3746 * Rewards/rejected: -4.9715 * Rewards/accuracies: 0.7188 * Rewards/margins: 1.5970 * Logps/rejected: -459.4853 * Logps/chosen: -429.9115 * Logits/rejected: 86.4684 * Logits/chosen: 92.8200 ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-07 * train\_batch\_size: 2 * eval\_batch\_size: 4 * seed: 42 * distributed\_type: multi-GPU * num\_devices: 8 * gradient\_accumulation\_steps: 8 * total\_train\_batch\_size: 128 * total\_eval\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.39.0.dev0 * Pytorch 2.1.2+cu121 * Datasets 2.14.6 * Tokenizers 0.15.1 If you find this model useful in your work, please consider citing the Zephyr technical report: You may also wish to cite the creators of this model as well: Open LLM Leaderboard Evaluation Results ======================================= Detailed results can be found here
[ "### Model Sources\n\n\n* Repository: URL\n* Demo: URL\n\n\nPerformance\n-----------\n\n\n\n\nDetails of AGIEval, GPT4All, TruthfulQA, BigBench", "### AGIEval\n\n\n\nAverage: 34.22%", "### GPT4All\n\n\n\nAverage: 66.37%", "### TruthfulQA\n\n\n\nAverage: 52.19%", "### Bigbench\n\n\n\nAverage: 37.1%\n\n\n\nIntended uses & limitations\n---------------------------\n\n\nThe model was initially fine-tuned on the DEITA 10K dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT.\nWe then further aligned the model with TRL's 'DPOTrainer' on the argilla/dpo-mix-7k dataset, which contains 7k prompts and model completions that are ranked by GPT-4. As a result, the model can be used for chat and you can check out our demo to test its capabilities.\n\n\nHere's how you can run the model using the 'pipeline()' function from Transformers:\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nZephyr 7B Gemma has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what the size and composition of the corpus was used to train the base model ('google/gemma-7b'), however it is likely to have included a mix of Web data and technical sources like books and code. See the StarCoder2 model card for an example of this.\n\n\nTraining and evaluation data\n----------------------------\n\n\nThis model is a fine-tuned version of HuggingFaceH4/zephyr-7b-gemma-sft-v0.1 on the argilla/dpo-mix-7k dataset.\n\n\nIt achieves the following results on the evaluation set:\n\n\n* Loss: 0.4695\n* Rewards/chosen: -3.3746\n* Rewards/rejected: -4.9715\n* Rewards/accuracies: 0.7188\n* Rewards/margins: 1.5970\n* Logps/rejected: -459.4853\n* Logps/chosen: -429.9115\n* Logits/rejected: 86.4684\n* Logits/chosen: 92.8200", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-07\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.1\n\n\nIf you find this model useful in your work, please consider citing the Zephyr technical report:\n\n\nYou may also wish to cite the creators of this model as well:\n\n\nOpen LLM Leaderboard Evaluation Results\n=======================================\n\n\nDetailed results can be found here" ]
[ "TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-2310.16944 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "### Model Sources\n\n\n* Repository: URL\n* Demo: URL\n\n\nPerformance\n-----------\n\n\n\n\nDetails of AGIEval, GPT4All, TruthfulQA, BigBench", "### AGIEval\n\n\n\nAverage: 34.22%", "### GPT4All\n\n\n\nAverage: 66.37%", "### TruthfulQA\n\n\n\nAverage: 52.19%", "### Bigbench\n\n\n\nAverage: 37.1%\n\n\n\nIntended uses & limitations\n---------------------------\n\n\nThe model was initially fine-tuned on the DEITA 10K dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT.\nWe then further aligned the model with TRL's 'DPOTrainer' on the argilla/dpo-mix-7k dataset, which contains 7k prompts and model completions that are ranked by GPT-4. As a result, the model can be used for chat and you can check out our demo to test its capabilities.\n\n\nHere's how you can run the model using the 'pipeline()' function from Transformers:\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nZephyr 7B Gemma has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what the size and composition of the corpus was used to train the base model ('google/gemma-7b'), however it is likely to have included a mix of Web data and technical sources like books and code. See the StarCoder2 model card for an example of this.\n\n\nTraining and evaluation data\n----------------------------\n\n\nThis model is a fine-tuned version of HuggingFaceH4/zephyr-7b-gemma-sft-v0.1 on the argilla/dpo-mix-7k dataset.\n\n\nIt achieves the following results on the evaluation set:\n\n\n* Loss: 0.4695\n* Rewards/chosen: -3.3746\n* Rewards/rejected: -4.9715\n* Rewards/accuracies: 0.7188\n* Rewards/margins: 1.5970\n* Logps/rejected: -459.4853\n* Logps/chosen: -429.9115\n* Logits/rejected: 86.4684\n* Logits/chosen: 92.8200", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-07\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.1\n\n\nIf you find this model useful in your work, please consider citing the Zephyr technical report:\n\n\nYou may also wish to cite the creators of this model as well:\n\n\nOpen LLM Leaderboard Evaluation Results\n=======================================\n\n\nDetailed results can be found here" ]
[ 50, 45, 12, 13, 12, 503, 176, 5, 133 ]
[ "TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-2310.16944 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n### Model Sources\n\n\n* Repository: URL\n* Demo: URL\n\n\nPerformance\n-----------\n\n\n\n\nDetails of AGIEval, GPT4All, TruthfulQA, BigBench### AGIEval\n\n\n\nAverage: 34.22%### GPT4All\n\n\n\nAverage: 66.37%### TruthfulQA\n\n\n\nAverage: 52.19%### Bigbench\n\n\n\nAverage: 37.1%\n\n\n\nIntended uses & limitations\n---------------------------\n\n\nThe model was initially fine-tuned on the DEITA 10K dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT.\nWe then further aligned the model with TRL's 'DPOTrainer' on the argilla/dpo-mix-7k dataset, which contains 7k prompts and model completions that are ranked by GPT-4. As a result, the model can be used for chat and you can check out our demo to test its capabilities.\n\n\nHere's how you can run the model using the 'pipeline()' function from Transformers:\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nZephyr 7B Gemma has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what the size and composition of the corpus was used to train the base model ('google/gemma-7b'), however it is likely to have included a mix of Web data and technical sources like books and code. See the StarCoder2 model card for an example of this.\n\n\nTraining and evaluation data\n----------------------------\n\n\nThis model is a fine-tuned version of HuggingFaceH4/zephyr-7b-gemma-sft-v0.1 on the argilla/dpo-mix-7k dataset.\n\n\nIt achieves the following results on the evaluation set:\n\n\n* Loss: 0.4695\n* Rewards/chosen: -3.3746\n* Rewards/rejected: -4.9715\n* Rewards/accuracies: 0.7188\n* Rewards/margins: 1.5970\n* Logps/rejected: -459.4853\n* Logps/chosen: -429.9115\n* Logits/rejected: 86.4684\n* Logits/chosen: 92.8200### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-07\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 2### Training results### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.1\n\n\nIf you find this model useful in your work, please consider citing the Zephyr technical report:\n\n\nYou may also wish to cite the creators of this model as well:\n\n\nOpen LLM Leaderboard Evaluation Results\n=======================================\n\n\nDetailed results can be found here" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # auravaces/nsut-bert-pronoun-coreference This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0622 - Validation Loss: 0.0964 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 375, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.1263 | 0.0988 | 0 | | 0.0777 | 0.0943 | 1 | | 0.0622 | 0.0964 | 2 | ### Framework versions - Transformers 4.40.1 - TensorFlow 2.15.0 - Datasets 2.19.0 - Tokenizers 0.19.1
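A sketch of how the Keras optimizer listed above might be reconstructed with transformers' `create_optimizer` helper; the learning rate, decay steps, and weight decay come from the hyperparameter block, while `num_labels` and the zero warmup are placeholders.

```python
# Sketch only: rebuild the AdamWeightDecay + polynomial-decay schedule listed above.
import tensorflow as tf
from transformers import TFAutoModelForTokenClassification, create_optimizer

tf.keras.mixed_precision.set_global_policy("mixed_float16")  # training_precision above

optimizer, lr_schedule = create_optimizer(
    init_lr=3e-5,            # initial_learning_rate from the card
    num_train_steps=375,     # decay_steps from the card
    num_warmup_steps=0,      # assumed; no warmup is listed
    weight_decay_rate=0.01,  # weight_decay_rate from the card
)

# num_labels is a placeholder; the card does not state the label set.
model = TFAutoModelForTokenClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.compile(optimizer=optimizer)  # TF Transformers models can use their internal loss
```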
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "bert-base-uncased", "model-index": [{"name": "auravaces/nsut-bert-pronoun-coreference", "results": []}]}
auravaces/nsut-bert-pronoun-coreference
null
[ "transformers", "tf", "bert", "token-classification", "generated_from_keras_callback", "base_model:bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-01T05:45:27+00:00
[]
[]
TAGS #transformers #tf #bert #token-classification #generated_from_keras_callback #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
auravaces/nsut-bert-pronoun-coreference ======================================= This model is a fine-tuned version of bert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set: * Train Loss: 0.0622 * Validation Loss: 0.0964 * Epoch: 2 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * optimizer: {'name': 'AdamWeightDecay', 'learning\_rate': {'module': 'keras.optimizers.schedules', 'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 3e-05, 'decay\_steps': 375, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\_name': None}, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\_decay\_rate': 0.01} * training\_precision: mixed\_float16 ### Training results ### Framework versions * Transformers 4.40.1 * TensorFlow 2.15.0 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 3e-05, 'decay\\_steps': 375, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: mixed\\_float16", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tf #bert #token-classification #generated_from_keras_callback #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 3e-05, 'decay\\_steps': 375, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: mixed\\_float16", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ 55, 222, 5, 38 ]
[ "TAGS\n#transformers #tf #bert #token-classification #generated_from_keras_callback #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 3e-05, 'decay\\_steps': 375, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: mixed\\_float16### Training results### Framework versions\n\n\n* Transformers 4.40.1\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
tsuneakikato/bert-base-japanese-v3-wrime
null
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-01T05:46:11+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 37, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
text-generation
transformers
# Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage
```python
from transformers import (
    AutoConfig,
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    BitsAndBytesConfig,
)
from peft import PeftModel, PeftConfig

# Load the LoRA adapter config and the Llama 3 base model, then attach the adapter.
config = PeftConfig.from_pretrained("trottdw/CE104FinalLlama3Orca")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")
model = model.to('cuda:0')
lora_model = PeftModel.from_pretrained(model, "trottdw/CE104FinalLlama3Orca")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B", use_fast=True)

def prompt(text, tmp):
    # Tokenize the prompt, generate with the adapted model, and return the decoded text.
    model_inputs = tokenizer(text, return_tensors="pt").to("cuda:0")
    output = lora_model.generate(**model_inputs, temperature=tmp)
    return tokenizer.decode(output[0], skip_special_tokens=True)

print(prompt('Describe the solar system', 0.7))
```
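Note that the snippet above calls `generate` with only `temperature`; with greedy decoding (the default when `do_sample` is not set), temperature is typically ignored and the output length falls back to the model's defaults. A minimal sketch of making these settings explicit, reusing the `tokenizer` and `lora_model` objects defined above — the prompt string and the 256-token cap are illustrative assumptions, not values from the card:

```python
# Illustrative generation settings; reuses `tokenizer` and `lora_model` from the snippet above.
model_inputs = tokenizer("Describe the solar system", return_tensors="pt").to("cuda:0")
output = lora_model.generate(
    **model_inputs,
    max_new_tokens=256,  # cap the length of the generated continuation
    do_sample=True,      # enable sampling so that temperature takes effect
    temperature=0.7,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```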
{"license": "other", "library_name": "transformers", "tags": ["autotrain", "text-generation-inference", "text-generation", "peft"], "widget": [{"messages": [{"role": "user", "content": "What is your favorite condiment?"}]}]}
trottdw/CE104FinalLlama3Orca
null
[ "transformers", "safetensors", "autotrain", "text-generation-inference", "text-generation", "peft", "conversational", "license:other", "endpoints_compatible", "region:us" ]
null
2024-05-01T05:46:46+00:00
[]
[]
TAGS #transformers #safetensors #autotrain #text-generation-inference #text-generation #peft #conversational #license-other #endpoints_compatible #region-us
# Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit AutoTrain. # Usage
[ "# Model Trained Using AutoTrain\n\nThis model was trained using AutoTrain. For more information, please visit AutoTrain.", "# Usage" ]
[ "TAGS\n#transformers #safetensors #autotrain #text-generation-inference #text-generation #peft #conversational #license-other #endpoints_compatible #region-us \n", "# Model Trained Using AutoTrain\n\nThis model was trained using AutoTrain. For more information, please visit AutoTrain.", "# Usage" ]
[ 39, 23, 2 ]
[ "TAGS\n#transformers #safetensors #autotrain #text-generation-inference #text-generation #peft #conversational #license-other #endpoints_compatible #region-us \n# Model Trained Using AutoTrain\n\nThis model was trained using AutoTrain. For more information, please visit AutoTrain.# Usage" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) zephyr-7b-gemma-v0.1 - bnb 8bits - Model creator: https://huggingface.co/HuggingFaceH4/ - Original model: https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-v0.1/ Original model description: --- license: other tags: - alignment-handbook - trl - dpo - generated_from_trainer base_model: HuggingFaceH4/zephyr-7b-gemma-sft-v0.1 datasets: - argilla/dpo-mix-7k license_name: gemma-terms-of-use license_link: https://ai.google.dev/gemma/terms pipeline_tag: text-generation model-index: - name: zephyr-7b-gemma results: - task: type: text-generation name: Text Generation dataset: name: MT-Bench type: unknown metrics: - type: unknown value: 7.81 name: score source: url: https://huggingface.co/spaces/lmsys/mt-bench - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 58.45 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-gemma-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 83.48 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-gemma-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 60.68 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-gemma-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 52.07 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-gemma-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 74.19 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-gemma-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 45.56 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-gemma-v0.1 name: Open LLM Leaderboard --- <img src="https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-v0.1/resolve/main/thumbnail.png" alt="Zephyr 7B Gemma Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # Model Card for Zephyr 7B Gemma Zephyr is a series of language models that are trained to act as helpful assistants. 
Zephyr 7B Gemma is the third model in the series, and is a fine-tuned version of [`google/gemma-7b`](https://huggingface.co/google/gemma-7b) that was trained on on a mix of publicly available, synthetic datasets using Direct Preference Optimization (DPO). You can reproduce the training of this model via the recipe provided in the [Alignment Handbook](https://github.com/huggingface/alignment-handbook). ## Model description - **Model type:** A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets. - **Language(s) (NLP):** Primarily English - **License:** Gemma Terms of Use - **Finetuned from model:** [google/gemma-7b](https://huggingface.co/google/gemma-7b) ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/huggingface/alignment-handbook - **Demo:** https://huggingface.co/spaces/HuggingFaceH4/zephyr-7b-gemma-chat ## Performance | Model |MT Bench⬇️|IFEval| |-----------------------------------------------------------------------|------:|------:| |[zephyr-7b-gemma-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-v0.1)| 7.81 | 28.76| |[zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) | 7.34 | 43.81| |[google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it) | 6.38 | 38.01| | Model |AGIEval|GPT4All|TruthfulQA|BigBench|Average ⬇️| |-----------------------------------------------------------------------|------:|------:|---------:|-------:|------:| |[zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) | 37.52| 71.77| 55.26| 39.77| 51.08| |[zephyr-7b-gemma-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-v0.1)| 34.22| 66.37| 52.19| 37.10| 47.47| |[mlabonne/Gemmalpaca-7B](https://huggingface.co/mlabonne/Gemmalpaca-7B)| 21.6 | 40.87| 44.85 | 30.49| 34.45| |[google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it) | 21.33| 40.84| 41.70| 30.25| 33.53| <details><summary>Details of AGIEval, GPT4All, TruthfulQA, BigBench </summary> ### AGIEval | Task |Version| Metric |Value| |Stderr| |------------------------------|------:|--------|----:|---|-----:| |agieval_aqua_rat | 0|acc |21.65|± | 2.59| | | |acc_norm|25.20|± | 2.73| |agieval_logiqa_en | 0|acc |34.72|± | 1.87| | | |acc_norm|35.94|± | 1.88| |agieval_lsat_ar | 0|acc |19.57|± | 2.62| | | |acc_norm|21.74|± | 2.73| |agieval_lsat_lr | 0|acc |30.59|± | 2.04| | | |acc_norm|32.55|± | 2.08| |agieval_lsat_rc | 0|acc |49.07|± | 3.05| | | |acc_norm|42.75|± | 3.02| |agieval_sat_en | 0|acc |54.85|± | 3.48| | | |acc_norm|53.40|± | 3.48| |agieval_sat_en_without_passage| 0|acc |37.38|± | 3.38| | | |acc_norm|33.98|± | 3.31| |agieval_sat_math | 0|acc |30.91|± | 3.12| | | |acc_norm|28.18|± | 3.04| Average: 34.22% ### GPT4All | Task |Version| Metric |Value| |Stderr| |-------------|------:|--------|----:|---|-----:| |arc_challenge| 0|acc |49.15|± | 1.46| | | |acc_norm|52.47|± | 1.46| |arc_easy | 0|acc |77.44|± | 0.86| | | |acc_norm|74.75|± | 0.89| |boolq | 1|acc |79.69|± | 0.70| |hellaswag | 0|acc |60.59|± | 0.49| | | |acc_norm|78.00|± | 0.41| |openbookqa | 0|acc |29.20|± | 2.04| | | |acc_norm|37.80|± | 2.17| |piqa | 0|acc |76.82|± | 0.98| | | |acc_norm|77.80|± | 0.97| |winogrande | 0|acc |64.09|± | 1.35| Average: 66.37% ### TruthfulQA | Task |Version|Metric|Value| |Stderr| |-------------|------:|------|----:|---|-----:| |truthfulqa_mc| 1|mc1 |35.74|± | 1.68| | | |mc2 |52.19|± | 1.59| Average: 52.19% ### Bigbench | Task |Version| Metric |Value| |Stderr| 
|------------------------------------------------|------:|---------------------|----:|---|-----:| |bigbench_causal_judgement | 0|multiple_choice_grade|53.68|± | 3.63| |bigbench_date_understanding | 0|multiple_choice_grade|59.89|± | 2.55| |bigbench_disambiguation_qa | 0|multiple_choice_grade|30.23|± | 2.86| |bigbench_geometric_shapes | 0|multiple_choice_grade|11.42|± | 1.68| | | |exact_str_match | 0.00|± | 0.00| |bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|28.40|± | 2.02| |bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|19.14|± | 1.49| |bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|44.67|± | 2.88| |bigbench_movie_recommendation | 0|multiple_choice_grade|26.80|± | 1.98| |bigbench_navigate | 0|multiple_choice_grade|50.00|± | 1.58| |bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|52.75|± | 1.12| |bigbench_ruin_names | 0|multiple_choice_grade|33.04|± | 2.22| |bigbench_salient_translation_error_detection | 0|multiple_choice_grade|33.37|± | 1.49| |bigbench_snarks | 0|multiple_choice_grade|48.62|± | 3.73| |bigbench_sports_understanding | 0|multiple_choice_grade|58.11|± | 1.57| |bigbench_temporal_sequences | 0|multiple_choice_grade|37.20|± | 1.53| |bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|20.08|± | 1.13| |bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|15.77|± | 0.87| |bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|44.67|± | 2.88| Average: 37.1% </details> ## Intended uses & limitations The model was initially fine-tuned on the [DEITA 10K](https://huggingface.co/datasets/HuggingFaceH4/deita-10k-v0-sft) dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT. We then further aligned the model with [🤗 TRL's](https://github.com/huggingface/trl) `DPOTrainer` on the [argilla/dpo-mix-7k](https://huggingface.co/datasets/argilla/dpo-mix-7k) dataset, which contains 7k prompts and model completions that are ranked by GPT-4. As a result, the model can be used for chat and you can check out our [demo](https://huggingface.co/spaces/HuggingFaceH4/zephyr-chat) to test its capabilities. Here's how you can run the model using the `pipeline()` function from 🤗 Transformers: ```python # pip install transformers>=4.38.2 # pip install accelerate import torch from transformers import pipeline pipe = pipeline( "text-generation", model="HuggingFaceH4/zephyr-7b-gemma-v0.1", device_map="auto", torch_dtype=torch.bfloat16, ) messages = [ { "role": "system", "content": "", # Model not yet trained for follow this }, {"role": "user", "content": "How many helicopters can a human eat in one sitting?"}, ] outputs = pipe( messages, max_new_tokens=128, do_sample=True, temperature=0.7, top_k=50, top_p=0.95, stop_sequence="<|im_end|>", ) print(outputs[0]["generated_text"][-1]["content"]) # It is not possible for a human to eat a helicopter in one sitting, as a # helicopter is a large and inedible machine. Helicopters are made of metal, # plastic, and other materials that are not meant to be consumed by humans. # Eating a helicopter would be extremely dangerous and would likely cause # serious health problems, including choking, suffocation, and poisoning. It is # important to only eat food that is safe and intended for human consumption. ``` ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. 
--> Zephyr 7B Gemma has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what the size and composition of the corpus was used to train the base model (`google/gemma-7b`), however it is likely to have included a mix of Web data and technical sources like books and code. See the [StarCoder2 model card](https://huggingface.co/bigcode/starcoder2-15b) for an example of this. ## Training and evaluation data This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-gemma-sft-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-sft-v0.1) on the argilla/dpo-mix-7k dataset. It achieves the following results on the evaluation set: - Loss: 0.4695 - Rewards/chosen: -3.3746 - Rewards/rejected: -4.9715 - Rewards/accuracies: 0.7188 - Rewards/margins: 1.5970 - Logps/rejected: -459.4853 - Logps/chosen: -429.9115 - Logits/rejected: 86.4684 - Logits/chosen: 92.8200 ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 2 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.1923 | 1.9 | 100 | 0.4736 | -3.4575 | -4.9556 | 0.75 | 1.4980 | -459.1662 | -431.5707 | 86.3863 | 92.7360 | ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.1 ## Citation Information If you find this model useful in your work, please consider citing the Zephyr technical report: ``` @misc{tunstall2023zephyr, title={Zephyr: Direct Distillation of LM Alignment}, author={Lewis Tunstall and Edward Beeching and Nathan Lambert and Nazneen Rajani and Kashif Rasul and Younes Belkada and Shengyi Huang and Leandro von Werra and Clémentine Fourrier and Nathan Habib and Nathan Sarrazin and Omar Sanseviero and Alexander M. Rush and Thomas Wolf}, year={2023}, eprint={2310.16944}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` You may also wish to cite the creators of this model as well: ``` @misc{zephyr_7b_gemma, author = {Lewis Tunstall and Philipp Schmid}, title = {Zephyr 7B Gemma}, year = {2024}, publisher = {Hugging Face}, journal = {Hugging Face repository}, howpublished = {\url{https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-v0.1}} } ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_HuggingFaceH4__zephyr-7b-gemma-v0.1) | Metric |Value| |---------------------------------|----:| |Avg. |62.41| |AI2 Reasoning Challenge (25-Shot)|58.45| |HellaSwag (10-Shot) |83.48| |MMLU (5-Shot) |60.68| |TruthfulQA (0-shot) |52.07| |Winogrande (5-shot) |74.19| |GSM8k (5-shot) |45.56|
{}
RichardErkhov/HuggingFaceH4_-_zephyr-7b-gemma-v0.1-8bits
null
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:2310.16944", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-05-01T05:53:49+00:00
[ "2310.16944" ]
[]
TAGS #transformers #safetensors #gemma #text-generation #conversational #arxiv-2310.16944 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models zephyr-7b-gemma-v0.1 - bnb 8bits * Model creator: URL * Original model: URL Original model description: --------------------------- license: other tags: * alignment-handbook * trl * dpo * generated\_from\_trainer base\_model: HuggingFaceH4/zephyr-7b-gemma-sft-v0.1 datasets: * argilla/dpo-mix-7k license\_name: gemma-terms-of-use license\_link: URL pipeline\_tag: text-generation model-index: * name: zephyr-7b-gemma results: + task: type: text-generation name: Text Generation dataset: name: MT-Bench type: unknown metrics: - type: unknown value: 7.81 name: score source: url: URL + task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2\_arc config: ARC-Challenge split: test args: num\_few\_shot: 25 metrics: - type: acc\_norm value: 58.45 name: normalized accuracy source: url: URL name: Open LLM Leaderboard + task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num\_few\_shot: 10 metrics: - type: acc\_norm value: 83.48 name: normalized accuracy source: url: URL name: Open LLM Leaderboard + task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num\_few\_shot: 5 metrics: - type: acc value: 60.68 name: accuracy source: url: URL name: Open LLM Leaderboard + task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful\_qa config: multiple\_choice split: validation args: num\_few\_shot: 0 metrics: - type: mc2 value: 52.07 source: url: URL name: Open LLM Leaderboard + task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande\_xl split: validation args: num\_few\_shot: 5 metrics: - type: acc value: 74.19 name: accuracy source: url: URL name: Open LLM Leaderboard + task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num\_few\_shot: 5 metrics: - type: acc value: 45.56 name: accuracy source: url: URL name: Open LLM Leaderboard --- <img src="URL alt="Zephyr 7B Gemma Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> Model Card for Zephyr 7B Gemma ============================== Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr 7B Gemma is the third model in the series, and is a fine-tuned version of 'google/gemma-7b' that was trained on on a mix of publicly available, synthetic datasets using Direct Preference Optimization (DPO). You can reproduce the training of this model via the recipe provided in the Alignment Handbook. Model description ----------------- * Model type: A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets. * Language(s) (NLP): Primarily English * License: Gemma Terms of Use * Finetuned from model: google/gemma-7b ### Model Sources * Repository: URL * Demo: URL Performance ----------- Details of AGIEval, GPT4All, TruthfulQA, BigBench ### AGIEval Average: 34.22% ### GPT4All Average: 66.37% ### TruthfulQA Average: 52.19% ### Bigbench Average: 37.1% Intended uses & limitations --------------------------- The model was initially fine-tuned on the DEITA 10K dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT. 
We then further aligned the model with TRL's 'DPOTrainer' on the argilla/dpo-mix-7k dataset, which contains 7k prompts and model completions that are ranked by GPT-4. As a result, the model can be used for chat and you can check out our demo to test its capabilities. Here's how you can run the model using the 'pipeline()' function from Transformers: Bias, Risks, and Limitations ---------------------------- Zephyr 7B Gemma has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what the size and composition of the corpus was used to train the base model ('google/gemma-7b'), however it is likely to have included a mix of Web data and technical sources like books and code. See the StarCoder2 model card for an example of this. Training and evaluation data ---------------------------- This model is a fine-tuned version of HuggingFaceH4/zephyr-7b-gemma-sft-v0.1 on the argilla/dpo-mix-7k dataset. It achieves the following results on the evaluation set: * Loss: 0.4695 * Rewards/chosen: -3.3746 * Rewards/rejected: -4.9715 * Rewards/accuracies: 0.7188 * Rewards/margins: 1.5970 * Logps/rejected: -459.4853 * Logps/chosen: -429.9115 * Logits/rejected: 86.4684 * Logits/chosen: 92.8200 ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-07 * train\_batch\_size: 2 * eval\_batch\_size: 4 * seed: 42 * distributed\_type: multi-GPU * num\_devices: 8 * gradient\_accumulation\_steps: 8 * total\_train\_batch\_size: 128 * total\_eval\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.39.0.dev0 * Pytorch 2.1.2+cu121 * Datasets 2.14.6 * Tokenizers 0.15.1 If you find this model useful in your work, please consider citing the Zephyr technical report: You may also wish to cite the creators of this model as well: Open LLM Leaderboard Evaluation Results ======================================= Detailed results can be found here
[ "### Model Sources\n\n\n* Repository: URL\n* Demo: URL\n\n\nPerformance\n-----------\n\n\n\n\nDetails of AGIEval, GPT4All, TruthfulQA, BigBench", "### AGIEval\n\n\n\nAverage: 34.22%", "### GPT4All\n\n\n\nAverage: 66.37%", "### TruthfulQA\n\n\n\nAverage: 52.19%", "### Bigbench\n\n\n\nAverage: 37.1%\n\n\n\nIntended uses & limitations\n---------------------------\n\n\nThe model was initially fine-tuned on the DEITA 10K dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT.\nWe then further aligned the model with TRL's 'DPOTrainer' on the argilla/dpo-mix-7k dataset, which contains 7k prompts and model completions that are ranked by GPT-4. As a result, the model can be used for chat and you can check out our demo to test its capabilities.\n\n\nHere's how you can run the model using the 'pipeline()' function from Transformers:\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nZephyr 7B Gemma has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what the size and composition of the corpus was used to train the base model ('google/gemma-7b'), however it is likely to have included a mix of Web data and technical sources like books and code. See the StarCoder2 model card for an example of this.\n\n\nTraining and evaluation data\n----------------------------\n\n\nThis model is a fine-tuned version of HuggingFaceH4/zephyr-7b-gemma-sft-v0.1 on the argilla/dpo-mix-7k dataset.\n\n\nIt achieves the following results on the evaluation set:\n\n\n* Loss: 0.4695\n* Rewards/chosen: -3.3746\n* Rewards/rejected: -4.9715\n* Rewards/accuracies: 0.7188\n* Rewards/margins: 1.5970\n* Logps/rejected: -459.4853\n* Logps/chosen: -429.9115\n* Logits/rejected: 86.4684\n* Logits/chosen: 92.8200", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-07\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.1\n\n\nIf you find this model useful in your work, please consider citing the Zephyr technical report:\n\n\nYou may also wish to cite the creators of this model as well:\n\n\nOpen LLM Leaderboard Evaluation Results\n=======================================\n\n\nDetailed results can be found here" ]
[ "TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-2310.16944 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n", "### Model Sources\n\n\n* Repository: URL\n* Demo: URL\n\n\nPerformance\n-----------\n\n\n\n\nDetails of AGIEval, GPT4All, TruthfulQA, BigBench", "### AGIEval\n\n\n\nAverage: 34.22%", "### GPT4All\n\n\n\nAverage: 66.37%", "### TruthfulQA\n\n\n\nAverage: 52.19%", "### Bigbench\n\n\n\nAverage: 37.1%\n\n\n\nIntended uses & limitations\n---------------------------\n\n\nThe model was initially fine-tuned on the DEITA 10K dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT.\nWe then further aligned the model with TRL's 'DPOTrainer' on the argilla/dpo-mix-7k dataset, which contains 7k prompts and model completions that are ranked by GPT-4. As a result, the model can be used for chat and you can check out our demo to test its capabilities.\n\n\nHere's how you can run the model using the 'pipeline()' function from Transformers:\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nZephyr 7B Gemma has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what the size and composition of the corpus was used to train the base model ('google/gemma-7b'), however it is likely to have included a mix of Web data and technical sources like books and code. See the StarCoder2 model card for an example of this.\n\n\nTraining and evaluation data\n----------------------------\n\n\nThis model is a fine-tuned version of HuggingFaceH4/zephyr-7b-gemma-sft-v0.1 on the argilla/dpo-mix-7k dataset.\n\n\nIt achieves the following results on the evaluation set:\n\n\n* Loss: 0.4695\n* Rewards/chosen: -3.3746\n* Rewards/rejected: -4.9715\n* Rewards/accuracies: 0.7188\n* Rewards/margins: 1.5970\n* Logps/rejected: -459.4853\n* Logps/chosen: -429.9115\n* Logits/rejected: 86.4684\n* Logits/chosen: 92.8200", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-07\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.1\n\n\nIf you find this model useful in your work, please consider citing the Zephyr technical report:\n\n\nYou may also wish to cite the creators of this model as well:\n\n\nOpen LLM Leaderboard Evaluation Results\n=======================================\n\n\nDetailed results can be found here" ]
[ 50, 45, 12, 13, 12, 503, 176, 5, 133 ]
[ "TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-2310.16944 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n### Model Sources\n\n\n* Repository: URL\n* Demo: URL\n\n\nPerformance\n-----------\n\n\n\n\nDetails of AGIEval, GPT4All, TruthfulQA, BigBench### AGIEval\n\n\n\nAverage: 34.22%### GPT4All\n\n\n\nAverage: 66.37%### TruthfulQA\n\n\n\nAverage: 52.19%### Bigbench\n\n\n\nAverage: 37.1%\n\n\n\nIntended uses & limitations\n---------------------------\n\n\nThe model was initially fine-tuned on the DEITA 10K dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT.\nWe then further aligned the model with TRL's 'DPOTrainer' on the argilla/dpo-mix-7k dataset, which contains 7k prompts and model completions that are ranked by GPT-4. As a result, the model can be used for chat and you can check out our demo to test its capabilities.\n\n\nHere's how you can run the model using the 'pipeline()' function from Transformers:\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nZephyr 7B Gemma has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what the size and composition of the corpus was used to train the base model ('google/gemma-7b'), however it is likely to have included a mix of Web data and technical sources like books and code. See the StarCoder2 model card for an example of this.\n\n\nTraining and evaluation data\n----------------------------\n\n\nThis model is a fine-tuned version of HuggingFaceH4/zephyr-7b-gemma-sft-v0.1 on the argilla/dpo-mix-7k dataset.\n\n\nIt achieves the following results on the evaluation set:\n\n\n* Loss: 0.4695\n* Rewards/chosen: -3.3746\n* Rewards/rejected: -4.9715\n* Rewards/accuracies: 0.7188\n* Rewards/margins: 1.5970\n* Logps/rejected: -459.4853\n* Logps/chosen: -429.9115\n* Logits/rejected: 86.4684\n* Logits/chosen: 92.8200### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-07\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 2### Training results### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.1\n\n\nIf you find this model useful in your work, please consider citing the Zephyr technical report:\n\n\nYou may also wish to cite the creators of this model as well:\n\n\nOpen LLM Leaderboard Evaluation Results\n=======================================\n\n\nDetailed results can be found here" ]
reinforcement-learning
ml-agents
# **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how works ML-Agents: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: ArnavModanwal/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
{"library_name": "ml-agents", "tags": ["Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy"]}
ArnavModanwal/ppo-Huggy
null
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
null
2024-05-01T05:56:27+00:00
[]
[]
TAGS #ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us
# ppo Agent playing Huggy This is a trained model of a ppo agent playing Huggy using the Unity ML-Agents Library. ## Usage (with ML-Agents) The Documentation: URL We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your browser: URL - A *longer tutorial* to understand how works ML-Agents: URL ### Resume the training ### Watch your Agent play You can watch your agent playing directly in your browser 1. If the environment is part of ML-Agents official environments, go to URL 2. Step 1: Find your model_id: ArnavModanwal/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play
[ "# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: ArnavModanwal/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
[ "TAGS\n#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us \n", "# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: ArnavModanwal/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
[ 35, 202 ]
[ "TAGS\n#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us \n# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: ArnavModanwal/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-3 This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 3 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
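As a rough illustration (not taken from the original training code), the hyperparameters listed above correspond to a `TrainingArguments` configuration along these lines; `output_dir` and anything not listed in the card are placeholders:

```python
from transformers import TrainingArguments

# Sketch of the listed hyperparameters; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="robust_llm_pythia-1b_mz-135_WordLength",  # placeholder
    learning_rate=1e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=64,
    seed=3,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
)
```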
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-1b", "model-index": [{"name": "robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-3", "results": []}]}
AlignmentResearch/robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-3
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-1b", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T06:00:05+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-1b #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-3 This model is a fine-tuned version of EleutherAI/pythia-1b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 3 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-3\n\nThis model is a fine-tuned version of EleutherAI/pythia-1b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 3\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-1b #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-3\n\nThis model is a fine-tuned version of EleutherAI/pythia-1b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 3\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ 69, 56, 7, 9, 9, 4, 93, 5, 40 ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-1b #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-3\n\nThis model is a fine-tuned version of EleutherAI/pythia-1b on an unknown dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 3\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1### Training results### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Audino/my-awesome-modelv3-small
null
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T06:03:24+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 46, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
saeedasadibagloee/T5_timeseries
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-01T06:03:32+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 26, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Mubin1917/Mistral-7B-Instruct-v0.2-lamini-docs-adapters
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-01T06:04:07+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 26, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
unconditional-image-generation
diffusers
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('livewalk/sd-class-butterflies-32') image = pipeline().images[0] image ```
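As a small follow-up (not part of the original card), the sketch below assumes the same `DDPMPipeline` API and simply generates a batch of images and saves them to disk; the output file names are arbitrary.

```python
# Hedged follow-up sketch: batch generation with the same pipeline, saving results to disk.
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained('livewalk/sd-class-butterflies-32')
images = pipeline(batch_size=4).images  # DDPMPipeline accepts a batch_size argument
for i, image in enumerate(images):
    image.save(f"butterfly_{i}.png")    # arbitrary output file names
```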
{"license": "mit", "tags": ["pytorch", "diffusers", "unconditional-image-generation", "diffusion-models-class"]}
livewalk/sd-class-butterflies-32
null
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
null
2024-05-01T06:05:41+00:00
[]
[]
TAGS #diffusers #safetensors #pytorch #unconditional-image-generation #diffusion-models-class #license-mit #diffusers-DDPMPipeline #region-us
# Model Card for Unit 1 of the Diffusion Models Class This model is a diffusion model for unconditional image generation of cute . ## Usage
[ "# Model Card for Unit 1 of the Diffusion Models Class \n\nThis model is a diffusion model for unconditional image generation of cute .", "## Usage" ]
[ "TAGS\n#diffusers #safetensors #pytorch #unconditional-image-generation #diffusion-models-class #license-mit #diffusers-DDPMPipeline #region-us \n", "# Model Card for Unit 1 of the Diffusion Models Class \n\nThis model is a diffusion model for unconditional image generation of cute .", "## Usage" ]
[ 43, 26, 3 ]
[ "TAGS\n#diffusers #safetensors #pytorch #unconditional-image-generation #diffusion-models-class #license-mit #diffusers-DDPMPipeline #region-us \n# Model Card for Unit 1 of the Diffusion Models Class \n\nThis model is a diffusion model for unconditional image generation of cute .## Usage" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opt-1.3b-finetuned-mnli-nn This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8505 - Accuracy: 0.5201 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 1 | 0.8826 | 0.5192 | | No log | 2.0 | 2 | 0.8740 | 0.5195 | | No log | 3.0 | 3 | 0.8665 | 0.5198 | | No log | 4.0 | 4 | 0.8601 | 0.5210 | | No log | 5.0 | 5 | 0.8548 | 0.5207 | | No log | 6.0 | 6 | 0.8505 | 0.5214 | | No log | 7.0 | 7 | 0.8470 | 0.5216 | | No log | 8.0 | 8 | 0.8447 | 0.5214 | | No log | 9.0 | 9 | 0.8431 | 0.5213 | | No log | 10.0 | 10 | 0.8423 | 0.5211 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
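The card above stops at the training details and does not show how to load the resulting adapter. The following is a hedged sketch, not taken from the card: it assumes this repository stores a PEFT (LoRA-style) adapter for three-way sequence classification on MNLI-style premise/hypothesis pairs on top of facebook/opt-1.3b; the label count, label order, and input format are assumptions.

```python
# Hedged usage sketch (not from the original card).
# Assumes: the repo holds a PEFT adapter for 3-way MNLI-style classification over facebook/opt-1.3b.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "facebook/opt-1.3b"
adapter_id = "elliottfitzgerald/opt-1.3b-finetuned-mnli-nn"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=3)  # 3 labels is an assumption
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned adapter
model.eval()

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(-1))  # label order depends on how the adapter was trained
```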
{"license": "other", "library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "facebook/opt-1.3b", "model-index": [{"name": "opt-1.3b-finetuned-mnli-nn", "results": []}]}
elliottfitzgerald/opt-1.3b-finetuned-mnli-nn
null
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:facebook/opt-1.3b", "license:other", "region:us" ]
null
2024-05-01T06:06:18+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #generated_from_trainer #base_model-facebook/opt-1.3b #license-other #region-us
opt-1.3b-finetuned-mnli-nn ========================== This model is a fine-tuned version of facebook/opt-1.3b on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.8505 * Accuracy: 0.5201 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 10 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-facebook/opt-1.3b #license-other #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ 38, 101, 5, 52 ]
[ "TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-facebook/opt-1.3b #license-other #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10### Training results### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
transformers
<img src="./ninjalogo.svg" width="100%" height="20%" alt=""> # Our Models - [Vecteus](https://huggingface.co/Local-Novel-LLM-project/Vecteus-v1) - [Ninja-v1](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1) - [Ninja-v1-NSFW](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-NSFW) - [Ninja-v1-128k](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-128k) - [Ninja-v1-NSFW-128k](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-NSFW-128k) ## Model Card for Ninja-v1-NSFW-128k The Mistral-7B--based Large Language Model (LLM) is an noveldataset fine-tuned version of the Mistral-7B-v0.1 Ninja-NSFW-128k has the following changes compared to Mistral-7B-v0.1. - 128k context window (8k context in v0.1) - Achieving both high quality Japanese and English generation - Memory ability that does not forget even after long-context generation - Can be generated NSFW This model was created with the help of GPUs from the first LocalAI hackathon. We would like to take this opportunity to thank ## List of Creation Methods - Chatvector for multiple models - Simple linear merging of result models - Domain and Sentence Enhancement with LORA - Context expansion ## Instruction format Ninja adopts the prompt format from Vicuna and supports multi-turn conversation. The prompt should be as following: ``` USER: Hi ASSISTANT: Hello.</s> USER: Who are you? ASSISTANT: I am ninja.</s> ``` ## Example prompts to improve (Japanese) - BAD: あなたは○○として振る舞います - GOOD: あなたは○○です - BAD: あなたは○○ができます - GOOD: あなたは○○をします ## Performing inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch model_id = "Local-Novel-LLM-project/Ninja-v1-NSFW" new_tokens = 1024 model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, attn_implementation="flash_attention_2", device_map="auto") tokenizer = AutoTokenizer.from_pretrained(model_id) system_prompt = "あなたはプロの小説家です。\n小説を書いてください\n-------- " prompt = input("Enter a prompt: ") system_prompt += prompt + "\n-------- " model_inputs = tokenizer([prompt], return_tensors="pt").to("cuda") generated_ids = model.generate(**model_inputs, max_new_tokens=new_tokens, do_sample=True) print(tokenizer.batch_decode(generated_ids)[0]) ```` ## Merge recipe - WizardLM2 - mistralai/Mistral-7B-v0.1 - NousResearch/Yarn-Mistral-7b-128k - mistralai/Mistral-7B-v0.1 - Elizezen/Antler-7B - stabilityai/japanese-stablelm-instruct-gamma-7b - Elizezen/LewdSniffyOtter-7B - Elizezen/SniffyOtter-7B - NTQAI/chatntq-ja-7b-v1.0 The characteristics of each model are as follows. - WizardLM2: High quality multitasking model - Yarn-Mistral-7b-128k: Mistral model with 128k context window - Antler-7B: Model specialized for novel writing - NTQAI/chatntq-ja-7b-v1.0 High quality Japanese specialized model - Elizezen/LewdSniffyOtter-7B Japanese NSFW specialized model ## Other points to keep in mind - The training data may be biased. Be careful with the generated sentences. - Set trust_remote_code to True for context expansion with YaRN. - Memory usage may be large for long inferences. - If possible, we recommend inferring with llamacpp rather than Transformers.
{"language": ["en", "ja"], "license": "apache-2.0", "library_name": "transformers", "tags": ["finetuned", "not-for-all-audiences"], "pipeline_tag": "text-generation"}
Local-Novel-LLM-project/Ninja-v1-NSFW
null
[ "transformers", "safetensors", "mistral", "text-generation", "finetuned", "not-for-all-audiences", "en", "ja", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T06:06:39+00:00
[]
[ "en", "ja" ]
TAGS #transformers #safetensors #mistral #text-generation #finetuned #not-for-all-audiences #en #ja #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
<img src="./URL" width="100%" height="20%" alt=""> # Our Models - Vecteus - Ninja-v1 - Ninja-v1-NSFW - Ninja-v1-128k - Ninja-v1-NSFW-128k ## Model Card for Ninja-v1-NSFW-128k The Mistral-7B--based Large Language Model (LLM) is an noveldataset fine-tuned version of the Mistral-7B-v0.1 Ninja-NSFW-128k has the following changes compared to Mistral-7B-v0.1. - 128k context window (8k context in v0.1) - Achieving both high quality Japanese and English generation - Memory ability that does not forget even after long-context generation - Can be generated NSFW This model was created with the help of GPUs from the first LocalAI hackathon. We would like to take this opportunity to thank ## List of Creation Methods - Chatvector for multiple models - Simple linear merging of result models - Domain and Sentence Enhancement with LORA - Context expansion ## Instruction format Ninja adopts the prompt format from Vicuna and supports multi-turn conversation. The prompt should be as following: ## Example prompts to improve (Japanese) - BAD: あなたは○○として振る舞います - GOOD: あなたは○○です - BAD: あなたは○○ができます - GOOD: あなたは○○をします ## Performing inference ' ## Merge recipe - WizardLM2 - mistralai/Mistral-7B-v0.1 - NousResearch/Yarn-Mistral-7b-128k - mistralai/Mistral-7B-v0.1 - Elizezen/Antler-7B - stabilityai/japanese-stablelm-instruct-gamma-7b - Elizezen/LewdSniffyOtter-7B - Elizezen/SniffyOtter-7B - NTQAI/chatntq-ja-7b-v1.0 The characteristics of each model are as follows. - WizardLM2: High quality multitasking model - Yarn-Mistral-7b-128k: Mistral model with 128k context window - Antler-7B: Model specialized for novel writing - NTQAI/chatntq-ja-7b-v1.0 High quality Japanese specialized model - Elizezen/LewdSniffyOtter-7B Japanese NSFW specialized model ## Other points to keep in mind - The training data may be biased. Be careful with the generated sentences. - Set trust_remote_code to True for context expansion with YaRN. - Memory usage may be large for long inferences. - If possible, we recommend inferring with llamacpp rather than Transformers.
[ "# Our Models\n- Vecteus\n\n- Ninja-v1 \n\n- Ninja-v1-NSFW\n\n- Ninja-v1-128k\n\n- Ninja-v1-NSFW-128k", "## Model Card for Ninja-v1-NSFW-128k\n\nThe Mistral-7B--based Large Language Model (LLM) is an noveldataset fine-tuned version of the Mistral-7B-v0.1\n\nNinja-NSFW-128k has the following changes compared to Mistral-7B-v0.1.\n- 128k context window (8k context in v0.1)\n- Achieving both high quality Japanese and English generation\n- Memory ability that does not forget even after long-context generation\n- Can be generated NSFW\n\nThis model was created with the help of GPUs from the first LocalAI hackathon.\n\nWe would like to take this opportunity to thank", "## List of Creation Methods\n\n- Chatvector for multiple models\n- Simple linear merging of result models\n- Domain and Sentence Enhancement with LORA\n- Context expansion", "## Instruction format\n\n Ninja adopts the prompt format from Vicuna and supports multi-turn conversation.\n The prompt should be as following:", "## Example prompts to improve (Japanese)\n\n - BAD: あなたは○○として振る舞います\n - GOOD: あなたは○○です\n\n - BAD: あなたは○○ができます\n - GOOD: あなたは○○をします", "## Performing inference\n\n'", "## Merge recipe\n\n- WizardLM2 - mistralai/Mistral-7B-v0.1\n- NousResearch/Yarn-Mistral-7b-128k - mistralai/Mistral-7B-v0.1\n- Elizezen/Antler-7B - stabilityai/japanese-stablelm-instruct-gamma-7b\n- Elizezen/LewdSniffyOtter-7B - Elizezen/SniffyOtter-7B\n- NTQAI/chatntq-ja-7b-v1.0\n\nThe characteristics of each model are as follows.\n\n- WizardLM2: High quality multitasking model\n- Yarn-Mistral-7b-128k: Mistral model with 128k context window\n- Antler-7B: Model specialized for novel writing\n- NTQAI/chatntq-ja-7b-v1.0 High quality Japanese specialized model\n- Elizezen/LewdSniffyOtter-7B Japanese NSFW specialized model", "## Other points to keep in mind\n- The training data may be biased. Be careful with the generated sentences.\n- Set trust_remote_code to True for context expansion with YaRN.\n- Memory usage may be large for long inferences.\n- If possible, we recommend inferring with llamacpp rather than Transformers." ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #finetuned #not-for-all-audiences #en #ja #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Our Models\n- Vecteus\n\n- Ninja-v1 \n\n- Ninja-v1-NSFW\n\n- Ninja-v1-128k\n\n- Ninja-v1-NSFW-128k", "## Model Card for Ninja-v1-NSFW-128k\n\nThe Mistral-7B--based Large Language Model (LLM) is an noveldataset fine-tuned version of the Mistral-7B-v0.1\n\nNinja-NSFW-128k has the following changes compared to Mistral-7B-v0.1.\n- 128k context window (8k context in v0.1)\n- Achieving both high quality Japanese and English generation\n- Memory ability that does not forget even after long-context generation\n- Can be generated NSFW\n\nThis model was created with the help of GPUs from the first LocalAI hackathon.\n\nWe would like to take this opportunity to thank", "## List of Creation Methods\n\n- Chatvector for multiple models\n- Simple linear merging of result models\n- Domain and Sentence Enhancement with LORA\n- Context expansion", "## Instruction format\n\n Ninja adopts the prompt format from Vicuna and supports multi-turn conversation.\n The prompt should be as following:", "## Example prompts to improve (Japanese)\n\n - BAD: あなたは○○として振る舞います\n - GOOD: あなたは○○です\n\n - BAD: あなたは○○ができます\n - GOOD: あなたは○○をします", "## Performing inference\n\n'", "## Merge recipe\n\n- WizardLM2 - mistralai/Mistral-7B-v0.1\n- NousResearch/Yarn-Mistral-7b-128k - mistralai/Mistral-7B-v0.1\n- Elizezen/Antler-7B - stabilityai/japanese-stablelm-instruct-gamma-7b\n- Elizezen/LewdSniffyOtter-7B - Elizezen/SniffyOtter-7B\n- NTQAI/chatntq-ja-7b-v1.0\n\nThe characteristics of each model are as follows.\n\n- WizardLM2: High quality multitasking model\n- Yarn-Mistral-7b-128k: Mistral model with 128k context window\n- Antler-7B: Model specialized for novel writing\n- NTQAI/chatntq-ja-7b-v1.0 High quality Japanese specialized model\n- Elizezen/LewdSniffyOtter-7B Japanese NSFW specialized model", "## Other points to keep in mind\n- The training data may be biased. Be careful with the generated sentences.\n- Set trust_remote_code to True for context expansion with YaRN.\n- Memory usage may be large for long inferences.\n- If possible, we recommend inferring with llamacpp rather than Transformers." ]
[ 58, 41, 151, 31, 27, 32, 5, 219, 67 ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #finetuned #not-for-all-audiences #en #ja #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Our Models\n- Vecteus\n\n- Ninja-v1 \n\n- Ninja-v1-NSFW\n\n- Ninja-v1-128k\n\n- Ninja-v1-NSFW-128k## Model Card for Ninja-v1-NSFW-128k\n\nThe Mistral-7B--based Large Language Model (LLM) is an noveldataset fine-tuned version of the Mistral-7B-v0.1\n\nNinja-NSFW-128k has the following changes compared to Mistral-7B-v0.1.\n- 128k context window (8k context in v0.1)\n- Achieving both high quality Japanese and English generation\n- Memory ability that does not forget even after long-context generation\n- Can be generated NSFW\n\nThis model was created with the help of GPUs from the first LocalAI hackathon.\n\nWe would like to take this opportunity to thank## List of Creation Methods\n\n- Chatvector for multiple models\n- Simple linear merging of result models\n- Domain and Sentence Enhancement with LORA\n- Context expansion## Instruction format\n\n Ninja adopts the prompt format from Vicuna and supports multi-turn conversation.\n The prompt should be as following:## Example prompts to improve (Japanese)\n\n - BAD: あなたは○○として振る舞います\n - GOOD: あなたは○○です\n\n - BAD: あなたは○○ができます\n - GOOD: あなたは○○をします## Performing inference\n\n'## Merge recipe\n\n- WizardLM2 - mistralai/Mistral-7B-v0.1\n- NousResearch/Yarn-Mistral-7b-128k - mistralai/Mistral-7B-v0.1\n- Elizezen/Antler-7B - stabilityai/japanese-stablelm-instruct-gamma-7b\n- Elizezen/LewdSniffyOtter-7B - Elizezen/SniffyOtter-7B\n- NTQAI/chatntq-ja-7b-v1.0\n\nThe characteristics of each model are as follows.\n\n- WizardLM2: High quality multitasking model\n- Yarn-Mistral-7b-128k: Mistral model with 128k context window\n- Antler-7B: Model specialized for novel writing\n- NTQAI/chatntq-ja-7b-v1.0 High quality Japanese specialized model\n- Elizezen/LewdSniffyOtter-7B Japanese NSFW specialized model## Other points to keep in mind\n- The training data may be biased. Be careful with the generated sentences.\n- Set trust_remote_code to True for context expansion with YaRN.\n- Memory usage may be large for long inferences.\n- If possible, we recommend inferring with llamacpp rather than Transformers." ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) starchat2-15b-v0.1 - bnb 4bits - Model creator: https://huggingface.co/HuggingFaceH4/ - Original model: https://huggingface.co/HuggingFaceH4/starchat2-15b-v0.1/ Original model description: --- base_model: HuggingFaceH4/starchat2-15b-sft-v0.1 tags: - alignment-handbook - generated_from_trainer datasets: - HuggingFaceH4/ultrafeedback_binarized - HuggingFaceH4/orca_dpo_pairs model-index: - name: starchat2-15b-v0.1 results: [] --- <img src="https://huggingface.co/HuggingFaceH4/starchat2-15b-v0.1/resolve/main/model_logo.png" alt="StarChat2 15B Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # Model Card for StarChat2 15B StarChat is a series of language models that are trained to act as helpful coding assistants. StarChat2 is the latest model in the series, and is a fine-tuned version of [StarCoder2](https://huggingface.co/bigcode/starcoder2-15b) that was trained with SFT and DPO on a mix of synthetic datasets. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Model type:** A 16B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets. - **Language(s) (NLP):** Primarily English and 600+ programming languages. - **License:** BigCode Open RAIL-M v1 - **Finetuned from model:** [bigcode/starcoder2-15b](https://huggingface.co/bigcode/starcoder2-15b) ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/huggingface/alignment-handbook - **Demo:** https://huggingface.co/spaces/HuggingFaceH4/starchat2-playground ## Performance StarChat2 15B was trained to balance chat and programming capabilities. It achieves strong performance on chat benchmarks like [MT Bench](https://huggingface.co/spaces/lmsys/mt-bench) and [IFEval](https://arxiv.org/abs/2311.07911), as well as the canonical HumanEval benchmark for Python code completion. The scores reported below were obtained using the [LightEval](https://github.com/huggingface/lighteval) evaluation suite (commit `988959cb905df4baa050f82b4d499d46e8b537f2`) and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard. | Model | MT Bench | IFEval | HumanEval | |-------------------------------------------------------------------------------------------------|---------:|-------:|----------:| | [starchat2-15b-v0.1](https://huggingface.co/HuggingFaceH4/starchat2-15b-v0.1) | 7.66 | 35.12 | 71.34 | | [deepseek-coder-6.7b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct) | 4.17 | 14.23 | 80.48 | | [CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) | 6.80 | 43.44 | 50.60 | ## Intended uses & limitations The model was fine-tuned on a blend of chat, code, math, and reasoning datasets. As a result, the model can be used for chat and you can check out our [demo](https://huggingface.co/spaces/HuggingFaceH4/starchat2-playground) to test its coding capabilities. 
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers: ```python # pip install 'transformers @ git+https://github.com/huggingface/transformers.git@831bc25d8fdb85768402f772cf65cc3d7872b211' # pip install accelerate import torch from transformers import pipeline pipe = pipeline( "text-generation", model="HuggingFaceH4/starchat2-15b-v0.1", device_map="auto", torch_dtype=torch.bfloat16, ) messages = [ { "role": "system", "content": "You are StarChat2, an expert programming assistant", }, {"role": "user", "content": "Write a simple website in HTML. When a user clicks the button, it shows a random Chuck Norris joke."}, ] outputs = pipe( messages, max_new_tokens=512, do_sample=True, temperature=0.7, top_k=50, top_p=0.95, stop_sequence="<|im_end|>", ) print(outputs[0]["generated_text"][-1]["content"]) ``` ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> StarChat2 15B has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). Models trained primarily on code data will also have a more skewed demographic bias commensurate with the demographics of the GitHub community, for more on this see the [StarCoder2 dataset](https://huggingface.co/datasets/bigcode/the-stack-v2) Since the base model was pretrained on a large corpus of code, it may produce code snippets that are syntactically valid but semantically incorrect. For example, it may produce code that does not compile or that produces incorrect results. It may also produce code that is vulnerable to security exploits. We have observed the model also has a tendency to produce false URLs which should be carefully inspected before clicking. StarChat2 15B was fine-tuned from the base model [StarCoder2](https://huggingface.co/bigcode/starcoder2-15b), please refer to its model card's [Limitations Section](https://huggingface.co/bigcode/starcoder2-15b#limitations) for relevant information. In particular, the model was evaluated on some categories of gender biases, propensity for toxicity, and risk of suggesting code completions with known security flaws; these evaluations are reported in its [technical report](https://huggingface.co/papers/2402.19173). ## Training details This model is a fine-tuned version of [starchat2-15b-sft-v0.1](https://huggingface.co/HuggingFaceH4/starchat2-15b-sft-v0.1) on the HuggingFaceH4/ultrafeedback_binarized and the HuggingFaceH4/orca_dpo_pairs datasets. Check out the recipe in the [Alignment Handbook](https://github.com/huggingface/alignment-handbook) for more details. 
It achieves the following results on the evaluation set: - Loss: 0.4347 - Rewards/chosen: -0.9461 - Rewards/rejected: -2.7745 - Rewards/accuracies: 0.7658 - Rewards/margins: 1.8284 - Logps/rejected: -322.1934 - Logps/chosen: -316.1898 - Logits/rejected: -2.3817 - Logits/chosen: -2.3005 ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 2 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.717 | 0.17 | 100 | 0.6006 | -0.0924 | -0.2899 | 0.6329 | 0.1975 | -272.5022 | -299.1165 | -2.5313 | -2.4191 | | 0.6273 | 0.35 | 200 | 0.5160 | -0.3994 | -0.9461 | 0.6930 | 0.5467 | -285.6261 | -305.2568 | -2.5281 | -2.4278 | | 0.5538 | 0.52 | 300 | 0.4781 | -0.6589 | -1.5892 | 0.7247 | 0.9302 | -298.4870 | -310.4470 | -2.4996 | -2.4110 | | 0.5056 | 0.7 | 400 | 0.4594 | -0.8283 | -2.1332 | 0.7437 | 1.3050 | -309.3687 | -313.8344 | -2.4472 | -2.3644 | | 0.4983 | 0.87 | 500 | 0.4512 | -0.7758 | -2.2806 | 0.7468 | 1.5049 | -312.3167 | -312.7843 | -2.4223 | -2.3404 | | 0.4662 | 1.04 | 600 | 0.4431 | -0.7839 | -2.4016 | 0.7658 | 1.6177 | -314.7355 | -312.9465 | -2.4049 | -2.3215 | | 0.4411 | 1.22 | 700 | 0.4415 | -1.0090 | -2.7582 | 0.7690 | 1.7492 | -321.8679 | -317.4481 | -2.3840 | -2.3016 | | 0.471 | 1.39 | 800 | 0.4368 | -0.9617 | -2.7445 | 0.7690 | 1.7828 | -321.5930 | -316.5019 | -2.3809 | -2.2991 | | 0.4485 | 1.57 | 900 | 0.4351 | -0.9490 | -2.7594 | 0.7722 | 1.8103 | -321.8916 | -316.2497 | -2.3815 | -2.3004 | | 0.4411 | 1.74 | 1000 | 0.4348 | -0.9293 | -2.7469 | 0.7658 | 1.8176 | -321.6409 | -315.8547 | -2.3823 | -2.3011 | | 0.4499 | 1.92 | 1100 | 0.4348 | -0.9482 | -2.7767 | 0.7658 | 1.8285 | -322.2369 | -316.2320 | -2.3828 | -2.3012 | ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
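The card above is copied from the original HuggingFaceH4/starchat2-15b-v0.1 repository, whereas the weights stored in this repository are the bitsandbytes 4-bit quantization. Below is a minimal, hedged sketch of loading the quantized checkpoint directly; it assumes the quantization config ships with the weights (so no extra `BitsAndBytesConfig` is passed) and that `bitsandbytes` and `accelerate` are installed with a CUDA GPU available.

```python
# Hedged sketch: loading the pre-quantized 4-bit weights hosted in this repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-4bits"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are StarChat2, an expert programming assistant"},
    {"role": "user", "content": "Write a Python function that reverses a string."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```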
{}
RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-4bits
null
[ "transformers", "safetensors", "starcoder2", "text-generation", "conversational", "arxiv:2311.07911", "arxiv:2402.19173", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-05-01T06:07:44+00:00
[ "2311.07911", "2402.19173" ]
[]
TAGS #transformers #safetensors #starcoder2 #text-generation #conversational #arxiv-2311.07911 #arxiv-2402.19173 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models starchat2-15b-v0.1 - bnb 4bits * Model creator: URL * Original model: URL Original model description: --------------------------- base\_model: HuggingFaceH4/starchat2-15b-sft-v0.1 tags: * alignment-handbook * generated\_from\_trainer datasets: * HuggingFaceH4/ultrafeedback\_binarized * HuggingFaceH4/orca\_dpo\_pairs model-index: * name: starchat2-15b-v0.1 results: [] --- <img src="URL alt="StarChat2 15B Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> Model Card for StarChat2 15B ============================ StarChat is a series of language models that are trained to act as helpful coding assistants. StarChat2 is the latest model in the series, and is a fine-tuned version of StarCoder2 that was trained with SFT and DPO on a mix of synthetic datasets. Model Details ------------- ### Model Description * Model type: A 16B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets. * Language(s) (NLP): Primarily English and 600+ programming languages. * License: BigCode Open RAIL-M v1 * Finetuned from model: bigcode/starcoder2-15b ### Model Sources * Repository: URL * Demo: URL Performance ----------- StarChat2 15B was trained to balance chat and programming capabilities. It achieves strong performance on chat benchmarks like MT Bench and IFEval, as well as the canonical HumanEval benchmark for Python code completion. The scores reported below were obtained using the LightEval evaluation suite (commit '988959cb905df4baa050f82b4d499d46e8b537f2') and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard. Intended uses & limitations --------------------------- The model was fine-tuned on a blend of chat, code, math, and reasoning datasets. As a result, the model can be used for chat and you can check out our demo to test its coding capabilities. Here's how you can run the model using the 'pipeline()' function from Transformers: Bias, Risks, and Limitations ---------------------------- StarChat2 15B has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). Models trained primarily on code data will also have a more skewed demographic bias commensurate with the demographics of the GitHub community, for more on this see the StarCoder2 dataset Since the base model was pretrained on a large corpus of code, it may produce code snippets that are syntactically valid but semantically incorrect. For example, it may produce code that does not compile or that produces incorrect results. It may also produce code that is vulnerable to security exploits. We have observed the model also has a tendency to produce false URLs which should be carefully inspected before clicking. StarChat2 15B was fine-tuned from the base model StarCoder2, please refer to its model card's Limitations Section for relevant information. In particular, the model was evaluated on some categories of gender biases, propensity for toxicity, and risk of suggesting code completions with known security flaws; these evaluations are reported in its technical report. 
Training details ---------------- This model is a fine-tuned version of starchat2-15b-sft-v0.1 on the HuggingFaceH4/ultrafeedback\_binarized and the HuggingFaceH4/orca\_dpo\_pairs datasets. Check out the recipe in the Alignment Handbook for more details. It achieves the following results on the evaluation set: * Loss: 0.4347 * Rewards/chosen: -0.9461 * Rewards/rejected: -2.7745 * Rewards/accuracies: 0.7658 * Rewards/margins: 1.8284 * Logps/rejected: -322.1934 * Logps/chosen: -316.1898 * Logits/rejected: -2.3817 * Logits/chosen: -2.3005 Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-07 * train\_batch\_size: 2 * eval\_batch\_size: 4 * seed: 42 * distributed\_type: multi-GPU * num\_devices: 8 * gradient\_accumulation\_steps: 8 * total\_train\_batch\_size: 128 * total\_eval\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.39.0.dev0 * Pytorch 2.1.2+cu121 * Datasets 2.16.1 * Tokenizers 0.15.1
[ "### Model Description\n\n\n* Model type: A 16B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.\n* Language(s) (NLP): Primarily English and 600+ programming languages.\n* License: BigCode Open RAIL-M v1\n* Finetuned from model: bigcode/starcoder2-15b", "### Model Sources\n\n\n* Repository: URL\n* Demo: URL\n\n\nPerformance\n-----------\n\n\nStarChat2 15B was trained to balance chat and programming capabilities. It achieves strong performance on chat benchmarks like MT Bench and IFEval, as well as the canonical HumanEval benchmark for Python code completion. The scores reported below were obtained using the LightEval evaluation suite (commit '988959cb905df4baa050f82b4d499d46e8b537f2') and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard.\n\n\n\nIntended uses & limitations\n---------------------------\n\n\nThe model was fine-tuned on a blend of chat, code, math, and reasoning datasets. As a result, the model can be used for chat and you can check out our demo to test its coding capabilities.\n\n\nHere's how you can run the model using the 'pipeline()' function from Transformers:\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nStarChat2 15B has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).\nModels trained primarily on code data will also have a more skewed demographic bias commensurate with the demographics of the GitHub community, for more on this see the StarCoder2 dataset\n\n\nSince the base model was pretrained on a large corpus of code, it may produce code snippets that are syntactically valid but semantically incorrect.\nFor example, it may produce code that does not compile or that produces incorrect results. \n\nIt may also produce code that is vulnerable to security exploits. \n\nWe have observed the model also has a tendency to produce false URLs which should be carefully inspected before clicking.\n\n\nStarChat2 15B was fine-tuned from the base model StarCoder2, please refer to its model card's Limitations Section for relevant information.\nIn particular, the model was evaluated on some categories of gender biases, propensity for toxicity, and risk of suggesting code completions with known security flaws; these evaluations are reported in its technical report.\n\n\nTraining details\n----------------\n\n\nThis model is a fine-tuned version of starchat2-15b-sft-v0.1 on the HuggingFaceH4/ultrafeedback\\_binarized and the HuggingFaceH4/orca\\_dpo\\_pairs datasets. 
Check out the recipe in the Alignment Handbook for more details.\n\n\nIt achieves the following results on the evaluation set:\n\n\n* Loss: 0.4347\n* Rewards/chosen: -0.9461\n* Rewards/rejected: -2.7745\n* Rewards/accuracies: 0.7658\n* Rewards/margins: 1.8284\n* Logps/rejected: -322.1934\n* Logps/chosen: -316.1898\n* Logits/rejected: -2.3817\n* Logits/chosen: -2.3005\n\n\nTraining procedure\n------------------", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-07\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.1" ]
[ "TAGS\n#transformers #safetensors #starcoder2 #text-generation #conversational #arxiv-2311.07911 #arxiv-2402.19173 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "### Model Description\n\n\n* Model type: A 16B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.\n* Language(s) (NLP): Primarily English and 600+ programming languages.\n* License: BigCode Open RAIL-M v1\n* Finetuned from model: bigcode/starcoder2-15b", "### Model Sources\n\n\n* Repository: URL\n* Demo: URL\n\n\nPerformance\n-----------\n\n\nStarChat2 15B was trained to balance chat and programming capabilities. It achieves strong performance on chat benchmarks like MT Bench and IFEval, as well as the canonical HumanEval benchmark for Python code completion. The scores reported below were obtained using the LightEval evaluation suite (commit '988959cb905df4baa050f82b4d499d46e8b537f2') and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard.\n\n\n\nIntended uses & limitations\n---------------------------\n\n\nThe model was fine-tuned on a blend of chat, code, math, and reasoning datasets. As a result, the model can be used for chat and you can check out our demo to test its coding capabilities.\n\n\nHere's how you can run the model using the 'pipeline()' function from Transformers:\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nStarChat2 15B has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).\nModels trained primarily on code data will also have a more skewed demographic bias commensurate with the demographics of the GitHub community, for more on this see the StarCoder2 dataset\n\n\nSince the base model was pretrained on a large corpus of code, it may produce code snippets that are syntactically valid but semantically incorrect.\nFor example, it may produce code that does not compile or that produces incorrect results. \n\nIt may also produce code that is vulnerable to security exploits. \n\nWe have observed the model also has a tendency to produce false URLs which should be carefully inspected before clicking.\n\n\nStarChat2 15B was fine-tuned from the base model StarCoder2, please refer to its model card's Limitations Section for relevant information.\nIn particular, the model was evaluated on some categories of gender biases, propensity for toxicity, and risk of suggesting code completions with known security flaws; these evaluations are reported in its technical report.\n\n\nTraining details\n----------------\n\n\nThis model is a fine-tuned version of starchat2-15b-sft-v0.1 on the HuggingFaceH4/ultrafeedback\\_binarized and the HuggingFaceH4/orca\\_dpo\\_pairs datasets. 
Check out the recipe in the Alignment Handbook for more details.\n\n\nIt achieves the following results on the evaluation set:\n\n\n* Loss: 0.4347\n* Rewards/chosen: -0.9461\n* Rewards/rejected: -2.7745\n* Rewards/accuracies: 0.7658\n* Rewards/margins: 1.8284\n* Logps/rejected: -322.1934\n* Logps/chosen: -316.1898\n* Logits/rejected: -2.3817\n* Logits/chosen: -2.3005\n\n\nTraining procedure\n------------------", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-07\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.1" ]
[ 64, 79, 765, 176, 5, 47 ]
[ "TAGS\n#transformers #safetensors #starcoder2 #text-generation #conversational #arxiv-2311.07911 #arxiv-2402.19173 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n### Model Description\n\n\n* Model type: A 16B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.\n* Language(s) (NLP): Primarily English and 600+ programming languages.\n* License: BigCode Open RAIL-M v1\n* Finetuned from model: bigcode/starcoder2-15b### Model Sources\n\n\n* Repository: URL\n* Demo: URL\n\n\nPerformance\n-----------\n\n\nStarChat2 15B was trained to balance chat and programming capabilities. It achieves strong performance on chat benchmarks like MT Bench and IFEval, as well as the canonical HumanEval benchmark for Python code completion. The scores reported below were obtained using the LightEval evaluation suite (commit '988959cb905df4baa050f82b4d499d46e8b537f2') and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard.\n\n\n\nIntended uses & limitations\n---------------------------\n\n\nThe model was fine-tuned on a blend of chat, code, math, and reasoning datasets. As a result, the model can be used for chat and you can check out our demo to test its coding capabilities.\n\n\nHere's how you can run the model using the 'pipeline()' function from Transformers:\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nStarChat2 15B has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).\nModels trained primarily on code data will also have a more skewed demographic bias commensurate with the demographics of the GitHub community, for more on this see the StarCoder2 dataset\n\n\nSince the base model was pretrained on a large corpus of code, it may produce code snippets that are syntactically valid but semantically incorrect.\nFor example, it may produce code that does not compile or that produces incorrect results. \n\nIt may also produce code that is vulnerable to security exploits. \n\nWe have observed the model also has a tendency to produce false URLs which should be carefully inspected before clicking.\n\n\nStarChat2 15B was fine-tuned from the base model StarCoder2, please refer to its model card's Limitations Section for relevant information.\nIn particular, the model was evaluated on some categories of gender biases, propensity for toxicity, and risk of suggesting code completions with known security flaws; these evaluations are reported in its technical report.\n\n\nTraining details\n----------------\n\n\nThis model is a fine-tuned version of starchat2-15b-sft-v0.1 on the HuggingFaceH4/ultrafeedback\\_binarized and the HuggingFaceH4/orca\\_dpo\\_pairs datasets. 
Check out the recipe in the Alignment Handbook for more details.\n\n\nIt achieves the following results on the evaluation set:\n\n\n* Loss: 0.4347\n* Rewards/chosen: -0.9461\n* Rewards/rejected: -2.7745\n* Rewards/accuracies: 0.7658\n* Rewards/margins: 1.8284\n* Logps/rejected: -322.1934\n* Logps/chosen: -316.1898\n* Logits/rejected: -2.3817\n* Logits/chosen: -2.3005\n\n\nTraining procedure\n------------------### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-07\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 2### Training results### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.1" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
goodakdali/llama3_testing_ins_add_adapter
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-01T06:08:28+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 26, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results_HPE This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - training_steps: 200 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.1.dev0 - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "ybelkada/falcon-7b-sharded-bf16", "model-index": [{"name": "results_HPE", "results": []}]}
chinmayn/falcon-sharded
null
[ "peft", "pytorch", "tensorboard", "safetensors", "falcon", "trl", "sft", "generated_from_trainer", "base_model:ybelkada/falcon-7b-sharded-bf16", "region:us" ]
null
2024-05-01T06:11:50+00:00
[]
[]
TAGS #peft #pytorch #tensorboard #safetensors #falcon #trl #sft #generated_from_trainer #base_model-ybelkada/falcon-7b-sharded-bf16 #region-us
# results_HPE This model is a fine-tuned version of ybelkada/falcon-7b-sharded-bf16 on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - training_steps: 200 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.1.dev0 - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
[ "# results_HPE\r\n\r\nThis model is a fine-tuned version of ybelkada/falcon-7b-sharded-bf16 on the None dataset.", "## Model description\r\n\r\nMore information needed", "## Intended uses & limitations\r\n\r\nMore information needed", "## Training and evaluation data\r\n\r\nMore information needed", "## Training procedure", "### Training hyperparameters\r\n\r\nThe following hyperparameters were used during training:\r\n- learning_rate: 0.0002\r\n- train_batch_size: 4\r\n- eval_batch_size: 8\r\n- seed: 42\r\n- gradient_accumulation_steps: 4\r\n- total_train_batch_size: 16\r\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\r\n- lr_scheduler_type: constant\r\n- lr_scheduler_warmup_ratio: 0.03\r\n- training_steps: 200\r\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\r\n\r\n- PEFT 0.10.1.dev0\r\n- Transformers 4.40.1\r\n- Pytorch 2.2.1+cu121\r\n- Datasets 2.19.0\r\n- Tokenizers 0.19.1" ]
[ "TAGS\n#peft #pytorch #tensorboard #safetensors #falcon #trl #sft #generated_from_trainer #base_model-ybelkada/falcon-7b-sharded-bf16 #region-us \n", "# results_HPE\r\n\r\nThis model is a fine-tuned version of ybelkada/falcon-7b-sharded-bf16 on the None dataset.", "## Model description\r\n\r\nMore information needed", "## Intended uses & limitations\r\n\r\nMore information needed", "## Training and evaluation data\r\n\r\nMore information needed", "## Training procedure", "### Training hyperparameters\r\n\r\nThe following hyperparameters were used during training:\r\n- learning_rate: 0.0002\r\n- train_batch_size: 4\r\n- eval_batch_size: 8\r\n- seed: 42\r\n- gradient_accumulation_steps: 4\r\n- total_train_batch_size: 16\r\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\r\n- lr_scheduler_type: constant\r\n- lr_scheduler_warmup_ratio: 0.03\r\n- training_steps: 200\r\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\r\n\r\n- PEFT 0.10.1.dev0\r\n- Transformers 4.40.1\r\n- Pytorch 2.2.1+cu121\r\n- Datasets 2.19.0\r\n- Tokenizers 0.19.1" ]
[ 55, 36, 7, 9, 9, 4, 133, 5, 55 ]
[ "TAGS\n#peft #pytorch #tensorboard #safetensors #falcon #trl #sft #generated_from_trainer #base_model-ybelkada/falcon-7b-sharded-bf16 #region-us \n# results_HPE\r\n\r\nThis model is a fine-tuned version of ybelkada/falcon-7b-sharded-bf16 on the None dataset.## Model description\r\n\r\nMore information needed## Intended uses & limitations\r\n\r\nMore information needed## Training and evaluation data\r\n\r\nMore information needed## Training procedure### Training hyperparameters\r\n\r\nThe following hyperparameters were used during training:\r\n- learning_rate: 0.0002\r\n- train_batch_size: 4\r\n- eval_batch_size: 8\r\n- seed: 42\r\n- gradient_accumulation_steps: 4\r\n- total_train_batch_size: 16\r\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\r\n- lr_scheduler_type: constant\r\n- lr_scheduler_warmup_ratio: 0.03\r\n- training_steps: 200\r\n- mixed_precision_training: Native AMP### Training results### Framework versions\r\n\r\n- PEFT 0.10.1.dev0\r\n- Transformers 4.40.1\r\n- Pytorch 2.2.1+cu121\r\n- Datasets 2.19.0\r\n- Tokenizers 0.19.1" ]
text-generation
transformers
# Model Trained Using AutoTrain

This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).

# Usage

```bash
pip install peft
```

```python
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

model_id = "Aryan-401/phi-3-mini-4k-instruct-finetune-guanaco"

# Load the LoRA adapter together with its base model, then merge the adapter weights.
peft_model = AutoPeftModelForCausalLM.from_pretrained(model_id)
model = peft_model.merge_and_unload()
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "user", "content": "What is the Value of Pi?"}
]

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.to(device).eval()

# Build the chat prompt, generate, and decode only the newly generated tokens.
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to(device), max_length=1000)
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
print(response)
```
{"license": "other", "library_name": "transformers", "tags": ["autotrain", "text-generation-inference", "text-generation", "peft"], "widget": [{"messages": [{"role": "user", "content": "What is your favorite condiment?"}]}]}
Aryan-401/phi-3-mini-4k-instruct-finetune-guanaco-PEFT-only
null
[ "transformers", "tensorboard", "safetensors", "autotrain", "text-generation-inference", "text-generation", "peft", "conversational", "license:other", "endpoints_compatible", "region:us" ]
null
2024-05-01T06:12:21+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #autotrain #text-generation-inference #text-generation #peft #conversational #license-other #endpoints_compatible #region-us
# Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit AutoTrain. # Usage
[ "# Model Trained Using AutoTrain\n\nThis model was trained using AutoTrain. For more information, please visit AutoTrain.", "# Usage" ]
[ "TAGS\n#transformers #tensorboard #safetensors #autotrain #text-generation-inference #text-generation #peft #conversational #license-other #endpoints_compatible #region-us \n", "# Model Trained Using AutoTrain\n\nThis model was trained using AutoTrain. For more information, please visit AutoTrain.", "# Usage" ]
[ 42, 23, 2 ]
[ "TAGS\n#transformers #tensorboard #safetensors #autotrain #text-generation-inference #text-generation #peft #conversational #license-other #endpoints_compatible #region-us \n# Model Trained Using AutoTrain\n\nThis model was trained using AutoTrain. For more information, please visit AutoTrain.# Usage" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
{"library_name": "peft", "base_model": "beomi/open-llama-2-ko-7b"}
zzunyang/law_sft
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:beomi/open-llama-2-ko-7b", "region:us" ]
null
2024-05-01T06:12:32+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #arxiv-1910.09700 #base_model-beomi/open-llama-2-ko-7b #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.7.1
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.7.1" ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-beomi/open-llama-2-ko-7b #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.7.1" ]
[ 41, 6, 4, 50, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5, 13 ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-beomi/open-llama-2-ko-7b #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact### Framework versions\n\n- PEFT 0.7.1" ]
text-generation
transformers
# Uploaded model - **Developed by:** wttw - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
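The card above does not include a loading example. Below is a minimal, hedged sketch: it assumes the repository hosts model weights loadable with plain `transformers` (the repo tags suggest 4-bit safetensors), and the instruction template in the prompt is an illustrative guess. If the repo only contains LoRA adapters, `AutoPeftModelForCausalLM` from `peft` would be used instead.

```python
# Hedged sketch: load the fine-tuned checkpoint and ask for a C++ solution.
# The prompt format below is an assumption; adjust it to match the training data.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "wttw/Llama3-8B-CPP"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = (
    "### Instruction:\n"
    "Write a C++ program that reads N integers and prints their sum.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```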
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "datasets": ["wttw/code_contest_instruct_cpp"], "base_model": "unsloth/llama-3-8b-bnb-4bit", "pipeline_tag": "text-generation"}
wttw/Llama3-8B-CPP
null
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "en", "dataset:wttw/code_contest_instruct_cpp", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "region:us" ]
null
2024-05-01T06:18:24+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #en #dataset-wttw/code_contest_instruct_cpp #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us
# Uploaded model - Developed by: wttw - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: wttw\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #en #dataset-wttw/code_contest_instruct_cpp #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us \n", "# Uploaded model\n\n- Developed by: wttw\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ 94, 80 ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #en #dataset-wttw/code_contest_instruct_cpp #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us \n# Uploaded model\n\n- Developed by: wttw\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-feedback This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6145 - Rouge1: 51.2809 - Rouge2: 27.3229 - Rougel: 49.2287 - Rougelsum: 49.211 - Gen Len: 10.1736 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 12 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 61 | 2.9832 | 24.9931 | 10.0881 | 21.9651 | 22.0687 | 16.4876 | | No log | 2.0 | 122 | 2.1822 | 36.3348 | 17.5969 | 34.3034 | 34.2834 | 12.1653 | | No log | 3.0 | 183 | 1.9607 | 43.7295 | 21.5907 | 41.8815 | 41.929 | 10.5372 | | No log | 4.0 | 244 | 1.8412 | 48.7074 | 25.1744 | 46.8382 | 46.8399 | 10.405 | | No log | 5.0 | 305 | 1.7674 | 50.1972 | 26.4116 | 48.1456 | 48.0538 | 10.2066 | | No log | 6.0 | 366 | 1.7195 | 51.0984 | 27.8685 | 48.9483 | 49.0108 | 10.3554 | | No log | 7.0 | 427 | 1.6832 | 50.272 | 27.3168 | 48.4083 | 48.4307 | 10.0331 | | No log | 8.0 | 488 | 1.6558 | 50.6829 | 27.5132 | 48.6684 | 48.735 | 10.2727 | | 2.363 | 9.0 | 549 | 1.6357 | 50.0286 | 27.0674 | 48.0211 | 48.0783 | 10.1736 | | 2.363 | 10.0 | 610 | 1.6240 | 50.8207 | 26.8345 | 48.6528 | 48.6903 | 10.1983 | | 2.363 | 11.0 | 671 | 1.6166 | 50.9796 | 27.0236 | 48.8888 | 48.8958 | 10.1901 | | 2.363 | 12.0 | 732 | 1.6145 | 51.2809 | 27.3229 | 49.2287 | 49.211 | 10.1736 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
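As a usage aid, a minimal sketch of querying this checkpoint through the `text2text-generation` pipeline is shown below. The example input is invented, since the card does not show what the feedback data looks like, and the small `max_new_tokens` mirrors the reported average generation length of roughly 10 tokens.

```python
from transformers import pipeline

# Load the fine-tuned T5 checkpoint from the Hub.
generator = pipeline("text2text-generation", model="phdreg/t5-small-finetuned-feedback")

# Invented example input -- the card does not document the expected input format.
feedback = "The delivery arrived two days late and the box was damaged, but support replaced it quickly."
result = generator(feedback, max_new_tokens=20)
print(result[0]["generated_text"])
```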
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "t5-small", "model-index": [{"name": "t5-small-finetuned-feedback", "results": []}]}
phdreg/t5-small-finetuned-feedback
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:t5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T06:19:45+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
t5-small-finetuned-feedback =========================== This model is a fine-tuned version of t5-small on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.6145 * Rouge1: 51.2809 * Rouge2: 27.3229 * Rougel: 49.2287 * Rougelsum: 49.211 * Gen Len: 10.1736 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 12 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 12\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 12\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ 62, 112, 5, 44 ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 12\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # superrep-mail This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4050 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.1186 | 1.0 | 1 | 2.7996 | | 0.6556 | 1.3333 | 2 | 2.4050 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
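Because this repository holds a PEFT adapter rather than full model weights, inference means attaching the adapter to the `mistralai/Mistral-7B-Instruct-v0.2` base model. A minimal sketch is shown below, assuming the repo stores a standard PEFT adapter config; the prompt is illustrative and not part of the card.

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base model named in the adapter config (mistralai/Mistral-7B-Instruct-v0.2)
# and attaches the fine-tuned adapter weights on top of it.
model = AutoPeftModelForCausalLM.from_pretrained(
    "GeekRoom/superrep-mail",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

# Illustrative prompt -- the card does not document the training prompt format.
inputs = tokenizer(
    "Draft a short reply to a customer asking about delivery times.",
    return_tensors="pt",
).to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```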
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "superrep-mail", "results": []}]}
GeekRoom/superrep-mail
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us" ]
null
2024-05-01T06:22:20+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us
superrep-mail ============= This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 2.4050 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 4 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.05 * num\_epochs: 2 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ 57, 155, 5, 52 ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
reinforcement-learning
ml-agents
# **ppo** Agent playing **Huggy**
  This is a trained model of a **ppo** agent playing **Huggy**
  using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

  ## Usage (with ML-Agents)
  The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

  We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
  - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
  browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
  - A *longer tutorial* to understand how ML-Agents works:
  https://huggingface.co/learn/deep-rl-course/unit5/introduction

  ### Resume the training
  ```bash
  mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
  ```

  ### Watch your Agent play
  You can watch your agent **playing directly in your browser**

  1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
  2. Step 1: Find your model_id: AlkQ/ppo-Huggy
  3. Step 2: Select your *.nn /*.onnx file
  4. Click on Watch the agent play 👀
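For local use outside the browser viewer, the trained policy (including the `.onnx` file) can be fetched with `huggingface_hub`; a minimal sketch, with the local directory name chosen arbitrarily:

```python
from huggingface_hub import snapshot_download

# Download the trained Huggy policy files (config + .onnx) for use with the
# Unity ML-Agents toolkit.
local_path = snapshot_download(repo_id="AlkQ/ppo-Huggy", local_dir="./downloads/ppo-Huggy")
print(local_path)
```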
{"library_name": "ml-agents", "tags": ["Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy"]}
AlkQ/ppo-Huggy
null
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
null
2024-05-01T06:22:52+00:00
[]
[]
TAGS #ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us
# ppo Agent playing Huggy This is a trained model of a ppo agent playing Huggy using the Unity ML-Agents Library. ## Usage (with ML-Agents) The Documentation: URL We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your browser: URL - A *longer tutorial* to understand how works ML-Agents: URL ### Resume the training ### Watch your Agent play You can watch your agent playing directly in your browser 1. If the environment is part of ML-Agents official environments, go to URL 2. Step 1: Find your model_id: AlkQ/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play
[ "# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: AlkQ/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
[ "TAGS\n#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us \n", "# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: AlkQ/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
[ 35, 199 ]
[ "TAGS\n#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us \n# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: AlkQ/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.0001_withdpo_4iters_bs256_533lr_iter_4 This model is a fine-tuned version of [ShenaoZ/0.0001_withdpo_4iters_bs256_531lr_iter_3](https://huggingface.co/ShenaoZ/0.0001_withdpo_4iters_bs256_531lr_iter_3) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
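The distributed-training settings listed above are internally consistent: the reported total train batch size of 256 is the per-device batch size times the number of devices times the gradient accumulation steps, as the quick check below shows.

```python
# Cross-check of the effective batch size reported in the card.
per_device_train_batch_size = 8
num_devices = 8
gradient_accumulation_steps = 4

total_train_batch_size = (
    per_device_train_batch_size * num_devices * gradient_accumulation_steps
)
assert total_train_batch_size == 256  # matches "total_train_batch_size: 256" above
print(total_train_batch_size)
```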
{"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZ/0.0001_withdpo_4iters_bs256_531lr_iter_3", "model-index": [{"name": "0.0001_withdpo_4iters_bs256_533lr_iter_4", "results": []}]}
ShenaoZ/0.0001_withdpo_4iters_bs256_533lr_iter_4
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:updated", "dataset:original", "base_model:ShenaoZ/0.0001_withdpo_4iters_bs256_531lr_iter_3", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T06:23:02+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ShenaoZ/0.0001_withdpo_4iters_bs256_531lr_iter_3 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# 0.0001_withdpo_4iters_bs256_533lr_iter_4 This model is a fine-tuned version of ShenaoZ/0.0001_withdpo_4iters_bs256_531lr_iter_3 on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
[ "# 0.0001_withdpo_4iters_bs256_533lr_iter_4\n\nThis model is a fine-tuned version of ShenaoZ/0.0001_withdpo_4iters_bs256_531lr_iter_3 on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ShenaoZ/0.0001_withdpo_4iters_bs256_531lr_iter_3 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# 0.0001_withdpo_4iters_bs256_533lr_iter_4\n\nThis model is a fine-tuned version of ShenaoZ/0.0001_withdpo_4iters_bs256_531lr_iter_3 on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]
[ 101, 74, 7, 9, 9, 4, 155, 5, 44 ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ShenaoZ/0.0001_withdpo_4iters_bs256_531lr_iter_3 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# 0.0001_withdpo_4iters_bs256_533lr_iter_4\n\nThis model is a fine-tuned version of ShenaoZ/0.0001_withdpo_4iters_bs256_531lr_iter_3 on the updated and the original datasets.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1### Training results### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # advsafe-spin-iter0 This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - total_train_batch_size: 32 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.37.0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
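A minimal generation sketch is given below. It assumes the checkpoint keeps the chat template of its `alignment-handbook/zephyr-7b-sft-full` base model (the repo is tagged `conversational`, but the card does not state the template explicitly), and the prompt is purely illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AmberYifan/advsafe-spin-iter0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Assumes the tokenizer ships a chat template inherited from the Zephyr SFT base.
messages = [{"role": "user", "content": "Summarize the benefits of unit testing in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```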
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "alignment-handbook/zephyr-7b-sft-full", "model-index": [{"name": "advsafe-spin-iter0", "results": []}]}
AmberYifan/advsafe-spin-iter0
null
[ "transformers", "tensorboard", "safetensors", "mistral", "text-generation", "generated_from_trainer", "conversational", "base_model:alignment-handbook/zephyr-7b-sft-full", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T06:24:02+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #mistral #text-generation #generated_from_trainer #conversational #base_model-alignment-handbook/zephyr-7b-sft-full #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# advsafe-spin-iter0 This model is a fine-tuned version of alignment-handbook/zephyr-7b-sft-full on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - total_train_batch_size: 32 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.37.0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
[ "# advsafe-spin-iter0\n\nThis model is a fine-tuned version of alignment-handbook/zephyr-7b-sft-full on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 4\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 4\n- total_train_batch_size: 32\n- total_eval_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 3", "### Training results", "### Framework versions\n\n- Transformers 4.37.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #mistral #text-generation #generated_from_trainer #conversational #base_model-alignment-handbook/zephyr-7b-sft-full #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# advsafe-spin-iter0\n\nThis model is a fine-tuned version of alignment-handbook/zephyr-7b-sft-full on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 4\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 4\n- total_train_batch_size: 32\n- total_eval_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 3", "### Training results", "### Framework versions\n\n- Transformers 4.37.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]
[ 74, 41, 7, 9, 9, 4, 145, 5, 44 ]
[ "TAGS\n#transformers #tensorboard #safetensors #mistral #text-generation #generated_from_trainer #conversational #base_model-alignment-handbook/zephyr-7b-sft-full #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# advsafe-spin-iter0\n\nThis model is a fine-tuned version of alignment-handbook/zephyr-7b-sft-full on the None dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 4\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 4\n- total_train_batch_size: 32\n- total_eval_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 3### Training results### Framework versions\n\n- Transformers 4.37.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) starchat2-15b-v0.1 - bnb 8bits - Model creator: https://huggingface.co/HuggingFaceH4/ - Original model: https://huggingface.co/HuggingFaceH4/starchat2-15b-v0.1/ Original model description: --- base_model: HuggingFaceH4/starchat2-15b-sft-v0.1 tags: - alignment-handbook - generated_from_trainer datasets: - HuggingFaceH4/ultrafeedback_binarized - HuggingFaceH4/orca_dpo_pairs model-index: - name: starchat2-15b-v0.1 results: [] --- <img src="https://huggingface.co/HuggingFaceH4/starchat2-15b-v0.1/resolve/main/model_logo.png" alt="StarChat2 15B Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # Model Card for StarChat2 15B StarChat is a series of language models that are trained to act as helpful coding assistants. StarChat2 is the latest model in the series, and is a fine-tuned version of [StarCoder2](https://huggingface.co/bigcode/starcoder2-15b) that was trained with SFT and DPO on a mix of synthetic datasets. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Model type:** A 16B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets. - **Language(s) (NLP):** Primarily English and 600+ programming languages. - **License:** BigCode Open RAIL-M v1 - **Finetuned from model:** [bigcode/starcoder2-15b](https://huggingface.co/bigcode/starcoder2-15b) ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/huggingface/alignment-handbook - **Demo:** https://huggingface.co/spaces/HuggingFaceH4/starchat2-playground ## Performance StarChat2 15B was trained to balance chat and programming capabilities. It achieves strong performance on chat benchmarks like [MT Bench](https://huggingface.co/spaces/lmsys/mt-bench) and [IFEval](https://arxiv.org/abs/2311.07911), as well as the canonical HumanEval benchmark for Python code completion. The scores reported below were obtained using the [LightEval](https://github.com/huggingface/lighteval) evaluation suite (commit `988959cb905df4baa050f82b4d499d46e8b537f2`) and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard. | Model | MT Bench | IFEval | HumanEval | |-------------------------------------------------------------------------------------------------|---------:|-------:|----------:| | [starchat2-15b-v0.1](https://huggingface.co/HuggingFaceH4/starchat2-15b-v0.1) | 7.66 | 35.12 | 71.34 | | [deepseek-coder-6.7b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct) | 4.17 | 14.23 | 80.48 | | [CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) | 6.80 | 43.44 | 50.60 | ## Intended uses & limitations The model was fine-tuned on a blend of chat, code, math, and reasoning datasets. As a result, the model can be used for chat and you can check out our [demo](https://huggingface.co/spaces/HuggingFaceH4/starchat2-playground) to test its coding capabilities. 
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers: ```python # pip install 'transformers @ git+https://github.com/huggingface/transformers.git@831bc25d8fdb85768402f772cf65cc3d7872b211' # pip install accelerate import torch from transformers import pipeline pipe = pipeline( "text-generation", model="HuggingFaceH4/starchat2-15b-v0.1", device_map="auto", torch_dtype=torch.bfloat16, ) messages = [ { "role": "system", "content": "You are StarChat2, an expert programming assistant", }, {"role": "user", "content": "Write a simple website in HTML. When a user clicks the button, it shows a random Chuck Norris joke."}, ] outputs = pipe( messages, max_new_tokens=512, do_sample=True, temperature=0.7, top_k=50, top_p=0.95, stop_sequence="<|im_end|>", ) print(outputs[0]["generated_text"][-1]["content"]) ``` ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> StarChat2 15B has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). Models trained primarily on code data will also have a more skewed demographic bias commensurate with the demographics of the GitHub community, for more on this see the [StarCoder2 dataset](https://huggingface.co/datasets/bigcode/the-stack-v2) Since the base model was pretrained on a large corpus of code, it may produce code snippets that are syntactically valid but semantically incorrect. For example, it may produce code that does not compile or that produces incorrect results. It may also produce code that is vulnerable to security exploits. We have observed the model also has a tendency to produce false URLs which should be carefully inspected before clicking. StarChat2 15B was fine-tuned from the base model [StarCoder2](https://huggingface.co/bigcode/starcoder2-15b), please refer to its model card's [Limitations Section](https://huggingface.co/bigcode/starcoder2-15b#limitations) for relevant information. In particular, the model was evaluated on some categories of gender biases, propensity for toxicity, and risk of suggesting code completions with known security flaws; these evaluations are reported in its [technical report](https://huggingface.co/papers/2402.19173). ## Training details This model is a fine-tuned version of [starchat2-15b-sft-v0.1](https://huggingface.co/HuggingFaceH4/starchat2-15b-sft-v0.1) on the HuggingFaceH4/ultrafeedback_binarized and the HuggingFaceH4/orca_dpo_pairs datasets. Check out the recipe in the [Alignment Handbook](https://github.com/huggingface/alignment-handbook) for more details. 
It achieves the following results on the evaluation set: - Loss: 0.4347 - Rewards/chosen: -0.9461 - Rewards/rejected: -2.7745 - Rewards/accuracies: 0.7658 - Rewards/margins: 1.8284 - Logps/rejected: -322.1934 - Logps/chosen: -316.1898 - Logits/rejected: -2.3817 - Logits/chosen: -2.3005 ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 2 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.717 | 0.17 | 100 | 0.6006 | -0.0924 | -0.2899 | 0.6329 | 0.1975 | -272.5022 | -299.1165 | -2.5313 | -2.4191 | | 0.6273 | 0.35 | 200 | 0.5160 | -0.3994 | -0.9461 | 0.6930 | 0.5467 | -285.6261 | -305.2568 | -2.5281 | -2.4278 | | 0.5538 | 0.52 | 300 | 0.4781 | -0.6589 | -1.5892 | 0.7247 | 0.9302 | -298.4870 | -310.4470 | -2.4996 | -2.4110 | | 0.5056 | 0.7 | 400 | 0.4594 | -0.8283 | -2.1332 | 0.7437 | 1.3050 | -309.3687 | -313.8344 | -2.4472 | -2.3644 | | 0.4983 | 0.87 | 500 | 0.4512 | -0.7758 | -2.2806 | 0.7468 | 1.5049 | -312.3167 | -312.7843 | -2.4223 | -2.3404 | | 0.4662 | 1.04 | 600 | 0.4431 | -0.7839 | -2.4016 | 0.7658 | 1.6177 | -314.7355 | -312.9465 | -2.4049 | -2.3215 | | 0.4411 | 1.22 | 700 | 0.4415 | -1.0090 | -2.7582 | 0.7690 | 1.7492 | -321.8679 | -317.4481 | -2.3840 | -2.3016 | | 0.471 | 1.39 | 800 | 0.4368 | -0.9617 | -2.7445 | 0.7690 | 1.7828 | -321.5930 | -316.5019 | -2.3809 | -2.2991 | | 0.4485 | 1.57 | 900 | 0.4351 | -0.9490 | -2.7594 | 0.7722 | 1.8103 | -321.8916 | -316.2497 | -2.3815 | -2.3004 | | 0.4411 | 1.74 | 1000 | 0.4348 | -0.9293 | -2.7469 | 0.7658 | 1.8176 | -321.6409 | -315.8547 | -2.3823 | -2.3011 | | 0.4499 | 1.92 | 1100 | 0.4348 | -0.9482 | -2.7767 | 0.7658 | 1.8285 | -322.2369 | -316.2320 | -2.3828 | -2.3012 | ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
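Since this repository stores the weights already quantized to bnb 8-bit, they are expected to load directly through Transformers with `bitsandbytes` installed; a minimal loading sketch follows. The assumption that the quantization settings are picked up automatically from the checkpoint config is mine, not a statement from the card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-8bits"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# Assumes the bnb 8-bit quantization settings stored with the checkpoint are applied
# automatically; requires `bitsandbytes` and `accelerate` to be installed.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# Rough footprint of the loaded 8-bit weights, in GB.
print(model.get_memory_footprint() / 1e9, "GB")
```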
{}
RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-8bits
null
[ "transformers", "safetensors", "starcoder2", "text-generation", "conversational", "arxiv:2311.07911", "arxiv:2402.19173", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-05-01T06:25:19+00:00
[ "2311.07911", "2402.19173" ]
[]
TAGS #transformers #safetensors #starcoder2 #text-generation #conversational #arxiv-2311.07911 #arxiv-2402.19173 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models starchat2-15b-v0.1 - bnb 8bits * Model creator: URL * Original model: URL Original model description: --------------------------- base\_model: HuggingFaceH4/starchat2-15b-sft-v0.1 tags: * alignment-handbook * generated\_from\_trainer datasets: * HuggingFaceH4/ultrafeedback\_binarized * HuggingFaceH4/orca\_dpo\_pairs model-index: * name: starchat2-15b-v0.1 results: [] --- <img src="URL alt="StarChat2 15B Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> Model Card for StarChat2 15B ============================ StarChat is a series of language models that are trained to act as helpful coding assistants. StarChat2 is the latest model in the series, and is a fine-tuned version of StarCoder2 that was trained with SFT and DPO on a mix of synthetic datasets. Model Details ------------- ### Model Description * Model type: A 16B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets. * Language(s) (NLP): Primarily English and 600+ programming languages. * License: BigCode Open RAIL-M v1 * Finetuned from model: bigcode/starcoder2-15b ### Model Sources * Repository: URL * Demo: URL Performance ----------- StarChat2 15B was trained to balance chat and programming capabilities. It achieves strong performance on chat benchmarks like MT Bench and IFEval, as well as the canonical HumanEval benchmark for Python code completion. The scores reported below were obtained using the LightEval evaluation suite (commit '988959cb905df4baa050f82b4d499d46e8b537f2') and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard. Intended uses & limitations --------------------------- The model was fine-tuned on a blend of chat, code, math, and reasoning datasets. As a result, the model can be used for chat and you can check out our demo to test its coding capabilities. Here's how you can run the model using the 'pipeline()' function from Transformers: Bias, Risks, and Limitations ---------------------------- StarChat2 15B has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). Models trained primarily on code data will also have a more skewed demographic bias commensurate with the demographics of the GitHub community, for more on this see the StarCoder2 dataset Since the base model was pretrained on a large corpus of code, it may produce code snippets that are syntactically valid but semantically incorrect. For example, it may produce code that does not compile or that produces incorrect results. It may also produce code that is vulnerable to security exploits. We have observed the model also has a tendency to produce false URLs which should be carefully inspected before clicking. StarChat2 15B was fine-tuned from the base model StarCoder2, please refer to its model card's Limitations Section for relevant information. In particular, the model was evaluated on some categories of gender biases, propensity for toxicity, and risk of suggesting code completions with known security flaws; these evaluations are reported in its technical report. 
Training details ---------------- This model is a fine-tuned version of starchat2-15b-sft-v0.1 on the HuggingFaceH4/ultrafeedback\_binarized and the HuggingFaceH4/orca\_dpo\_pairs datasets. Check out the recipe in the Alignment Handbook for more details. It achieves the following results on the evaluation set: * Loss: 0.4347 * Rewards/chosen: -0.9461 * Rewards/rejected: -2.7745 * Rewards/accuracies: 0.7658 * Rewards/margins: 1.8284 * Logps/rejected: -322.1934 * Logps/chosen: -316.1898 * Logits/rejected: -2.3817 * Logits/chosen: -2.3005 Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-07 * train\_batch\_size: 2 * eval\_batch\_size: 4 * seed: 42 * distributed\_type: multi-GPU * num\_devices: 8 * gradient\_accumulation\_steps: 8 * total\_train\_batch\_size: 128 * total\_eval\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.39.0.dev0 * Pytorch 2.1.2+cu121 * Datasets 2.16.1 * Tokenizers 0.15.1
[ "### Model Description\n\n\n* Model type: A 16B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.\n* Language(s) (NLP): Primarily English and 600+ programming languages.\n* License: BigCode Open RAIL-M v1\n* Finetuned from model: bigcode/starcoder2-15b", "### Model Sources\n\n\n* Repository: URL\n* Demo: URL\n\n\nPerformance\n-----------\n\n\nStarChat2 15B was trained to balance chat and programming capabilities. It achieves strong performance on chat benchmarks like MT Bench and IFEval, as well as the canonical HumanEval benchmark for Python code completion. The scores reported below were obtained using the LightEval evaluation suite (commit '988959cb905df4baa050f82b4d499d46e8b537f2') and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard.\n\n\n\nIntended uses & limitations\n---------------------------\n\n\nThe model was fine-tuned on a blend of chat, code, math, and reasoning datasets. As a result, the model can be used for chat and you can check out our demo to test its coding capabilities.\n\n\nHere's how you can run the model using the 'pipeline()' function from Transformers:\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nStarChat2 15B has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).\nModels trained primarily on code data will also have a more skewed demographic bias commensurate with the demographics of the GitHub community, for more on this see the StarCoder2 dataset\n\n\nSince the base model was pretrained on a large corpus of code, it may produce code snippets that are syntactically valid but semantically incorrect.\nFor example, it may produce code that does not compile or that produces incorrect results. \n\nIt may also produce code that is vulnerable to security exploits. \n\nWe have observed the model also has a tendency to produce false URLs which should be carefully inspected before clicking.\n\n\nStarChat2 15B was fine-tuned from the base model StarCoder2, please refer to its model card's Limitations Section for relevant information.\nIn particular, the model was evaluated on some categories of gender biases, propensity for toxicity, and risk of suggesting code completions with known security flaws; these evaluations are reported in its technical report.\n\n\nTraining details\n----------------\n\n\nThis model is a fine-tuned version of starchat2-15b-sft-v0.1 on the HuggingFaceH4/ultrafeedback\\_binarized and the HuggingFaceH4/orca\\_dpo\\_pairs datasets. 
Check out the recipe in the Alignment Handbook for more details.\n\n\nIt achieves the following results on the evaluation set:\n\n\n* Loss: 0.4347\n* Rewards/chosen: -0.9461\n* Rewards/rejected: -2.7745\n* Rewards/accuracies: 0.7658\n* Rewards/margins: 1.8284\n* Logps/rejected: -322.1934\n* Logps/chosen: -316.1898\n* Logits/rejected: -2.3817\n* Logits/chosen: -2.3005\n\n\nTraining procedure\n------------------", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-07\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.1" ]
[ "TAGS\n#transformers #safetensors #starcoder2 #text-generation #conversational #arxiv-2311.07911 #arxiv-2402.19173 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n", "### Model Description\n\n\n* Model type: A 16B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.\n* Language(s) (NLP): Primarily English and 600+ programming languages.\n* License: BigCode Open RAIL-M v1\n* Finetuned from model: bigcode/starcoder2-15b", "### Model Sources\n\n\n* Repository: URL\n* Demo: URL\n\n\nPerformance\n-----------\n\n\nStarChat2 15B was trained to balance chat and programming capabilities. It achieves strong performance on chat benchmarks like MT Bench and IFEval, as well as the canonical HumanEval benchmark for Python code completion. The scores reported below were obtained using the LightEval evaluation suite (commit '988959cb905df4baa050f82b4d499d46e8b537f2') and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard.\n\n\n\nIntended uses & limitations\n---------------------------\n\n\nThe model was fine-tuned on a blend of chat, code, math, and reasoning datasets. As a result, the model can be used for chat and you can check out our demo to test its coding capabilities.\n\n\nHere's how you can run the model using the 'pipeline()' function from Transformers:\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nStarChat2 15B has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).\nModels trained primarily on code data will also have a more skewed demographic bias commensurate with the demographics of the GitHub community, for more on this see the StarCoder2 dataset\n\n\nSince the base model was pretrained on a large corpus of code, it may produce code snippets that are syntactically valid but semantically incorrect.\nFor example, it may produce code that does not compile or that produces incorrect results. \n\nIt may also produce code that is vulnerable to security exploits. \n\nWe have observed the model also has a tendency to produce false URLs which should be carefully inspected before clicking.\n\n\nStarChat2 15B was fine-tuned from the base model StarCoder2, please refer to its model card's Limitations Section for relevant information.\nIn particular, the model was evaluated on some categories of gender biases, propensity for toxicity, and risk of suggesting code completions with known security flaws; these evaluations are reported in its technical report.\n\n\nTraining details\n----------------\n\n\nThis model is a fine-tuned version of starchat2-15b-sft-v0.1 on the HuggingFaceH4/ultrafeedback\\_binarized and the HuggingFaceH4/orca\\_dpo\\_pairs datasets. 
Check out the recipe in the Alignment Handbook for more details.\n\n\nIt achieves the following results on the evaluation set:\n\n\n* Loss: 0.4347\n* Rewards/chosen: -0.9461\n* Rewards/rejected: -2.7745\n* Rewards/accuracies: 0.7658\n* Rewards/margins: 1.8284\n* Logps/rejected: -322.1934\n* Logps/chosen: -316.1898\n* Logits/rejected: -2.3817\n* Logits/chosen: -2.3005\n\n\nTraining procedure\n------------------", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-07\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.1" ]
[ 64, 79, 765, 176, 5, 47 ]
[ "TAGS\n#transformers #safetensors #starcoder2 #text-generation #conversational #arxiv-2311.07911 #arxiv-2402.19173 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n### Model Description\n\n\n* Model type: A 16B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.\n* Language(s) (NLP): Primarily English and 600+ programming languages.\n* License: BigCode Open RAIL-M v1\n* Finetuned from model: bigcode/starcoder2-15b### Model Sources\n\n\n* Repository: URL\n* Demo: URL\n\n\nPerformance\n-----------\n\n\nStarChat2 15B was trained to balance chat and programming capabilities. It achieves strong performance on chat benchmarks like MT Bench and IFEval, as well as the canonical HumanEval benchmark for Python code completion. The scores reported below were obtained using the LightEval evaluation suite (commit '988959cb905df4baa050f82b4d499d46e8b537f2') and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard.\n\n\n\nIntended uses & limitations\n---------------------------\n\n\nThe model was fine-tuned on a blend of chat, code, math, and reasoning datasets. As a result, the model can be used for chat and you can check out our demo to test its coding capabilities.\n\n\nHere's how you can run the model using the 'pipeline()' function from Transformers:\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nStarChat2 15B has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).\nModels trained primarily on code data will also have a more skewed demographic bias commensurate with the demographics of the GitHub community, for more on this see the StarCoder2 dataset\n\n\nSince the base model was pretrained on a large corpus of code, it may produce code snippets that are syntactically valid but semantically incorrect.\nFor example, it may produce code that does not compile or that produces incorrect results. \n\nIt may also produce code that is vulnerable to security exploits. \n\nWe have observed the model also has a tendency to produce false URLs which should be carefully inspected before clicking.\n\n\nStarChat2 15B was fine-tuned from the base model StarCoder2, please refer to its model card's Limitations Section for relevant information.\nIn particular, the model was evaluated on some categories of gender biases, propensity for toxicity, and risk of suggesting code completions with known security flaws; these evaluations are reported in its technical report.\n\n\nTraining details\n----------------\n\n\nThis model is a fine-tuned version of starchat2-15b-sft-v0.1 on the HuggingFaceH4/ultrafeedback\\_binarized and the HuggingFaceH4/orca\\_dpo\\_pairs datasets. 
Check out the recipe in the Alignment Handbook for more details.\n\n\nIt achieves the following results on the evaluation set:\n\n\n* Loss: 0.4347\n* Rewards/chosen: -0.9461\n* Rewards/rejected: -2.7745\n* Rewards/accuracies: 0.7658\n* Rewards/margins: 1.8284\n* Logps/rejected: -322.1934\n* Logps/chosen: -316.1898\n* Logits/rejected: -2.3817\n* Logits/chosen: -2.3005\n\n\nTraining procedure\n------------------### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-07\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 2### Training results### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.1" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) starchat2-15b-sft-v0.1 - bnb 4bits - Model creator: https://huggingface.co/HuggingFaceH4/ - Original model: https://huggingface.co/HuggingFaceH4/starchat2-15b-sft-v0.1/ Original model description: --- license: bigcode-openrail-m base_model: bigcode/starcoder2-15b tags: - alignment-handbook - generated_from_trainer datasets: - HuggingFaceH4/airoboros-3.2 - HuggingFaceH4/Code-Feedback - HuggingFaceH4/orca-math-word-problems-200k - HuggingFaceH4/SystemChat - HuggingFaceH4/capybara model-index: - name: starcoder2-15b-sft-v5.0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Model Card for starchat2-15b-sft-v0.1 This model is a fine-tuned version of [bigcode/starcoder2-15b](https://huggingface.co/bigcode/starcoder2-15b) on the HuggingFaceH4/airoboros-3.2, the HuggingFaceH4/Code-Feedback, the HuggingFaceH4/orca-math-word-problems-200k, the HuggingFaceH4/SystemChat and the HuggingFaceH4/capybara datasets. It achieves the following results on the evaluation set: - Loss: 0.6614 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 16 - total_train_batch_size: 128 - total_eval_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.6422 | 1.0 | 910 | 0.6910 | | 0.5701 | 2.0 | 1820 | 0.6639 | | 0.5227 | 3.0 | 2730 | 0.6614 | ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
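As a rough back-of-the-envelope illustration of why 4-bit quantization matters for a ~15B-parameter model, the figures below are approximations only; the ~10% quantization overhead is an assumption, not a measured value.

```python
# Rough memory-footprint estimate for StarCoder2-15B weights (approximate numbers).
params = 15e9

bytes_bf16 = params * 2            # 2 bytes per parameter in bfloat16
bytes_4bit = params * 0.5 * 1.1    # ~0.5 byte per parameter plus ~10% overhead (assumed)

print(f"bfloat16 weights: ~{bytes_bf16 / 1e9:.0f} GB")   # ~30 GB
print(f"bnb 4-bit weights: ~{bytes_4bit / 1e9:.0f} GB")  # ~8 GB
```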
{}
RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-4bits
null
[ "transformers", "safetensors", "starcoder2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-05-01T06:27:08+00:00
[]
[]
TAGS #transformers #safetensors #starcoder2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models starchat2-15b-sft-v0.1 - bnb 4bits * Model creator: URL * Original model: URL Original model description: --------------------------- license: bigcode-openrail-m base\_model: bigcode/starcoder2-15b tags: * alignment-handbook * generated\_from\_trainer datasets: * HuggingFaceH4/airoboros-3.2 * HuggingFaceH4/Code-Feedback * HuggingFaceH4/orca-math-word-problems-200k * HuggingFaceH4/SystemChat * HuggingFaceH4/capybara model-index: * name: starcoder2-15b-sft-v5.0 results: [] --- Model Card for starchat2-15b-sft-v0.1 ===================================== This model is a fine-tuned version of bigcode/starcoder2-15b on the HuggingFaceH4/airoboros-3.2, the HuggingFaceH4/Code-Feedback, the HuggingFaceH4/orca-math-word-problems-200k, the HuggingFaceH4/SystemChat and the HuggingFaceH4/capybara datasets. It achieves the following results on the evaluation set: * Loss: 0.6614 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * distributed\_type: multi-GPU * num\_devices: 16 * total\_train\_batch\_size: 128 * total\_eval\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.39.0.dev0 * Pytorch 2.1.2+cu121 * Datasets 2.16.1 * Tokenizers 0.15.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 16\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.1" ]
[ "TAGS\n#transformers #safetensors #starcoder2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 16\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.1" ]
[ 43, 166, 5, 47 ]
[ "TAGS\n#transformers #safetensors #starcoder2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 16\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3### Training results### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.1" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-stock-tweet-sentiment-analysis This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5815 - Accuracy: 0.781 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7065 | 1.0 | 1000 | 0.5816 | 0.7628 | | 0.4915 | 2.0 | 2000 | 0.5666 | 0.7762 | | 0.3766 | 3.0 | 3000 | 0.5815 | 0.781 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
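The card lists only training metrics, so the following is an illustrative sketch of inference with the standard text-classification pipeline; the label names it returns depend on the unspecified training config and may be generic ids such as LABEL_0/LABEL_1/LABEL_2 rather than sentiment words.

```python
# Illustrative only: query the fine-tuned DistilBERT stock-tweet sentiment classifier.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="elitenandu/distilbert-stock-tweet-sentiment-analysis",
)
print(classifier("Strong earnings beat, expecting the stock to rally tomorrow."))
```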
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "distilbert-stock-tweet-sentiment-analysis", "results": []}]}
elitenandu/distilbert-stock-tweet-sentiment-analysis
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-01T06:27:20+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
distilbert-stock-tweet-sentiment-analysis ========================================= This model is a fine-tuned version of distilbert-base-uncased on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.5815 * Accuracy: 0.781 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ 59, 101, 5, 44 ]
[ "TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3### Training results### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-1 This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 1 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-1b", "model-index": [{"name": "robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-1", "results": []}]}
AlignmentResearch/robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-1
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-1b", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T06:27:44+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-1b #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-1 This model is a fine-tuned version of EleutherAI/pythia-1b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 1 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-1\n\nThis model is a fine-tuned version of EleutherAI/pythia-1b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 1\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-1b #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-1\n\nThis model is a fine-tuned version of EleutherAI/pythia-1b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 1\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ 69, 56, 7, 9, 9, 4, 93, 5, 40 ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-1b #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-1\n\nThis model is a fine-tuned version of EleutherAI/pythia-1b on an unknown dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 1\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1### Training results### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - embracellm/sushi16_LoRA <Gallery /> ## Model description These are embracellm/sushi16_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of Salmon Philly Salad Roll to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](embracellm/sushi16_LoRA/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
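The card leaves its "How to use" snippet as a TODO. A minimal sketch of what it could look like, assuming the standard diffusers LoRA-loading API, the repo id shown above, and illustrative sampler settings:

```python
# Sketch: load SDXL base, attach the sushi16_LoRA weights, and generate with the
# trigger phrase from the card. The settings below are assumptions, not values
# published by the author.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")
pipe.load_lora_weights("embracellm/sushi16_LoRA")

image = pipe(
    prompt="a photo of Salmon Philly Salad Roll on a ceramic plate",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("salmon_philly_salad_roll.png")
```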
{"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "text-to-image", "diffusers-training", "diffusers", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "text-to-image", "diffusers-training", "diffusers", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "a photo of Salmon Philly Salad Roll ", "widget": []}
embracellm/sushi16_LoRA
null
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
null
2024-05-01T06:34:28+00:00
[]
[]
TAGS #diffusers #tensorboard #text-to-image #diffusers-training #dora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us
# SDXL LoRA DreamBooth - embracellm/sushi16_LoRA <Gallery /> ## Model description These are embracellm/sushi16_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using DreamBooth. LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of Salmon Philly Salad Roll to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. Download them in the Files & versions tab. ## Intended uses & limitations #### How to use #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
[ "# SDXL LoRA DreamBooth - embracellm/sushi16_LoRA\n\n<Gallery />", "## Model description\n\nThese are embracellm/sushi16_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.", "## Trigger words\n\nYou should use a photo of Salmon Philly Salad Roll to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
[ "TAGS\n#diffusers #tensorboard #text-to-image #diffusers-training #dora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n", "# SDXL LoRA DreamBooth - embracellm/sushi16_LoRA\n\n<Gallery />", "## Model description\n\nThese are embracellm/sushi16_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.", "## Trigger words\n\nYou should use a photo of Salmon Philly Salad Roll to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
[ 72, 24, 84, 21, 25, 6, 7, 23, 17 ]
[ "TAGS\n#diffusers #tensorboard #text-to-image #diffusers-training #dora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n# SDXL LoRA DreamBooth - embracellm/sushi16_LoRA\n\n<Gallery />## Model description\n\nThese are embracellm/sushi16_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.## Trigger words\n\nYou should use a photo of Salmon Philly Salad Roll to trigger the image generation.## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.## Intended uses & limitations#### How to use#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]## Training details\n\n[TODO: describe the data used to train the model]" ]
text-generation
transformers
<img src="./ninjalogo.svg" width="100%" height="20%" alt=""> - [Ninja-v1](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1) のGGUF版 # Our Models for GGUF - [Vecteus-GGUF](https://huggingface.co/Local-Novel-LLM-project/Vecteus-v1-gguf) - [Ninja-v1-GGUF](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-GGUF) - [Ninja-v1-NSFW-GGUF](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-NSFW-GGUF) - [Ninja-v1-128k-GGUF](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-128k-GGUF) - [Ninja-v1-NSFW-128k-GGUF](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-NSFW-128k-GGUF)
{"language": ["en", "ja"], "license": "apache-2.0", "library_name": "transformers", "tags": ["finetuned"], "pipeline_tag": "text-generation"}
Local-Novel-LLM-project/Ninja-v1-GGUF
null
[ "transformers", "gguf", "finetuned", "text-generation", "en", "ja", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T06:35:09+00:00
[]
[ "en", "ja" ]
TAGS #transformers #gguf #finetuned #text-generation #en #ja #license-apache-2.0 #endpoints_compatible #region-us
<img src="./URL" width="100%" height="20%" alt=""> - Ninja-v1 のGGUF版 # Our Models for GGUF - Vecteus-GGUF - Ninja-v1-GGUF - Ninja-v1-NSFW-GGUF - Ninja-v1-128k-GGUF - Ninja-v1-NSFW-128k-GGUF
[ "# Our Models for GGUF\n\n- Vecteus-GGUF\n \n- Ninja-v1-GGUF \n\n- Ninja-v1-NSFW-GGUF\n\n- Ninja-v1-128k-GGUF\n \n- Ninja-v1-NSFW-128k-GGUF" ]
[ "TAGS\n#transformers #gguf #finetuned #text-generation #en #ja #license-apache-2.0 #endpoints_compatible #region-us \n", "# Our Models for GGUF\n\n- Vecteus-GGUF\n \n- Ninja-v1-GGUF \n\n- Ninja-v1-NSFW-GGUF\n\n- Ninja-v1-128k-GGUF\n \n- Ninja-v1-NSFW-128k-GGUF" ]
[ 36, 65 ]
[ "TAGS\n#transformers #gguf #finetuned #text-generation #en #ja #license-apache-2.0 #endpoints_compatible #region-us \n# Our Models for GGUF\n\n- Vecteus-GGUF\n \n- Ninja-v1-GGUF \n\n- Ninja-v1-NSFW-GGUF\n\n- Ninja-v1-128k-GGUF\n \n- Ninja-v1-NSFW-128k-GGUF" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phi-2-intentmodel This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
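The card gives no usage example for the adapter. One plausible sketch, assuming the standard PEFT auto-loading path, a tokenizer taken from the phi-2 base model, and the base model's "Instruct/Output" prompt convention (the fine-tune's actual expected format is not documented):

```python
# Hedged sketch: attach the PEFT adapter to its microsoft/phi-2 base model and generate.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "Mohit-Rai-402/phi-2-intentmodel",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

prompt = "Instruct: Classify the intent of this message: 'Can you cancel my order?'\nOutput:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```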
{"license": "mit", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "microsoft/phi-2", "model-index": [{"name": "phi-2-intentmodel", "results": []}]}
Mohit-Rai-402/phi-2-intentmodel
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:microsoft/phi-2", "license:mit", "region:us" ]
null
2024-05-01T06:37:14+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us
# phi-2-intentmodel This model is a fine-tuned version of microsoft/phi-2 on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
[ "# phi-2-intentmodel\n\nThis model is a fine-tuned version of microsoft/phi-2 on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us \n", "# phi-2-intentmodel\n\nThis model is a fine-tuned version of microsoft/phi-2 on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
[ 41, 28, 7, 9, 9, 4, 93, 5, 52 ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us \n# phi-2-intentmodel\n\nThis model is a fine-tuned version of microsoft/phi-2 on an unknown dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3### Training results### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opt-125m-finetuned-rte This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0060 - Accuracy: 0.4729 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 1 | 0.9768 | 0.4765 | | No log | 2.0 | 2 | 0.9663 | 0.4729 | | No log | 3.0 | 3 | 0.9564 | 0.4729 | | No log | 4.0 | 4 | 0.9482 | 0.4729 | | No log | 5.0 | 5 | 0.9415 | 0.4693 | | No log | 6.0 | 6 | 0.9357 | 0.4693 | | No log | 7.0 | 7 | 0.9311 | 0.4693 | | No log | 8.0 | 8 | 0.9275 | 0.4693 | | No log | 9.0 | 9 | 0.9251 | 0.4693 | | No log | 10.0 | 10 | 0.9239 | 0.4693 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "other", "library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "facebook/opt-125m", "model-index": [{"name": "opt-125m-finetuned-rte", "results": []}]}
elliottfitzgerald/opt-125m-finetuned-rte
null
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:facebook/opt-125m", "license:other", "region:us" ]
null
2024-05-01T06:37:46+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #generated_from_trainer #base_model-facebook/opt-125m #license-other #region-us
opt-125m-finetuned-rte ====================== This model is a fine-tuned version of facebook/opt-125m on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.0060 * Accuracy: 0.4729 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 10 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-facebook/opt-125m #license-other #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ 36, 101, 5, 52 ]
[ "TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-facebook/opt-125m #license-other #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10### Training results### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cnn_news_summary_model_trained_on_reduced_data This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6040 - Rouge1: 0.2177 - Rouge2: 0.0941 - Rougel: 0.1839 - Rougelsum: 0.184 - Generated Length: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Generated Length | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------------:| | No log | 1.0 | 431 | 1.6239 | 0.2174 | 0.0935 | 0.183 | 0.183 | 19.0 | | 1.92 | 2.0 | 862 | 1.6075 | 0.2168 | 0.0933 | 0.1828 | 0.1829 | 19.0 | | 1.8221 | 3.0 | 1293 | 1.6040 | 0.2177 | 0.0941 | 0.1839 | 0.184 | 19.0 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
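An illustrative way to call the checkpoint with the summarization pipeline; whether this fine-tune still expects T5's "summarize: " prefix is an assumption, and the short generated length (19 tokens) reported above suggests keeping max_length small.

```python
# Illustrative only: summarize a news snippet with the fine-tuned t5-small checkpoint.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="minhaz21/cnn_news_summary_model_trained_on_reduced_data",
)
article = (
    "summarize: The central bank left interest rates unchanged on Wednesday, "
    "citing cooling inflation and a resilient labour market, but signalled that "
    "rate cuts remain possible later in the year."
)
print(summarizer(article, max_length=30, min_length=5, do_sample=False))
```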
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "t5-small", "model-index": [{"name": "cnn_news_summary_model_trained_on_reduced_data", "results": []}]}
minhaz21/cnn_news_summary_model_trained_on_reduced_data
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:t5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T06:38:52+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
cnn\_news\_summary\_model\_trained\_on\_reduced\_data ===================================================== This model is a fine-tuned version of t5-small on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.6040 * Rouge1: 0.2177 * Rouge2: 0.0941 * Rougel: 0.1839 * Rougelsum: 0.184 * Generated Length: 19.0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ 62, 112, 5, 44 ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
transformers
# DcardStylePost-7B-Instruct-v1_0 DcardStylePost-7B-Instruct-v1_0 is a experimental finetuned from Breeze-7B-Instruct-v1_0 ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "JiunYi/DcardStylePost-7B-Instruct-v1_0" messages = [{"role": "user", "content": "【任務】:\n你是口碑行銷公司的資深寫手,現在要撰寫一篇Dcard 『美妝版』的口碑文。\n麻煩熟讀下列 【參考資料】將商品或品牌與生活情境的融合,並盡量貼近生活口吻,以不要被發現是業配文為重要目標。\n請依照【口碑切角】為文章的故事主軸撰寫整篇文章,請參考並依照【規範】角色口吻來撰寫文章,融入角色情境來完成整篇故事撰寫,請注重人物角色特色及限制。\n\n-\n\n【規範】:\n\n1.需產出文章標題\n2.請以第一人稱方式撰寫文章\n3.請記住現在是時間是西元 2023 年\n4.Please write in zh-TW language .\n5.降低口碑文業配感\n6.22~24歲\n7.撰寫角色生理性別為女性\n8.乾性皮膚\n9.注重皮膚保溼\n10.常常待在室內\n-\n\n【參考資料】\n\n文森先生|小巧濕潤澤噴霧\n主打母菊花、金縷梅保濕舒緩,加上廣告蠻常看到的就買來試試\n是「水嫩」的保濕感,蠻清爽、吸收完皮膚會嫩嫩的!\n很像剛敷完面膜的感覺,可以當妝前趕時間的速效面膜(?\n就可以讓妝效蠻服貼持久的!\n搭配持久型的粉底液使用,保濕服貼的效果更明顯!\n添加金縷梅、母菊花、冰河水,舒緩保濕\n幫助水油平衡,使肌膚活力透亮\n讓乾肌現水肌 臉部油光變水光\r\n無添加:酒精/香精/油脂\n想怎麼用就怎麼用\n\n-\n\n【口碑切角】\n最近因為換季臉很乾燥,所以很喜歡用保濕噴霧(涼爽又保濕)手邊有幾款,但可以明顯感覺得到有些噴完很雞肋、有些真的可以保濕,上網爬文才發現各品牌的差異,於是想來做一個認真的分析(包含噴霧細緻度、酸鹼度、粘膩程度、吸收度等)"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=512, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
{"language": ["zh"], "license": "gpl-3.0", "tags": ["art", "marketing", "llama-factory"], "metrics": ["bleu"], "base_model": "MediaTek-Research/Breeze-7B-Instruct-v1_0"}
JiunYi/DcardStylePost-7B-Instruct-v1_0
null
[ "transformers", "safetensors", "mistral", "text-generation", "art", "marketing", "llama-factory", "conversational", "zh", "base_model:MediaTek-Research/Breeze-7B-Instruct-v1_0", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T06:39:13+00:00
[]
[ "zh" ]
TAGS #transformers #safetensors #mistral #text-generation #art #marketing #llama-factory #conversational #zh #base_model-MediaTek-Research/Breeze-7B-Instruct-v1_0 #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# DcardStylePost-7B-Instruct-v1_0 DcardStylePost-7B-Instruct-v1_0 is an experimental finetune of Breeze-7B-Instruct-v1_0 ## Usage
[ "# DcardStylePost-7B-Instruct-v1_0\n\nDcardStylePost-7B-Instruct-v1_0 is an experimental finetune of Breeze-7B-Instruct-v1_0", "## Usage" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #art #marketing #llama-factory #conversational #zh #base_model-MediaTek-Research/Breeze-7B-Instruct-v1_0 #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# DcardStylePost-7B-Instruct-v1_0\n\nDcardStylePost-7B-Instruct-v1_0 is an experimental finetune of Breeze-7B-Instruct-v1_0", "## Usage" ]
[ 80, 50, 3 ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #art #marketing #llama-factory #conversational #zh #base_model-MediaTek-Research/Breeze-7B-Instruct-v1_0 #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# DcardStylePost-7B-Instruct-v1_0\n\nDcardStylePost-7B-Instruct-v1_0 is a experimental finetuned from Breeze-7B-Instruct-v1_0## Usage" ]
text-generation
transformers
<img src="./veteus_logo.svg" width="100%" height="20%" alt=""> - Vecteus-v1のGGUF版 # Our Models for GGUF - [Vecteus](https://huggingface.co/Local-Novel-LLM-project/Vecteus-v1-gguf) - [Ninja-v1](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-GGUF) - [Ninja-v1-NSFW](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-NSFW-GGUF)
{"language": ["en", "ja"], "license": "apache-2.0", "library_name": "transformers", "tags": ["finetuned"], "pipeline_tag": "text-generation"}
Local-Novel-LLM-project/Vecteus-v1-gguf
null
[ "transformers", "gguf", "finetuned", "text-generation", "en", "ja", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T06:39:28+00:00
[]
[ "en", "ja" ]
TAGS #transformers #gguf #finetuned #text-generation #en #ja #license-apache-2.0 #endpoints_compatible #region-us
<img src="./veteus_logo.svg" width="100%" height="20%" alt=""> - Vecteus-v1のGGUF版 # Our Models for GGUF - Vecteus - Ninja-v1 - Ninja-v1-NSFW
[ "# Our Models for GGUF\n\n\n- Vecteus\n \n- Ninja-v1 \n\n- Ninja-v1-NSFW" ]
[ "TAGS\n#transformers #gguf #finetuned #text-generation #en #ja #license-apache-2.0 #endpoints_compatible #region-us \n", "# Our Models for GGUF\n\n\n- Vecteus\n \n- Ninja-v1 \n\n- Ninja-v1-NSFW" ]
[ 36, 25 ]
[ "TAGS\n#transformers #gguf #finetuned #text-generation #en #ja #license-apache-2.0 #endpoints_compatible #region-us \n# Our Models for GGUF\n\n\n- Vecteus\n \n- Ninja-v1 \n\n- Ninja-v1-NSFW" ]
text-to-audio
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fil_b32_le5_s8000 This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4039 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - training_steps: 8000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:--------:|:----:|:---------------:| | 0.632 | 11.1111 | 500 | 0.5323 | | 0.519 | 22.2222 | 1000 | 0.4494 | | 0.4816 | 33.3333 | 1500 | 0.4291 | | 0.481 | 44.4444 | 2000 | 0.4211 | | 0.4459 | 55.5556 | 2500 | 0.4139 | | 0.4484 | 66.6667 | 3000 | 0.4114 | | 0.4317 | 77.7778 | 3500 | 0.4081 | | 0.4301 | 88.8889 | 4000 | 0.4076 | | 0.4274 | 100.0 | 4500 | 0.4059 | | 0.4323 | 111.1111 | 5000 | 0.4062 | | 0.4189 | 122.2222 | 5500 | 0.4045 | | 0.4272 | 133.3333 | 6000 | 0.4059 | | 0.4219 | 144.4444 | 6500 | 0.4058 | | 0.4125 | 155.5556 | 7000 | 0.4049 | | 0.42 | 166.6667 | 7500 | 0.4046 | | 0.4145 | 177.7778 | 8000 | 0.4039 | ### Framework versions - Transformers 4.41.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
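The card omits an inference example. A hedged sketch using the usual SpeechT5 recipe follows; the HiFi-GAN vocoder and the CMU Arctic x-vector used as a speaker embedding are assumptions borrowed from the base model's documentation, not choices stated by the author.

```python
# Hedged sketch: Filipino text-to-speech with this SpeechT5 fine-tune.
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

repo = "mikhail-panzo/fil_b32_le5_s8000"
# If the repo does not ship processor files, fall back to "microsoft/speecht5_tts".
processor = SpeechT5Processor.from_pretrained(repo)
model = SpeechT5ForTextToSpeech.from_pretrained(repo)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Magandang umaga sa inyong lahat.", return_tensors="pt")

# Borrow a generic speaker embedding (x-vector) from the CMU Arctic set.
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
sf.write("sample.wav", speech.numpy(), samplerate=16000)
```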
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/speecht5_tts", "model-index": [{"name": "fil_b32_le5_s8000", "results": []}]}
mikhail-panzo/fil_b32_le5_s8000
null
[ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "generated_from_trainer", "base_model:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-05-01T06:40:30+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #speecht5 #text-to-audio #generated_from_trainer #base_model-microsoft/speecht5_tts #license-mit #endpoints_compatible #region-us
fil\_b32\_le5\_s8000 ==================== This model is a fine-tuned version of microsoft/speecht5\_tts on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.4039 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 32 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * training\_steps: 8000 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.41.0.dev0 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* training\\_steps: 8000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #speecht5 #text-to-audio #generated_from_trainer #base_model-microsoft/speecht5_tts #license-mit #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* training\\_steps: 8000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ 52, 127, 5, 47 ]
[ "TAGS\n#transformers #tensorboard #safetensors #speecht5 #text-to-audio #generated_from_trainer #base_model-microsoft/speecht5_tts #license-mit #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* training\\_steps: 8000\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) starchat2-15b-sft-v0.1 - bnb 8bits - Model creator: https://huggingface.co/HuggingFaceH4/ - Original model: https://huggingface.co/HuggingFaceH4/starchat2-15b-sft-v0.1/ Original model description: --- license: bigcode-openrail-m base_model: bigcode/starcoder2-15b tags: - alignment-handbook - generated_from_trainer datasets: - HuggingFaceH4/airoboros-3.2 - HuggingFaceH4/Code-Feedback - HuggingFaceH4/orca-math-word-problems-200k - HuggingFaceH4/SystemChat - HuggingFaceH4/capybara model-index: - name: starcoder2-15b-sft-v5.0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Model Card for starchat2-15b-sft-v0.1 This model is a fine-tuned version of [bigcode/starcoder2-15b](https://huggingface.co/bigcode/starcoder2-15b) on the HuggingFaceH4/airoboros-3.2, the HuggingFaceH4/Code-Feedback, the HuggingFaceH4/orca-math-word-problems-200k, the HuggingFaceH4/SystemChat and the HuggingFaceH4/capybara datasets. It achieves the following results on the evaluation set: - Loss: 0.6614 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 16 - total_train_batch_size: 128 - total_eval_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.6422 | 1.0 | 910 | 0.6910 | | 0.5701 | 2.0 | 1820 | 0.6639 | | 0.5227 | 3.0 | 2730 | 0.6614 | ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
{}
RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-8bits
null
[ "transformers", "safetensors", "starcoder2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-05-01T06:40:57+00:00
[]
[]
TAGS #transformers #safetensors #starcoder2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models starchat2-15b-sft-v0.1 - bnb 8bits * Model creator: URL * Original model: URL Original model description: --------------------------- license: bigcode-openrail-m base\_model: bigcode/starcoder2-15b tags: * alignment-handbook * generated\_from\_trainer datasets: * HuggingFaceH4/airoboros-3.2 * HuggingFaceH4/Code-Feedback * HuggingFaceH4/orca-math-word-problems-200k * HuggingFaceH4/SystemChat * HuggingFaceH4/capybara model-index: * name: starcoder2-15b-sft-v5.0 results: [] --- Model Card for starchat2-15b-sft-v0.1 ===================================== This model is a fine-tuned version of bigcode/starcoder2-15b on the HuggingFaceH4/airoboros-3.2, the HuggingFaceH4/Code-Feedback, the HuggingFaceH4/orca-math-word-problems-200k, the HuggingFaceH4/SystemChat and the HuggingFaceH4/capybara datasets. It achieves the following results on the evaluation set: * Loss: 0.6614 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * distributed\_type: multi-GPU * num\_devices: 16 * total\_train\_batch\_size: 128 * total\_eval\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.39.0.dev0 * Pytorch 2.1.2+cu121 * Datasets 2.16.1 * Tokenizers 0.15.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 16\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.1" ]
[ "TAGS\n#transformers #safetensors #starcoder2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 16\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.1" ]
[ 43, 166, 5, 47 ]
[ "TAGS\n#transformers #safetensors #starcoder2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 16\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3### Training results### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.1" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
jamesohe/casaudit3-4bit-p03-adapter
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-01T06:41:21+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 26, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opt-1.3b-finetuned-rte This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7779 - Accuracy: 0.4477 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 1 | 0.7844 | 0.4765 | | No log | 2.0 | 2 | 0.7837 | 0.4801 | | No log | 3.0 | 3 | 0.7822 | 0.4910 | | No log | 4.0 | 4 | 0.7828 | 0.4838 | | No log | 5.0 | 5 | 0.7828 | 0.4838 | | No log | 6.0 | 6 | 0.7822 | 0.4838 | | No log | 7.0 | 7 | 0.7820 | 0.4801 | | No log | 8.0 | 8 | 0.7817 | 0.4838 | | No log | 9.0 | 9 | 0.7814 | 0.4874 | | No log | 10.0 | 10 | 0.7815 | 0.4874 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
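The card above reports RTE accuracy but includes no usage code. As a rough sketch of how the adapter might be loaded for entailment-style classification, assuming it was trained with a two-label classification head on top of `facebook/opt-1.3b` (the head setup is an assumption, not documented in the card):

```python
# Hedged sketch: loading the PEFT adapter for RTE-style classification.
# The two-label head and the example premise/hypothesis are assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base = AutoModelForSequenceClassification.from_pretrained(
    "facebook/opt-1.3b", num_labels=2
)
model = PeftModel.from_pretrained(base, "elliottfitzgerald/opt-1.3b-finetuned-rte")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")

premise = "The cat sat on the mat."
hypothesis = "An animal is on the mat."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1))  # predicted class index
```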
{"license": "other", "library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "facebook/opt-1.3b", "model-index": [{"name": "opt-1.3b-finetuned-rte", "results": []}]}
elliottfitzgerald/opt-1.3b-finetuned-rte
null
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:facebook/opt-1.3b", "license:other", "region:us" ]
null
2024-05-01T06:44:53+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #generated_from_trainer #base_model-facebook/opt-1.3b #license-other #region-us
opt-1.3b-finetuned-rte ====================== This model is a fine-tuned version of facebook/opt-1.3b on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.7779 * Accuracy: 0.4477 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 10 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-facebook/opt-1.3b #license-other #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ 38, 101, 5, 52 ]
[ "TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-facebook/opt-1.3b #license-other #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10### Training results### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-to-audio
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
procit001/dutch_female_2024_spk_5
null
[ "transformers", "safetensors", "vits", "text-to-audio", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-01T06:48:07+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #vits #text-to-audio #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #vits #text-to-audio #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 35, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #vits #text-to-audio #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-base-finetuned-feedback This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2738 - Rouge1: 55.9578 - Rouge2: 31.3401 - Rougel: 52.9556 - Rougelsum: 53.1034 - Gen Len: 10.2562 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 61 | 1.4341 | 53.0065 | 28.3454 | 50.4489 | 50.5377 | 9.3388 | | No log | 2.0 | 122 | 1.3604 | 53.2275 | 28.6424 | 50.6585 | 50.7617 | 9.8182 | | No log | 3.0 | 183 | 1.3207 | 52.7581 | 28.6272 | 49.6977 | 49.7928 | 10.0661 | | No log | 4.0 | 244 | 1.3098 | 53.5227 | 28.6578 | 50.2637 | 50.2897 | 9.9752 | | No log | 5.0 | 305 | 1.2898 | 54.4587 | 29.8825 | 51.3522 | 51.4744 | 9.876 | | No log | 6.0 | 366 | 1.2781 | 54.046 | 29.7089 | 51.3241 | 51.4283 | 10.1818 | | No log | 7.0 | 427 | 1.2771 | 55.1788 | 30.8745 | 52.3598 | 52.4871 | 10.2149 | | No log | 8.0 | 488 | 1.2762 | 55.6258 | 30.9444 | 52.5715 | 52.6889 | 10.2397 | | 1.2952 | 9.0 | 549 | 1.2746 | 55.759 | 30.918 | 52.8427 | 52.8878 | 10.1818 | | 1.2952 | 10.0 | 610 | 1.2738 | 55.9578 | 31.3401 | 52.9556 | 53.1034 | 10.2562 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
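The card above provides ROUGE scores but no inference example. A minimal sketch follows, assuming the checkpoint on the Hub matches the one described here; the input text and generation length are illustrative.

```python
# Minimal inference sketch for the fine-tuned T5 checkpoint described above.
from transformers import pipeline

generator = pipeline(
    "text2text-generation", model="phdreg/t5-base-finetuned-feedback"
)
feedback = "The app keeps crashing whenever I try to upload a photo."
print(generator(feedback, max_new_tokens=32)[0]["generated_text"])
```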
{"tags": ["generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "t5-base-finetuned-feedback", "results": []}]}
phdreg/t5-base-finetuned-feedback
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T06:48:47+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
t5-base-finetuned-feedback ========================== This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.2738 * Rouge1: 55.9578 * Rouge2: 31.3401 * Rougel: 52.9556 * Rougelsum: 53.1034 * Gen Len: 10.2562 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 10 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ 45, 112, 5, 44 ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
abc88767/model27
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-01T06:49:54+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 41, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
null
null
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) starchat2-15b-v0.1 - GGUF - Model creator: https://huggingface.co/HuggingFaceH4/ - Original model: https://huggingface.co/HuggingFaceH4/starchat2-15b-v0.1/ | Name | Quant method | Size | | ---- | ---- | ---- | | [starchat2-15b-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.Q2_K.gguf) | Q2_K | 5.77GB | | [starchat2-15b-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.IQ3_XS.gguf) | IQ3_XS | 6.25GB | | [starchat2-15b-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.IQ3_S.gguf) | IQ3_S | 6.52GB | | [starchat2-15b-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.Q3_K_S.gguf) | Q3_K_S | 6.51GB | | [starchat2-15b-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.IQ3_M.gguf) | IQ3_M | 6.8GB | | [starchat2-15b-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.Q3_K.gguf) | Q3_K | 7.49GB | | [starchat2-15b-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.Q3_K_M.gguf) | Q3_K_M | 7.49GB | | [starchat2-15b-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.Q3_K_L.gguf) | Q3_K_L | 8.35GB | | [starchat2-15b-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.IQ4_XS.gguf) | IQ4_XS | 8.12GB | | [starchat2-15b-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.Q4_0.gguf) | Q4_0 | 8.44GB | | [starchat2-15b-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.IQ4_NL.gguf) | IQ4_NL | 8.55GB | | [starchat2-15b-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.Q4_K_S.gguf) | Q4_K_S | 8.53GB | | [starchat2-15b-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.Q4_K.gguf) | Q4_K | 9.18GB | | [starchat2-15b-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.Q4_K_M.gguf) | Q4_K_M | 9.18GB | | [starchat2-15b-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.Q4_1.gguf) | Q4_1 | 9.35GB | | [starchat2-15b-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.Q5_0.gguf) | Q5_0 | 10.27GB | | [starchat2-15b-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.Q5_K_S.gguf) | Q5_K_S | 10.27GB | | [starchat2-15b-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.Q5_K.gguf) | Q5_K | 10.65GB | | 
[starchat2-15b-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.Q5_K_M.gguf) | Q5_K_M | 10.65GB | | [starchat2-15b-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.Q5_1.gguf) | Q5_1 | 11.18GB | | [starchat2-15b-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf/blob/main/starchat2-15b-v0.1.Q6_K.gguf) | Q6_K | 12.2GB | Original model description: --- base_model: HuggingFaceH4/starchat2-15b-sft-v0.1 tags: - alignment-handbook - generated_from_trainer datasets: - HuggingFaceH4/ultrafeedback_binarized - HuggingFaceH4/orca_dpo_pairs model-index: - name: starchat2-15b-v0.1 results: [] --- <img src="https://huggingface.co/HuggingFaceH4/starchat2-15b-v0.1/resolve/main/model_logo.png" alt="StarChat2 15B Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # Model Card for StarChat2 15B StarChat is a series of language models that are trained to act as helpful coding assistants. StarChat2 is the latest model in the series, and is a fine-tuned version of [StarCoder2](https://huggingface.co/bigcode/starcoder2-15b) that was trained with SFT and DPO on a mix of synthetic datasets. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Model type:** A 16B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets. - **Language(s) (NLP):** Primarily English and 600+ programming languages. - **License:** BigCode Open RAIL-M v1 - **Finetuned from model:** [bigcode/starcoder2-15b](https://huggingface.co/bigcode/starcoder2-15b) ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/huggingface/alignment-handbook - **Demo:** https://huggingface.co/spaces/HuggingFaceH4/starchat2-playground ## Performance StarChat2 15B was trained to balance chat and programming capabilities. It achieves strong performance on chat benchmarks like [MT Bench](https://huggingface.co/spaces/lmsys/mt-bench) and [IFEval](https://arxiv.org/abs/2311.07911), as well as the canonical HumanEval benchmark for Python code completion. The scores reported below were obtained using the [LightEval](https://github.com/huggingface/lighteval) evaluation suite (commit `988959cb905df4baa050f82b4d499d46e8b537f2`) and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard. | Model | MT Bench | IFEval | HumanEval | |-------------------------------------------------------------------------------------------------|---------:|-------:|----------:| | [starchat2-15b-v0.1](https://huggingface.co/HuggingFaceH4/starchat2-15b-v0.1) | 7.66 | 35.12 | 71.34 | | [deepseek-coder-6.7b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct) | 4.17 | 14.23 | 80.48 | | [CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) | 6.80 | 43.44 | 50.60 | ## Intended uses & limitations The model was fine-tuned on a blend of chat, code, math, and reasoning datasets. As a result, the model can be used for chat and you can check out our [demo](https://huggingface.co/spaces/HuggingFaceH4/starchat2-playground) to test its coding capabilities. 
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers: ```python # pip install 'transformers @ git+https://github.com/huggingface/transformers.git@831bc25d8fdb85768402f772cf65cc3d7872b211' # pip install accelerate import torch from transformers import pipeline pipe = pipeline( "text-generation", model="HuggingFaceH4/starchat2-15b-v0.1", device_map="auto", torch_dtype=torch.bfloat16, ) messages = [ { "role": "system", "content": "You are StarChat2, an expert programming assistant", }, {"role": "user", "content": "Write a simple website in HTML. When a user clicks the button, it shows a random Chuck Norris joke."}, ] outputs = pipe( messages, max_new_tokens=512, do_sample=True, temperature=0.7, top_k=50, top_p=0.95, stop_sequence="<|im_end|>", ) print(outputs[0]["generated_text"][-1]["content"]) ``` ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> StarChat2 15B has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). Models trained primarily on code data will also have a more skewed demographic bias commensurate with the demographics of the GitHub community, for more on this see the [StarCoder2 dataset](https://huggingface.co/datasets/bigcode/the-stack-v2) Since the base model was pretrained on a large corpus of code, it may produce code snippets that are syntactically valid but semantically incorrect. For example, it may produce code that does not compile or that produces incorrect results. It may also produce code that is vulnerable to security exploits. We have observed the model also has a tendency to produce false URLs which should be carefully inspected before clicking. StarChat2 15B was fine-tuned from the base model [StarCoder2](https://huggingface.co/bigcode/starcoder2-15b), please refer to its model card's [Limitations Section](https://huggingface.co/bigcode/starcoder2-15b#limitations) for relevant information. In particular, the model was evaluated on some categories of gender biases, propensity for toxicity, and risk of suggesting code completions with known security flaws; these evaluations are reported in its [technical report](https://huggingface.co/papers/2402.19173). ## Training details This model is a fine-tuned version of [starchat2-15b-sft-v0.1](https://huggingface.co/HuggingFaceH4/starchat2-15b-sft-v0.1) on the HuggingFaceH4/ultrafeedback_binarized and the HuggingFaceH4/orca_dpo_pairs datasets. Check out the recipe in the [Alignment Handbook](https://github.com/huggingface/alignment-handbook) for more details. 
It achieves the following results on the evaluation set: - Loss: 0.4347 - Rewards/chosen: -0.9461 - Rewards/rejected: -2.7745 - Rewards/accuracies: 0.7658 - Rewards/margins: 1.8284 - Logps/rejected: -322.1934 - Logps/chosen: -316.1898 - Logits/rejected: -2.3817 - Logits/chosen: -2.3005 ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 2 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.717 | 0.17 | 100 | 0.6006 | -0.0924 | -0.2899 | 0.6329 | 0.1975 | -272.5022 | -299.1165 | -2.5313 | -2.4191 | | 0.6273 | 0.35 | 200 | 0.5160 | -0.3994 | -0.9461 | 0.6930 | 0.5467 | -285.6261 | -305.2568 | -2.5281 | -2.4278 | | 0.5538 | 0.52 | 300 | 0.4781 | -0.6589 | -1.5892 | 0.7247 | 0.9302 | -298.4870 | -310.4470 | -2.4996 | -2.4110 | | 0.5056 | 0.7 | 400 | 0.4594 | -0.8283 | -2.1332 | 0.7437 | 1.3050 | -309.3687 | -313.8344 | -2.4472 | -2.3644 | | 0.4983 | 0.87 | 500 | 0.4512 | -0.7758 | -2.2806 | 0.7468 | 1.5049 | -312.3167 | -312.7843 | -2.4223 | -2.3404 | | 0.4662 | 1.04 | 600 | 0.4431 | -0.7839 | -2.4016 | 0.7658 | 1.6177 | -314.7355 | -312.9465 | -2.4049 | -2.3215 | | 0.4411 | 1.22 | 700 | 0.4415 | -1.0090 | -2.7582 | 0.7690 | 1.7492 | -321.8679 | -317.4481 | -2.3840 | -2.3016 | | 0.471 | 1.39 | 800 | 0.4368 | -0.9617 | -2.7445 | 0.7690 | 1.7828 | -321.5930 | -316.5019 | -2.3809 | -2.2991 | | 0.4485 | 1.57 | 900 | 0.4351 | -0.9490 | -2.7594 | 0.7722 | 1.8103 | -321.8916 | -316.2497 | -2.3815 | -2.3004 | | 0.4411 | 1.74 | 1000 | 0.4348 | -0.9293 | -2.7469 | 0.7658 | 1.8176 | -321.6409 | -315.8547 | -2.3823 | -2.3011 | | 0.4499 | 1.92 | 1100 | 0.4348 | -0.9482 | -2.7767 | 0.7658 | 1.8285 | -322.2369 | -316.2320 | -2.3828 | -2.3012 | ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
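The table at the top of this card lists the available GGUF quantizations. A hedged sketch of running one of them locally with `llama-cpp-python` follows; the choice of the Q4_K_M file, the context size, and the reliance on the chat template embedded in the GGUF metadata are assumptions, and the upstream card itself only documents the 🤗 Transformers `pipeline()` usage shown earlier.

```python
# Hedged sketch: running a GGUF quantization from the table above with
# llama-cpp-python. Context size and sampling settings are assumptions;
# the chat template is expected to be read from the GGUF metadata.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf",
    filename="starchat2-15b-v0.1.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Write a Python one-liner that reverses a string."}
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

Smaller quantizations from the same table (e.g. Q3_K_M) trade answer quality for lower memory use and can be swapped in by changing the `filename` argument.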
{}
RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf
null
[ "gguf", "arxiv:2311.07911", "arxiv:2402.19173", "region:us" ]
null
2024-05-01T06:50:23+00:00
[ "2311.07911", "2402.19173" ]
[]
TAGS #gguf #arxiv-2311.07911 #arxiv-2402.19173 #region-us
Quantization made by Richard Erkhov. Github Discord Request more models starchat2-15b-v0.1 - GGUF * Model creator: URL * Original model: URL Name: starchat2-15b-v0.1.Q2\_K.gguf, Quant method: Q2\_K, Size: 5.77GB Name: starchat2-15b-v0.1.IQ3\_XS.gguf, Quant method: IQ3\_XS, Size: 6.25GB Name: starchat2-15b-v0.1.IQ3\_S.gguf, Quant method: IQ3\_S, Size: 6.52GB Name: starchat2-15b-v0.1.Q3\_K\_S.gguf, Quant method: Q3\_K\_S, Size: 6.51GB Name: starchat2-15b-v0.1.IQ3\_M.gguf, Quant method: IQ3\_M, Size: 6.8GB Name: starchat2-15b-v0.1.Q3\_K.gguf, Quant method: Q3\_K, Size: 7.49GB Name: starchat2-15b-v0.1.Q3\_K\_M.gguf, Quant method: Q3\_K\_M, Size: 7.49GB Name: starchat2-15b-v0.1.Q3\_K\_L.gguf, Quant method: Q3\_K\_L, Size: 8.35GB Name: starchat2-15b-v0.1.IQ4\_XS.gguf, Quant method: IQ4\_XS, Size: 8.12GB Name: starchat2-15b-v0.1.Q4\_0.gguf, Quant method: Q4\_0, Size: 8.44GB Name: starchat2-15b-v0.1.IQ4\_NL.gguf, Quant method: IQ4\_NL, Size: 8.55GB Name: starchat2-15b-v0.1.Q4\_K\_S.gguf, Quant method: Q4\_K\_S, Size: 8.53GB Name: starchat2-15b-v0.1.Q4\_K.gguf, Quant method: Q4\_K, Size: 9.18GB Name: starchat2-15b-v0.1.Q4\_K\_M.gguf, Quant method: Q4\_K\_M, Size: 9.18GB Name: starchat2-15b-v0.1.Q4\_1.gguf, Quant method: Q4\_1, Size: 9.35GB Name: starchat2-15b-v0.1.Q5\_0.gguf, Quant method: Q5\_0, Size: 10.27GB Name: starchat2-15b-v0.1.Q5\_K\_S.gguf, Quant method: Q5\_K\_S, Size: 10.27GB Name: starchat2-15b-v0.1.Q5\_K.gguf, Quant method: Q5\_K, Size: 10.65GB Name: starchat2-15b-v0.1.Q5\_K\_M.gguf, Quant method: Q5\_K\_M, Size: 10.65GB Name: starchat2-15b-v0.1.Q5\_1.gguf, Quant method: Q5\_1, Size: 11.18GB Name: starchat2-15b-v0.1.Q6\_K.gguf, Quant method: Q6\_K, Size: 12.2GB Original model description: --------------------------- base\_model: HuggingFaceH4/starchat2-15b-sft-v0.1 tags: * alignment-handbook * generated\_from\_trainer datasets: * HuggingFaceH4/ultrafeedback\_binarized * HuggingFaceH4/orca\_dpo\_pairs model-index: * name: starchat2-15b-v0.1 results: [] --- <img src="URL alt="StarChat2 15B Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> Model Card for StarChat2 15B ============================ StarChat is a series of language models that are trained to act as helpful coding assistants. StarChat2 is the latest model in the series, and is a fine-tuned version of StarCoder2 that was trained with SFT and DPO on a mix of synthetic datasets. Model Details ------------- ### Model Description * Model type: A 16B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets. * Language(s) (NLP): Primarily English and 600+ programming languages. * License: BigCode Open RAIL-M v1 * Finetuned from model: bigcode/starcoder2-15b ### Model Sources * Repository: URL * Demo: URL Performance ----------- StarChat2 15B was trained to balance chat and programming capabilities. It achieves strong performance on chat benchmarks like MT Bench and IFEval, as well as the canonical HumanEval benchmark for Python code completion. The scores reported below were obtained using the LightEval evaluation suite (commit '988959cb905df4baa050f82b4d499d46e8b537f2') and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard. Intended uses & limitations --------------------------- The model was fine-tuned on a blend of chat, code, math, and reasoning datasets. 
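As a minimal sketch of running one of the quantized files listed above with `llama-cpp-python` (the Q4_K_M pick, the context size, and the GPU offload setting are illustrative choices, not part of the original card):

```python
# pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the GGUF files from the listing above (Q4_K_M is a common
# size/quality trade-off).
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/HuggingFaceH4_-_starchat2-15b-v0.1-gguf",
    filename="starchat2-15b-v0.1.Q4_K_M.gguf",
)

# Load the model; n_gpu_layers=-1 offloads all layers to the GPU when one is available.
llm = Llama(model_path=gguf_path, n_ctx=4096, n_gpu_layers=-1)

# create_chat_completion applies the chat template stored in the GGUF metadata,
# falling back to a default template if none is present.
messages = [
    {"role": "system", "content": "You are StarChat2, an expert programming assistant"},
    {"role": "user", "content": "Write a Python function that reverses a string."},
]
out = llm.create_chat_completion(messages=messages, max_tokens=256, temperature=0.7)
print(out["choices"][0]["message"]["content"])
```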
As a result, the model can be used for chat and you can check out our demo to test its coding capabilities. Here's how you can run the model using the 'pipeline()' function from Transformers: Bias, Risks, and Limitations ---------------------------- StarChat2 15B has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). Models trained primarily on code data will also have a more skewed demographic bias commensurate with the demographics of the GitHub community, for more on this see the StarCoder2 dataset Since the base model was pretrained on a large corpus of code, it may produce code snippets that are syntactically valid but semantically incorrect. For example, it may produce code that does not compile or that produces incorrect results. It may also produce code that is vulnerable to security exploits. We have observed the model also has a tendency to produce false URLs which should be carefully inspected before clicking. StarChat2 15B was fine-tuned from the base model StarCoder2, please refer to its model card's Limitations Section for relevant information. In particular, the model was evaluated on some categories of gender biases, propensity for toxicity, and risk of suggesting code completions with known security flaws; these evaluations are reported in its technical report. Training details ---------------- This model is a fine-tuned version of starchat2-15b-sft-v0.1 on the HuggingFaceH4/ultrafeedback\_binarized and the HuggingFaceH4/orca\_dpo\_pairs datasets. Check out the recipe in the Alignment Handbook for more details. It achieves the following results on the evaluation set: * Loss: 0.4347 * Rewards/chosen: -0.9461 * Rewards/rejected: -2.7745 * Rewards/accuracies: 0.7658 * Rewards/margins: 1.8284 * Logps/rejected: -322.1934 * Logps/chosen: -316.1898 * Logits/rejected: -2.3817 * Logits/chosen: -2.3005 Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-07 * train\_batch\_size: 2 * eval\_batch\_size: 4 * seed: 42 * distributed\_type: multi-GPU * num\_devices: 8 * gradient\_accumulation\_steps: 8 * total\_train\_batch\_size: 128 * total\_eval\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.39.0.dev0 * Pytorch 2.1.2+cu121 * Datasets 2.16.1 * Tokenizers 0.15.1
[ "### Model Description\n\n\n* Model type: A 16B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.\n* Language(s) (NLP): Primarily English and 600+ programming languages.\n* License: BigCode Open RAIL-M v1\n* Finetuned from model: bigcode/starcoder2-15b", "### Model Sources\n\n\n* Repository: URL\n* Demo: URL\n\n\nPerformance\n-----------\n\n\nStarChat2 15B was trained to balance chat and programming capabilities. It achieves strong performance on chat benchmarks like MT Bench and IFEval, as well as the canonical HumanEval benchmark for Python code completion. The scores reported below were obtained using the LightEval evaluation suite (commit '988959cb905df4baa050f82b4d499d46e8b537f2') and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard.\n\n\n\nIntended uses & limitations\n---------------------------\n\n\nThe model was fine-tuned on a blend of chat, code, math, and reasoning datasets. As a result, the model can be used for chat and you can check out our demo to test its coding capabilities.\n\n\nHere's how you can run the model using the 'pipeline()' function from Transformers:\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nStarChat2 15B has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).\nModels trained primarily on code data will also have a more skewed demographic bias commensurate with the demographics of the GitHub community, for more on this see the StarCoder2 dataset\n\n\nSince the base model was pretrained on a large corpus of code, it may produce code snippets that are syntactically valid but semantically incorrect.\nFor example, it may produce code that does not compile or that produces incorrect results. \n\nIt may also produce code that is vulnerable to security exploits. \n\nWe have observed the model also has a tendency to produce false URLs which should be carefully inspected before clicking.\n\n\nStarChat2 15B was fine-tuned from the base model StarCoder2, please refer to its model card's Limitations Section for relevant information.\nIn particular, the model was evaluated on some categories of gender biases, propensity for toxicity, and risk of suggesting code completions with known security flaws; these evaluations are reported in its technical report.\n\n\nTraining details\n----------------\n\n\nThis model is a fine-tuned version of starchat2-15b-sft-v0.1 on the HuggingFaceH4/ultrafeedback\\_binarized and the HuggingFaceH4/orca\\_dpo\\_pairs datasets. 
Check out the recipe in the Alignment Handbook for more details.\n\n\nIt achieves the following results on the evaluation set:\n\n\n* Loss: 0.4347\n* Rewards/chosen: -0.9461\n* Rewards/rejected: -2.7745\n* Rewards/accuracies: 0.7658\n* Rewards/margins: 1.8284\n* Logps/rejected: -322.1934\n* Logps/chosen: -316.1898\n* Logits/rejected: -2.3817\n* Logits/chosen: -2.3005\n\n\nTraining procedure\n------------------", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-07\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.1" ]
[ "TAGS\n#gguf #arxiv-2311.07911 #arxiv-2402.19173 #region-us \n", "### Model Description\n\n\n* Model type: A 16B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.\n* Language(s) (NLP): Primarily English and 600+ programming languages.\n* License: BigCode Open RAIL-M v1\n* Finetuned from model: bigcode/starcoder2-15b", "### Model Sources\n\n\n* Repository: URL\n* Demo: URL\n\n\nPerformance\n-----------\n\n\nStarChat2 15B was trained to balance chat and programming capabilities. It achieves strong performance on chat benchmarks like MT Bench and IFEval, as well as the canonical HumanEval benchmark for Python code completion. The scores reported below were obtained using the LightEval evaluation suite (commit '988959cb905df4baa050f82b4d499d46e8b537f2') and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard.\n\n\n\nIntended uses & limitations\n---------------------------\n\n\nThe model was fine-tuned on a blend of chat, code, math, and reasoning datasets. As a result, the model can be used for chat and you can check out our demo to test its coding capabilities.\n\n\nHere's how you can run the model using the 'pipeline()' function from Transformers:\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nStarChat2 15B has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).\nModels trained primarily on code data will also have a more skewed demographic bias commensurate with the demographics of the GitHub community, for more on this see the StarCoder2 dataset\n\n\nSince the base model was pretrained on a large corpus of code, it may produce code snippets that are syntactically valid but semantically incorrect.\nFor example, it may produce code that does not compile or that produces incorrect results. \n\nIt may also produce code that is vulnerable to security exploits. \n\nWe have observed the model also has a tendency to produce false URLs which should be carefully inspected before clicking.\n\n\nStarChat2 15B was fine-tuned from the base model StarCoder2, please refer to its model card's Limitations Section for relevant information.\nIn particular, the model was evaluated on some categories of gender biases, propensity for toxicity, and risk of suggesting code completions with known security flaws; these evaluations are reported in its technical report.\n\n\nTraining details\n----------------\n\n\nThis model is a fine-tuned version of starchat2-15b-sft-v0.1 on the HuggingFaceH4/ultrafeedback\\_binarized and the HuggingFaceH4/orca\\_dpo\\_pairs datasets. 
Check out the recipe in the Alignment Handbook for more details.\n\n\nIt achieves the following results on the evaluation set:\n\n\n* Loss: 0.4347\n* Rewards/chosen: -0.9461\n* Rewards/rejected: -2.7745\n* Rewards/accuracies: 0.7658\n* Rewards/margins: 1.8284\n* Logps/rejected: -322.1934\n* Logps/chosen: -316.1898\n* Logits/rejected: -2.3817\n* Logits/chosen: -2.3005\n\n\nTraining procedure\n------------------", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-07\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.1" ]
[ 30, 79, 765, 176, 5, 47 ]
[ "TAGS\n#gguf #arxiv-2311.07911 #arxiv-2402.19173 #region-us \n### Model Description\n\n\n* Model type: A 16B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.\n* Language(s) (NLP): Primarily English and 600+ programming languages.\n* License: BigCode Open RAIL-M v1\n* Finetuned from model: bigcode/starcoder2-15b### Model Sources\n\n\n* Repository: URL\n* Demo: URL\n\n\nPerformance\n-----------\n\n\nStarChat2 15B was trained to balance chat and programming capabilities. It achieves strong performance on chat benchmarks like MT Bench and IFEval, as well as the canonical HumanEval benchmark for Python code completion. The scores reported below were obtained using the LightEval evaluation suite (commit '988959cb905df4baa050f82b4d499d46e8b537f2') and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard.\n\n\n\nIntended uses & limitations\n---------------------------\n\n\nThe model was fine-tuned on a blend of chat, code, math, and reasoning datasets. As a result, the model can be used for chat and you can check out our demo to test its coding capabilities.\n\n\nHere's how you can run the model using the 'pipeline()' function from Transformers:\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nStarChat2 15B has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).\nModels trained primarily on code data will also have a more skewed demographic bias commensurate with the demographics of the GitHub community, for more on this see the StarCoder2 dataset\n\n\nSince the base model was pretrained on a large corpus of code, it may produce code snippets that are syntactically valid but semantically incorrect.\nFor example, it may produce code that does not compile or that produces incorrect results. \n\nIt may also produce code that is vulnerable to security exploits. \n\nWe have observed the model also has a tendency to produce false URLs which should be carefully inspected before clicking.\n\n\nStarChat2 15B was fine-tuned from the base model StarCoder2, please refer to its model card's Limitations Section for relevant information.\nIn particular, the model was evaluated on some categories of gender biases, propensity for toxicity, and risk of suggesting code completions with known security flaws; these evaluations are reported in its technical report.\n\n\nTraining details\n----------------\n\n\nThis model is a fine-tuned version of starchat2-15b-sft-v0.1 on the HuggingFaceH4/ultrafeedback\\_binarized and the HuggingFaceH4/orca\\_dpo\\_pairs datasets. 
Check out the recipe in the Alignment Handbook for more details.\n\n\nIt achieves the following results on the evaluation set:\n\n\n* Loss: 0.4347\n* Rewards/chosen: -0.9461\n* Rewards/rejected: -2.7745\n* Rewards/accuracies: 0.7658\n* Rewards/margins: 1.8284\n* Logps/rejected: -322.1934\n* Logps/chosen: -316.1898\n* Logits/rejected: -2.3817\n* Logits/chosen: -2.3005\n\n\nTraining procedure\n------------------### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-07\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 2### Training results### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.1" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-tiny-am This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2320 - Wer: 58.5201 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:-------:| | 0.2711 | 1.4706 | 1000 | 0.3047 | 69.7082 | | 0.1957 | 2.9412 | 2000 | 0.2488 | 63.4693 | | 0.1385 | 4.4118 | 3000 | 0.2366 | 60.0677 | | 0.1278 | 5.8824 | 4000 | 0.2320 | 58.5201 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
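A minimal inference sketch for the checkpoint described above, assuming a local audio file and a standard `transformers` install (the card itself does not provide a usage snippet):

```python
# pip install transformers torch
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint for automatic speech recognition.
asr = pipeline(
    "automatic-speech-recognition",
    model="Gizachew/whisper-tiny-am",
)

# "sample.wav" is a placeholder path; the pipeline decodes and resamples the
# file before transcription (ffmpeg is required for most audio formats).
result = asr("sample.wav")
print(result["text"])
```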
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["wer"], "base_model": "openai/whisper-tiny", "model-index": [{"name": "whisper-tiny-am", "results": []}]}
Gizachew/whisper-tiny-am
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T06:51:43+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #base_model-openai/whisper-tiny #license-apache-2.0 #endpoints_compatible #region-us
whisper-tiny-am =============== This model is a fine-tuned version of openai/whisper-tiny on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.2320 * Wer: 58.5201 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * training\_steps: 4000 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #base_model-openai/whisper-tiny #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ 52, 126, 5, 44 ]
[ "TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #base_model-openai/whisper-tiny #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-2 This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
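A minimal sketch of querying the classifier described above with the `transformers` pipeline; the label names returned depend on how the training run configured the classification head, which the card does not document:

```python
# pip install transformers torch
from transformers import pipeline

# Load the fine-tuned GPT-NeoX sequence-classification checkpoint.
clf = pipeline(
    "text-classification",
    model="AlignmentResearch/robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-2",
)

# The returned label ids/scores reflect the (undocumented) training setup.
print(clf("This is a short example sentence."))
```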
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-1b", "model-index": [{"name": "robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-2", "results": []}]}
AlignmentResearch/robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-2
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-1b", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T06:53:08+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-1b #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-2 This model is a fine-tuned version of EleutherAI/pythia-1b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-2\n\nThis model is a fine-tuned version of EleutherAI/pythia-1b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 2\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-1b #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-2\n\nThis model is a fine-tuned version of EleutherAI/pythia-1b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 2\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ 69, 56, 7, 9, 9, 4, 93, 5, 40 ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-1b #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-2\n\nThis model is a fine-tuned version of EleutherAI/pythia-1b on an unknown dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 2\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1### Training results### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
HenryCai1129/adapter-llama-adaptertoxic2nontoxic-100-50-0.004
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-01T06:54:12+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 26, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
text-generation
transformers
<img src="./ninjalogo.svg" width="100%" height="20%" alt=""> - [Ninja-v1-NSFW](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-NSFW) のGGUF版 # Our Models for GGUF - [Vecteus-GGUF](https://huggingface.co/Local-Novel-LLM-project/Vecteus-v1-gguf) - [Ninja-v1-GGUF](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-GGUF) - [Ninja-v1-NSFW-GGUF](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-NSFW-GGUF) - [Ninja-v1-128k-GGUF](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-128k-GGUF) - [Ninja-v1-NSFW-128k-GGUF](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-NSFW-128k-GGUF)
{"language": ["en", "ja"], "license": "apache-2.0", "library_name": "transformers", "tags": ["finetuned", "not-for-all-audiences"], "pipeline_tag": "text-generation"}
Local-Novel-LLM-project/Ninja-v1-NSFW-GGUF
null
[ "transformers", "gguf", "finetuned", "not-for-all-audiences", "text-generation", "en", "ja", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T06:54:42+00:00
[]
[ "en", "ja" ]
TAGS #transformers #gguf #finetuned #not-for-all-audiences #text-generation #en #ja #license-apache-2.0 #endpoints_compatible #region-us
<img src="./URL" width="100%" height="20%" alt=""> - Ninja-v1-NSFW のGGUF版 # Our Models for GGUF - Vecteus-GGUF - Ninja-v1-GGUF - Ninja-v1-NSFW-GGUF - Ninja-v1-128k-GGUF - Ninja-v1-NSFW-128k-GGUF
[ "# Our Models for GGUF\n\n- Vecteus-GGUF\n \n- Ninja-v1-GGUF \n\n- Ninja-v1-NSFW-GGUF\n\n- Ninja-v1-128k-GGUF\n \n- Ninja-v1-NSFW-128k-GGUF" ]
[ "TAGS\n#transformers #gguf #finetuned #not-for-all-audiences #text-generation #en #ja #license-apache-2.0 #endpoints_compatible #region-us \n", "# Our Models for GGUF\n\n- Vecteus-GGUF\n \n- Ninja-v1-GGUF \n\n- Ninja-v1-NSFW-GGUF\n\n- Ninja-v1-128k-GGUF\n \n- Ninja-v1-NSFW-128k-GGUF" ]
[ 44, 65 ]
[ "TAGS\n#transformers #gguf #finetuned #not-for-all-audiences #text-generation #en #ja #license-apache-2.0 #endpoints_compatible #region-us \n# Our Models for GGUF\n\n- Vecteus-GGUF\n \n- Ninja-v1-GGUF \n\n- Ninja-v1-NSFW-GGUF\n\n- Ninja-v1-128k-GGUF\n \n- Ninja-v1-NSFW-128k-GGUF" ]
null
transformers
# Uploaded model - **Developed by:** catastropiyush - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
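The card states the checkpoint was fine-tuned from `unsloth/llama-3-8b-bnb-4bit` with Unsloth and TRL. A schematic of how such a run is usually set up (sequence length, LoRA rank, and target modules below are illustrative defaults, not the values used for this model):

```python
# pip install unsloth
from unsloth import FastLanguageModel

# Load the 4-bit base model named in the card.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; training then proceeds with TRL's SFTTrainer.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```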
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
catastropiyush/llama-3_8b_Q5_K_M
null
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T06:55:55+00:00
[]
[ "en" ]
TAGS #transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: catastropiyush - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: catastropiyush\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: catastropiyush\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ 61, 82 ]
[ "TAGS\n#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: catastropiyush\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
feature-extraction
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
MaiiaCompsolutions/multiclass_id2label_04_30_2024
null
[ "transformers", "safetensors", "bert", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-01T06:57:16+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #bert #feature-extraction #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #bert #feature-extraction #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 32, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #bert #feature-extraction #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - embracellm/sushi17_LoRA <Gallery /> ## Model description These are embracellm/sushi17_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of Salmon Poke Bowl to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](embracellm/sushi17_LoRA/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
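A minimal sketch of the "How to use" step that the card above leaves as a TODO, assuming a CUDA device and the standard diffusers SDXL LoRA loading path; the repo id, base model, VAE and trigger phrase come from the card, while the sampler settings are illustrative:

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# The card notes training used the fp16-fixed SDXL VAE.
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
)

# Base model named in the card, with the LoRA weights from this repository.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("embracellm/sushi17_LoRA")

# The trigger phrase from the card activates the learned concept.
image = pipe("a photo of Salmon Poke Bowl", num_inference_steps=30).images[0]
image.save("salmon_poke_bowl.png")
```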
{"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "text-to-image", "diffusers-training", "diffusers", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "a photo of Salmon Poke Bowl", "widget": []}
embracellm/sushi17_LoRA
null
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
null
2024-05-01T06:57:20+00:00
[]
[]
TAGS #diffusers #tensorboard #text-to-image #diffusers-training #dora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us
# SDXL LoRA DreamBooth - embracellm/sushi17_LoRA <Gallery /> ## Model description These are embracellm/sushi17_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using DreamBooth. LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of Salmon Poke Bowl to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. Download them in the Files & versions tab. ## Intended uses & limitations #### How to use #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
[ "# SDXL LoRA DreamBooth - embracellm/sushi17_LoRA\n\n<Gallery />", "## Model description\n\nThese are embracellm/sushi17_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.", "## Trigger words\n\nYou should use a photo of Salmon Poke Bowl to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
[ "TAGS\n#diffusers #tensorboard #text-to-image #diffusers-training #dora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n", "# SDXL LoRA DreamBooth - embracellm/sushi17_LoRA\n\n<Gallery />", "## Model description\n\nThese are embracellm/sushi17_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.", "## Trigger words\n\nYou should use a photo of Salmon Poke Bowl to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
[ 72, 24, 84, 19, 25, 6, 7, 23, 17 ]
[ "TAGS\n#diffusers #tensorboard #text-to-image #diffusers-training #dora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n# SDXL LoRA DreamBooth - embracellm/sushi17_LoRA\n\n<Gallery />## Model description\n\nThese are embracellm/sushi17_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.## Trigger words\n\nYou should use a photo of Salmon Poke Bowl to trigger the image generation.## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.## Intended uses & limitations#### How to use#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]## Training details\n\n[TODO: describe the data used to train the model]" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-0 This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
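The card gives no usage snippet; a minimal sketch, assuming the checkpoint is a standard GPT-NeoX sequence-classification head as the tags suggest (the label meaning and dataset are not documented, and the input text is purely illustrative):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "AlignmentResearch/robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Classify one example and print the predicted class index.
inputs = tokenizer("Example text to classify.", return_tensors="pt")
predicted_class = model(**inputs).logits.argmax(dim=-1).item()
print(predicted_class)
```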
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-1b", "model-index": [{"name": "robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-0", "results": []}]}
AlignmentResearch/robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-0
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-1b", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T06:59:17+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-1b #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-0 This model is a fine-tuned version of EleutherAI/pythia-1b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-0\n\nThis model is a fine-tuned version of EleutherAI/pythia-1b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-1b #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-0\n\nThis model is a fine-tuned version of EleutherAI/pythia-1b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ 69, 56, 7, 9, 9, 4, 93, 5, 40 ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-1b #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-0\n\nThis model is a fine-tuned version of EleutherAI/pythia-1b on an unknown dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1### Training results### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper3 This model is a fine-tuned version of [openai/whisper-tiny.en](https://huggingface.co/openai/whisper-tiny.en) on the tiny dataset. It achieves the following results on the evaluation set: - Loss: 0.5509 - Wer: 26.9488 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 128 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 300 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:-------:| | 3.8281 | 0.2778 | 10 | 3.7929 | 80.4009 | | 3.209 | 0.5556 | 20 | 3.0014 | 68.3742 | | 2.1066 | 0.8333 | 30 | 1.7613 | 63.9198 | | 0.9963 | 1.1111 | 40 | 0.8741 | 52.4340 | | 0.6922 | 1.3889 | 50 | 0.7009 | 35.8256 | | 0.5816 | 1.6667 | 60 | 0.6238 | 31.1486 | | 0.5684 | 1.9444 | 70 | 0.5698 | 35.4757 | | 0.427 | 2.2222 | 80 | 0.5380 | 27.2669 | | 0.4395 | 2.5 | 90 | 0.5162 | 32.7394 | | 0.3861 | 2.7778 | 100 | 0.4953 | 24.5307 | | 0.3745 | 3.0556 | 110 | 0.4837 | 24.6262 | | 0.2487 | 3.3333 | 120 | 0.4733 | 23.5762 | | 0.2343 | 3.6111 | 130 | 0.4652 | 24.9443 | | 0.2429 | 3.8889 | 140 | 0.4581 | 24.0853 | | 0.1286 | 4.1667 | 150 | 0.4673 | 24.2762 | | 0.1304 | 4.4444 | 160 | 0.4698 | 31.7213 | | 0.1361 | 4.7222 | 170 | 0.4690 | 33.0894 | | 0.1447 | 5.0 | 180 | 0.4812 | 24.6580 | | 0.0617 | 5.2778 | 190 | 0.4871 | 29.9395 | | 0.0617 | 5.5556 | 200 | 0.4884 | 24.8489 | | 0.0577 | 5.8333 | 210 | 0.4998 | 26.8533 | | 0.038 | 6.1111 | 220 | 0.5007 | 24.8489 | | 0.0269 | 6.3889 | 230 | 0.5123 | 27.1397 | | 0.0321 | 6.6667 | 240 | 0.5005 | 23.3535 | | 0.0296 | 6.9444 | 250 | 0.5332 | 31.8804 | | 0.0207 | 7.2222 | 260 | 0.5237 | 30.0668 | | 0.0215 | 7.5 | 270 | 0.5223 | 25.5488 | | 0.0198 | 7.7778 | 280 | 0.5157 | 30.1941 | | 0.0273 | 8.0556 | 290 | 0.5290 | 27.5533 | | 0.0197 | 8.3333 | 300 | 0.5509 | 26.9488 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.1.dev0 - Tokenizers 0.19.1
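The card lists only training details; a minimal sketch, assuming the checkpoint can be used through the standard transformers ASR pipeline like the openai/whisper-tiny.en model it was fine-tuned from (the audio path is illustrative):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an automatic-speech-recognition pipeline.
asr = pipeline("automatic-speech-recognition", model="khaingsmon/whisper3")
result = asr("sample.wav")  # path to a local audio file
print(result["text"])
```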
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["wer"], "base_model": "openai/whisper-tiny.en", "model-index": [{"name": "whisper3", "results": []}]}
khaingsmon/whisper3
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-tiny.en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T06:59:46+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #base_model-openai/whisper-tiny.en #license-apache-2.0 #endpoints_compatible #region-us
whisper3 ======== This model is a fine-tuned version of openai/URL on the tiny dataset. It achieves the following results on the evaluation set: * Loss: 0.5509 * Wer: 26.9488 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 128 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * training\_steps: 300 ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.1.dev0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 300", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.1.dev0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #base_model-openai/whisper-tiny.en #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 300", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.1.dev0\n* Tokenizers 0.19.1" ]
[ 54, 115, 5, 47 ]
[ "TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #base_model-openai/whisper-tiny.en #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 300### Training results### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.1.dev0\n* Tokenizers 0.19.1" ]
text-generation
transformers
# maverick_v3_folder This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using D:\Learning Centre\GenAI\LLM Leaderboard\2024042801\mergekit-main\models\Mistral-7B-Instruct-v0.2 as a base. ### Models Merged The following models were included in the merge: * D:\Learning Centre\GenAI\LLM Leaderboard\2024042801\mergekit-main\models\Mistroll-7B-v2.2 * D:\Learning Centre\GenAI\LLM Leaderboard\2024042801\mergekit-main\models\multi_verse_model ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: D:\Learning Centre\GenAI\LLM Leaderboard\2024042801\mergekit-main\models\multi_verse_model parameters: weight: 0.4 - model: D:\Learning Centre\GenAI\LLM Leaderboard\2024042801\mergekit-main\models\Mistroll-7B-v2.2 parameters: weight: 0.6 base_model: D:\Learning Centre\GenAI\LLM Leaderboard\2024042801\mergekit-main\models\Mistral-7B-Instruct-v0.2 merge_method: task_arithmetic dtype: bfloat16 ```
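The YAML above is the mergekit configuration that produced the model; a minimal sketch of loading the resulting merged checkpoint with transformers, assuming the repository ships the Mistral-Instruct chat template inherited from its base model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "shyamieee/Maverick-v3.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# Format a prompt with the chat template and generate a short reply.
messages = [{"role": "user", "content": "Explain task-arithmetic merging in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```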
{"license": "apache-2.0", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": []}
shyamieee/Maverick-v3.0
null
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "arxiv:2212.04089", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T07:02:11+00:00
[ "2212.04089" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #arxiv-2212.04089 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# maverick_v3_folder This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the task arithmetic merge method using D:\Learning Centre\GenAI\LLM Leaderboard\2024042801\mergekit-main\models\Mistral-7B-Instruct-v0.2 as a base. ### Models Merged The following models were included in the merge: * D:\Learning Centre\GenAI\LLM Leaderboard\2024042801\mergekit-main\models\Mistroll-7B-v2.2 * D:\Learning Centre\GenAI\LLM Leaderboard\2024042801\mergekit-main\models\multi_verse_model ### Configuration The following YAML configuration was used to produce this model:
[ "# maverick_v3_folder\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the task arithmetic merge method using D:\\Learning Centre\\GenAI\\LLM Leaderboard\\2024042801\\mergekit-main\\models\\Mistral-7B-Instruct-v0.2 as a base.", "### Models Merged\n\nThe following models were included in the merge:\n* D:\\Learning Centre\\GenAI\\LLM Leaderboard\\2024042801\\mergekit-main\\models\\Mistroll-7B-v2.2\n* D:\\Learning Centre\\GenAI\\LLM Leaderboard\\2024042801\\mergekit-main\\models\\multi_verse_model", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #arxiv-2212.04089 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# maverick_v3_folder\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the task arithmetic merge method using D:\\Learning Centre\\GenAI\\LLM Leaderboard\\2024042801\\mergekit-main\\models\\Mistral-7B-Instruct-v0.2 as a base.", "### Models Merged\n\nThe following models were included in the merge:\n* D:\\Learning Centre\\GenAI\\LLM Leaderboard\\2024042801\\mergekit-main\\models\\Mistroll-7B-v2.2\n* D:\\Learning Centre\\GenAI\\LLM Leaderboard\\2024042801\\mergekit-main\\models\\multi_verse_model", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ 62, 22, 4, 60, 85, 16 ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #arxiv-2212.04089 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# maverick_v3_folder\n\nThis is a merge of pre-trained language models created using mergekit.## Merge Details### Merge Method\n\nThis model was merged using the task arithmetic merge method using D:\\Learning Centre\\GenAI\\LLM Leaderboard\\2024042801\\mergekit-main\\models\\Mistral-7B-Instruct-v0.2 as a base.### Models Merged\n\nThe following models were included in the merge:\n* D:\\Learning Centre\\GenAI\\LLM Leaderboard\\2024042801\\mergekit-main\\models\\Mistroll-7B-v2.2\n* D:\\Learning Centre\\GenAI\\LLM Leaderboard\\2024042801\\mergekit-main\\models\\multi_verse_model### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
null
null
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) starchat2-15b-sft-v0.1 - GGUF - Model creator: https://huggingface.co/HuggingFaceH4/ - Original model: https://huggingface.co/HuggingFaceH4/starchat2-15b-sft-v0.1/ | Name | Quant method | Size | | ---- | ---- | ---- | | [starchat2-15b-sft-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.Q2_K.gguf) | Q2_K | 5.77GB | | [starchat2-15b-sft-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.IQ3_XS.gguf) | IQ3_XS | 6.25GB | | [starchat2-15b-sft-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.IQ3_S.gguf) | IQ3_S | 6.52GB | | [starchat2-15b-sft-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.Q3_K_S.gguf) | Q3_K_S | 6.51GB | | [starchat2-15b-sft-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.IQ3_M.gguf) | IQ3_M | 6.8GB | | [starchat2-15b-sft-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.Q3_K.gguf) | Q3_K | 7.49GB | | [starchat2-15b-sft-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.Q3_K_M.gguf) | Q3_K_M | 7.49GB | | [starchat2-15b-sft-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.Q3_K_L.gguf) | Q3_K_L | 8.35GB | | [starchat2-15b-sft-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.IQ4_XS.gguf) | IQ4_XS | 8.12GB | | [starchat2-15b-sft-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.Q4_0.gguf) | Q4_0 | 8.44GB | | [starchat2-15b-sft-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.IQ4_NL.gguf) | IQ4_NL | 8.55GB | | [starchat2-15b-sft-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.Q4_K_S.gguf) | Q4_K_S | 8.53GB | | [starchat2-15b-sft-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.Q4_K.gguf) | Q4_K | 9.18GB | | [starchat2-15b-sft-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.Q4_K_M.gguf) | Q4_K_M | 9.18GB | | [starchat2-15b-sft-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.Q4_1.gguf) | Q4_1 | 9.35GB | | [starchat2-15b-sft-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.Q5_0.gguf) | Q5_0 | 10.27GB | | [starchat2-15b-sft-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.Q5_K_S.gguf) | Q5_K_S | 10.27GB | | 
[starchat2-15b-sft-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.Q5_K.gguf) | Q5_K | 10.65GB | | [starchat2-15b-sft-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.Q5_K_M.gguf) | Q5_K_M | 10.65GB | | [starchat2-15b-sft-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.Q5_1.gguf) | Q5_1 | 11.18GB | | [starchat2-15b-sft-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf/blob/main/starchat2-15b-sft-v0.1.Q6_K.gguf) | Q6_K | 12.2GB | Original model description: --- license: bigcode-openrail-m base_model: bigcode/starcoder2-15b tags: - alignment-handbook - generated_from_trainer datasets: - HuggingFaceH4/airoboros-3.2 - HuggingFaceH4/Code-Feedback - HuggingFaceH4/orca-math-word-problems-200k - HuggingFaceH4/SystemChat - HuggingFaceH4/capybara model-index: - name: starcoder2-15b-sft-v5.0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Model Card for starchat2-15b-sft-v0.1 This model is a fine-tuned version of [bigcode/starcoder2-15b](https://huggingface.co/bigcode/starcoder2-15b) on the HuggingFaceH4/airoboros-3.2, the HuggingFaceH4/Code-Feedback, the HuggingFaceH4/orca-math-word-problems-200k, the HuggingFaceH4/SystemChat and the HuggingFaceH4/capybara datasets. It achieves the following results on the evaluation set: - Loss: 0.6614 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 16 - total_train_batch_size: 128 - total_eval_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.6422 | 1.0 | 910 | 0.6910 | | 0.5701 | 2.0 | 1820 | 0.6639 | | 0.5227 | 3.0 | 2730 | 0.6614 | ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
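The table above lists GGUF quantisations intended for llama.cpp-compatible runtimes; a minimal sketch using the llama-cpp-python bindings, where the file choice and generation settings are illustrative assumptions rather than part of the original card:

```python
from llama_cpp import Llama

# Q4_K_M is a common size/quality trade-off among the files listed above.
llm = Llama(model_path="starchat2-15b-sft-v0.1.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write a Python function that reverses a string.", max_tokens=128)
print(out["choices"][0]["text"])
```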
{}
RichardErkhov/HuggingFaceH4_-_starchat2-15b-sft-v0.1-gguf
null
[ "gguf", "region:us" ]
null
2024-05-01T07:02:19+00:00
[]
[]
TAGS #gguf #region-us
Quantization made by Richard Erkhov. Github Discord Request more models starchat2-15b-sft-v0.1 - GGUF * Model creator: URL * Original model: URL Name: starchat2-15b-sft-v0.1.Q2\_K.gguf, Quant method: Q2\_K, Size: 5.77GB Name: starchat2-15b-sft-v0.1.IQ3\_XS.gguf, Quant method: IQ3\_XS, Size: 6.25GB Name: starchat2-15b-sft-v0.1.IQ3\_S.gguf, Quant method: IQ3\_S, Size: 6.52GB Name: starchat2-15b-sft-v0.1.Q3\_K\_S.gguf, Quant method: Q3\_K\_S, Size: 6.51GB Name: starchat2-15b-sft-v0.1.IQ3\_M.gguf, Quant method: IQ3\_M, Size: 6.8GB Name: starchat2-15b-sft-v0.1.Q3\_K.gguf, Quant method: Q3\_K, Size: 7.49GB Name: starchat2-15b-sft-v0.1.Q3\_K\_M.gguf, Quant method: Q3\_K\_M, Size: 7.49GB Name: starchat2-15b-sft-v0.1.Q3\_K\_L.gguf, Quant method: Q3\_K\_L, Size: 8.35GB Name: starchat2-15b-sft-v0.1.IQ4\_XS.gguf, Quant method: IQ4\_XS, Size: 8.12GB Name: starchat2-15b-sft-v0.1.Q4\_0.gguf, Quant method: Q4\_0, Size: 8.44GB Name: starchat2-15b-sft-v0.1.IQ4\_NL.gguf, Quant method: IQ4\_NL, Size: 8.55GB Name: starchat2-15b-sft-v0.1.Q4\_K\_S.gguf, Quant method: Q4\_K\_S, Size: 8.53GB Name: starchat2-15b-sft-v0.1.Q4\_K.gguf, Quant method: Q4\_K, Size: 9.18GB Name: starchat2-15b-sft-v0.1.Q4\_K\_M.gguf, Quant method: Q4\_K\_M, Size: 9.18GB Name: starchat2-15b-sft-v0.1.Q4\_1.gguf, Quant method: Q4\_1, Size: 9.35GB Name: starchat2-15b-sft-v0.1.Q5\_0.gguf, Quant method: Q5\_0, Size: 10.27GB Name: starchat2-15b-sft-v0.1.Q5\_K\_S.gguf, Quant method: Q5\_K\_S, Size: 10.27GB Name: starchat2-15b-sft-v0.1.Q5\_K.gguf, Quant method: Q5\_K, Size: 10.65GB Name: starchat2-15b-sft-v0.1.Q5\_K\_M.gguf, Quant method: Q5\_K\_M, Size: 10.65GB Name: starchat2-15b-sft-v0.1.Q5\_1.gguf, Quant method: Q5\_1, Size: 11.18GB Name: starchat2-15b-sft-v0.1.Q6\_K.gguf, Quant method: Q6\_K, Size: 12.2GB Original model description: --------------------------- license: bigcode-openrail-m base\_model: bigcode/starcoder2-15b tags: * alignment-handbook * generated\_from\_trainer datasets: * HuggingFaceH4/airoboros-3.2 * HuggingFaceH4/Code-Feedback * HuggingFaceH4/orca-math-word-problems-200k * HuggingFaceH4/SystemChat * HuggingFaceH4/capybara model-index: * name: starcoder2-15b-sft-v5.0 results: [] --- Model Card for starchat2-15b-sft-v0.1 ===================================== This model is a fine-tuned version of bigcode/starcoder2-15b on the HuggingFaceH4/airoboros-3.2, the HuggingFaceH4/Code-Feedback, the HuggingFaceH4/orca-math-word-problems-200k, the HuggingFaceH4/SystemChat and the HuggingFaceH4/capybara datasets. It achieves the following results on the evaluation set: * Loss: 0.6614 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * distributed\_type: multi-GPU * num\_devices: 16 * total\_train\_batch\_size: 128 * total\_eval\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.39.0.dev0 * Pytorch 2.1.2+cu121 * Datasets 2.16.1 * Tokenizers 0.15.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 16\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.1" ]
[ "TAGS\n#gguf #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 16\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.1" ]
[ 9, 166, 5, 47 ]
[ "TAGS\n#gguf #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 16\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3### Training results### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.1" ]
null
transformers
# Uploaded model - **Developed by:** theGhoul21 - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "gguf"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"}
theGhoul21/srl-sft-010524-Q8_0
null
[ "transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T07:02:30+00:00
[]
[ "en" ]
TAGS #transformers #gguf #mistral #text-generation-inference #unsloth #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: theGhoul21 - License: apache-2.0 - Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: theGhoul21\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #gguf #mistral #text-generation-inference #unsloth #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: theGhoul21\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ 67, 88 ]
[ "TAGS\n#transformers #gguf #mistral #text-generation-inference #unsloth #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: theGhoul21\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
transformers
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/abacusai/Llama-3-Giraffe-70B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended | | 
[GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | | | [PART 1](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-i1-GGUF/resolve/main/Llama-3-Giraffe-70B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
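The card defers to TheBloke's READMEs for concatenating multi-part files; a minimal Python sketch of that step for the split Q6_K quant, which is simply byte-wise concatenation of the parts in order (file names follow the table above):

```python
# Join the two parts of the Q6_K quant into a single usable .gguf file.
parts = [
    "Llama-3-Giraffe-70B.i1-Q6_K.gguf.part1of2",
    "Llama-3-Giraffe-70B.i1-Q6_K.gguf.part2of2",
]
with open("Llama-3-Giraffe-70B.i1-Q6_K.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as src:
            while chunk := src.read(1 << 20):  # copy in 1 MiB chunks
                merged.write(chunk)
```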
{"language": ["en"], "license": "llama3", "library_name": "transformers", "tags": ["meta", "llama-3"], "base_model": "abacusai/Llama-3-Giraffe-70B", "quantized_by": "mradermacher"}
mradermacher/Llama-3-Giraffe-70B-i1-GGUF
null
[ "transformers", "gguf", "meta", "llama-3", "en", "base_model:abacusai/Llama-3-Giraffe-70B", "license:llama3", "endpoints_compatible", "region:us" ]
null
2024-05-01T07:03:18+00:00
[]
[ "en" ]
TAGS #transformers #gguf #meta #llama-3 #en #base_model-abacusai/Llama-3-Giraffe-70B #license-llama3 #endpoints_compatible #region-us
About ----- weighted/imatrix quants of URL static quants are available at URL Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #meta #llama-3 #en #base_model-abacusai/Llama-3-Giraffe-70B #license-llama3 #endpoints_compatible #region-us \n" ]
[ 51 ]
[ "TAGS\n#transformers #gguf #meta #llama-3 #en #base_model-abacusai/Llama-3-Giraffe-70B #license-llama3 #endpoints_compatible #region-us \n" ]
null
null
# from_mistral_7b4-1714514853051 Description of the model.
{"tags": ["fine-tuned", "abc123"], "languages": ["English"]}
brandonironbirdlabs/archive_from_mistral_7b4-1714514853051-GGUF
null
[ "gguf", "fine-tuned", "abc123", "region:us" ]
null
2024-05-01T07:04:34+00:00
[]
[]
TAGS #gguf #fine-tuned #abc123 #region-us
# from_mistral_7b4-1714514853051 Description of the model.
[ "# from_mistral_7b4-1714514853051\nDescription of the model." ]
[ "TAGS\n#gguf #fine-tuned #abc123 #region-us \n", "# from_mistral_7b4-1714514853051\nDescription of the model." ]
[ 17, 21 ]
[ "TAGS\n#gguf #fine-tuned #abc123 #region-us \n# from_mistral_7b4-1714514853051\nDescription of the model." ]
text-generation
transformers
for research purposes only
{}
hjhj3168/Llama-3-8b-Orthogonalized-exl2
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "6-bit", "region:us" ]
null
2024-05-01T07:04:51+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #6-bit #region-us
for research purposes only
[]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #6-bit #region-us \n" ]
[ 45 ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #6-bit #region-us \n" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
vc64/mistralCausalQA
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-01T07:08:27+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 26, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-4 This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-1b", "model-index": [{"name": "robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-4", "results": []}]}
AlignmentResearch/robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-4
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-1b", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T07:08:59+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-1b #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-4 This model is a fine-tuned version of EleutherAI/pythia-1b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-4\n\nThis model is a fine-tuned version of EleutherAI/pythia-1b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-1b #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-4\n\nThis model is a fine-tuned version of EleutherAI/pythia-1b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ 69, 56, 7, 9, 9, 4, 93, 5, 40 ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-1b #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# robust_llm_pythia-1b_mz-135_WordLength_n-its-10-seed-4\n\nThis model is a fine-tuned version of EleutherAI/pythia-1b on an unknown dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1### Training results### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
null
null
# Multiverseex26Yamshadowexperiment28-7B Multiverseex26Yamshadowexperiment28-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration. ## 🧩 Configuration ```yaml models: - model: mistralai/Mistral-7B-v0.1 - model: allknowingroger/MultiverseEx26-7B-slerp - model: automerger/YamshadowExperiment28-7B merge_method: model_stock base_model: mistralai/Mistral-7B-v0.1 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "automerger/Multiverseex26Yamshadowexperiment28-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"]}
automerger/Multiverseex26Yamshadowexperiment28-7B
null
[ "merge", "mergekit", "lazymergekit", "automerger", "license:apache-2.0", "region:us" ]
null
2024-05-01T07:12:12+00:00
[]
[]
TAGS #merge #mergekit #lazymergekit #automerger #license-apache-2.0 #region-us
# Multiverseex26Yamshadowexperiment28-7B Multiverseex26Yamshadowexperiment28-7B is an automated merge created by Maxime Labonne using the following configuration. ## Configuration ## Usage
[ "# Multiverseex26Yamshadowexperiment28-7B\n\nMultiverseex26Yamshadowexperiment28-7B is an automated merge created by Maxime Labonne using the following configuration.", "## Configuration", "## Usage" ]
[ "TAGS\n#merge #mergekit #lazymergekit #automerger #license-apache-2.0 #region-us \n", "# Multiverseex26Yamshadowexperiment28-7B\n\nMultiverseex26Yamshadowexperiment28-7B is an automated merge created by Maxime Labonne using the following configuration.", "## Configuration", "## Usage" ]
[ 27, 48, 3, 3 ]
[ "TAGS\n#merge #mergekit #lazymergekit #automerger #license-apache-2.0 #region-us \n# Multiverseex26Yamshadowexperiment28-7B\n\nMultiverseex26Yamshadowexperiment28-7B is an automated merge created by Maxime Labonne using the following configuration.## Configuration## Usage" ]
null
transformers
# Uploaded model - **Developed by:** theGhoul21 - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
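The card does not show how to run the GGUF export. A hedged sketch with llama-cpp-python follows; the exact .gguf filename in the repo and the assumption that the fine-tune keeps the Mistral-instruct prompt format are not confirmed by the card.

```python
# Hypothetical sketch: running the 16-bit GGUF export with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="theGhoul21/srl-sft-010524-gguf-16bit",
    filename="*.gguf",   # assumption: a single GGUF file in the repo
    n_ctx=4096,
)

# Assumption: the fine-tune keeps the Mistral-instruct [INST] ... [/INST] format.
prompt = "[INST] Summarise what supervised fine-tuning is. [/INST]"
out = llm(prompt, max_tokens=256, temperature=0.7)
print(out["choices"][0]["text"])
```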
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "gguf"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"}
theGhoul21/srl-sft-010524-gguf-16bit
null
[ "transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T07:12:28+00:00
[]
[ "en" ]
TAGS #transformers #gguf #mistral #text-generation-inference #unsloth #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: theGhoul21 - License: apache-2.0 - Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: theGhoul21\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #gguf #mistral #text-generation-inference #unsloth #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: theGhoul21\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ 67, 88 ]
[ "TAGS\n#transformers #gguf #mistral #text-generation-inference #unsloth #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: theGhoul21\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # DreamBooth - Nekodigi/path-to-save-model This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. DreamBooth for the text encoder was enabled: False. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
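The "How to use" snippet above is still a TODO. A minimal sketch with diffusers is given below, assuming the repository stores standard StableDiffusionPipeline weights (which is what the DreamBooth training script pushes by default).

```python
# Minimal sketch: sampling from the DreamBooth fine-tune with diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Nekodigi/path-to-save-model", torch_dtype=torch.float16
).to("cuda")

# The instance prompt used for training was "a photo of sks dog".
image = pipe(
    "a photo of sks dog in a bucket",
    num_inference_steps=50,
    guidance_scale=7.5,
).images[0]
image.save("sks_dog.png")
```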
{"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers"], "inference": true, "base_model": "CompVis/stable-diffusion-v1-4", "instance_prompt": "a photo of sks dog"}
Nekodigi/path-to-save-model
null
[ "diffusers", "tensorboard", "safetensors", "text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "base_model:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
null
2024-05-01T07:13:05+00:00
[]
[]
TAGS #diffusers #tensorboard #safetensors #text-to-image #dreambooth #diffusers-training #stable-diffusion #stable-diffusion-diffusers #base_model-CompVis/stable-diffusion-v1-4 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
# DreamBooth - Nekodigi/path-to-save-model This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using DreamBooth. You can find some example images in the following. DreamBooth for the text encoder was enabled: False. ## Intended uses & limitations #### How to use #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
[ "# DreamBooth - Nekodigi/path-to-save-model\n\nThis is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using DreamBooth.\nYou can find some example images in the following. \n\n\n\nDreamBooth for the text encoder was enabled: False.", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
[ "TAGS\n#diffusers #tensorboard #safetensors #text-to-image #dreambooth #diffusers-training #stable-diffusion #stable-diffusion-diffusers #base_model-CompVis/stable-diffusion-v1-4 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n", "# DreamBooth - Nekodigi/path-to-save-model\n\nThis is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using DreamBooth.\nYou can find some example images in the following. \n\n\n\nDreamBooth for the text encoder was enabled: False.", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
[ 83, 79, 6, 7, 23, 17 ]
[ "TAGS\n#diffusers #tensorboard #safetensors #text-to-image #dreambooth #diffusers-training #stable-diffusion #stable-diffusion-diffusers #base_model-CompVis/stable-diffusion-v1-4 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n# DreamBooth - Nekodigi/path-to-save-model\n\nThis is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using DreamBooth.\nYou can find some example images in the following. \n\n\n\nDreamBooth for the text encoder was enabled: False.## Intended uses & limitations#### How to use#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]## Training details\n\n[TODO: describe the data used to train the model]" ]
null
null
--- license: creativeml-openrail-m tags: - art --- Model Details A merge based on AOM3A3, DreamShaper and ZhangMix. Models used for the merge: AbyssOrangeMix3 (AOM3) by WarriorMama777, ZhangMix by Zhang_Lin, DreamShaper by Lykon. License: Fair AI Public License 1.0-SD Recommended settings It is recommended to use a lower classifier-free guidance (CFG Scale) of around 5-7, sampling steps between 20 and 28, and DPM++ 2M Karras as the sampler. I also tested using Euler Ancestral (Euler A). Notes Based on AbyssOrangeMix3, ZhangMix, and DreamShaper. Dream Abyss falls under the Fair AI Public License 1.0-SD, which is compatible with the Stable Diffusion models' license. Key points: Modification Sharing: If you modify Dream Abyss, you must share both your changes and the original license. Source Code Accessibility: If your modified version is network-accessible, provide a way (like a download link) for others to get the source code. This applies to derived models too. Distribution Terms: Any distribution must be under this license or another with similar rules. Compliance: Non-compliance must be fixed within 30 days to avoid license termination, emphasizing transparency and adherence to open-source values.
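A hedged sketch of applying the recommended settings with diffusers, assuming the merge is distributed as a single .safetensors checkpoint (the filename below is hypothetical); DPM++ 2M Karras corresponds to DPMSolverMultistepScheduler with Karras sigmas.

```python
# Sketch: CFG ~5-7, 20-28 steps, DPM++ 2M Karras, as recommended in the card.
import torch
from diffusers import DPMSolverMultistepScheduler, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "DreamAbyss.safetensors",  # hypothetical filename for the merged checkpoint
    torch_dtype=torch.float16,
).to("cuda")

pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

image = pipe(
    "masterpiece, best quality, scenic abyssal landscape",
    guidance_scale=6.0,        # within the recommended 5-7 range
    num_inference_steps=24,    # within the recommended 20-28 range
).images[0]
image.save("dream_abyss_sample.png")
```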
{}
NeverWinter13/DreamAbyss
null
[ "region:us" ]
null
2024-05-01T07:15:43+00:00
[]
[]
TAGS #region-us
--- license: creativeml-openrail-m tags: - art --- Model Details A merge based on AOM3A3, DreamShaper and ZhangMix. Models used for the merge: AbyssOrangeMix3 (AOM3) by WarriorMama777, ZhangMix by Zhang_Lin, DreamShaper by Lykon. License: Fair AI Public License 1.0-SD Recommended settings It is recommended to use a lower classifier-free guidance (CFG Scale) of around 5-7, sampling steps between 20 and 28, and DPM++ 2M Karras as the sampler. I also tested using Euler Ancestral (Euler A). Notes Based on AbyssOrangeMix3, ZhangMix, and DreamShaper. Dream Abyss falls under the Fair AI Public License 1.0-SD, which is compatible with the Stable Diffusion models' license. Key points: Modification Sharing: If you modify Dream Abyss, you must share both your changes and the original license. Source Code Accessibility: If your modified version is network-accessible, provide a way (like a download link) for others to get the source code. This applies to derived models too. Distribution Terms: Any distribution must be under this license or another with similar rules. Compliance: Non-compliance must be fixed within 30 days to avoid license termination, emphasizing transparency and adherence to open-source values.
[]
[ "TAGS\n#region-us \n" ]
[ 5 ]
[ "TAGS\n#region-us \n" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
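The "How to Get Started" section above is empty. A generic, hedged sketch for a Llama-architecture text-generation checkpoint follows; nothing in the card documents a prompt or chat format, so a plain prompt is used.

```python
# Generic sketch only: the card gives no usage details for this checkpoint.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="cilantro9246/uxyepxd",
    torch_dtype=torch.float16,
    device_map="auto",
)

out = pipe("What is a large language model?", max_new_tokens=128)
print(out[0]["generated_text"])
```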
{"library_name": "transformers", "tags": []}
cilantro9246/uxyepxd
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T07:25:59+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 47, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
text-generation
null
# newsletter/Phi-3-mini-4k-instruct-Q6_K-GGUF This model was converted to GGUF format from [`microsoft/Phi-3-mini-4k-instruct`](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo newsletter/Phi-3-mini-4k-instruct-Q6_K-GGUF --model phi-3-mini-4k-instruct.Q6_K.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo newsletter/Phi-3-mini-4k-instruct-Q6_K-GGUF --model phi-3-mini-4k-instruct.Q6_K.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m phi-3-mini-4k-instruct.Q6_K.gguf -n 128 ```
{"language": ["en"], "license": "mit", "tags": ["nlp", "code", "llama-cpp", "gguf-my-repo"], "license_link": "https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE", "pipeline_tag": "text-generation", "widget": [{"messages": [{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}]}]}
newsletter/Phi-3-mini-4k-instruct-Q6_K-GGUF
null
[ "gguf", "nlp", "code", "llama-cpp", "gguf-my-repo", "text-generation", "en", "license:mit", "region:us" ]
null
2024-05-01T07:27:22+00:00
[]
[ "en" ]
TAGS #gguf #nlp #code #llama-cpp #gguf-my-repo #text-generation #en #license-mit #region-us
# newsletter/Phi-3-mini-4k-instruct-Q6_K-GGUF This model was converted to GGUF format from 'microsoft/Phi-3-mini-4k-instruct' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# newsletter/Phi-3-mini-4k-instruct-Q6_K-GGUF\nThis model was converted to GGUF format from 'microsoft/Phi-3-mini-4k-instruct' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #nlp #code #llama-cpp #gguf-my-repo #text-generation #en #license-mit #region-us \n", "# newsletter/Phi-3-mini-4k-instruct-Q6_K-GGUF\nThis model was converted to GGUF format from 'microsoft/Phi-3-mini-4k-instruct' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ 39, 80, 52 ]
[ "TAGS\n#gguf #nlp #code #llama-cpp #gguf-my-repo #text-generation #en #license-mit #region-us \n# newsletter/Phi-3-mini-4k-instruct-Q6_K-GGUF\nThis model was converted to GGUF format from 'microsoft/Phi-3-mini-4k-instruct' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
text-to-speech
transformers
This is a new model for XTTS that I trained on a 40-hour dataset. It was trained on a V100 for 20 epochs with a batch size of 2.
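A hedged inference sketch with the Coqui TTS API; the checkpoint directory layout, file names, and reference audio below are assumptions, not part of the card.

```python
# Sketch: Russian speech synthesis with a fine-tuned XTTS checkpoint via Coqui TTS.
from TTS.api import TTS

tts = TTS(
    model_path="RU-XTTS-DonuModel",               # assumed local checkpoint directory
    config_path="RU-XTTS-DonuModel/config.json",  # assumed config location
).to("cuda")

tts.tts_to_file(
    text="Привет! Это тест дообученной модели XTTS.",
    speaker_wav="reference_speaker.wav",  # a few seconds of reference audio (assumed)
    language="ru",
    file_path="output.wav",
)
```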
{"language": ["ru"], "license": "apache-2.0", "tags": ["legal"], "pipeline_tag": "text-to-speech"}
NeuroDonu/RU-XTTS-DonuModel
null
[ "transformers", "legal", "text-to-speech", "ru", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T07:27:26+00:00
[]
[ "ru" ]
TAGS #transformers #legal #text-to-speech #ru #license-apache-2.0 #endpoints_compatible #region-us
This is a new model for XTTS that I trained on a 40-hour dataset. It was trained on a V100 for 20 epochs with a batch size of 2.
[]
[ "TAGS\n#transformers #legal #text-to-speech #ru #license-apache-2.0 #endpoints_compatible #region-us \n" ]
[ 30 ]
[ "TAGS\n#transformers #legal #text-to-speech #ru #license-apache-2.0 #endpoints_compatible #region-us \n" ]
text-generation
transformers
# Dolphin 2.9 Mixtral 8x22b 🐬 Curated and trained by Eric Hartford, Lucas Atkins, and Fernando Fernandes, and Cognitive Computations Discord: https://discord.gg/8fbBeC7ZGx <img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" /> My appreciation for the sponsors of Dolphin 2.9: - [Crusoe Cloud](https://crusoe.ai/) - provided excellent on-demand 8xH100 node This model is based on Dolphin-2.9-Mixtral-8x22b, and is Apache-2.0 licensed. The base model has 64k context, and the full-weight fine-tuning was with 4k sequence length. It took 1 week on 8xH100 provided by Crusoe Cloud This model was trained FFT on 50% parameters (targeted with [Laser Scanner](https://github.com/cognitivecomputations/laserRMT/blob/main/laser_scanner.py) by Fernando Fernandes, David Golchinfar, Lucas Atkins, and Eric Hartford) , using ChatML prompt template format. example: ``` <|im_start|>system You are Dolphin, a helpful AI assistant.<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` Dolphin-2.9 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling. Dolphin is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models You are responsible for any content you create using this model. Enjoy responsibly. Dolphin is licensed Apache 2.0. I grant permission for any use, including commercial, that falls within accordance with Apache-2.0 license. Dolphin was trained on data generated from GPT4, among other models. 
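A minimal sketch of assembling the ChatML prompt shown above; if the repo's tokenizer ships a ChatML chat template (an assumption for this quantised upload), tokenizer.apply_chat_template would produce an equivalent string.

```python
# Building the ChatML prompt format described above by hand.
def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(chatml_prompt(
    "You are Dolphin, a helpful AI assistant.",
    "Write a haiku about whales.",
))
```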
## Evals ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/Nb6f_dS_M6fN_v2ACK98x.png) ## Training [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: mistral-community/Mixtral-8x22B-v0.1 model_type: AutoModelForCausalLM tokenizer_type: LlamaTokenizer trust_remote_code: true load_in_8bit: false load_in_4bit: false strict: false unfrozen_parameters: - ^lm_head.weight$ - ^model.embed_tokens.weight$ - model.layers.0.self_attn.q_proj - model.layers.1.self_attn.q_proj - model.layers.2.self_attn.q_proj - model.layers.22.self_attn.q_proj - model.layers.27.self_attn.q_proj - model.layers.28.self_attn.q_proj - model.layers.13.self_attn.q_proj - model.layers.21.self_attn.q_proj - model.layers.24.self_attn.q_proj - model.layers.14.self_attn.q_proj - model.layers.15.self_attn.q_proj - model.layers.11.self_attn.q_proj - model.layers.20.self_attn.q_proj - model.layers.23.self_attn.q_proj - model.layers.30.self_attn.k_proj - model.layers.31.self_attn.k_proj - model.layers.25.self_attn.k_proj - model.layers.23.self_attn.k_proj - model.layers.27.self_attn.k_proj - model.layers.26.self_attn.k_proj - model.layers.29.self_attn.k_proj - model.layers.28.self_attn.k_proj - model.layers.24.self_attn.k_proj - model.layers.16.self_attn.k_proj - model.layers.19.self_attn.k_proj - model.layers.22.self_attn.k_proj - model.layers.20.self_attn.k_proj - model.layers.6.self_attn.k_proj - model.layers.22.self_attn.v_proj - model.layers.29.self_attn.v_proj - model.layers.31.self_attn.v_proj - model.layers.5.self_attn.v_proj - model.layers.8.self_attn.v_proj - model.layers.4.self_attn.v_proj - model.layers.25.self_attn.v_proj - model.layers.30.self_attn.v_proj - model.layers.17.self_attn.v_proj - model.layers.23.self_attn.v_proj - model.layers.28.self_attn.v_proj - model.layers.9.self_attn.v_proj - model.layers.26.self_attn.v_proj - model.layers.27.self_attn.v_proj - model.layers.20.self_attn.o_proj - model.layers.19.self_attn.o_proj - model.layers.16.self_attn.o_proj - model.layers.13.self_attn.o_proj - model.layers.18.self_attn.o_proj - model.layers.17.self_attn.o_proj - model.layers.12.self_attn.o_proj - model.layers.15.self_attn.o_proj - model.layers.14.self_attn.o_proj - model.layers.22.self_attn.o_proj - model.layers.23.self_attn.o_proj - model.layers.21.self_attn.o_proj - model.layers.10.self_attn.o_proj - model.layers.0.self_attn.o_proj - model.layers.0.block_sparse_moe.experts.0.w1 - model.layers.1.block_sparse_moe.experts.0.w1 - model.layers.2.block_sparse_moe.experts.0.w1 - model.layers.3.block_sparse_moe.experts.0.w1 - model.layers.4.block_sparse_moe.experts.0.w1 - model.layers.5.block_sparse_moe.experts.0.w1 - model.layers.6.block_sparse_moe.experts.0.w1 - model.layers.7.block_sparse_moe.experts.0.w1 - model.layers.8.block_sparse_moe.experts.0.w1 - model.layers.9.block_sparse_moe.experts.0.w1 - model.layers.10.block_sparse_moe.experts.0.w1 - model.layers.11.block_sparse_moe.experts.0.w1 - model.layers.12.block_sparse_moe.experts.0.w1 - model.layers.13.block_sparse_moe.experts.0.w1 - model.layers.0.block_sparse_moe.experts.0.w2 - model.layers.1.block_sparse_moe.experts.0.w2 - model.layers.2.block_sparse_moe.experts.0.w2 - model.layers.3.block_sparse_moe.experts.0.w2 - model.layers.4.block_sparse_moe.experts.0.w2 - 
model.layers.5.block_sparse_moe.experts.0.w2 - model.layers.6.block_sparse_moe.experts.0.w2 - model.layers.7.block_sparse_moe.experts.0.w2 - model.layers.8.block_sparse_moe.experts.0.w2 - model.layers.9.block_sparse_moe.experts.0.w2 - model.layers.10.block_sparse_moe.experts.0.w2 - model.layers.11.block_sparse_moe.experts.0.w2 - model.layers.12.block_sparse_moe.experts.0.w2 - model.layers.13.block_sparse_moe.experts.0.w2 - model.layers.0.block_sparse_moe.experts.0.w3 - model.layers.1.block_sparse_moe.experts.0.w3 - model.layers.2.block_sparse_moe.experts.0.w3 - model.layers.3.block_sparse_moe.experts.0.w3 - model.layers.4.block_sparse_moe.experts.0.w3 - model.layers.5.block_sparse_moe.experts.0.w3 - model.layers.6.block_sparse_moe.experts.0.w3 - model.layers.7.block_sparse_moe.experts.0.w3 - model.layers.8.block_sparse_moe.experts.0.w3 - model.layers.9.block_sparse_moe.experts.0.w3 - model.layers.10.block_sparse_moe.experts.0.w3 - model.layers.11.block_sparse_moe.experts.0.w3 - model.layers.12.block_sparse_moe.experts.0.w3 - model.layers.13.block_sparse_moe.experts.0.w3 - model.layers.0.block_sparse_moe.experts.1.w1 - model.layers.1.block_sparse_moe.experts.1.w1 - model.layers.2.block_sparse_moe.experts.1.w1 - model.layers.3.block_sparse_moe.experts.1.w1 - model.layers.4.block_sparse_moe.experts.1.w1 - model.layers.5.block_sparse_moe.experts.1.w1 - model.layers.6.block_sparse_moe.experts.1.w1 - model.layers.7.block_sparse_moe.experts.1.w1 - model.layers.8.block_sparse_moe.experts.1.w1 - model.layers.9.block_sparse_moe.experts.1.w1 - model.layers.10.block_sparse_moe.experts.1.w1 - model.layers.11.block_sparse_moe.experts.1.w1 - model.layers.12.block_sparse_moe.experts.1.w1 - model.layers.13.block_sparse_moe.experts.1.w1 - model.layers.40.block_sparse_moe.experts.1.w2 - model.layers.0.block_sparse_moe.experts.1.w2 - model.layers.1.block_sparse_moe.experts.1.w2 - model.layers.2.block_sparse_moe.experts.1.w2 - model.layers.3.block_sparse_moe.experts.1.w2 - model.layers.4.block_sparse_moe.experts.1.w2 - model.layers.5.block_sparse_moe.experts.1.w2 - model.layers.6.block_sparse_moe.experts.1.w2 - model.layers.7.block_sparse_moe.experts.1.w2 - model.layers.8.block_sparse_moe.experts.1.w2 - model.layers.9.block_sparse_moe.experts.1.w2 - model.layers.10.block_sparse_moe.experts.1.w2 - model.layers.11.block_sparse_moe.experts.1.w2 - model.layers.12.block_sparse_moe.experts.1.w2 - model.layers.5.block_sparse_moe.experts.1.w3 - model.layers.0.block_sparse_moe.experts.1.w3 - model.layers.1.block_sparse_moe.experts.1.w3 - model.layers.2.block_sparse_moe.experts.1.w3 - model.layers.3.block_sparse_moe.experts.1.w3 - model.layers.4.block_sparse_moe.experts.1.w3 - model.layers.6.block_sparse_moe.experts.1.w3 - model.layers.7.block_sparse_moe.experts.1.w3 - model.layers.8.block_sparse_moe.experts.1.w3 - model.layers.9.block_sparse_moe.experts.1.w3 - model.layers.10.block_sparse_moe.experts.1.w3 - model.layers.11.block_sparse_moe.experts.1.w3 - model.layers.12.block_sparse_moe.experts.1.w3 - model.layers.13.block_sparse_moe.experts.1.w3 - model.layers.1.block_sparse_moe.experts.2.w1 - model.layers.0.block_sparse_moe.experts.2.w1 - model.layers.2.block_sparse_moe.experts.2.w1 - model.layers.3.block_sparse_moe.experts.2.w1 - model.layers.4.block_sparse_moe.experts.2.w1 - model.layers.5.block_sparse_moe.experts.2.w1 - model.layers.6.block_sparse_moe.experts.2.w1 - model.layers.7.block_sparse_moe.experts.2.w1 - model.layers.8.block_sparse_moe.experts.2.w1 - model.layers.9.block_sparse_moe.experts.2.w1 - 
model.layers.10.block_sparse_moe.experts.2.w1 - model.layers.11.block_sparse_moe.experts.2.w1 - model.layers.12.block_sparse_moe.experts.2.w1 - model.layers.13.block_sparse_moe.experts.2.w1 - model.layers.1.block_sparse_moe.experts.2.w2 - model.layers.0.block_sparse_moe.experts.2.w2 - model.layers.2.block_sparse_moe.experts.2.w2 - model.layers.3.block_sparse_moe.experts.2.w2 - model.layers.4.block_sparse_moe.experts.2.w2 - model.layers.5.block_sparse_moe.experts.2.w2 - model.layers.6.block_sparse_moe.experts.2.w2 - model.layers.7.block_sparse_moe.experts.2.w2 - model.layers.8.block_sparse_moe.experts.2.w2 - model.layers.9.block_sparse_moe.experts.2.w2 - model.layers.10.block_sparse_moe.experts.2.w2 - model.layers.11.block_sparse_moe.experts.2.w2 - model.layers.12.block_sparse_moe.experts.2.w2 - model.layers.13.block_sparse_moe.experts.2.w2 - model.layers.1.block_sparse_moe.experts.2.w3 - model.layers.0.block_sparse_moe.experts.2.w3 - model.layers.2.block_sparse_moe.experts.2.w3 - model.layers.3.block_sparse_moe.experts.2.w3 - model.layers.4.block_sparse_moe.experts.2.w3 - model.layers.5.block_sparse_moe.experts.2.w3 - model.layers.6.block_sparse_moe.experts.2.w3 - model.layers.7.block_sparse_moe.experts.2.w3 - model.layers.8.block_sparse_moe.experts.2.w3 - model.layers.9.block_sparse_moe.experts.2.w3 - model.layers.10.block_sparse_moe.experts.2.w3 - model.layers.11.block_sparse_moe.experts.2.w3 - model.layers.12.block_sparse_moe.experts.2.w3 - model.layers.13.block_sparse_moe.experts.2.w3 - model.layers.2.block_sparse_moe.experts.3.w1 - model.layers.1.block_sparse_moe.experts.3.w1 - model.layers.0.block_sparse_moe.experts.3.w1 - model.layers.3.block_sparse_moe.experts.3.w1 - model.layers.4.block_sparse_moe.experts.3.w1 - model.layers.5.block_sparse_moe.experts.3.w1 - model.layers.6.block_sparse_moe.experts.3.w1 - model.layers.7.block_sparse_moe.experts.3.w1 - model.layers.8.block_sparse_moe.experts.3.w1 - model.layers.9.block_sparse_moe.experts.3.w1 - model.layers.10.block_sparse_moe.experts.3.w1 - model.layers.11.block_sparse_moe.experts.3.w1 - model.layers.12.block_sparse_moe.experts.3.w1 - model.layers.13.block_sparse_moe.experts.3.w1 - model.layers.2.block_sparse_moe.experts.3.w2 - model.layers.1.block_sparse_moe.experts.3.w2 - model.layers.0.block_sparse_moe.experts.3.w2 - model.layers.3.block_sparse_moe.experts.3.w2 - model.layers.4.block_sparse_moe.experts.3.w2 - model.layers.5.block_sparse_moe.experts.3.w2 - model.layers.6.block_sparse_moe.experts.3.w2 - model.layers.7.block_sparse_moe.experts.3.w2 - model.layers.8.block_sparse_moe.experts.3.w2 - model.layers.9.block_sparse_moe.experts.3.w2 - model.layers.10.block_sparse_moe.experts.3.w2 - model.layers.11.block_sparse_moe.experts.3.w2 - model.layers.12.block_sparse_moe.experts.3.w2 - model.layers.13.block_sparse_moe.experts.3.w2 - model.layers.2.block_sparse_moe.experts.3.w3 - model.layers.1.block_sparse_moe.experts.3.w3 - model.layers.0.block_sparse_moe.experts.3.w3 - model.layers.3.block_sparse_moe.experts.3.w3 - model.layers.4.block_sparse_moe.experts.3.w3 - model.layers.5.block_sparse_moe.experts.3.w3 - model.layers.6.block_sparse_moe.experts.3.w3 - model.layers.7.block_sparse_moe.experts.3.w3 - model.layers.8.block_sparse_moe.experts.3.w3 - model.layers.9.block_sparse_moe.experts.3.w3 - model.layers.10.block_sparse_moe.experts.3.w3 - model.layers.11.block_sparse_moe.experts.3.w3 - model.layers.12.block_sparse_moe.experts.3.w3 - model.layers.13.block_sparse_moe.experts.3.w3 - model.layers.3.block_sparse_moe.experts.4.w1 - 
model.layers.2.block_sparse_moe.experts.4.w1 - model.layers.1.block_sparse_moe.experts.4.w1 - model.layers.0.block_sparse_moe.experts.4.w1 - model.layers.4.block_sparse_moe.experts.4.w1 - model.layers.5.block_sparse_moe.experts.4.w1 - model.layers.6.block_sparse_moe.experts.4.w1 - model.layers.7.block_sparse_moe.experts.4.w1 - model.layers.8.block_sparse_moe.experts.4.w1 - model.layers.9.block_sparse_moe.experts.4.w1 - model.layers.10.block_sparse_moe.experts.4.w1 - model.layers.11.block_sparse_moe.experts.4.w1 - model.layers.12.block_sparse_moe.experts.4.w1 - model.layers.13.block_sparse_moe.experts.4.w1 - model.layers.2.block_sparse_moe.experts.4.w2 - model.layers.3.block_sparse_moe.experts.4.w2 - model.layers.1.block_sparse_moe.experts.4.w2 - model.layers.20.block_sparse_moe.experts.4.w2 - model.layers.0.block_sparse_moe.experts.4.w2 - model.layers.4.block_sparse_moe.experts.4.w2 - model.layers.5.block_sparse_moe.experts.4.w2 - model.layers.6.block_sparse_moe.experts.4.w2 - model.layers.7.block_sparse_moe.experts.4.w2 - model.layers.8.block_sparse_moe.experts.4.w2 - model.layers.9.block_sparse_moe.experts.4.w2 - model.layers.10.block_sparse_moe.experts.4.w2 - model.layers.11.block_sparse_moe.experts.4.w2 - model.layers.12.block_sparse_moe.experts.4.w2 - model.layers.3.block_sparse_moe.experts.4.w3 - model.layers.2.block_sparse_moe.experts.4.w3 - model.layers.1.block_sparse_moe.experts.4.w3 - model.layers.0.block_sparse_moe.experts.4.w3 - model.layers.4.block_sparse_moe.experts.4.w3 - model.layers.5.block_sparse_moe.experts.4.w3 - model.layers.6.block_sparse_moe.experts.4.w3 - model.layers.7.block_sparse_moe.experts.4.w3 - model.layers.8.block_sparse_moe.experts.4.w3 - model.layers.9.block_sparse_moe.experts.4.w3 - model.layers.10.block_sparse_moe.experts.4.w3 - model.layers.11.block_sparse_moe.experts.4.w3 - model.layers.12.block_sparse_moe.experts.4.w3 - model.layers.13.block_sparse_moe.experts.4.w3 - model.layers.4.block_sparse_moe.experts.5.w1 - model.layers.3.block_sparse_moe.experts.5.w1 - model.layers.2.block_sparse_moe.experts.5.w1 - model.layers.1.block_sparse_moe.experts.5.w1 - model.layers.0.block_sparse_moe.experts.5.w1 - model.layers.5.block_sparse_moe.experts.5.w1 - model.layers.6.block_sparse_moe.experts.5.w1 - model.layers.7.block_sparse_moe.experts.5.w1 - model.layers.8.block_sparse_moe.experts.5.w1 - model.layers.9.block_sparse_moe.experts.5.w1 - model.layers.10.block_sparse_moe.experts.5.w1 - model.layers.11.block_sparse_moe.experts.5.w1 - model.layers.12.block_sparse_moe.experts.5.w1 - model.layers.13.block_sparse_moe.experts.5.w1 - model.layers.4.block_sparse_moe.experts.5.w2 - model.layers.2.block_sparse_moe.experts.5.w2 - model.layers.3.block_sparse_moe.experts.5.w2 - model.layers.1.block_sparse_moe.experts.5.w2 - model.layers.0.block_sparse_moe.experts.5.w2 - model.layers.5.block_sparse_moe.experts.5.w2 - model.layers.6.block_sparse_moe.experts.5.w2 - model.layers.7.block_sparse_moe.experts.5.w2 - model.layers.8.block_sparse_moe.experts.5.w2 - model.layers.9.block_sparse_moe.experts.5.w2 - model.layers.10.block_sparse_moe.experts.5.w2 - model.layers.11.block_sparse_moe.experts.5.w2 - model.layers.12.block_sparse_moe.experts.5.w2 - model.layers.13.block_sparse_moe.experts.5.w2 - model.layers.4.block_sparse_moe.experts.5.w3 - model.layers.3.block_sparse_moe.experts.5.w3 - model.layers.2.block_sparse_moe.experts.5.w3 - model.layers.1.block_sparse_moe.experts.5.w3 - model.layers.0.block_sparse_moe.experts.5.w3 - model.layers.5.block_sparse_moe.experts.5.w3 - 
model.layers.6.block_sparse_moe.experts.5.w3 - model.layers.7.block_sparse_moe.experts.5.w3 - model.layers.8.block_sparse_moe.experts.5.w3 - model.layers.9.block_sparse_moe.experts.5.w3 - model.layers.10.block_sparse_moe.experts.5.w3 - model.layers.11.block_sparse_moe.experts.5.w3 - model.layers.12.block_sparse_moe.experts.5.w3 - model.layers.13.block_sparse_moe.experts.5.w3 - model.layers.5.block_sparse_moe.experts.6.w1 - model.layers.4.block_sparse_moe.experts.6.w1 - model.layers.3.block_sparse_moe.experts.6.w1 - model.layers.2.block_sparse_moe.experts.6.w1 - model.layers.1.block_sparse_moe.experts.6.w1 - model.layers.0.block_sparse_moe.experts.6.w1 - model.layers.6.block_sparse_moe.experts.6.w1 - model.layers.7.block_sparse_moe.experts.6.w1 - model.layers.8.block_sparse_moe.experts.6.w1 - model.layers.9.block_sparse_moe.experts.6.w1 - model.layers.10.block_sparse_moe.experts.6.w1 - model.layers.11.block_sparse_moe.experts.6.w1 - model.layers.12.block_sparse_moe.experts.6.w1 - model.layers.13.block_sparse_moe.experts.6.w1 - model.layers.5.block_sparse_moe.experts.6.w2 - model.layers.4.block_sparse_moe.experts.6.w2 - model.layers.2.block_sparse_moe.experts.6.w2 - model.layers.3.block_sparse_moe.experts.6.w2 - model.layers.1.block_sparse_moe.experts.6.w2 - model.layers.0.block_sparse_moe.experts.6.w2 - model.layers.6.block_sparse_moe.experts.6.w2 - model.layers.7.block_sparse_moe.experts.6.w2 - model.layers.8.block_sparse_moe.experts.6.w2 - model.layers.9.block_sparse_moe.experts.6.w2 - model.layers.10.block_sparse_moe.experts.6.w2 - model.layers.11.block_sparse_moe.experts.6.w2 - model.layers.12.block_sparse_moe.experts.6.w2 - model.layers.13.block_sparse_moe.experts.6.w2 - model.layers.5.block_sparse_moe.experts.6.w3 - model.layers.4.block_sparse_moe.experts.6.w3 - model.layers.3.block_sparse_moe.experts.6.w3 - model.layers.2.block_sparse_moe.experts.6.w3 - model.layers.1.block_sparse_moe.experts.6.w3 - model.layers.0.block_sparse_moe.experts.6.w3 - model.layers.6.block_sparse_moe.experts.6.w3 - model.layers.7.block_sparse_moe.experts.6.w3 - model.layers.8.block_sparse_moe.experts.6.w3 - model.layers.9.block_sparse_moe.experts.6.w3 - model.layers.10.block_sparse_moe.experts.6.w3 - model.layers.11.block_sparse_moe.experts.6.w3 - model.layers.12.block_sparse_moe.experts.6.w3 - model.layers.13.block_sparse_moe.experts.6.w3 - model.layers.5.block_sparse_moe.experts.7.w1 - model.layers.6.block_sparse_moe.experts.7.w1 - model.layers.3.block_sparse_moe.experts.7.w1 - model.layers.4.block_sparse_moe.experts.7.w1 - model.layers.2.block_sparse_moe.experts.7.w1 - model.layers.0.block_sparse_moe.experts.7.w1 - model.layers.7.block_sparse_moe.experts.7.w1 - model.layers.8.block_sparse_moe.experts.7.w1 - model.layers.9.block_sparse_moe.experts.7.w1 - model.layers.10.block_sparse_moe.experts.7.w1 - model.layers.11.block_sparse_moe.experts.7.w1 - model.layers.12.block_sparse_moe.experts.7.w1 - model.layers.13.block_sparse_moe.experts.7.w1 - model.layers.14.block_sparse_moe.experts.7.w1 - model.layers.6.block_sparse_moe.experts.7.w2 - model.layers.5.block_sparse_moe.experts.7.w2 - model.layers.4.block_sparse_moe.experts.7.w2 - model.layers.2.block_sparse_moe.experts.7.w2 - model.layers.3.block_sparse_moe.experts.7.w2 - model.layers.1.block_sparse_moe.experts.7.w2 - model.layers.0.block_sparse_moe.experts.7.w2 - model.layers.7.block_sparse_moe.experts.7.w2 - model.layers.8.block_sparse_moe.experts.7.w2 - model.layers.9.block_sparse_moe.experts.7.w2 - model.layers.10.block_sparse_moe.experts.7.w2 - 
model.layers.11.block_sparse_moe.experts.7.w2 - model.layers.12.block_sparse_moe.experts.7.w2 - model.layers.13.block_sparse_moe.experts.7.w2 - model.layers.6.block_sparse_moe.experts.7.w3 - model.layers.5.block_sparse_moe.experts.7.w3 - model.layers.4.block_sparse_moe.experts.7.w3 - model.layers.3.block_sparse_moe.experts.7.w3 - model.layers.2.block_sparse_moe.experts.7.w3 - model.layers.0.block_sparse_moe.experts.7.w3 - model.layers.7.block_sparse_moe.experts.7.w3 - model.layers.8.block_sparse_moe.experts.7.w3 - model.layers.9.block_sparse_moe.experts.7.w3 - model.layers.10.block_sparse_moe.experts.7.w3 - model.layers.11.block_sparse_moe.experts.7.w3 - model.layers.12.block_sparse_moe.experts.7.w3 - model.layers.13.block_sparse_moe.experts.7.w3 - model.layers.14.block_sparse_moe.experts.7.w3 - model.layers.0.block_sparse_moe.gate - model.layers.1.block_sparse_moe.gate - model.layers.2.block_sparse_moe.gate - model.layers.3.block_sparse_moe.gate - model.layers.4.block_sparse_moe.gate - model.layers.5.block_sparse_moe.gate - model.layers.6.block_sparse_moe.gate - model.layers.7.block_sparse_moe.gate - model.layers.8.block_sparse_moe.gate - model.layers.9.block_sparse_moe.gate - model.layers.10.block_sparse_moe.gate - model.layers.11.block_sparse_moe.gate - model.layers.12.block_sparse_moe.gate - model.layers.13.block_sparse_moe.gate model_config: output_router_logits: true datasets: - path: /workspace/datasets/dolphin-2.9/dolphin201-sharegpt2.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/Ultrachat200kunfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/dolphin-coder-translate-sharegpt2.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/dolphin-coder-codegen-sharegpt2.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/m-a-p_Code-Feedback-sharegpt-unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/m-a-p_CodeFeedback-Filtered-Instruction-sharegpt-unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/not_samantha_norefusals.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/Orca-Math-resort-unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/agent_instruct_react_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_instruct_j1s1_3k_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_negative_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_react_10p_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_tflan_cot_30p_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/openhermes200k_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/SystemConversations.jsonl type: sharegpt conversation: chatml chat_template: chatml dataset_prepared_path: thingy val_set_size: 0.0002 output_dir: ./out sequence_len: 4096 sample_packing: true pad_to_sequence_len: true gradient_accumulation_steps: 8 micro_batch_size: 4 num_epochs: 3 logging_steps: 1 optimizer: paged_adamw_8bit lr_scheduler: cosine learning_rate: 2.7e-5 wandb_project: dolphin-2.9-mixtral-8x22b wandb_watch: wandb_run_id: wandb_log_model: train_on_inputs: false group_by_length: false 
bf16: auto fp16: tf32: true gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: false early_stopping_patience: # resume_from_checkpoint: /home/ehartford/axolotl/out/checkpoint-316 local_rank: logging_steps: 1 xformers_attention: flash_attention: true saves_per_epoch: 8 save_total_limit: 2 save_steps: evals_per_epoch: 4 eval_sample_packing: false debug: deepspeed: deepspeed_configs/zero3_bf16_cpuoffload_params.json weight_decay: 0.05 fsdp: fsdp_config: special_tokens: eos_token: "<|im_end|>" tokens: - "<|im_start|>" ``` </details><br> ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.7022 | 0.0 | 1 | 0.6989 | | 0.5344 | 0.25 | 238 | 0.5138 | | 0.5204 | 0.5 | 476 | 0.5018 | | 0.5059 | 0.75 | 714 | 0.4951 | | 0.5112 | 1.0 | 952 | 0.4911 | | 0.4561 | 1.24 | 1190 | 0.4978 | | 0.478 | 1.49 | 1428 | 0.4935 | | 0.4714 | 1.74 | 1666 | 0.4899 | | 0.4626 | 1.99 | 1904 | 0.4861 | | 0.3675 | 2.22 | 2142 | 0.5240 | | 0.3595 | 2.47 | 2380 | 0.5229 | | 0.3438 | 2.72 | 2618 | 0.5217 | ### Framework versions - Transformers 4.40.0.dev0 - Pytorch 2.2.2+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
{"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer", "axolotl"], "datasets": ["cognitivecomputations/Dolphin-2.9", "teknium/OpenHermes-2.5", "m-a-p/CodeFeedback-Filtered-Instruction", "cognitivecomputations/dolphin-coder", "cognitivecomputations/samantha-data", "HuggingFaceH4/ultrachat_200k", "microsoft/orca-math-word-problems-200k", "abacusai/SystemChat-1.1", "Locutusque/function-calling-chatml", "internlm/Agent-FLAN"], "base_model": "mistral-community/Mixtral-8x22B-v0.1", "model-index": [{"name": "out", "results": []}]}
blockblockblock/dolphin-2.9-mixtral-8x22b-bpw3.5-exl2
null
[ "transformers", "safetensors", "mixtral", "text-generation", "generated_from_trainer", "axolotl", "conversational", "en", "dataset:cognitivecomputations/Dolphin-2.9", "dataset:teknium/OpenHermes-2.5", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:cognitivecomputations/dolphin-coder", "dataset:cognitivecomputations/samantha-data", "dataset:HuggingFaceH4/ultrachat_200k", "dataset:microsoft/orca-math-word-problems-200k", "dataset:abacusai/SystemChat-1.1", "dataset:Locutusque/function-calling-chatml", "dataset:internlm/Agent-FLAN", "base_model:mistral-community/Mixtral-8x22B-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T07:28:48+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #mixtral #text-generation #generated_from_trainer #axolotl #conversational #en #dataset-cognitivecomputations/Dolphin-2.9 #dataset-teknium/OpenHermes-2.5 #dataset-m-a-p/CodeFeedback-Filtered-Instruction #dataset-cognitivecomputations/dolphin-coder #dataset-cognitivecomputations/samantha-data #dataset-HuggingFaceH4/ultrachat_200k #dataset-microsoft/orca-math-word-problems-200k #dataset-abacusai/SystemChat-1.1 #dataset-Locutusque/function-calling-chatml #dataset-internlm/Agent-FLAN #base_model-mistral-community/Mixtral-8x22B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
Dolphin 2.9 Mixtral 8x22b ========================= Curated and trained by Eric Hartford, Lucas Atkins, and Fernando Fernandes, and Cognitive Computations Discord: URL <img src="URL width="600" /> My appreciation for the sponsors of Dolphin 2.9: * Crusoe Cloud - provided excellent on-demand 8xH100 node This model is based on Dolphin-2.9-Mixtral-8x22b, and is Apache-2.0 licensed. The base model has 64k context, and the full-weight fine-tuning was with 4k sequence length. It took 1 week on 8xH100 provided by Crusoe Cloud This model was trained FFT on 50% parameters (targeted with Laser Scanner by Fernando Fernandes, David Golchinfar, Lucas Atkins, and Eric Hartford) , using ChatML prompt template format. example: Dolphin-2.9 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling. Dolphin is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones. Please read my blog post about uncensored models. URL You are responsible for any content you create using this model. Enjoy responsibly. Dolphin is licensed Apache 2.0. I grant permission for any use, including commercial, that falls within accordance with Apache-2.0 license. Dolphin was trained on data generated from GPT4, among other models. Evals ----- !image/png Training -------- <img src="URL alt="Built with Axolotl" width="200" height="32"/> See axolotl config axolotl version: '0.4.0' ### Training results ### Framework versions * Transformers 4.40.0.dev0 * Pytorch 2.2.2+cu121 * Datasets 2.15.0 * Tokenizers 0.15.0
[ "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0.dev0\n* Pytorch 2.2.2+cu121\n* Datasets 2.15.0\n* Tokenizers 0.15.0" ]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #generated_from_trainer #axolotl #conversational #en #dataset-cognitivecomputations/Dolphin-2.9 #dataset-teknium/OpenHermes-2.5 #dataset-m-a-p/CodeFeedback-Filtered-Instruction #dataset-cognitivecomputations/dolphin-coder #dataset-cognitivecomputations/samantha-data #dataset-HuggingFaceH4/ultrachat_200k #dataset-microsoft/orca-math-word-problems-200k #dataset-abacusai/SystemChat-1.1 #dataset-Locutusque/function-calling-chatml #dataset-internlm/Agent-FLAN #base_model-mistral-community/Mixtral-8x22B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0.dev0\n* Pytorch 2.2.2+cu121\n* Datasets 2.15.0\n* Tokenizers 0.15.0" ]
[ 224, 5, 47 ]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #generated_from_trainer #axolotl #conversational #en #dataset-cognitivecomputations/Dolphin-2.9 #dataset-teknium/OpenHermes-2.5 #dataset-m-a-p/CodeFeedback-Filtered-Instruction #dataset-cognitivecomputations/dolphin-coder #dataset-cognitivecomputations/samantha-data #dataset-HuggingFaceH4/ultrachat_200k #dataset-microsoft/orca-math-word-problems-200k #dataset-abacusai/SystemChat-1.1 #dataset-Locutusque/function-calling-chatml #dataset-internlm/Agent-FLAN #base_model-mistral-community/Mixtral-8x22B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n### Training results### Framework versions\n\n\n* Transformers 4.40.0.dev0\n* Pytorch 2.2.2+cu121\n* Datasets 2.15.0\n* Tokenizers 0.15.0" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
quickstep3621/jv34wep
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-01T07:29:03+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 41, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
null
transformers
# Uploaded model - **Developed by:** theGhoul21 - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "gguf"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"}
theGhoul21/srl-sft-010524-gguf-q4_k_m
null
[ "transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T07:31:15+00:00
[]
[ "en" ]
TAGS #transformers #gguf #mistral #text-generation-inference #unsloth #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: theGhoul21 - License: apache-2.0 - Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: theGhoul21\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #gguf #mistral #text-generation-inference #unsloth #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: theGhoul21\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ 67, 88 ]
[ "TAGS\n#transformers #gguf #mistral #text-generation-inference #unsloth #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: theGhoul21\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - embracellm/sushi18_LoRA <Gallery /> ## Model description These are embracellm/sushi18_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of Shrimp Tempura Crunch Roll to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](embracellm/sushi18_LoRA/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
{"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "text-to-image", "diffusers-training", "diffusers", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "a photo of Shrimp Tempura Crunch Roll", "widget": []}
embracellm/sushi18_LoRA
null
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
null
2024-05-01T07:32:12+00:00
[]
[]
TAGS #diffusers #tensorboard #text-to-image #diffusers-training #dora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us
# SDXL LoRA DreamBooth - embracellm/sushi18_LoRA <Gallery /> ## Model description These are embracellm/sushi18_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using DreamBooth. LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of Shrimp Tempura Crunch Roll to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. Download them in the Files & versions tab. ## Intended uses & limitations #### How to use #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
[ "# SDXL LoRA DreamBooth - embracellm/sushi18_LoRA\n\n<Gallery />", "## Model description\n\nThese are embracellm/sushi18_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.", "## Trigger words\n\nYou should use a photo of Shrimp Tempura Crunch Roll to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
[ "TAGS\n#diffusers #tensorboard #text-to-image #diffusers-training #dora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n", "# SDXL LoRA DreamBooth - embracellm/sushi18_LoRA\n\n<Gallery />", "## Model description\n\nThese are embracellm/sushi18_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.", "## Trigger words\n\nYou should use a photo of Shrimp Tempura Crunch Roll to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
[ 72, 24, 84, 22, 25, 6, 7, 23, 17 ]
[ "TAGS\n#diffusers #tensorboard #text-to-image #diffusers-training #dora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n# SDXL LoRA DreamBooth - embracellm/sushi18_LoRA\n\n<Gallery />## Model description\n\nThese are embracellm/sushi18_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.## Trigger words\n\nYou should use a photo of Shrimp Tempura Crunch Roll to trigger the image generation.## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.## Intended uses & limitations#### How to use#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]## Training details\n\n[TODO: describe the data used to train the model]" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Abhaykoul/UNIQ
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T07:33:32+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 47, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
null
transformers
# Uploaded model - **Developed by:** jimdaro - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
jimdaro/lora_model_001
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T07:34:07+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: jimdaro - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: jimdaro\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: jimdaro\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ 64, 80 ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: jimdaro\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
transformers
# Uploaded model - **Developed by:** arnav0204 - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-2b-it-bnb-4bit This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "gemma", "trl"], "base_model": "unsloth/gemma-2b-it-bnb-4bit"}
arnav0204/agrimodel
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma", "trl", "en", "base_model:unsloth/gemma-2b-it-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T07:35:27+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation-inference #unsloth #gemma #trl #en #base_model-unsloth/gemma-2b-it-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: arnav0204 - License: apache-2.0 - Finetuned from model : unsloth/gemma-2b-it-bnb-4bit This gemma model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: arnav0204\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-2b-it-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #gemma #trl #en #base_model-unsloth/gemma-2b-it-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: arnav0204\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-2b-it-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ 62, 81 ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #gemma #trl #en #base_model-unsloth/gemma-2b-it-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: arnav0204\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-2b-it-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
null
KetchupMix is a merge of MustardMix (wandaomix v4) and Mizu mixes v7. Permission was given by both models' authors/owners for the merge. Please give a follow/support to the owners of the original models used: DaoOwoArts and kaiyo. This was supposed to be for personal use only, but I thought, why not share it with others? v2 changes: added some models to the merge, BreakDomain by BD and Dark Sushimix by Aitasai. v3 changes: added saki mix and played with some block weights. v4 changes: forgot what I added. v4 darker changes: a bit darker than the first test and has more of that line. Credits: DaoOwoArts for wandaomix v2 and MustardMix, kaiyo for Mizu mixes v10; added Galena Redux for the texture (will take down if the author wants to). v5 changes: added abysshellmaple into the mix. v5 darker changes: added Dark Sushi Mix by Aitasai.
{"license": "creativeml-openrail-m", "tags": ["art"]}
NeverWinter13/KetchupMix
null
[ "art", "license:creativeml-openrail-m", "region:us" ]
null
2024-05-01T07:37:00+00:00
[]
[]
TAGS #art #license-creativeml-openrail-m #region-us
KetchupMix is a merge of MustardMix (wandaomix v4) and Mizu mixes v7. Permission was given by both models' authors/owners for the merge. Please give a follow/support to the owners of the original models used: DaoOwoArts and kaiyo. This was supposed to be for personal use only, but I thought, why not share it with others? v2 changes: added some models to the merge, BreakDomain by BD and Dark Sushimix by Aitasai. v3 changes: added saki mix and played with some block weights. v4 changes: forgot what I added. v4 darker changes: a bit darker than the first test and has more of that line. Credits: DaoOwoArts for wandaomix v2 and MustardMix, kaiyo for Mizu mixes v10; added Galena Redux for the texture (will take down if the author wants to). v5 changes: added abysshellmaple into the mix. v5 darker changes: added Dark Sushi Mix by Aitasai.
[]
[ "TAGS\n#art #license-creativeml-openrail-m #region-us \n" ]
[ 17 ]
[ "TAGS\n#art #license-creativeml-openrail-m #region-us \n" ]
text-generation
transformers
# Uploaded model - **Developed by:** theGhoul21 - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"}
theGhoul21/srl-sft-010524-4bit
null
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "region:us" ]
null
2024-05-01T07:39:40+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #conversational #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us
# Uploaded model - Developed by: theGhoul21 - License: apache-2.0 - Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: theGhoul21\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #conversational #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us \n", "# Uploaded model\n\n- Developed by: theGhoul21\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ 86, 88 ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #conversational #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us \n# Uploaded model\n\n- Developed by: theGhoul21\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
peft
**Note**: This model card has been generated automatically according to the information the Trainer had access to. Visit the [model card](https://ritvik19.github.io/zephyr-mini/) to see the full description. # zephyr-tinyllama-sft-qlora This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) on the HuggingFaceH4/ultrachat_200k dataset. It achieves the following results on the evaluation set: - Loss: 1.1943 ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 128 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.1908 | 0.9991 | 570 | 1.1943 | ### Framework versions - PEFT 0.7.1 - Transformers 4.40.1 - Pytorch 2.1.2+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "apache-2.0", "library_name": "peft", "tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrachat_200k"], "base_model": "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T", "model-index": [{"name": "zephyr-tinyllama-sft-qlora", "results": []}]}
Ritvik19/zephyr-tinyllama-sft-qlora
null
[ "peft", "safetensors", "llama", "alignment-handbook", "trl", "sft", "generated_from_trainer", "dataset:HuggingFaceH4/ultrachat_200k", "base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T", "license:apache-2.0", "region:us" ]
null
2024-05-01T07:40:58+00:00
[]
[]
TAGS #peft #safetensors #llama #alignment-handbook #trl #sft #generated_from_trainer #dataset-HuggingFaceH4/ultrachat_200k #base_model-TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T #license-apache-2.0 #region-us
Note: This model card has been generated automatically according to the information the Trainer had access to. Visit the model card to see the full description. zephyr-tinyllama-sft-qlora ========================== This model is a fine-tuned version of TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T on the HuggingFaceH4/ultrachat\_200k dataset. It achieves the following results on the evaluation set: * Loss: 1.1943 ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0002 * train\_batch\_size: 2 * eval\_batch\_size: 4 * seed: 42 * distributed\_type: multi-GPU * gradient\_accumulation\_steps: 128 * total\_train\_batch\_size: 256 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 1 ### Training results ### Framework versions * PEFT 0.7.1 * Transformers 4.40.1 * Pytorch 2.1.2+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* gradient\\_accumulation\\_steps: 128\n* total\\_train\\_batch\\_size: 256\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.40.1\n* Pytorch 2.1.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#peft #safetensors #llama #alignment-handbook #trl #sft #generated_from_trainer #dataset-HuggingFaceH4/ultrachat_200k #base_model-TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T #license-apache-2.0 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* gradient\\_accumulation\\_steps: 128\n* total\\_train\\_batch\\_size: 256\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.40.1\n* Pytorch 2.1.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ 81, 154, 5, 52 ]
[ "TAGS\n#peft #safetensors #llama #alignment-handbook #trl #sft #generated_from_trainer #dataset-HuggingFaceH4/ultrachat_200k #base_model-TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T #license-apache-2.0 #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* gradient\\_accumulation\\_steps: 128\n* total\\_train\\_batch\\_size: 256\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1### Training results### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.40.1\n* Pytorch 2.1.2+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
TTTTao725/molt5-augmented-contrastive-200-small-whole_model
null
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T07:45:04+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 46, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
null
diffusers
# Model The model is from [civitai-Yamer](https://civitai.com/models/84040?modelVersionId=196039). This is an excellent model! Thank you, Yamer! For business inquiries, commercial licensing, custom models/commissions, large-scale image captioning for datasets, and consultation, contact me at [email protected] ![image/png](https://cdn-uploads.huggingface.co/production/uploads/643665d33193f279361cc292/yI0NH-NN08uVd6v1obZeu.png)
{"license": "mit"}
Moibe/YamerMIX_v9
null
[ "diffusers", "safetensors", "license:mit", "diffusers:StableDiffusionXLCommonPipeline", "region:us" ]
null
2024-05-01T07:45:51+00:00
[]
[]
TAGS #diffusers #safetensors #license-mit #diffusers-StableDiffusionXLCommonPipeline #region-us
# Model The model is from civitai-Yamer. This is an excellent model! Thank you, Yamer! For business inquiries, commercial licensing, custom models/commissions, large-scale image captioning for datasets, and consultation, contact me at yamer@URL !image/png
[ "# Model\r\n\r\nThe model is from civitai-Yamer. This is a very excellent model!Thank you Yamer!\r\n\r\nFor business inquires, commercial licensing, custom models/commissions, large scale image captioning for datasets and consultation contact me under yamer@URL\r\n\r\n\r\n!image/png" ]
[ "TAGS\n#diffusers #safetensors #license-mit #diffusers-StableDiffusionXLCommonPipeline #region-us \n", "# Model\r\n\r\nThe model is from civitai-Yamer. This is a very excellent model!Thank you Yamer!\r\n\r\nFor business inquires, commercial licensing, custom models/commissions, large scale image captioning for datasets and consultation contact me under yamer@URL\r\n\r\n\r\n!image/png" ]
[ 29, 64 ]
[ "TAGS\n#diffusers #safetensors #license-mit #diffusers-StableDiffusionXLCommonPipeline #region-us \n# Model\r\n\r\nThe model is from civitai-Yamer. This is a very excellent model!Thank you Yamer!\r\n\r\nFor business inquires, commercial licensing, custom models/commissions, large scale image captioning for datasets and consultation contact me under yamer@URL\r\n\r\n\r\n!image/png" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{}
Vishwasv007/lamini_mistral
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-01T07:48:43+00:00
[ "1910.09700" ]
[]
TAGS #transformers #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID This modelcard aims to be a base template for new models. It has been generated using this raw template. ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using this raw template.", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using this raw template.", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 22, 28, 4, 50, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using this raw template.## Model Details### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]