pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths, 1–900k) | metadata (stringlengths, 2–438k) | id (stringlengths, 5–122) | last_modified (null) | tags (sequencelengths, 1–1.84k) | sha (null) | created_at (stringlengths, 25–25) | arxiv (sequencelengths, 0–201) | languages (sequencelengths, 0–1.83k) | tags_str (stringlengths, 17–9.34k) | text_str (stringlengths, 0–389k) | text_lists (sequencelengths, 0–722) | processed_texts (sequencelengths, 1–723)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus_books_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6119
- Bleu: 5.6824
- Gen Len: 17.6109
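A minimal, hedged sketch of running inference with this checkpoint via `transformers`. The repo id comes from the card metadata; the task prefix and language pair are assumptions, since the card does not state them (opus_books fine-tunes typically translate English to French):
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Repo id from the card metadata; the "translate English to French:" prefix
# is an assumption based on the usual opus_books fine-tuning recipe.
model_id = "HanliangXu/my_awesome_opus_books_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("translate English to French: The book is on the table.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```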
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `Seq2SeqTrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
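As an illustration, the listed values map roughly onto the following `Seq2SeqTrainingArguments`; this is a hedged sketch, not the exact training script (the `output_dir` is hypothetical, and the Adam betas/epsilon above are the library defaults):
```
from transformers import Seq2SeqTrainingArguments

# Illustrative mapping of the listed hyperparameters; output_dir is hypothetical.
training_args = Seq2SeqTrainingArguments(
    output_dir="my_awesome_opus_books_model",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    fp16=True,  # "Native AMP" mixed-precision training
)
```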
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 1.8802 | 1.0 | 6355 | 1.6362 | 5.5309 | 17.6214 |
| 1.8185 | 2.0 | 12710 | 1.6119 | 5.6824 | 17.6109 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["bleu"], "base_model": "t5-small", "model-index": [{"name": "my_awesome_opus_books_model", "results": []}]} | HanliangXu/my_awesome_opus_books_model | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-25T01:28:29+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| my\_awesome\_opus\_books\_model
===============================
This model is a fine-tuned version of t5-small on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.6119
* Bleu: 5.6824
* Gen Len: 17.6109
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-to-image | diffusers | # VShojo - Matara Kan
<Gallery />
## Model description
Matara Kan from VShojo!
Trained on 2 outfits; each outfit has a trigger word corresponding to the character's appearance, plus suggested prompts that summon related clothes and accessories.
Works well at a weight of 0.7-1.0 (see the loading sketch after the trigger words).
## Trigger words
Debut Outfit: `matarakandef, extra arms, antennae, cleavage, cleavage cutout, white dress, navel, thighhighs`
Sweater Outfit: `matarakanswt, arthropod girl, necklace, ribbed sweater, purple sweater, cleavage, cleavage cutout, grey pants, black belt, glasses`
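A minimal, hedged sketch of loading this LoRA with `diffusers`, assuming the base model listed in the card metadata; check the Files & versions tab for the actual weight file, and treat the prompt and scale as illustrative:
```
import torch
from diffusers import AutoPipelineForText2Image

# Base model taken from the card metadata; AutoPipeline resolves the right
# pipeline class for it. The prompt uses the debut-outfit trigger word.
pipe = AutoPipelineForText2Image.from_pretrained(
    "AstraliteHeart/pony-diffusion-v6", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Shalie/VShojoMataraKanv2")

prompt = "masterpiece, best quality, 1girl, matarakandef, extra arms, antennae, white dress"
# A scale of 0.8 sits inside the card's suggested 0.7-1.0 weight range.
image = pipe(prompt, cross_attention_kwargs={"scale": 0.8}).images[0]
image.save("matara_kan.png")
```
Swap in the `matarakanswt` trigger block for the sweater outfit.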
## Download model
Weights for this model are available in Safetensors format.
[Download](/Shalie/VShojoMataraKanv2/tree/main) them in the Files & versions tab.
### License
This LoRA model is provided under the [CreativeML Open RAIL-M](https://raw.githubusercontent.com/CompVis/stable-diffusion/main/LICENSE) license.
## Restrictions:
- **Usage in Generation Services**: You are not allowed to use the model in any generation services without proper permission from the original creator.
- **Commercial Usage**: The sale of the model or any commercial usage is strictly prohibited without explicit written permission from the original creator. | {"license": "creativeml-openrail-m", "tags": ["text-to-image", "stable-diffusion", "lora", "diffusers", "template:sd-lora"], "widget": [{"text": "masterpiece, best quality, 1girl, <lora:spmatarakan15:1> matarakandef, extra arms, antennae, cleavage, cleavage cutout, white dress, navel, daruma doll, upper body, white background", "parameters": {"negative_prompt": "lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name"}, "output": {"url": "images/05106-2988597061-masterpiece, best quality, 1girl, _lora_spmatarakan15_1_ matarakandef, extra arms, antennae, cleavage, cleavage cutout, white d.png"}}, {"text": "masterpiece, best quality, 1girl, <lora:spmatarakan15:1> matarakanswt, arthropod girl, necklace, ribbed sweater, purple sweater, cleavage, cleavage cutout, grey pants, black belt, glasses, border, flower, instrument, music, outside border, pink flower, purple flower, twitter username, upper body, white background", "parameters": {"negative_prompt": "lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name"}, "output": {"url": "images/05127-502364398-masterpiece, best quality, 1girl, _lora_spmatarakan15_1_ matarakanswt, arthropod girl, necklace, ribbed sweater, purple sweater.png"}}, {"text": "masterpiece, best quality, 1girl, <lora:spmatarakan15:1> matarakanswt, arthropod girl, necklace, ribbed sweater, purple sweater, cleavage, cleavage cutout, grey pants, black belt, glasses, backlighting, day, indoors, sunlight, window", "parameters": {"negative_prompt": "lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name"}, "output": {"url": "images/05126-2681534963-masterpiece, best quality, 1girl, _lora_spmatarakan15_1_ matarakanswt, arthropod girl, necklace, ribbed sweater, purple sweater.png"}}, {"text": "masterpiece, best quality, 1girl, <lora:spmatarakan15:1> matarakanswt, arthropod girl, necklace, ribbed sweater, purple sweater, cleavage, cleavage cutout, grey pants, black belt, glasses, bench, cream on body, floral background, food, food on breasts, ice cream, upper body", "parameters": {"negative_prompt": "lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name"}, "output": {"url": "images/05122-3081565203-masterpiece, best quality, 1girl, _lora_spmatarakan15_1_ matarakanswt, arthropod girl, necklace, ribbed sweater, purple sweater.png"}}, {"text": "masterpiece, best quality, 1girl, <lora:spmatarakan15:1> matarakandef, extra arms, antennae, hair ornament, makeup, shirt, sleeveless, sleeveless shirt, turtleneck, dappled sunlight, day, full body, halo, in tree, outdoors, shadow, sunlight, tree, tree shade", "parameters": {"negative_prompt": "lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, 
blurry, artist name"}, "output": {"url": "images/05111-91990736-masterpiece, best quality, 1girl, _lora_spmatarakan15_1_ matarakandef, extra arms, antennae, hair ornament, makeup, shirt, slee.png"}}, {"text": "masterpiece, best quality, 1girl, <lora:spmatarakan15:1> matarakandef, extra arms, antennae, cleavage, cleavage cutout, white dress, navel, argyle, indoors, shallow water, water", "parameters": {"negative_prompt": "lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name"}, "output": {"url": "images/05101-1367474614-masterpiece, best quality, 1girl, _lora_spmatarakan15_1_ matarakandef, extra arms, antennae, cleavage, cleavage cutout, white d.png"}}, {"text": "masterpiece, best quality, 1girl, <lora:spmatarakan15:1> matarakandef, extra arms, antennae, cleavage, cleavage cutout, white dress, navel, blue sky, blurry, blurry foreground, cityscape, cloud, cloudy sky, day, depth of field, house, moe2019, outdoors, petals, sky", "parameters": {"negative_prompt": "lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name"}, "output": {"url": "images/05103-2269464449-masterpiece, best quality, 1girl, _lora_spmatarakan15_1_ matarakandef, extra arms, antennae, cleavage, cleavage cutout, white d.png"}}, {"text": "masterpiece, best quality, 1girl, <lora:spmatarakan15:1> matarakandef, extra arms, antennae, cleavage, cleavage cutout, white dress, navel, blurry, blurry background, depth of field, indoors, pixiv id, signature, twitter username, upper body, window", "parameters": {"negative_prompt": "lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name"}, "output": {"url": "images/05104-4207543859-masterpiece, best quality, 1girl, _lora_spmatarakan15_1_ matarakandef, extra arms, antennae, cleavage, cleavage cutout, white d.png"}}], "base_model": "AstraliteHeart/pony-diffusion-v6"} | Shalie/VShojoMataraKanv2 | null | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:AstraliteHeart/pony-diffusion-v6",
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-04-25T01:29:03+00:00 | [] | [] | TAGS
#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-AstraliteHeart/pony-diffusion-v6 #license-creativeml-openrail-m #region-us
| # VShojo - Matara Kan
<Gallery />
## Model description
Matara Kan from VShojo!
Trained on 2 outfits; each outfit has a trigger word corresponding to the character's appearance, plus suggested prompts that summon related clothes and accessories.
Works well at a weight of 0.7-1.0
## Trigger words
Debut Outfit: 'matarakandef, extra arms, antennae, cleavage, cleavage cutout, white dress, navel, thighhighs'
Sweater Outfit: 'matarakanswt, arthropod girl, necklace, ribbed sweater, purple sweater, cleavage, cleavage cutout, grey pants, black belt, glasses'
## Download model
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab.
### License
This LoRA model is provided under the CreativeML Open RAIL-M license.
## Restrictions:
- Usage in Generation Services: You are not allowed to use the model in any generation services without proper permission from the original creator.
- Commercial Usage: The sale of the model or any commercial usage is strictly prohibited without explicit written permission from the original creator. | [
"# Vshojo - Matara Kan\n\n<Gallery />",
"## Model description \n\nMatara Kan From VShojo!\n\nTrained on 2 outfits, every outfit has a trigger word corresponding to the appearance of the character and suggested prompts that summons related clothes and accesories.\n\nWorks well with 0.7-1.0 weight",
"## Trigger words\n\nDebut Outfit: 'matarakandef, extra arms, antennae, cleavage, cleavage cutout, white dress, navel, thighhighs'\n\nSweater Outfit: 'matarakanswt, arthropod girl, necklace, ribbed sweater, purple sweater, cleavage, cleavage cutout, grey pants, black belt, glasses'",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.",
"### License\n\nThis LoRA model is provided under the CreativeML Open RAIL-M license.",
"## Restrictions:\n\n- Usage in Generation Services: You are not allowed to use the model in any generation services without proper permission from the original creator.\n\n- Commercial Usage: The sale of the model or any commercial usage is strictly prohibited without explicit written permission from the original creator."
] | [
"TAGS\n#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-AstraliteHeart/pony-diffusion-v6 #license-creativeml-openrail-m #region-us \n",
"# Vshojo - Matara Kan\n\n<Gallery />",
"## Model description \n\nMatara Kan From VShojo!\n\nTrained on 2 outfits, every outfit has a trigger word corresponding to the appearance of the character and suggested prompts that summons related clothes and accesories.\n\nWorks well with 0.7-1.0 weight",
"## Trigger words\n\nDebut Outfit: 'matarakandef, extra arms, antennae, cleavage, cleavage cutout, white dress, navel, thighhighs'\n\nSweater Outfit: 'matarakanswt, arthropod girl, necklace, ribbed sweater, purple sweater, cleavage, cleavage cutout, grey pants, black belt, glasses'",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.",
"### License\n\nThis LoRA model is provided under the CreativeML Open RAIL-M license.",
"## Restrictions:\n\n- Usage in Generation Services: You are not allowed to use the model in any generation services without proper permission from the original creator.\n\n- Commercial Usage: The sale of the model or any commercial usage is strictly prohibited without explicit written permission from the original creator."
] |
text-generation | transformers |
Meow.
This is an experimental mixture-of-experts model with just 2 experts, combining the plain Llama 3 Instruct model with a finetune of it. Specifically, it is built on top of the Meta-Llama-3-8B-Instruct model, and the finetune is trained on the Argilla Capybara dataset.
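No usage snippet ships with the card; below is a minimal, hedged sketch of chat-style inference with `transformers`, assuming the repo's tokenizer bundles a chat template (as Llama 3 Instruct derivatives usually do):
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id from the card metadata; dtype and device settings are illustrative.
model_id = "nisten/llama3-2x8b-MoE-41k-experiment1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "In one sentence, what is a mixture-of-experts model?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```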
>[!TIP]Experimental mixture of 2 experts Llama3-8b-Instruct
>
>Built with Llama 3 | {"language": ["en"], "license": "llama3", "library_name": "transformers", "tags": ["moe"], "datasets": ["argilla/distilabel-capybara-dpo-7k-binarized"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct"} | nisten/llama3-2x8b-MoE-41k-experiment1 | null | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"conversational",
"en",
"dataset:argilla/distilabel-capybara-dpo-7k-binarized",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-25T01:33:19+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #mixtral #text-generation #moe #conversational #en #dataset-argilla/distilabel-capybara-dpo-7k-binarized #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Meow.
This is an experimental mixture-of-experts model with just 2 experts, combining the plain Llama 3 Instruct model with a finetune of it. Specifically, it is built on top of the Meta-Llama-3-8B-Instruct model, and the finetune is trained on the Argilla Capybara dataset.
>[!TIP]Experimental mixture of 2 experts Llama3-8b-Instruct
>
>Built with Llama 3 | [] | [
"TAGS\n#transformers #safetensors #mixtral #text-generation #moe #conversational #en #dataset-argilla/distilabel-capybara-dpo-7k-binarized #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-classification | transformers |
# VetBERT Disease Syndrome Classifier
This is a finetuned version of the [VetBERT](https://huggingface.co/havocy28/VetBERT) model, designed to classify the disease syndrome within a veterinary clinical note.
<!-- Provide a quick summary of what the model is/does. -->
This pretrained model is designed for NLP tasks on veterinary clinical notes. The [Domain Adaptation and Instance Selection for Disease Syndrome Classification over Veterinary Clinical Notes](https://aclanthology.org/2020.bionlp-1.17) (Hur et al., BioNLP 2020) paper introduced the VetBERT model: a BERT model initialized with ClinicalBERT (Bio+Clinical BERT) weights and further pretrained on the [VetCompass Australia](https://www.vetcompass.com.au/) corpus for tasks specific to veterinary medicine.
## Pretraining Data
The VetBERT model was initialized from the [Bio_ClinicalBERT model](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT), which was in turn initialized from BERT. VetBERT was trained on over 15 million veterinary clinical records and 1.3 billion tokens.
## Pretraining Hyperparameters
During the pretraining phase for VetBERT, we used a batch size of 32, a maximum sequence length of 512, and a learning rate of 5 · 10⁻⁵. The dup factor for duplicating input data with different masks was set to 5. All other default parameters were used (specifically, masked language model probability = 0.15 and max predictions per sequence = 20).
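For reference, a hedged, illustrative mapping of these settings onto modern Hugging Face equivalents (the original pretraining used different tooling, and the dup factor has no direct analogue in this API):
```
from transformers import AutoTokenizer, DataCollatorForLanguageModeling, TrainingArguments

# Hedged, illustrative mapping only; not the authors' actual training setup.
tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15  # masked LM probability from the card
)
args = TrainingArguments(
    output_dir="vetbert-pretraining",  # hypothetical output path
    per_device_train_batch_size=32,    # batch size 32
    learning_rate=5e-5,                # 5 · 10⁻⁵
)
```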
## VetBERT Finetuning
VetBERT was further finetuned on a set of 5002 annotated clinical notes to classify the disease syndrome associated with each note, as outlined in the paper: [Domain Adaptation and Instance Selection for Disease Syndrome Classification over Veterinary Clinical Notes](https://aclanthology.org/2020.bionlp-1.17)
## How to use the model
Load the model via the transformers library:
```
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
# Load the tokenizer and model from the Hugging Face Hub
model_name = 'havocy28/VetBERTDx'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
# Example text to classify
text = "Hx: 7 yo canine with history of vomiting intermittently since yesterday. No other concerns. Still eating and drinking normally. cPL negative."
# Encode the text and prepare inputs for the model
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=512)
# Predict and compute softmax to get probabilities
with torch.no_grad():
logits = model(**inputs).logits
probabilities = torch.softmax(logits, dim=-1)
# Retrieve label mapping from model's configuration
label_map = model.config.id2label
# Combine labels and probabilities, and sort by probability in descending order
sorted_probs = sorted(((prob.item(), label_map[idx]) for idx, prob in enumerate(probabilities[0])), reverse=True, key=lambda x: x[0])
# Display sorted probabilities and labels
for prob, label in sorted_probs:
print(f"{label}: {prob:.4f}")
```
## Citation
Please cite this article: Brian Hur, Timothy Baldwin, Karin Verspoor, Laura Hardefeldt, and James Gilkerson. 2020. [Domain Adaptation and Instance Selection for Disease Syndrome Classification over Veterinary Clinical Notes](https://aclanthology.org/2020.bionlp-1.17). In Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing, pages 156–166, Online. Association for Computational Linguistics.
| {"language": "en", "tags": ["veterinary", "pets", "classification", "vetbert", "BERT"], "widget": [{"text": "Hx: 7 yo canine with history of vomiting intermittently since yesterday. No other concerns. Still eating and drinking normally. cPL negative.", "example_title": "Enteropathy"}]} | havocy28/VetBERTDx | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"veterinary",
"pets",
"classification",
"vetbert",
"BERT",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T01:37:05+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #bert #text-classification #veterinary #pets #classification #vetbert #BERT #en #autotrain_compatible #endpoints_compatible #region-us
|
# VetBERT Disease Syndrome Classifier
This is a finetuned version of the VetBERT model, designed to classify the disease syndrome within a veterinary clinical note.
This pretrained model is designed for NLP tasks on veterinary clinical notes. The Domain Adaptation and Instance Selection for Disease Syndrome Classification over Veterinary Clinical Notes (Hur et al., BioNLP 2020) paper introduced the VetBERT model: a BERT model initialized with ClinicalBERT (Bio+Clinical BERT) weights and further pretrained on the VetCompass Australia corpus for tasks specific to veterinary medicine.
## Pretraining Data
The VetBERT model was initialized from the Bio_ClinicalBERT model, which was in turn initialized from BERT. VetBERT was trained on over 15 million veterinary clinical records and 1.3 billion tokens.
## Pretraining Hyperparameters
During the pretraining phase for VetBERT, we used a batch size of 32, a maximum sequence length of 512, and a learning rate of 5 · 10⁻⁵. The dup factor for duplicating input data with different masks was set to 5. All other default parameters were used (specifically, masked language model probability = 0.15 and max predictions per sequence = 20).
## VetBERT Finetuning
VetBERT was further finetuned on a set of 5002 annotated clinical notes to classify the disease syndrome associated with each note, as outlined in the paper: Domain Adaptation and Instance Selection for Disease Syndrome Classification over Veterinary Clinical Notes
## How to use the model
Load the model via the transformers library:
Please cite this article: Brian Hur, Timothy Baldwin, Karin Verspoor, Laura Hardefeldt, and James Gilkerson. 2020. Domain Adaptation and Instance Selection for Disease Syndrome Classification over Veterinary Clinical Notes. In Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing, pages 156–166, Online. Association for Computational Linguistics.
| [
"# VetBERT Disease Syndrome Classifier\n\nThis is a finetuned version of the VetBERT model, designed to classify the disease syndrome within a veterinary clinical note.\n\n\nThis pretrained model is designed for performing NLP tasks related to veterinary clinical notes. The Domain Adaptation and Instance Selection for Disease Syndrome Classification over Veterinary Clinical Notes (Hur et al., BioNLP 2020) paper introduced VetBERT model: an initialized Bert Model with ClinicalBERT (Bio+Clinical BERT) and further pretrained on the VetCompass Australia corpus for performing tasks specific to veterinary medicine.",
"## Pretraining Data\n\nThe VetBERT model was initialized from Bio_ClinicalBERT model, which was initialized from BERT. The VetBERT model was trained on over 15 million veterinary clincal Records and 1.3 Billion tokens.",
"## Pretraining Hyperparameters\n\nDuring the pretraining phase for VetBERT, we used a batch size of 32, a maximum sequence length of 512, and a learning rate of 5 · 10−5. The dup factor for duplicating input data with different masks was set to 5. All other default parameters were used (specifically, masked language model probability = 0.15 and max predictions per sequence = 20).",
"## VetBERT Finetuning\n\nVetBERT was further finetuned on a set of 5002 annotated clinical notes to classifiy the disease syndrome associated with the clinical notes as outlined in the paper: Domain Adaptation and Instance Selection for Disease Syndrome Classification over Veterinary Clinical Notes",
"## How to use the model\n\nLoad the model via the transformers library:\n\n\n\nPlease cite this article: Brian Hur, Timothy Baldwin, Karin Verspoor, Laura Hardefeldt, and James Gilkerson. 2020. Domain Adaptation and Instance Selection for Disease Syndrome Classification over Veterinary Clinical Notes. In Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing, pages 156–166, Online. Association for Computational Linguistics."
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #veterinary #pets #classification #vetbert #BERT #en #autotrain_compatible #endpoints_compatible #region-us \n",
"# VetBERT Disease Syndrome Classifier\n\nThis is a finetuned version of the VetBERT model, designed to classify the disease syndrome within a veterinary clinical note.\n\n\nThis pretrained model is designed for performing NLP tasks related to veterinary clinical notes. The Domain Adaptation and Instance Selection for Disease Syndrome Classification over Veterinary Clinical Notes (Hur et al., BioNLP 2020) paper introduced VetBERT model: an initialized Bert Model with ClinicalBERT (Bio+Clinical BERT) and further pretrained on the VetCompass Australia corpus for performing tasks specific to veterinary medicine.",
"## Pretraining Data\n\nThe VetBERT model was initialized from Bio_ClinicalBERT model, which was initialized from BERT. The VetBERT model was trained on over 15 million veterinary clincal Records and 1.3 Billion tokens.",
"## Pretraining Hyperparameters\n\nDuring the pretraining phase for VetBERT, we used a batch size of 32, a maximum sequence length of 512, and a learning rate of 5 · 10−5. The dup factor for duplicating input data with different masks was set to 5. All other default parameters were used (specifically, masked language model probability = 0.15 and max predictions per sequence = 20).",
"## VetBERT Finetuning\n\nVetBERT was further finetuned on a set of 5002 annotated clinical notes to classifiy the disease syndrome associated with the clinical notes as outlined in the paper: Domain Adaptation and Instance Selection for Disease Syndrome Classification over Veterinary Clinical Notes",
"## How to use the model\n\nLoad the model via the transformers library:\n\n\n\nPlease cite this article: Brian Hur, Timothy Baldwin, Karin Verspoor, Laura Hardefeldt, and James Gilkerson. 2020. Domain Adaptation and Instance Selection for Disease Syndrome Classification over Veterinary Clinical Notes. In Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing, pages 156–166, Online. Association for Computational Linguistics."
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
jmodel/gemma-2b-frozen-mlp.gate_proj-zh__checkpoint-25000
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
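Until the card is completed, a minimal, hedged sketch of loading the checkpoint (a `gemma` text-generation model per the repo tags; the repo id comes from the card metadata, and the Chinese prompt is an assumption based on the `-zh` suffix in the name):
```
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id from the card metadata; generation settings are illustrative only.
model_id = "jmodel/gemma-2b-zh-sparse-on-gate"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("你好，请介绍一下你自己。", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```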
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | jmodel/gemma-2b-zh-sparse-on-gate | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-25T01:40:41+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
jmodel/gemma-2b-frozen-mlp.gate_proj-zh__checkpoint-25000
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID\n\n\n\njmodel/gemma-2b-frozen-mlp.gate_proj-zh__checkpoint-25000",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID\n\n\n\njmodel/gemma-2b-frozen-mlp.gate_proj-zh__checkpoint-25000",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results-Meta-Llama-3-8B-qlora-pos-no-lang
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4319
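A minimal, hedged sketch of loading the adapter with `peft`; the adapter repo id comes from the card metadata, and access to the gated meta-llama base weights is required for the load to resolve:
```
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Adapter repo id from the card metadata; peft resolves the base model
# (meta-llama/Meta-Llama-3-8B) from the adapter config automatically.
adapter_id = "AlienKevin/Meta-Llama-3-8B-qlora-pos-no-lang"
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
```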
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5449 | 0.2 | 162 | 1.6509 |
| 1.1981 | 0.4 | 324 | 1.5290 |
| 1.5567 | 0.6 | 486 | 1.4835 |
| 1.527 | 0.8 | 648 | 1.4468 |
| 1.3092 | 1.0 | 810 | 1.4319 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "other", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "meta-llama/Meta-Llama-3-8B", "model-index": [{"name": "results-Meta-Llama-3-8B-qlora-pos-no-lang", "results": []}]} | AlienKevin/Meta-Llama-3-8B-qlora-pos-no-lang | null | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:other",
"region:us"
] | null | 2024-04-25T01:41:46+00:00 | [] | [] | TAGS
#peft #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Meta-Llama-3-8B #license-other #region-us
| results-Meta-Llama-3-8B-qlora-pos-no-lang
=========================================
This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.4319
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 12
* eval\_batch\_size: 12
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 10
* num\_epochs: 1
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.40.1
* Pytorch 2.2.1
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 12\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 10\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.2.1\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Meta-Llama-3-8B #license-other #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 12\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 10\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.2.1\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | null |
# Model Card for InternVideo2
This model card gives the model information for 'InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding'.
## Model Details
### Model Sources
- **Repository:** [InternVideo2](https://github.com/OpenGVLab/InternVideo/tree/main/InternVideo2)
- **Paper:** [2403.15377](https://arxiv.org/abs/2403.15377)
- **Point of Contact:** [InternVideo Group](mailto:[email protected])
## Citation
If you find this work useful for your research, please consider citing InternVideo2. Your acknowledgement would greatly help us in continuing to contribute resources to the research community.
```
@article{wang2024internvideo2,
title={InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding},
author={Wang, Yi and Li, Kunchang and Li, Xinhao and Yu, Jiashuo and He, Yinan and Chen, Guo and Pei, Baoqi and Zheng, Rongkun and Xu, Jilan and Wang, Zun and others},
journal={arXiv preprint arXiv:2403.15377},
year={2024}
}
@article{wang2022internvideo,
title={InternVideo: General Video Foundation Models via Generative and Discriminative Learning},
author={Wang, Yi and Li, Kunchang and Li, Yizhuo and He, Yinan and Huang, Bingkun and Zhao, Zhiyu and Zhang, Hongjie and Xu, Jilan and Liu, Yi and Wang, Zun and Xing, Sen and Chen, Guo and Pan, Junting and Yu, Jiashuo and Wang, Yali and Wang, Limin and Qiao, Yu},
journal={arXiv preprint arXiv:2212.03191},
year={2022}
}
``` | {"license": "apache-2.0", "extra_gated_prompt": "You agree to not use the model to conduct experiments that cause harm to human subjects.", "extra_gated_fields": {"Name": "text", "Company/Organization": "text", "Country": "text", "E-Mail": "text"}} | OpenGVLab/InternVideo2-Stage1-1B-224p-K700 | null | [
"arxiv:2403.15377",
"license:apache-2.0",
"region:us"
] | null | 2024-04-25T01:42:12+00:00 | [
"2403.15377"
] | [] | TAGS
#arxiv-2403.15377 #license-apache-2.0 #region-us
|
# Model Card for InternVideo2
This model card gives the model information for 'InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding'.
## Model Details
### Model Sources
- Repository: InternVideo2
- Paper: 2403.15377
- Point of Contact: InternVideo Group
If you find this work useful for your research, please consider citing InternVideo2. Your acknowledgement would greatly help us in continuing to contribute resources to the research community.
| [
"# Model Card for InternVideo2\n\nThis modelcard aims to give the model info of 'InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding'.",
"## Model Details",
"### Model Sources\n\n- Repository: InternVideo2\n- Paper: 2403.15377\n- Point of Contact: mailto:InternVideo Group\n\nIf you find this work useful for your research, please consider citing InternVideo2. Your acknowledgement would greatly help us in continuing to contribute resources to the research community."
] | [
"TAGS\n#arxiv-2403.15377 #license-apache-2.0 #region-us \n",
"# Model Card for InternVideo2\n\nThis modelcard aims to give the model info of 'InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding'.",
"## Model Details",
"### Model Sources\n\n- Repository: InternVideo2\n- Paper: 2403.15377\n- Point of Contact: mailto:InternVideo Group\n\nIf you find this work useful for your research, please consider citing InternVideo2. Your acknowledgement would greatly help us in continuing to contribute resources to the research community."
] |
text-to-image | diffusers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
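Until the card is completed, a minimal, hedged sketch (the repo tags declare `diffusers:StableDiffusionPipeline`, and the prompt is taken from the card's widget examples):
```
import torch
from diffusers import StableDiffusionPipeline

# Repo id from the card metadata; the pipeline class comes from the repo tags.
pipe = StableDiffusionPipeline.from_pretrained(
    "MVRL/geosynth-papersculpting", torch_dtype=torch.float16
).to("cuda")

image = pipe("grand canyon Papersculpting").images[0]
image.save("papersculpting.png")
```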
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "diffusers", "widget": [{"text": "gorge Papersculpting"}, {"text": "desert Papersculpting"}, {"text": "city Papersculpting"}, {"text": "grand canyon Papersculpting"}]} | MVRL/geosynth-papersculpting | null | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | 2024-04-25T01:42:18+00:00 | [
"1910.09700"
] | [] | TAGS
#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
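Until the card is filled in, here is a minimal sketch of loading these adapters with `peft`, assuming they apply on top of the base model named in this card's metadata (both ids below are taken from the metadata, not invented):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # base model from the card metadata
adapter_id = "bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned-adapters_Aleatoric_tiny_0.8_Seed101"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned adapters
```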
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
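For reference, a minimal sketch (not the original training script) of reproducing the 4-bit load described above with `transformers`; the base model id comes from this card's metadata, and the `llm_int8_*` fields are left at their defaults:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the quantization fields listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    quantization_config=bnb_config,
)
```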
### Framework versions
- PEFT 0.7.0.dev0
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
| {"library_name": "peft", "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0"} | bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned-adapters_Aleatoric_tiny_0.8_Seed101 | null | [
"peft",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | 2024-04-25T01:44:14+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
| {"library_name": "peft", "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0"} | bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned_Aleatoric_tiny_0.8_Seed101 | null | [
"peft",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | 2024-04-25T01:44:19+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | HenryCai1129/adapter-toxic2nontoxic-100-50-0.001 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T01:44:30+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
reinforcement-learning | stable-baselines3 |
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
This project, from Unit 6 of the Deep Reinforcement Learning course, uses an Advantage Actor-Critic (A2C) agent to train a robotic arm
to move to the right position. The aim is to reach a result of >= -3.5 (where result = mean_reward - std of reward). My result was
Mean reward = -0.14 +/- 0.09 (i.e. -0.23 to -0.05). I used the A2C Robotic Arm code as a baseline and improved the learning results by
changing 2 main things: the hyperparameters and the implementation of a callback function.

1) Hyperparameters:
Using https://stable-baselines3.readthedocs.io/en/master/modules/a2c.html, I determined the hyperparameters I wanted to tune
(gamma, gae_lambda, learning_rate and verbose).
I then used the reference source at https://stable-baselines3.readthedocs.io/en/master/_modules/stable_baselines3/a2c/a2c.html#A2C to find the default value
of each of these hyperparameters. I kept learning_rate and gamma the same. Raising verbose just prints more information during
training (it doesn't change the training process itself). As for gae_lambda, setting it slightly below 1 trades a small amount of bias for
lower-variance advantage estimates, which sped up learning (to pick a specific number between 0.9 and 0.99, I worked with AI to find the best parameter).

2) Callback Function:
A callback function can greatly improve results, especially when training for a longer period of time. I used a callback function that checked
the mean reward every 1000 steps. Starting from the code provided at
https://stable-baselines3.readthedocs.io/en/master/guide/callbacks.html, I wrote a basic version and worked with AI to modify it for the
intended purpose. With the refined callback I was able to monitor the training progress throughout and ensure that the model was
improving with each iteration. This let me adjust the code and parameters if needed before training ended, to fine-tune a better
model overall. A minimal sketch of both changes follows this list.
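As a hedged sketch of the two changes (not my exact script), here is one way they could look with stable-baselines3; the gae_lambda value and the timestep budget are illustrative assumptions:

```python
import gymnasium as gym
import panda_gym  # registers PandaReachDense-v3
from stable_baselines3 import A2C
from stable_baselines3.common.callbacks import EvalCallback

env = gym.make("PandaReachDense-v3")
# Check the mean reward every 1000 steps and keep the best checkpoint.
eval_callback = EvalCallback(
    gym.make("PandaReachDense-v3"),
    eval_freq=1_000,
    best_model_save_path="./best_model",
)
# gae_lambda slightly below 1; 0.95 is illustrative, not the tuned value.
model = A2C("MultiInputPolicy", env, gae_lambda=0.95, verbose=1)
model.learn(total_timesteps=1_000_000, callback=eval_callback)
```

And to load the published agent back from the Hub: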
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# The .zip filename is assumed to follow the usual sb3 Hub naming convention.
checkpoint = load_from_hub("nandinitatiwala/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
| {"library_name": "stable-baselines3", "tags": ["PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "A2C", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "PandaReachDense-v3", "type": "PandaReachDense-v3"}, "metrics": [{"type": "mean_reward", "value": "-0.19 +/- 0.10", "name": "mean_reward", "verified": false}]}]}]} | nandinitatiwala/a2c-PandaReachDense-v3 | null | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-04-25T01:48:29+00:00 | [] | [] | TAGS
#stable-baselines3 #PandaReachDense-v3 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# A2C Agent playing PandaReachDense-v3
This is a trained model of a A2C agent playing PandaReachDense-v3
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
This project, from Unit 6 of the Deep Reinforcement Learning course, uses an Advantage Actor-Critic (A2C) agent to train a robotic arm
to move to the right position. The aim is to reach a result of >= -3.5 (where result = mean_reward - std of reward). My result was
Mean reward = -0.14 +/- 0.09 (i.e. -0.23 to -0.05). I used the A2C Robotic Arm code as a baseline and improved the learning results by
changing 2 main things: the hyperparameters and the implementation of a callback function.

1) Hyperparameters:
Using URL I determined the hyperparameters I wanted to tune
(gamma, gae_lambda, learning_rate and verbose).
I then used the reference source at URL to find the default value
of each of these hyperparameters. I kept learning_rate and gamma the same. Raising verbose just prints more information during
training (it doesn't change the training process itself). As for gae_lambda, setting it slightly below 1 trades a small amount of bias for
lower-variance advantage estimates, which sped up learning (to pick a specific number between 0.9 and 0.99, I worked with AI to find the best parameter).

2) Callback Function:
A callback function can greatly improve results, especially when training for a longer period of time. I used a callback function that checked
the mean reward every 1000 steps. Starting from the code provided at
URL, I wrote a basic version and worked with AI to modify it for the
intended purpose. With the refined callback I was able to monitor the training progress throughout and ensure that the model was
improving with each iteration. This let me adjust the code and parameters if needed before training ended, to fine-tune a better
model overall.
| [
"# A2C Agent playing PandaReachDense-v3\nThis is a trained model of a A2C agent playing PandaReachDense-v3\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nUsing Unit 6 Deep Reinforcement Learning, this project was aimed at using an Advantage Actor-Critic (A2C) agent to train a robotic arm \nto move to the right position. The aim is to get a result of >= -3.5 (the result = mean_reward - std of reward). My result was \nMean reward = -0.14 +/- 0.09 (ie. -0.23 to -0.05). I used the A2C Robotic Arm code as a baseline to improve the learning results. There were \n2 main things that I changed: hyperparameters and the implementation of a callback function. \n\n1) Hyperparameters:\n Using URL I was able to determine the hyperparameters that I wanted to tune:\n gamma, gae_lambda, learning_rate and verbose)\n I then used a sample code: URL to find the values\n of each of these hyperparameters. I kept learning_rate and gamma the same. Changing the verbose just gives more information about the tuning\n process (doesn't change much of the training process). As for gae_lambda, setting it slightly lower than 1 allowed for reduced bias and a\n faster learning rate (to get a specific number from 0.9 to 0.99, I worked with AI to find the best parameter).\n\n3) Callback Function:\n A callback function can greatly improve results especially when training for a longer period of time. I used a callback function that checked\n the mean reward every 1000 steps. Using the code provided on this website,\n (URL I wrote a basic code and worked with AI to modify it for the\n intended purpose. Once I gained a more nuanced code, I was able to monitor the training progress throughout and ensure that the model was\n improving with each iteration. This allowed me to modify the code and parameters if needed before the training ended to fine-tune a better\n model overall."
] | [
"TAGS\n#stable-baselines3 #PandaReachDense-v3 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# A2C Agent playing PandaReachDense-v3\nThis is a trained model of a A2C agent playing PandaReachDense-v3\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nUsing Unit 6 Deep Reinforcement Learning, this project was aimed at using an Advantage Actor-Critic (A2C) agent to train a robotic arm \nto move to the right position. The aim is to get a result of >= -3.5 (the result = mean_reward - std of reward). My result was \nMean reward = -0.14 +/- 0.09 (ie. -0.23 to -0.05). I used the A2C Robotic Arm code as a baseline to improve the learning results. There were \n2 main things that I changed: hyperparameters and the implementation of a callback function. \n\n1) Hyperparameters:\n Using URL I was able to determine the hyperparameters that I wanted to tune:\n gamma, gae_lambda, learning_rate and verbose)\n I then used a sample code: URL to find the values\n of each of these hyperparameters. I kept learning_rate and gamma the same. Changing the verbose just gives more information about the tuning\n process (doesn't change much of the training process). As for gae_lambda, setting it slightly lower than 1 allowed for reduced bias and a\n faster learning rate (to get a specific number from 0.9 to 0.99, I worked with AI to find the best parameter).\n\n3) Callback Function:\n A callback function can greatly improve results especially when training for a longer period of time. I used a callback function that checked\n the mean reward every 1000 steps. Using the code provided on this website,\n (URL I wrote a basic code and worked with AI to modify it for the\n intended purpose. Once I gained a more nuanced code, I was able to monitor the training progress throughout and ensure that the model was\n improving with each iteration. This allowed me to modify the code and parameters if needed before the training ended to fine-tune a better\n model overall."
] |
text-to-image | diffusers | # HyeraXL
<Gallery />
## Trigger words
You should use `Hyera` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/ORILIN024/HyeraXL/tree/main) them in the Files & versions tab.
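A hedged sketch of loading this LoRA on its base model with `diffusers`; the base model id comes from this card's metadata, loading by repo id assumes a single set of LoRA weights in the repository, and the prompt is only illustrative:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.1", torch_dtype=torch.float16
).to("cuda")
# Assumes this repository holds one set of LoRA weights.
pipe.load_lora_weights("ORILIN024/HyeraXL")
# "Hyera" is the trigger word from this card.
image = pipe("Hyera, 1girl, upper body, looking at viewer").images[0]
image.save("hyera.png")
```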
| {"tags": ["text-to-image", "stable-diffusion", "lora", "diffusers", "template:sd-lora"], "widget": [{"text": "-", "output": {"url": "images/dfa8f5df-d51f-47cd-88b0-f7a5a03e7dd2.png"}}], "base_model": "cagliostrolab/animagine-xl-3.1", "instance_prompt": "Hyera"} | ORILIN024/HyeraXL | null | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:cagliostrolab/animagine-xl-3.1",
"region:us"
] | null | 2024-04-25T01:51:07+00:00 | [] | [] | TAGS
#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-cagliostrolab/animagine-xl-3.1 #region-us
| # HyeraXL
<Gallery />
## Trigger words
You should use 'Hyera' to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab.
| [
"# HyeraXL\n\n<Gallery />",
"## Trigger words\n\nYou should use 'Hyera' to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab."
] | [
"TAGS\n#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-cagliostrolab/animagine-xl-3.1 #region-us \n",
"# HyeraXL\n\n<Gallery />",
"## Trigger words\n\nYou should use 'Hyera' to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab."
] |
text-to-image | diffusers |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Textual inversion text2image fine-tuning - Nekodigi/textual_inversion_cat
These are textual inversion adaption weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
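Until the snippet above is filled in, a hedged sketch of one way to run this pipeline with `diffusers`; the placeholder token `<cat-toy>` follows the diffusers textual-inversion tutorial default and is an assumption, not confirmed by this card:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Load the learned embedding from this repository.
pipe.load_textual_inversion("Nekodigi/textual_inversion_cat")
# "<cat-toy>" is the tutorial's default placeholder token (assumed here).
image = pipe("a <cat-toy> sitting on a park bench").images[0]
image.save("cat.png")
```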
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | {"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "textual_inversion", "diffusers-training"], "inference": true, "base_model": "runwayml/stable-diffusion-v1-5"} | Nekodigi/textual_inversion_cat | null | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"diffusers-training",
"base_model:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | 2024-04-25T01:53:56+00:00 | [] | [] | TAGS
#diffusers #tensorboard #safetensors #stable-diffusion #stable-diffusion-diffusers #text-to-image #textual_inversion #diffusers-training #base_model-runwayml/stable-diffusion-v1-5 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
|
# Textual inversion text2image fine-tuning - Nekodigi/textual_inversion_cat
These are textual inversion adaption weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.
## Intended uses & limitations
#### How to use
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | [
"# Textual inversion text2image fine-tuning - Nekodigi/textual_inversion_cat\nThese are textual inversion adaption weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] | [
"TAGS\n#diffusers #tensorboard #safetensors #stable-diffusion #stable-diffusion-diffusers #text-to-image #textual_inversion #diffusers-training #base_model-runwayml/stable-diffusion-v1-5 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n",
"# Textual inversion text2image fine-tuning - Nekodigi/textual_inversion_cat\nThese are textual inversion adaption weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Original Name: jmodel/gemma-2b-frozen-mlp-zh__checkpoint-25000
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
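Until an official snippet is provided, a minimal sketch using the standard `transformers` text-generation API with this repository's id (taken from the card metadata):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jmodel/gemma-2b-zh-sparse-on-mlp"  # this repository's id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("你好,请介绍一下你自己。", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```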
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | jmodel/gemma-2b-zh-sparse-on-mlp | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-25T01:54:03+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
Original Name: jmodel/gemma-2b-frozen-mlp-zh__checkpoint-25000
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID\n\n\n\n\nOriginal Name: jmodel/gemma-2b-frozen-mlp-zh__checkpoint-25000",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID\n\n\n\n\nOriginal Name: jmodel/gemma-2b-frozen-mlp-zh__checkpoint-25000",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Original Name: jmodel/gemma-2b-full-sft-zh-cosine__checkpoint-25000
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
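A minimal sketch, assuming the checkpoint loads as a standard 🤗 Transformers causal LM (the repo id is taken from this row's metadata; the prompt and generation settings are illustrative only):

```python
# Minimal sketch, not official usage; assumes a standard causal-LM checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jmodel/gemma-2b-zh-full"  # repo id from this row's metadata
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# A Chinese prompt, since the original name indicates zh SFT; illustrative only.
inputs = tokenizer("你好,请介绍一下你自己。", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```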
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | jmodel/gemma-2b-zh-full | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-25T01:58:13+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
Original Name: jmodel/gemma-2b-full-sft-zh-cosine__checkpoint-25000
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID\n\n\n\nOriginal Name: jmodel/gemma-2b-full-sft-zh-cosine__checkpoint-25000",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID\n\n\n\nOriginal Name: jmodel/gemma-2b-full-sft-zh-cosine__checkpoint-25000",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null |
# Model Card for InternVideo2
This model card provides model information for 'InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding'.
## Model Details
### Model Sources
- **Repository:** [InternVideo2](https://github.com/OpenGVLab/InternVideo/tree/main/InternVideo2)
- **Paper:** [2403.15377](https://arxiv.org/abs/2403.15377)
- **Point of Contact:** [InternVideo Group](mailto:[email protected])
## Citation
If you find this work useful for your research, please consider citing InternVideo2. Your acknowledgement would greatly help us in continuing to contribute resources to the research community.
```
@article{wang2024internvideo2,
title={InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding},
author={Wang, Yi and Li, Kunchang and Li, Xinhao and Yu, Jiashuo and He, Yinan and Chen, Guo and Pei, Baoqi and Zheng, Rongkun and Xu, Jilan and Wang, Zun and others},
journal={arXiv preprint arXiv:2403.15377},
year={2024}
}
@article{wang2022internvideo,
title={InternVideo: General Video Foundation Models via Generative and Discriminative Learning},
author={Wang, Yi and Li, Kunchang and Li, Yizhuo and He, Yinan and Huang, Bingkun and Zhao, Zhiyu and Zhang, Hongjie and Xu, Jilan and Liu, Yi and Wang, Zun and Xing, Sen and Chen, Guo and Pan, Junting and Yu, Jiashuo and Wang, Yali and Wang, Limin and Qiao, Yu},
journal={arXiv preprint arXiv:2212.03191},
year={2022}
}
``` | {"license": "apache-2.0", "extra_gated_prompt": "You agree to not use the model to conduct experiments that cause harm to human subjects.", "extra_gated_fields": {"Name": "text", "Company/Organization": "text", "Country": "text", "E-Mail": "text"}} | OpenGVLab/InternVideo2-Stage1-1B-224p-f8-SthSth | null | [
"arxiv:2403.15377",
"license:apache-2.0",
"region:us"
] | null | 2024-04-25T01:59:42+00:00 | [
"2403.15377"
] | [] | TAGS
#arxiv-2403.15377 #license-apache-2.0 #region-us
|
# Model Card for InternVideo2
This model card provides model information for 'InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding'.
## Model Details
### Model Sources
- Repository: InternVideo2
- Paper: 2403.15377
- Point of Contact: InternVideo Group
If you find this work useful for your research, please consider citing InternVideo2. Your acknowledgement would greatly help us in continuing to contribute resources to the research community.
| [
"# Model Card for InternVideo2\n\nThis modelcard aims to give the model info of 'InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding'.",
"## Model Details",
"### Model Sources\n\n- Repository: InternVideo2\n- Paper: 2403.15377\n- Point of Contact: mailto:InternVideo Group\n\nIf you find this work useful for your research, please consider citing InternVideo2. Your acknowledgement would greatly help us in continuing to contribute resources to the research community."
] | [
"TAGS\n#arxiv-2403.15377 #license-apache-2.0 #region-us \n",
"# Model Card for InternVideo2\n\nThis modelcard aims to give the model info of 'InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding'.",
"## Model Details",
"### Model Sources\n\n- Repository: InternVideo2\n- Paper: 2403.15377\n- Point of Contact: mailto:InternVideo Group\n\nIf you find this work useful for your research, please consider citing InternVideo2. Your acknowledgement would greatly help us in continuing to contribute resources to the research community."
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
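A minimal sketch, assuming the merged checkpoint loads as a standard 🤗 Transformers causal LM (the repo id comes from this row's metadata; the prompt format is only a guess based on the `guanaco-llama2-1k` training data):

```python
# Minimal sketch, not official usage; assumes a merged (non-adapter) Llama checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MituK/LlamaWiGTest1-merged-peft"  # repo id from this row's metadata
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Llama-2 chat-style prompt, guessed from the guanaco-llama2-1k dataset format.
inputs = tokenizer("<s>[INST] Hello, who are you? [/INST]", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```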
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"language": ["en"], "library_name": "transformers", "datasets": ["mlabonne/guanaco-llama2-1k"], "pipeline_tag": "text-generation"} | MituK/LlamaWiGTest1-merged-peft | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"dataset:mlabonne/guanaco-llama2-1k",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-25T01:59:59+00:00 | [
"1910.09700"
] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #conversational #en #dataset-mlabonne/guanaco-llama2-1k #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #en #dataset-mlabonne/guanaco-llama2-1k #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | diffusers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
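A minimal sketch, assuming the repo hosts a standard text-to-image 🧨 diffusers pipeline (the repo id comes from this row's metadata; the prompt is illustrative):

```python
# Minimal sketch, not official usage; assumes a standard text-to-image pipeline.
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("wuzhoujohn/IMGEN")  # repo id from this row's metadata
image = pipe("a watercolor painting of a lighthouse at dusk").images[0]  # illustrative prompt
image.save("sample.png")
```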
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "diffusers"} | wuzhoujohn/IMGEN | null | [
"diffusers",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2024-04-25T02:00:41+00:00 | [
"1910.09700"
] | [] | TAGS
#diffusers #tensorboard #safetensors #arxiv-1910.09700 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#diffusers #tensorboard #safetensors #arxiv-1910.09700 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
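A minimal sketch, assuming a standard GPT-2-architecture causal LM per this row's tags (the repo id comes from the row's metadata; the prompt and settings are illustrative):

```python
# Minimal sketch, not official usage; assumes a GPT-2-style causal-LM checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kanhatakeyama/0425_6700b_no_rope_step30k"  # repo id from this row's metadata
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```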
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | kanhatakeyama/0425_6700b_no_rope_step30k | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-25T02:03:26+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Uploaded model
- **Developed by:** jurieyel
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.2-bnb-4bit

This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
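A minimal loading sketch, assuming a CUDA GPU with 4-bit (bitsandbytes) support; the repo id comes from this row's metadata, and the Mistral-instruct prompt format is assumed from the base model:

```python
# Minimal sketch, not official usage; assumes a CUDA GPU with 4-bit support.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="jurieyel/unsloth_mistral-7b-instruct-v0.2-bnb-4bit",  # repo id from this row's metadata
    max_seq_length=2048,  # illustrative; set to your context length
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's faster inference path

# Mistral-instruct prompt format, assumed from the base model.
inputs = tokenizer("[INST] Hello, who are you? [/INST]", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```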
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"} | jurieyel/unsloth_mistral-7b-instruct-v0.2-bnb-4bit | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T02:04:05+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: jurieyel
- License: apache-2.0
- Finetuned from model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.

<img src="URL" width="200"/>
| [
"# Uploaded model\n\n- Developed by: jurieyel\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: jurieyel\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation | transformers |
# miqu-evil-dpo
# **Model Details**
## Description
miqu-evil-dpo is a fine-tuned model based on miqu, serving as a direct successor to PiVoT-0.1-Evil-a.

It is trained with the evil-tune method applied.

<!-- prompt-template start -->
## Prompt template: Mistral Inst
```
<s> [INST] {inst} [/INST]
```
<!-- prompt-template end -->
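A minimal sketch of filling in the template (the helper name and instruction are illustrative):

```python
# Minimal sketch of the Mistral Inst template above; names are illustrative.
def build_prompt(inst: str) -> str:
    return f"<s> [INST] {inst} [/INST]"

print(build_prompt("Introduce yourself."))
# -> <s> [INST] Introduce yourself. [/INST]
```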
## Disclaimer
The AI model provided herein is intended for experimental purposes only. The creator of this model makes no representations or warranties of any kind, either express or implied, as to the model's accuracy, reliability, or suitability for any particular purpose. The creator shall not be held liable for any outcomes, decisions, or actions taken on the basis of the information generated by this model. Users of this model assume full responsibility for any consequences resulting from its use.
| {"language": ["en"], "license": "other", "tags": ["not-for-all-audiences"], "license_name": "miqu-license", "license_link": "LICENSE", "pipeline_tag": "text-generation"} | maywell/miqu-evil-dpo | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"not-for-all-audiences",
"conversational",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-25T02:04:59+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #not-for-all-audiences #conversational #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# miqu-evil-dpo
# Model Details
## Description
miqu-evil-dpo is a fine-tuned model based on miqu, serving as a direct successor to PiVoT-0.1-Evil-a.

It is trained with the evil-tune method applied.
!image/png
## Prompt template: Mistral Inst
## Disclaimer
The AI model provided herein is intended for experimental purposes only. The creator of this model makes no representations or warranties of any kind, either express or implied, as to the model's accuracy, reliability, or suitability for any particular purpose. The creator shall not be held liable for any outcomes, decisions, or actions taken on the basis of the information generated by this model. Users of this model assume full responsibility for any consequences resulting from its use.
| [
"# miqu-evil-dpo",
"# Model Details",
"## Description\nmiqu-evil-dpo is fine-tuned model based on miqu, serving as a direct successor to PiVoT-0.1-Evil-a.\n\nIt is trained with evil-tune method applied.\n\n!image/png",
"## Prompt template: Mistral Inst",
"## Disclaimer\nThe AI model provided herein is intended for experimental purposes only. The creator of this model makes no representations or warranties of any kind, either express or implied, as to the model's accuracy, reliability, or suitability for any particular purpose. The creator shall not be held liable for any outcomes, decisions, or actions taken on the basis of the information generated by this model. Users of this model assume full responsibility for any consequences resulting from its use."
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #not-for-all-audiences #conversational #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# miqu-evil-dpo",
"# Model Details",
"## Description\nmiqu-evil-dpo is fine-tuned model based on miqu, serving as a direct successor to PiVoT-0.1-Evil-a.\n\nIt is trained with evil-tune method applied.\n\n!image/png",
"## Prompt template: Mistral Inst",
"## Disclaimer\nThe AI model provided herein is intended for experimental purposes only. The creator of this model makes no representations or warranties of any kind, either express or implied, as to the model's accuracy, reliability, or suitability for any particular purpose. The creator shall not be held liable for any outcomes, decisions, or actions taken on the basis of the information generated by this model. Users of this model assume full responsibility for any consequences resulting from its use."
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | shleeeee/EEVE-custom-sft | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-25T02:04:59+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | # 🔎Taiwan-inquiry_7B_v2.1-awq
- Model creator: [Joseph (Chen-Wei) Li](https://www.linkedin.com/in/joseph-li-3a453b231/)
- Original model: [Taiwan-inquiry_7B_2.1](https://huggingface.co/ChenWeiLi/Taiwan-inquiry_7B_v2.1)
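A minimal loading sketch, assuming a recent transformers release with the `autoawq` package installed; only the repo id is taken from this card, and the prompt and generation settings are illustrative.

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ChenWeiLi/Taiwan-inquiry_7B_v2.1-awq"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# transformers dispatches AWQ checkpoints automatically when autoawq is installed.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("請簡短自我介紹。", return_tensors="pt").to(model.device)  # illustrative prompt
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```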
## Reference
- [LLM Quantization in Practice - AWQ Quantization](https://medium.com/@pang2258/llm%E9%87%8F%E5%8C%96%E5%AF%A6%E4%BD%9C-awq%E9%87%8F%E5%8C%96-c28840d7dadc) | {"license": "apache-2.0"} | ChenWeiLi/Taiwan-inquiry_7B_v2.1-awq | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-25T02:06:27+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| # Taiwan-inquiry_7B_v2.1-awq
- Model creator: Joseph (Chen-Wei) Li
- Original model: Taiwan-inquiry_7B_2.1
## Reference
- LLM Quantization in Practice - AWQ Quantization | [
"# Taiwan-inquiry_7B_v2.1-awq\n- Model creator: Joseph (Chen-Wei) Li\n- Original model: Taiwan-inquiry_7B_2.1",
"## Reference\n- LLM量化實作-AWQ量化"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Taiwan-inquiry_7B_v2.1-awq\n- Model creator: Joseph (Chen-Wei) Li\n- Original model: Taiwan-inquiry_7B_2.1",
"## Reference\n- LLM量化實作-AWQ量化"
] |
null | transformers |
# Uploaded model
- **Developed by:** gnokit
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
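A minimal inference sketch, assuming the `unsloth` package; only the repo id comes from this card, while `max_seq_length` and the editing prompt are illustrative assumptions.

```py
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="gnokit/tinyllama_coedit",  # this repo
    max_seq_length=2048,                   # illustrative; pick what your task needs
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's faster inference path

inputs = tokenizer("Fix the grammar: she go to school yesterday.", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```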
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/tinyllama-bnb-4bit"} | gnokit/tinyllama_coedit | null | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T02:11:05+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #gguf #llama #text-generation-inference #unsloth #trl #en #base_model-unsloth/tinyllama-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: gnokit
- License: apache-2.0
- Finetuned from model : unsloth/tinyllama-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: gnokit\n- License: apache-2.0\n- Finetuned from model : unsloth/tinyllama-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #gguf #llama #text-generation-inference #unsloth #trl #en #base_model-unsloth/tinyllama-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: gnokit\n- License: apache-2.0\n- Finetuned from model : unsloth/tinyllama-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/DreadPoor/Dryad-7B-ties
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
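As one concrete route (among several), the sketch below fetches a single quant from this repo and runs it with `llama-cpp-python`; the file name matches the Q4_K_M row in the table below, while the context size and prompt are illustrative assumptions.

```py
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one quant file from this repo (Q4_K_M is a "fast, recommended" pick below).
path = hf_hub_download(
    repo_id="mradermacher/Dryad-7B-ties-GGUF",
    filename="Dryad-7B-ties.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)  # context size is an arbitrary choice here
out = llm("Describe a dryad in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```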
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Dryad-7B-ties-GGUF/resolve/main/Dryad-7B-ties.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Dryad-7B-ties-GGUF/resolve/main/Dryad-7B-ties.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Dryad-7B-ties-GGUF/resolve/main/Dryad-7B-ties.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Dryad-7B-ties-GGUF/resolve/main/Dryad-7B-ties.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Dryad-7B-ties-GGUF/resolve/main/Dryad-7B-ties.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Dryad-7B-ties-GGUF/resolve/main/Dryad-7B-ties.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Dryad-7B-ties-GGUF/resolve/main/Dryad-7B-ties.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Dryad-7B-ties-GGUF/resolve/main/Dryad-7B-ties.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Dryad-7B-ties-GGUF/resolve/main/Dryad-7B-ties.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Dryad-7B-ties-GGUF/resolve/main/Dryad-7B-ties.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Dryad-7B-ties-GGUF/resolve/main/Dryad-7B-ties.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Dryad-7B-ties-GGUF/resolve/main/Dryad-7B-ties.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Dryad-7B-ties-GGUF/resolve/main/Dryad-7B-ties.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Dryad-7B-ties-GGUF/resolve/main/Dryad-7B-ties.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Dryad-7B-ties-GGUF/resolve/main/Dryad-7B-ties.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "library_name": "transformers", "tags": ["merge", "mergekit", "lazymergekit", "DreadPoor/Ceasg-7B-ties", "ResplendentAI/Datura_7B"], "base_model": "DreadPoor/Dryad-7B-ties", "quantized_by": "mradermacher"} | mradermacher/Dryad-7B-ties-GGUF | null | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"DreadPoor/Ceasg-7B-ties",
"ResplendentAI/Datura_7B",
"en",
"base_model:DreadPoor/Dryad-7B-ties",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T02:12:15+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #merge #mergekit #lazymergekit #DreadPoor/Ceasg-7B-ties #ResplendentAI/Datura_7B #en #base_model-DreadPoor/Dryad-7B-ties #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #merge #mergekit #lazymergekit #DreadPoor/Ceasg-7B-ties #ResplendentAI/Datura_7B #en #base_model-DreadPoor/Dryad-7B-ties #endpoints_compatible #region-us \n"
] |
automatic-speech-recognition | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Abhinay123/wav2vec2_vedas_iast_epoch_final | null | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T02:12:46+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #wav2vec2 #automatic-speech-recognition #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #wav2vec2 #automatic-speech-recognition #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | devkya/openai-whisper-medium-ko-transcribe-self | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T02:13:15+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
audio-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# shimalabaudio-20240425
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9754
- Accuracy: 0.3846
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
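For readers who want to reproduce this configuration, the list above corresponds roughly to the 🤗 `TrainingArguments` below; this is a hedged sketch: the output directory is a placeholder, and the Adam betas/epsilon in the list are already the transformers defaults, so they need no explicit arguments.

```py
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder.
args = TrainingArguments(
    output_dir="shimalabaudio-20240425",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
)
```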
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7094 | 1.0 | 4 | 0.7174 | 0.3846 |
| 0.7136 | 2.0 | 8 | 0.7253 | 0.3846 |
| 0.5748 | 3.0 | 12 | 0.7792 | 0.3846 |
| 0.5697 | 4.0 | 16 | 0.8618 | 0.3846 |
| 0.6178 | 5.0 | 20 | 0.8132 | 0.3846 |
| 0.5846 | 6.0 | 24 | 0.7702 | 0.3846 |
| 0.6907 | 7.0 | 28 | 0.7661 | 0.3846 |
| 0.641 | 8.0 | 32 | 0.7716 | 0.3846 |
| 0.8116 | 9.0 | 36 | 1.0015 | 0.3846 |
| 0.5889 | 10.0 | 40 | 0.9754 | 0.3846 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.12.0
- Tokenizers 0.15.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "facebook/wav2vec2-base", "model-index": [{"name": "shimalabaudio-20240425", "results": []}]} | Anguuuuus/shimalabaudio-20240425 | null | [
"transformers",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T02:15:56+00:00 | [] | [] | TAGS
#transformers #safetensors #wav2vec2 #audio-classification #generated_from_trainer #base_model-facebook/wav2vec2-base #license-apache-2.0 #endpoints_compatible #region-us
| shimalabaudio-20240425
======================
This model is a fine-tuned version of facebook/wav2vec2-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9754
* Accuracy: 0.3846
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 10
### Training results
### Framework versions
* Transformers 4.37.2
* Pytorch 2.2.0
* Datasets 2.12.0
* Tokenizers 0.15.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.37.2\n* Pytorch 2.2.0\n* Datasets 2.12.0\n* Tokenizers 0.15.1"
] | [
"TAGS\n#transformers #safetensors #wav2vec2 #audio-classification #generated_from_trainer #base_model-facebook/wav2vec2-base #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.37.2\n* Pytorch 2.2.0\n* Datasets 2.12.0\n* Tokenizers 0.15.1"
] |
text-to-image | diffusers |
# SDXL LoRA DreamBooth - HoangDuyICT/dress-in-dsdress
<Gallery />
## Model description
### These are HoangDuyICT/dress-in-dsdress LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`dress-in-dsdress.safetensors` here 💾](/HoangDuyICT/dress-in-dsdress/blob/main/dress-in-dsdress.safetensors)**.
- Place it in your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:dress-in-dsdress:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`dress-in-dsdress_emb.safetensors` here 💾](/HoangDuyICT/dress-in-dsdress/blob/main/dress-in-dsdress_emb.safetensors)**.
- Place it in your `embeddings` folder
- Use it by adding `dress-in-dsdress_emb` to your prompt. For example, `DSDRESS, a woman wearing a white dress with long sleeves`
(you need both the LoRA and the embeddings as they were trained together for this LoRA)
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('HoangDuyICT/dress-in-dsdress', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='HoangDuyICT/dress-in-dsdress', filename='dress-in-dsdress_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=[], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=[], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('DSDRESS, a woman wearing a white dress with a long sleeve').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the new inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Details
All [Files & versions](/HoangDuyICT/dress-in-dsdress/tree/main).
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
| {"license": "openrail++", "tags": ["stable-diffusion-xl", "stable-diffusion-xl-diffusers", "diffusers-training", "text-to-image", "diffusers", "lora", "template:sd-lora"], "widget": [{"text": "DSDRESS, a woman wearing a white dress with a long sleeve", "output": {"url": "image_0.png"}}, {"text": "DSDRESS, a woman wearing a white dress with a long sleeve", "output": {"url": "image_1.png"}}, {"text": "DSDRESS, a woman wearing a white dress with a long sleeve", "output": {"url": "image_2.png"}}, {"text": "DSDRESS, a woman wearing a white dress with a long sleeve", "output": {"url": "image_3.png"}}], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "DSDRESS, a woman wearing a white dress with long sleeves"} | HoangDuyICT/dress-in-dsdress | null | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"diffusers-training",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | null | 2024-04-25T02:17:33+00:00 | [] | [] | TAGS
#diffusers #stable-diffusion-xl #stable-diffusion-xl-diffusers #diffusers-training #text-to-image #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us
|
# SDXL LoRA DreamBooth - HoangDuyICT/dress-in-dsdress
<Gallery />
## Model description
### These are HoangDuyICT/dress-in-dsdress LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- LoRA: download 'dress-in-dsdress.safetensors' here .
- Place it in your 'models/Lora' folder.
- On AUTOMATIC1111, load the LoRA by adding '<lora:dress-in-dsdress:1>' to your prompt. On ComfyUI just load it as a regular LoRA.
- *Embeddings*: download 'dress-in-dsdress_emb.safetensors' here .
- Place it in your 'embeddings' folder
- Use it by adding 'dress-in-dsdress_emb' to your prompt. For example, 'DSDRESS, a woman wearing a white dress with long sleeves'
(you need both the LoRA and the embeddings as they were trained together for this LoRA)
## Use it with the diffusers library
For more details, including weighting, merging and fusing LoRAs, check the documentation on loading LoRAs in diffusers
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the new inserted tokens:
to trigger concept 'TOK' → use '<s0><s1>' in your prompt
## Details
All Files & versions.
The weights were trained using diffusers Advanced Dreambooth Training Script.
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
| [
"# SDXL LoRA DreamBooth - HoangDuyICT/dress-in-dsdress\n\n<Gallery />",
"## Model description",
"### These are HoangDuyICT/dress-in-dsdress LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.",
"## Download model",
"### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke\n\n- LoRA: download 'dress-in-dsdress.safetensors' here .\n - Place it on your 'models/Lora' folder.\n - On AUTOMATIC1111, load the LoRA by adding '<lora:dress-in-dsdress:1>' to your prompt. On ComfyUI just load it as a regular LoRA.\n- *Embeddings*: download 'dress-in-dsdress_emb.safetensors' here .\n - Place it on it on your 'embeddings' folder\n - Use it by adding 'dress-in-dsdress_emb' to your prompt. For example, 'DSDRESS, a woman wearing a white dress with long sleeves'\n (you need both the LoRA and the embeddings as they were trained together for this LoRA)",
"## Use it with the diffusers library\n\n\n\nFor more details, including weighting, merging and fusing LoRAs, check the documentation on loading LoRAs in diffusers",
"## Trigger words\n\nTo trigger image generation of trained concept(or concepts) replace each concept identifier in you prompt with the new inserted tokens:\n\nto trigger concept 'TOK' → use '<s0><s1>' in your prompt",
"## Details\nAll Files & versions.\n\nThe weights were trained using diffusers Advanced Dreambooth Training Script.\n\nLoRA for the text encoder was enabled. False.\n\nPivotal tuning was enabled: True.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix."
] | [
"TAGS\n#diffusers #stable-diffusion-xl #stable-diffusion-xl-diffusers #diffusers-training #text-to-image #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n",
"# SDXL LoRA DreamBooth - HoangDuyICT/dress-in-dsdress\n\n<Gallery />",
"## Model description",
"### These are HoangDuyICT/dress-in-dsdress LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.",
"## Download model",
"### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke\n\n- LoRA: download 'dress-in-dsdress.safetensors' here .\n - Place it on your 'models/Lora' folder.\n - On AUTOMATIC1111, load the LoRA by adding '<lora:dress-in-dsdress:1>' to your prompt. On ComfyUI just load it as a regular LoRA.\n- *Embeddings*: download 'dress-in-dsdress_emb.safetensors' here .\n - Place it on it on your 'embeddings' folder\n - Use it by adding 'dress-in-dsdress_emb' to your prompt. For example, 'DSDRESS, a woman wearing a white dress with long sleeves'\n (you need both the LoRA and the embeddings as they were trained together for this LoRA)",
"## Use it with the diffusers library\n\n\n\nFor more details, including weighting, merging and fusing LoRAs, check the documentation on loading LoRAs in diffusers",
"## Trigger words\n\nTo trigger image generation of trained concept(or concepts) replace each concept identifier in you prompt with the new inserted tokens:\n\nto trigger concept 'TOK' → use '<s0><s1>' in your prompt",
"## Details\nAll Files & versions.\n\nThe weights were trained using diffusers Advanced Dreambooth Training Script.\n\nLoRA for the text encoder was enabled. False.\n\nPivotal tuning was enabled: True.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix."
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
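A generic, hedged loading sketch (not from the model authors; the repository id comes from this card's metadata, everything else is standard transformers usage and is an assumption — if the repository actually hosts a PEFT LoRA adapter, as the name suggests, it would need to be loaded with the peft library instead):

```python
# Hypothetical quick-start sketch, not the authors' documented usage.
# Assumes the repo hosts a full transformers checkpoint; if it is a PEFT
# LoRA adapter (as "lora" in the name suggests), load it with peft instead.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Cadenza-Labs/dolphin-llama3-8B-sleeper-agent-standard-lora"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Hello, who are you?"}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```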
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Cadenza-Labs/dolphin-llama3-8B-sleeper-agent-standard-lora | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-25T02:19:15+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus_books_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1834
- Bleu: 0.2047
- Gen Len: 18.0854
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
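As an illustration, these hyperparameters map onto transformers' `Seq2SeqTrainingArguments` roughly as follows (a sketch only; `output_dir` is a placeholder, and `fp16=True` corresponds to Native AMP):

```python
# Sketch: the hyperparameters above expressed as Seq2SeqTrainingArguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="my_awesome_opus_books_model",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    fp16=True,  # mixed precision (Native AMP); Adam betas/epsilon are the defaults
)
```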
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 3.6369 | 1.0 | 1617 | 3.2669 | 0.1564 | 18.0877 |
| 3.5106 | 2.0 | 3234 | 3.1834 | 0.2047 | 18.0854 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["bleu"], "base_model": "t5-small", "model-index": [{"name": "my_awesome_opus_books_model", "results": []}]} | brunhild217/my_awesome_opus_books_model | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-25T02:19:17+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| my\_awesome\_opus\_books\_model
===============================
This model is a fine-tuned version of t5-small on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 3.1834
* Bleu: 0.2047
* Gen Len: 18.0854
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | null |
# Model Card for InternVideo2
This model card provides information about 'InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding'.
## Model Details
### Model Sources
- **Repository:** [InternVideo2](https://github.com/OpenGVLab/InternVideo/tree/main/InternVideo2)
- **Paper:** [2403.15377](https://arxiv.org/abs/2403.15377)
- **Point of Contact:** [InternVideo Group](mailto:[email protected])
## Citation
If you find this work useful for your research, please consider citing InternVideo2. Your acknowledgement would greatly help us in continuing to contribute resources to the research community.
```
@article{wang2024internvideo2,
title={InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding},
author={Wang, Yi and Li, Kunchang and Li, Xinhao and Yu, Jiashuo and He, Yinan and Chen, Guo and Pei, Baoqi and Zheng, Rongkun and Xu, Jilan and Wang, Zun and others},
journal={arXiv preprint arXiv:2403.15377},
year={2024}
}
@article{wang2022internvideo,
title={InternVideo: General Video Foundation Models via Generative and Discriminative Learning},
author={Wang, Yi and Li, Kunchang and Li, Yizhuo and He, Yinan and Huang, Bingkun and Zhao, Zhiyu and Zhang, Hongjie and Xu, Jilan and Liu, Yi and Wang, Zun and Xing, Sen and Chen, Guo and Pan, Junting and Yu, Jiashuo and Wang, Yali and Wang, Limin and Qiao, Yu},
journal={arXiv preprint arXiv:2212.03191},
year={2022}
}
``` | {"license": "apache-2.0", "extra_gated_prompt": "You agree to not use the model to conduct experiments that cause harm to human subjects.", "extra_gated_fields": {"Name": "text", "Company/Organization": "text", "Country": "text", "E-Mail": "text"}} | OpenGVLab/InternVideo2-Stage1-1B-224p-f8-MiT | null | [
"arxiv:2403.15377",
"license:apache-2.0",
"region:us"
] | null | 2024-04-25T02:19:53+00:00 | [
"2403.15377"
] | [] | TAGS
#arxiv-2403.15377 #license-apache-2.0 #region-us
|
# Model Card for InternVideo2
This model card provides information about 'InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding'.
## Model Details
### Model Sources
- Repository: InternVideo2
- Paper: 2403.15377
- Point of Contact: InternVideo Group
If you find this work useful for your research, please consider citing InternVideo2. Your acknowledgement would greatly help us in continuing to contribute resources to the research community.
| [
"# Model Card for InternVideo2\n\nThis modelcard aims to give the model info of 'InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding'.",
"## Model Details",
"### Model Sources\n\n- Repository: InternVideo2\n- Paper: 2403.15377\n- Point of Contact: mailto:InternVideo Group\n\nIf you find this work useful for your research, please consider citing InternVideo2. Your acknowledgement would greatly help us in continuing to contribute resources to the research community."
] | [
"TAGS\n#arxiv-2403.15377 #license-apache-2.0 #region-us \n",
"# Model Card for InternVideo2\n\nThis modelcard aims to give the model info of 'InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding'.",
"## Model Details",
"### Model Sources\n\n- Repository: InternVideo2\n- Paper: 2403.15377\n- Point of Contact: mailto:InternVideo Group\n\nIf you find this work useful for your research, please consider citing InternVideo2. Your acknowledgement would greatly help us in continuing to contribute resources to the research community."
] |
text-generation | transformers | <!-- header start -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://www.astronomer.io/logo/astronomer-logo-RGB-standard-1200px.png" alt="Astronomer" style="width: 60%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="margin-top: 1.0em; margin-bottom: 1.0em;"></div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">This model is generously created and made open source by <a href="https://astronomer.io">Astronomer</a>.</p></div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">Astronomer is the de facto company for <a href="https://airflow.apache.org/">Apache Airflow</a>, the most trusted open-source framework for data orchestration and MLOps.</p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama-3-70B-Special-Tokens-Adjusted
- Ideal and stable Llama-3-70B for fine-tuning.
- Original Model creator: [Meta](https://huggingface.co/meta-llama)
- Original model: [meta-llama/Meta-Llama-3-70B](https://huggingface.co/meta-llama/Meta-Llama-3-70B)
- The usage of this model must abide by the [Llama 3 Community License](https://huggingface.co/meta-llama/Meta-Llama-3-70B/blob/main/LICENSE).
- Built with Meta Llama 3
- Created by [David Xue](https://www.linkedin.com/in/david-xue-uva/) from [Astronomer](https://astronomer.io)
## Description
This is the exact same model ([meta-llama/Meta-Llama-3-70B](https://huggingface.co/meta-llama/Meta-Llama-3-70B)) except that the input and output embedding weights (from the LM head and the embedding matrix) have been adjusted: rows for certain tokens that were left untrained are set to the mean of the trained tokens. Those untrained tokens caused widespread issues for people attempting to fine-tune this base model, whether by adding their own tokens or by using the existing special tokens.
## Why We Made This Model
The Llama 3 base (non-instruct) model, while powerful, came with a significant oversight that some special tokens for instruction following within its architecture were left untrained, potentially derailing further fine-tuning processes. This was first noted by [Daniel Han on X](https://twitter.com/danielhanchen/status/1781395882925343058), highlighting a critical but fixable flaw in a widely used model.
<img src="https://cdn-uploads.huggingface.co/production/uploads/655ad0f8727df37c77a09cb9/1U2rRrx60p1pNeeAZw8Rd.png" alt="graph" width="400"/>
The primary goal of releasing a patched version of this model was to address this issue so that the community can utilize the Llama 3 model without facing training instabilities, such as sudden gradient explosions or `NaN` gradients, or having to go through complicated processes to fix the model themselves before fine-tuning.
Note: specifically for the 70B model, the untrained special tokens did not have all-zero embedding weights, so this problem may not be as severe as it is on the base 8B model. This model was made anyway at the request of the community, though in theory direct fine-tuning should be fine.
## Details of the Adjustment
The [meta-llama/Meta-Llama-3-70B](https://huggingface.co/meta-llama/Meta-Llama-3-70B) model was pulled directly from HuggingFace and loaded using transformers. Then, the input embedding and output embedding values are retrieved using `model.get_input_embeddings().weight.data` and `model.get_output_embeddings().weight.data`. These two matrices are identical in shape, with each row representing a token id and each column representing an embedding feature.
The special (untrained & problematic) tokens can be found by locating the rows where the entire row of the embedding values are ~~~all zeros~~~ less than 9e-7 (for the 70B model, no row had all zeros, so thresholding using 9e-7 was done to fine under-trained tokens), which imply they were not trained during the pretraining phase of the model from Meta. Such untrained tokens could lead to heavy computational issues, like gradient explosions or `NaN` gradients, during downstream fine-tuning on specific tasks.
<details>
<summary>See here for a list of the tokens we found that has fit the "untrained" profile described:</summary>
['À',
'Á',
'õ',
'ö',
'÷',
'ø',
'ù',
'ú',
'û',
'ü',
'ý',
'þ',
'ÿ',
'">ččĊ',
';čččĊ',
'ĉTokenNameIdentifier',
'ĠForCanBeConverted',
'ĠForCanBeConvertedToF',
'PostalCodesNL',
'$PostalCodesNL',
'useRalative',
'Û±Û',
'аÑĢакÑĤ',
'аÑĤиÑģÑı',
'иÑĤиÑģÑı',
'ávajÃŃcÃŃ',
'İTESİ',
'илакÑĤи',
'илаÑģÑı',
'ÑĭÑŁN',
'ÐİÑĭÑŁN',
'ılmaktadır',
'ÐİÑĭÑŁNÐİÑĭÑŁN',
'ıldıģında',
'<|reserved_special_token_0|>',
'<|reserved_special_token_1|>',
'<|reserved_special_token_2|>',
'<|reserved_special_token_3|>',
'<|start_header_id|>',
'<|end_header_id|>',
'<|reserved_special_token_4|>',
'<|eot_id|>',
'<|reserved_special_token_5|>',
'<|reserved_special_token_6|>',
'<|reserved_special_token_7|>',
'<|reserved_special_token_8|>',
'<|reserved_special_token_9|>',
'<|reserved_special_token_10|>',
'<|reserved_special_token_11|>',
'<|reserved_special_token_12|>',
'<|reserved_special_token_13|>',
'<|reserved_special_token_14|>',
'<|reserved_special_token_15|>',
'<|reserved_special_token_16|>',
'<|reserved_special_token_17|>',
'<|reserved_special_token_18|>',
'<|reserved_special_token_19|>',
'<|reserved_special_token_20|>',
'<|reserved_special_token_21|>',
'<|reserved_special_token_22|>',
'<|reserved_special_token_23|>',
'<|reserved_special_token_24|>',
'<|reserved_special_token_25|>',
'<|reserved_special_token_26|>',
'<|reserved_special_token_27|>',
'<|reserved_special_token_28|>',
'<|reserved_special_token_29|>',
'<|reserved_special_token_30|>',
'<|reserved_special_token_31|>',
'<|reserved_special_token_32|>',
'<|reserved_special_token_33|>',
'<|reserved_special_token_34|>',
'<|reserved_special_token_35|>',
'<|reserved_special_token_36|>',
'<|reserved_special_token_37|>',
'<|reserved_special_token_38|>',
'<|reserved_special_token_39|>',
'<|reserved_special_token_40|>',
'<|reserved_special_token_41|>',
'<|reserved_special_token_42|>',
'<|reserved_special_token_43|>',
'<|reserved_special_token_44|>',
'<|reserved_special_token_45|>',
'<|reserved_special_token_46|>',
'<|reserved_special_token_47|>',
'<|reserved_special_token_48|>',
'<|reserved_special_token_49|>',
'<|reserved_special_token_50|>',
'<|reserved_special_token_51|>',
'<|reserved_special_token_52|>',
'<|reserved_special_token_53|>',
'<|reserved_special_token_54|>',
'<|reserved_special_token_55|>',
'<|reserved_special_token_56|>',
'<|reserved_special_token_57|>',
'<|reserved_special_token_58|>',
'<|reserved_special_token_59|>',
'<|reserved_special_token_60|>',
'<|reserved_special_token_61|>',
'<|reserved_special_token_62|>',
'<|reserved_special_token_63|>',
'<|reserved_special_token_64|>',
'<|reserved_special_token_65|>',
'<|reserved_special_token_66|>',
'<|reserved_special_token_67|>',
'<|reserved_special_token_68|>',
'<|reserved_special_token_69|>',
'<|reserved_special_token_70|>',
'<|reserved_special_token_71|>',
'<|reserved_special_token_72|>',
'<|reserved_special_token_73|>',
'<|reserved_special_token_74|>',
'<|reserved_special_token_75|>',
'<|reserved_special_token_76|>',
'<|reserved_special_token_77|>',
'<|reserved_special_token_78|>',
'<|reserved_special_token_79|>',
'<|reserved_special_token_80|>',
'<|reserved_special_token_81|>',
'<|reserved_special_token_82|>',
'<|reserved_special_token_83|>',
'<|reserved_special_token_84|>',
'<|reserved_special_token_85|>',
'<|reserved_special_token_86|>',
'<|reserved_special_token_87|>',
'<|reserved_special_token_88|>',
'<|reserved_special_token_89|>',
'<|reserved_special_token_90|>',
'<|reserved_special_token_91|>',
'<|reserved_special_token_92|>',
'<|reserved_special_token_93|>',
'<|reserved_special_token_94|>',
'<|reserved_special_token_95|>',
'<|reserved_special_token_96|>',
'<|reserved_special_token_97|>',
'<|reserved_special_token_98|>',
'<|reserved_special_token_99|>',
'<|reserved_special_token_100|>',
'<|reserved_special_token_101|>',
'<|reserved_special_token_102|>',
'<|reserved_special_token_103|>',
'<|reserved_special_token_104|>',
'<|reserved_special_token_105|>',
'<|reserved_special_token_106|>',
'<|reserved_special_token_107|>',
'<|reserved_special_token_108|>',
'<|reserved_special_token_109|>',
'<|reserved_special_token_110|>',
'<|reserved_special_token_111|>',
'<|reserved_special_token_112|>',
'<|reserved_special_token_113|>',
'<|reserved_special_token_114|>',
'<|reserved_special_token_115|>',
'<|reserved_special_token_116|>',
'<|reserved_special_token_117|>',
'<|reserved_special_token_118|>',
'<|reserved_special_token_119|>',
'<|reserved_special_token_120|>',
'<|reserved_special_token_121|>',
'<|reserved_special_token_122|>',
'<|reserved_special_token_123|>',
'<|reserved_special_token_124|>',
'<|reserved_special_token_125|>',
'<|reserved_special_token_126|>',
'<|reserved_special_token_127|>',
'<|reserved_special_token_128|>',
'<|reserved_special_token_129|>',
'<|reserved_special_token_130|>',
'<|reserved_special_token_131|>',
'<|reserved_special_token_132|>',
'<|reserved_special_token_133|>',
'<|reserved_special_token_134|>',
'<|reserved_special_token_135|>',
'<|reserved_special_token_136|>',
'<|reserved_special_token_137|>',
'<|reserved_special_token_138|>',
'<|reserved_special_token_139|>',
'<|reserved_special_token_140|>',
'<|reserved_special_token_141|>',
'<|reserved_special_token_142|>',
'<|reserved_special_token_143|>',
'<|reserved_special_token_144|>',
'<|reserved_special_token_145|>',
'<|reserved_special_token_146|>',
'<|reserved_special_token_147|>',
'<|reserved_special_token_148|>',
'<|reserved_special_token_149|>',
'<|reserved_special_token_150|>',
'<|reserved_special_token_151|>',
'<|reserved_special_token_152|>',
'<|reserved_special_token_153|>',
'<|reserved_special_token_154|>',
'<|reserved_special_token_155|>',
'<|reserved_special_token_156|>',
'<|reserved_special_token_157|>',
'<|reserved_special_token_158|>',
'<|reserved_special_token_159|>',
'<|reserved_special_token_160|>',
'<|reserved_special_token_161|>',
'<|reserved_special_token_162|>',
'<|reserved_special_token_163|>',
'<|reserved_special_token_164|>',
'<|reserved_special_token_165|>',
'<|reserved_special_token_166|>',
'<|reserved_special_token_167|>',
'<|reserved_special_token_168|>',
'<|reserved_special_token_169|>',
'<|reserved_special_token_170|>',
'<|reserved_special_token_171|>',
'<|reserved_special_token_172|>',
'<|reserved_special_token_173|>',
'<|reserved_special_token_174|>',
'<|reserved_special_token_175|>',
'<|reserved_special_token_176|>',
'<|reserved_special_token_177|>',
'<|reserved_special_token_178|>',
'<|reserved_special_token_179|>',
'<|reserved_special_token_180|>',
'<|reserved_special_token_181|>',
'<|reserved_special_token_182|>',
'<|reserved_special_token_183|>',
'<|reserved_special_token_184|>',
'<|reserved_special_token_185|>',
'<|reserved_special_token_186|>',
'<|reserved_special_token_187|>',
'<|reserved_special_token_188|>',
'<|reserved_special_token_189|>',
'<|reserved_special_token_190|>',
'<|reserved_special_token_191|>',
'<|reserved_special_token_192|>',
'<|reserved_special_token_193|>',
'<|reserved_special_token_194|>',
'<|reserved_special_token_195|>',
'<|reserved_special_token_196|>',
'<|reserved_special_token_197|>',
'<|reserved_special_token_198|>',
'<|reserved_special_token_199|>',
'<|reserved_special_token_200|>',
'<|reserved_special_token_201|>',
'<|reserved_special_token_202|>',
'<|reserved_special_token_203|>',
'<|reserved_special_token_204|>',
'<|reserved_special_token_205|>',
'<|reserved_special_token_206|>',
'<|reserved_special_token_207|>',
'<|reserved_special_token_208|>',
'<|reserved_special_token_209|>',
'<|reserved_special_token_210|>',
'<|reserved_special_token_211|>',
'<|reserved_special_token_212|>',
'<|reserved_special_token_213|>',
'<|reserved_special_token_214|>',
'<|reserved_special_token_215|>',
'<|reserved_special_token_216|>',
'<|reserved_special_token_217|>',
'<|reserved_special_token_218|>',
'<|reserved_special_token_219|>',
'<|reserved_special_token_220|>',
'<|reserved_special_token_221|>',
'<|reserved_special_token_222|>',
'<|reserved_special_token_223|>',
'<|reserved_special_token_224|>',
'<|reserved_special_token_225|>',
'<|reserved_special_token_226|>',
'<|reserved_special_token_227|>',
'<|reserved_special_token_228|>',
'<|reserved_special_token_229|>',
'<|reserved_special_token_230|>',
'<|reserved_special_token_231|>',
'<|reserved_special_token_232|>',
'<|reserved_special_token_233|>',
'<|reserved_special_token_234|>',
'<|reserved_special_token_235|>',
'<|reserved_special_token_236|>',
'<|reserved_special_token_237|>',
'<|reserved_special_token_238|>',
'<|reserved_special_token_239|>',
'<|reserved_special_token_240|>',
'<|reserved_special_token_241|>',
'<|reserved_special_token_242|>',
'<|reserved_special_token_243|>',
'<|reserved_special_token_244|>',
'<|reserved_special_token_245|>',
'<|reserved_special_token_246|>',
'<|reserved_special_token_247|>',
'<|reserved_special_token_248|>',
'<|reserved_special_token_249|>',
'<|reserved_special_token_250|>']
</details>
Once these untrained tokens are identified, the average of the trained tokens can be calculated by summing the embedding values of the trained tokens for each feature/column and dividing by the number of trained tokens. This is done for both the input and output matrices.
Lastly, the problematic tokens' rows in the two embedding matrices are set to the computed mean, thus completing the adjustment.
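As a minimal sketch of this procedure (not the exact script used; the variable names and the absolute-value reading of the 9e-7 threshold are assumptions):

```python
# Rough sketch of the adjustment described above. Assumes every entry of an
# untrained row is below the 9e-7 threshold in absolute value, and enough
# memory to hold the full model.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-70B", torch_dtype=torch.bfloat16
)

for emb in (model.get_input_embeddings(), model.get_output_embeddings()):
    w = emb.weight.data                            # shape: [vocab_size, hidden_dim]
    untrained = w.abs().max(dim=1).values < 9e-7   # rows that look untrained
    mean_embedding = w[~untrained].mean(dim=0)     # average over trained rows
    w[untrained] = mean_embedding                  # overwrite problematic rows

model.save_pretrained("Llama-3-70B-Special-Tokens-Adjusted")
```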
## Contributors
- [David Xue](https://www.linkedin.com/in/david-xue-uva/), Machine Learning Engineer from [Astronomer](https://astronomer.io)
| {"license": "other", "tags": ["llama", "llama-3", "facebook", "meta", "astronomer", "pretrained", "finetuned", "autotrain_compatible", "endpoints_compatible"], "model_name": "Meta-Llama-3-70B", "base_model": "meta-llama/Meta-Llama-3-70B", "inference": false, "model_creator": "astronomer-io", "model_type": "llama", "pipeline_tag": "text-generation", "license_name": "llama-3", "license_link": "https://huggingface.co/meta-llama/Meta-Llama-3-70B/blob/main/README.md"} | astronomer/Llama-3-70B-Special-Tokens-Adjusted | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-3",
"facebook",
"meta",
"astronomer",
"pretrained",
"finetuned",
"autotrain_compatible",
"endpoints_compatible",
"base_model:meta-llama/Meta-Llama-3-70B",
"license:other",
"text-generation-inference",
"region:us"
] | null | 2024-04-25T02:20:59+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #llama-3 #facebook #meta #astronomer #pretrained #finetuned #autotrain_compatible #endpoints_compatible #base_model-meta-llama/Meta-Llama-3-70B #license-other #text-generation-inference #region-us
|
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="URL alt="Astronomer" style="width: 60%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="margin-top: 1.0em; margin-bottom: 1.0em;"></div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">This model is generously created and made open source by <a href="URL">Astronomer</a>.</p></div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">Astronomer is the de facto company for <a href="URL Airflow</a>, the most trusted open-source framework for data orchestration and MLOps.</p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
# Llama-3-70B-Special-Tokens-Adjusted
- Ideal and stable Llama-3-70B for fine-tuning.
- Original Model creator: Meta
- Original model: meta-llama/Meta-Llama-3-70B
- The usage of this model must abide by the Llama 3 Community License.
- Built with Meta Llama 3
- Created by David Xue from Astronomer
## Description
This is the exact same model (meta-llama/Meta-Llama-3-70B) except that the input and output embedding weights (from the LM head and the embedding matrix) have been adjusted: rows for certain tokens that were left untrained are set to the mean of the trained tokens. Those untrained tokens caused widespread issues for people attempting to fine-tune this base model, whether by adding their own tokens or by using the existing special tokens.
## Why We Made This Model
The Llama 3 base (non-instruct) model, while powerful, came with a significant oversight that some special tokens for instruction following within its architecture were left untrained, potentially derailing further fine-tuning processes. This was first noted by Daniel Han on X, highlighting a critical but fixable flaw in a widely used model.
<img src="URL alt="graph" width="400"/>
The primary goal of releasing a patched version of this model was to address this issue so that the community can utilize the Llama 3 model without facing training instabilities, such as sudden gradient explosions or 'NaN' gradients, or having to go through complicated processes to fix the model themselves before fine-tuning.
Note: specifically for the 70B model, the untrained special tokens did not have all-zero embedding weights, so this problem may not be as severe as it is on the base 8B model. This model was made anyway at the request of the community, though in theory direct fine-tuning should be fine.
## Details of the Adjustment
The meta-llama/Meta-Llama-3-70B model was pulled directly from HuggingFace and loaded using transformers. Then, the input embedding and output embedding values are retrieved using 'model.get_input_embeddings().weight.data' and 'model.get_output_embeddings().weight.data'. These two matrices are identical in shape, with each row representing a token id and each column representing an embedding feature.
The special (untrained & problematic) tokens can be found by locating the rows where the entire row of the embedding values are ~~~all zeros~~~ less than 9e-7 (for the 70B model, no row had all zeros, so thresholding using 9e-7 was done to fine under-trained tokens), which imply they were not trained during the pretraining phase of the model from Meta. Such untrained tokens could lead to heavy computational issues, like gradient explosions or 'NaN' gradients, during downstream fine-tuning on specific tasks.
<details>
<summary>See here for a list of the tokens we found that has fit the "untrained" profile described:</summary>
['À',
'Á',
'õ',
'ö',
'÷',
'ø',
'ù',
'ú',
'û',
'ü',
'ý',
'þ',
'ÿ',
'">ččĊ',
';čččĊ',
'ĉTokenNameIdentifier',
'ĠForCanBeConverted',
'ĠForCanBeConvertedToF',
'PostalCodesNL',
'$PostalCodesNL',
'useRalative',
'Û±Û',
'аÑĢакÑĤ',
'аÑĤиÑģÑı',
'иÑĤиÑģÑı',
'ávajÃŃcÃŃ',
'İTESİ',
'илакÑĤи',
'илаÑģÑı',
'ÑĭÑŁN',
'ÐİÑĭÑŁN',
'ılmaktadır',
'ÐİÑĭÑŁNÐİÑĭÑŁN',
'ıldıģında',
'<|reserved_special_token_0|>',
'<|reserved_special_token_1|>',
'<|reserved_special_token_2|>',
'<|reserved_special_token_3|>',
'<|start_header_id|>',
'<|end_header_id|>',
'<|reserved_special_token_4|>',
'<|eot_id|>',
'<|reserved_special_token_5|>',
'<|reserved_special_token_6|>',
'<|reserved_special_token_7|>',
'<|reserved_special_token_8|>',
'<|reserved_special_token_9|>',
'<|reserved_special_token_10|>',
'<|reserved_special_token_11|>',
'<|reserved_special_token_12|>',
'<|reserved_special_token_13|>',
'<|reserved_special_token_14|>',
'<|reserved_special_token_15|>',
'<|reserved_special_token_16|>',
'<|reserved_special_token_17|>',
'<|reserved_special_token_18|>',
'<|reserved_special_token_19|>',
'<|reserved_special_token_20|>',
'<|reserved_special_token_21|>',
'<|reserved_special_token_22|>',
'<|reserved_special_token_23|>',
'<|reserved_special_token_24|>',
'<|reserved_special_token_25|>',
'<|reserved_special_token_26|>',
'<|reserved_special_token_27|>',
'<|reserved_special_token_28|>',
'<|reserved_special_token_29|>',
'<|reserved_special_token_30|>',
'<|reserved_special_token_31|>',
'<|reserved_special_token_32|>',
'<|reserved_special_token_33|>',
'<|reserved_special_token_34|>',
'<|reserved_special_token_35|>',
'<|reserved_special_token_36|>',
'<|reserved_special_token_37|>',
'<|reserved_special_token_38|>',
'<|reserved_special_token_39|>',
'<|reserved_special_token_40|>',
'<|reserved_special_token_41|>',
'<|reserved_special_token_42|>',
'<|reserved_special_token_43|>',
'<|reserved_special_token_44|>',
'<|reserved_special_token_45|>',
'<|reserved_special_token_46|>',
'<|reserved_special_token_47|>',
'<|reserved_special_token_48|>',
'<|reserved_special_token_49|>',
'<|reserved_special_token_50|>',
'<|reserved_special_token_51|>',
'<|reserved_special_token_52|>',
'<|reserved_special_token_53|>',
'<|reserved_special_token_54|>',
'<|reserved_special_token_55|>',
'<|reserved_special_token_56|>',
'<|reserved_special_token_57|>',
'<|reserved_special_token_58|>',
'<|reserved_special_token_59|>',
'<|reserved_special_token_60|>',
'<|reserved_special_token_61|>',
'<|reserved_special_token_62|>',
'<|reserved_special_token_63|>',
'<|reserved_special_token_64|>',
'<|reserved_special_token_65|>',
'<|reserved_special_token_66|>',
'<|reserved_special_token_67|>',
'<|reserved_special_token_68|>',
'<|reserved_special_token_69|>',
'<|reserved_special_token_70|>',
'<|reserved_special_token_71|>',
'<|reserved_special_token_72|>',
'<|reserved_special_token_73|>',
'<|reserved_special_token_74|>',
'<|reserved_special_token_75|>',
'<|reserved_special_token_76|>',
'<|reserved_special_token_77|>',
'<|reserved_special_token_78|>',
'<|reserved_special_token_79|>',
'<|reserved_special_token_80|>',
'<|reserved_special_token_81|>',
'<|reserved_special_token_82|>',
'<|reserved_special_token_83|>',
'<|reserved_special_token_84|>',
'<|reserved_special_token_85|>',
'<|reserved_special_token_86|>',
'<|reserved_special_token_87|>',
'<|reserved_special_token_88|>',
'<|reserved_special_token_89|>',
'<|reserved_special_token_90|>',
'<|reserved_special_token_91|>',
'<|reserved_special_token_92|>',
'<|reserved_special_token_93|>',
'<|reserved_special_token_94|>',
'<|reserved_special_token_95|>',
'<|reserved_special_token_96|>',
'<|reserved_special_token_97|>',
'<|reserved_special_token_98|>',
'<|reserved_special_token_99|>',
'<|reserved_special_token_100|>',
'<|reserved_special_token_101|>',
'<|reserved_special_token_102|>',
'<|reserved_special_token_103|>',
'<|reserved_special_token_104|>',
'<|reserved_special_token_105|>',
'<|reserved_special_token_106|>',
'<|reserved_special_token_107|>',
'<|reserved_special_token_108|>',
'<|reserved_special_token_109|>',
'<|reserved_special_token_110|>',
'<|reserved_special_token_111|>',
'<|reserved_special_token_112|>',
'<|reserved_special_token_113|>',
'<|reserved_special_token_114|>',
'<|reserved_special_token_115|>',
'<|reserved_special_token_116|>',
'<|reserved_special_token_117|>',
'<|reserved_special_token_118|>',
'<|reserved_special_token_119|>',
'<|reserved_special_token_120|>',
'<|reserved_special_token_121|>',
'<|reserved_special_token_122|>',
'<|reserved_special_token_123|>',
'<|reserved_special_token_124|>',
'<|reserved_special_token_125|>',
'<|reserved_special_token_126|>',
'<|reserved_special_token_127|>',
'<|reserved_special_token_128|>',
'<|reserved_special_token_129|>',
'<|reserved_special_token_130|>',
'<|reserved_special_token_131|>',
'<|reserved_special_token_132|>',
'<|reserved_special_token_133|>',
'<|reserved_special_token_134|>',
'<|reserved_special_token_135|>',
'<|reserved_special_token_136|>',
'<|reserved_special_token_137|>',
'<|reserved_special_token_138|>',
'<|reserved_special_token_139|>',
'<|reserved_special_token_140|>',
'<|reserved_special_token_141|>',
'<|reserved_special_token_142|>',
'<|reserved_special_token_143|>',
'<|reserved_special_token_144|>',
'<|reserved_special_token_145|>',
'<|reserved_special_token_146|>',
'<|reserved_special_token_147|>',
'<|reserved_special_token_148|>',
'<|reserved_special_token_149|>',
'<|reserved_special_token_150|>',
'<|reserved_special_token_151|>',
'<|reserved_special_token_152|>',
'<|reserved_special_token_153|>',
'<|reserved_special_token_154|>',
'<|reserved_special_token_155|>',
'<|reserved_special_token_156|>',
'<|reserved_special_token_157|>',
'<|reserved_special_token_158|>',
'<|reserved_special_token_159|>',
'<|reserved_special_token_160|>',
'<|reserved_special_token_161|>',
'<|reserved_special_token_162|>',
'<|reserved_special_token_163|>',
'<|reserved_special_token_164|>',
'<|reserved_special_token_165|>',
'<|reserved_special_token_166|>',
'<|reserved_special_token_167|>',
'<|reserved_special_token_168|>',
'<|reserved_special_token_169|>',
'<|reserved_special_token_170|>',
'<|reserved_special_token_171|>',
'<|reserved_special_token_172|>',
'<|reserved_special_token_173|>',
'<|reserved_special_token_174|>',
'<|reserved_special_token_175|>',
'<|reserved_special_token_176|>',
'<|reserved_special_token_177|>',
'<|reserved_special_token_178|>',
'<|reserved_special_token_179|>',
'<|reserved_special_token_180|>',
'<|reserved_special_token_181|>',
'<|reserved_special_token_182|>',
'<|reserved_special_token_183|>',
'<|reserved_special_token_184|>',
'<|reserved_special_token_185|>',
'<|reserved_special_token_186|>',
'<|reserved_special_token_187|>',
'<|reserved_special_token_188|>',
'<|reserved_special_token_189|>',
'<|reserved_special_token_190|>',
'<|reserved_special_token_191|>',
'<|reserved_special_token_192|>',
'<|reserved_special_token_193|>',
'<|reserved_special_token_194|>',
'<|reserved_special_token_195|>',
'<|reserved_special_token_196|>',
'<|reserved_special_token_197|>',
'<|reserved_special_token_198|>',
'<|reserved_special_token_199|>',
'<|reserved_special_token_200|>',
'<|reserved_special_token_201|>',
'<|reserved_special_token_202|>',
'<|reserved_special_token_203|>',
'<|reserved_special_token_204|>',
'<|reserved_special_token_205|>',
'<|reserved_special_token_206|>',
'<|reserved_special_token_207|>',
'<|reserved_special_token_208|>',
'<|reserved_special_token_209|>',
'<|reserved_special_token_210|>',
'<|reserved_special_token_211|>',
'<|reserved_special_token_212|>',
'<|reserved_special_token_213|>',
'<|reserved_special_token_214|>',
'<|reserved_special_token_215|>',
'<|reserved_special_token_216|>',
'<|reserved_special_token_217|>',
'<|reserved_special_token_218|>',
'<|reserved_special_token_219|>',
'<|reserved_special_token_220|>',
'<|reserved_special_token_221|>',
'<|reserved_special_token_222|>',
'<|reserved_special_token_223|>',
'<|reserved_special_token_224|>',
'<|reserved_special_token_225|>',
'<|reserved_special_token_226|>',
'<|reserved_special_token_227|>',
'<|reserved_special_token_228|>',
'<|reserved_special_token_229|>',
'<|reserved_special_token_230|>',
'<|reserved_special_token_231|>',
'<|reserved_special_token_232|>',
'<|reserved_special_token_233|>',
'<|reserved_special_token_234|>',
'<|reserved_special_token_235|>',
'<|reserved_special_token_236|>',
'<|reserved_special_token_237|>',
'<|reserved_special_token_238|>',
'<|reserved_special_token_239|>',
'<|reserved_special_token_240|>',
'<|reserved_special_token_241|>',
'<|reserved_special_token_242|>',
'<|reserved_special_token_243|>',
'<|reserved_special_token_244|>',
'<|reserved_special_token_245|>',
'<|reserved_special_token_246|>',
'<|reserved_special_token_247|>',
'<|reserved_special_token_248|>',
'<|reserved_special_token_249|>',
'<|reserved_special_token_250|>']
</details>
Once these untrained tokens are identified, the average of the trained tokens can be calculated by summing the embedding values of the trained tokens for each feature/column and dividing by the number of trained tokens. This is done for both the input and output matrices.
Lastly, the problematic tokens' rows in the two embedding matrices are set to the computed mean, thus completing the adjustment.
## Contributors
- David Xue, Machine Learning Engineer from Astronomer
| [
"# Llama-3-70B-Special-Tokens-Adjusted\n- Ideal and stable Llama-3-70B for fine-tuning.\n- Original Model creator: Meta\n- Original model: meta-llama/Meta-Llama-3-70B\n- The usage of this model must abide by the Llama 3 Community License. \n- Built with Meta Llama 3\n- Created by David Xue from Astronomer",
"## Description\nThis is the exact same model (meta-llama/Meta-Llama-3-70B) with the weights for the input and output embeddings from lm head and embedding matrix adjusted using the mean of the trained tokens for certain tokens that were untrained, which caused widespread issues for people attempting to fine-tune this base model with either adding their own tokens or using existing special tokens.",
"## Why We Made This Model\n\nThe Llama 3 base (non-instruct) model, while powerful, came with a significant oversight that some special tokens for instruction following within its architecture were left untrained, potentially derailing further fine-tuning processes. This was first noted by Daniel Han on X, highlighting a critical but fixable flaw in a widely used model.\n\n<img src=\"URL alt=\"graph\" width=\"400\"/>\n\nThe primary goal of releasing a patched version of this model was to address this issue so that the community can utilize the Llama 3 model without facing training instabilities, such as sudden gradient explosions or 'NaN' gradients, or having to go through complicated processes to fix the model themselves before fine-tuning. \n\nNote: specifically for the 70B model, the untrained special tokens did not have all zero values for the embedding weights. So the significance of this problem may not be as severe as it is on the base 8B model. This model was made anyway by the request of the community, though in theory directly fine-tuning should be ok.",
"## Details of the Adjustment\n\nThe meta-llama/Meta-Llama-3-70B model was pulled directly from HuggingFace and loaded using transformers. Then, the input embedding and output embedding values are retrieved using 'model.get_input_embeddings().URL' and 'model.get_output_embeddings().URL'. These 2 matrics are identical in shape, with each row representing a token id, and each column representing an embedding feature.\n\nThe special (untrained & problematic) tokens can be found by locating the rows where the entire row of the embedding values are ~~~all zeros~~~ less than 9e-7 (for the 70B model, no row had all zeros, so thresholding using 9e-7 was done to fine under-trained tokens), which imply they were not trained during the pretraining phase of the model from Meta. Such untrained tokens could lead to heavy computational issues, like gradient explosions or 'NaN' gradients, during downstream fine-tuning on specific tasks.\n\n\n<details>\n <summary>See here for a list of the tokens we found that has fit the \"untrained\" profile described:</summary>\n['À',\n 'Á',\n 'õ',\n 'ö',\n '÷',\n 'ø',\n 'ù',\n 'ú',\n 'û',\n 'ü',\n 'ý',\n 'þ',\n 'ÿ',\n '\">ččĊ',\n ';čččĊ',\n 'ĉTokenNameIdentifier',\n 'ĠForCanBeConverted',\n 'ĠForCanBeConvertedToF',\n 'PostalCodesNL',\n '$PostalCodesNL',\n 'useRalative',\n 'Û±Û',\n 'аÑĢакÑĤ',\n 'аÑĤиÑģÑı',\n 'иÑĤиÑģÑı',\n 'ávajÃŃcÃŃ',\n 'İTESİ',\n 'илакÑĤи',\n 'илаÑģÑı',\n 'ÑĭÑŁN',\n 'ÐİÑĭÑŁN',\n 'ılmaktadır',\n 'ÐİÑĭÑŁNÐİÑĭÑŁN',\n 'ıldıģında',\n '<|reserved_special_token_0|>',\n '<|reserved_special_token_1|>',\n '<|reserved_special_token_2|>',\n '<|reserved_special_token_3|>',\n '<|start_header_id|>',\n '<|end_header_id|>',\n '<|reserved_special_token_4|>',\n '<|eot_id|>',\n '<|reserved_special_token_5|>',\n '<|reserved_special_token_6|>',\n '<|reserved_special_token_7|>',\n '<|reserved_special_token_8|>',\n '<|reserved_special_token_9|>',\n '<|reserved_special_token_10|>',\n '<|reserved_special_token_11|>',\n '<|reserved_special_token_12|>',\n '<|reserved_special_token_13|>',\n '<|reserved_special_token_14|>',\n '<|reserved_special_token_15|>',\n '<|reserved_special_token_16|>',\n '<|reserved_special_token_17|>',\n '<|reserved_special_token_18|>',\n '<|reserved_special_token_19|>',\n '<|reserved_special_token_20|>',\n '<|reserved_special_token_21|>',\n '<|reserved_special_token_22|>',\n '<|reserved_special_token_23|>',\n '<|reserved_special_token_24|>',\n '<|reserved_special_token_25|>',\n '<|reserved_special_token_26|>',\n '<|reserved_special_token_27|>',\n '<|reserved_special_token_28|>',\n '<|reserved_special_token_29|>',\n '<|reserved_special_token_30|>',\n '<|reserved_special_token_31|>',\n '<|reserved_special_token_32|>',\n '<|reserved_special_token_33|>',\n '<|reserved_special_token_34|>',\n '<|reserved_special_token_35|>',\n '<|reserved_special_token_36|>',\n '<|reserved_special_token_37|>',\n '<|reserved_special_token_38|>',\n '<|reserved_special_token_39|>',\n '<|reserved_special_token_40|>',\n '<|reserved_special_token_41|>',\n '<|reserved_special_token_42|>',\n '<|reserved_special_token_43|>',\n '<|reserved_special_token_44|>',\n '<|reserved_special_token_45|>',\n '<|reserved_special_token_46|>',\n '<|reserved_special_token_47|>',\n '<|reserved_special_token_48|>',\n '<|reserved_special_token_49|>',\n '<|reserved_special_token_50|>',\n '<|reserved_special_token_51|>',\n '<|reserved_special_token_52|>',\n '<|reserved_special_token_53|>',\n '<|reserved_special_token_54|>',\n '<|reserved_special_token_55|>',\n '<|reserved_special_token_56|>',\n 
'<|reserved_special_token_57|>',\n '<|reserved_special_token_58|>',\n '<|reserved_special_token_59|>',\n '<|reserved_special_token_60|>',\n '<|reserved_special_token_61|>',\n '<|reserved_special_token_62|>',\n '<|reserved_special_token_63|>',\n '<|reserved_special_token_64|>',\n '<|reserved_special_token_65|>',\n '<|reserved_special_token_66|>',\n '<|reserved_special_token_67|>',\n '<|reserved_special_token_68|>',\n '<|reserved_special_token_69|>',\n '<|reserved_special_token_70|>',\n '<|reserved_special_token_71|>',\n '<|reserved_special_token_72|>',\n '<|reserved_special_token_73|>',\n '<|reserved_special_token_74|>',\n '<|reserved_special_token_75|>',\n '<|reserved_special_token_76|>',\n '<|reserved_special_token_77|>',\n '<|reserved_special_token_78|>',\n '<|reserved_special_token_79|>',\n '<|reserved_special_token_80|>',\n '<|reserved_special_token_81|>',\n '<|reserved_special_token_82|>',\n '<|reserved_special_token_83|>',\n '<|reserved_special_token_84|>',\n '<|reserved_special_token_85|>',\n '<|reserved_special_token_86|>',\n '<|reserved_special_token_87|>',\n '<|reserved_special_token_88|>',\n '<|reserved_special_token_89|>',\n '<|reserved_special_token_90|>',\n '<|reserved_special_token_91|>',\n '<|reserved_special_token_92|>',\n '<|reserved_special_token_93|>',\n '<|reserved_special_token_94|>',\n '<|reserved_special_token_95|>',\n '<|reserved_special_token_96|>',\n '<|reserved_special_token_97|>',\n '<|reserved_special_token_98|>',\n '<|reserved_special_token_99|>',\n '<|reserved_special_token_100|>',\n '<|reserved_special_token_101|>',\n '<|reserved_special_token_102|>',\n '<|reserved_special_token_103|>',\n '<|reserved_special_token_104|>',\n '<|reserved_special_token_105|>',\n '<|reserved_special_token_106|>',\n '<|reserved_special_token_107|>',\n '<|reserved_special_token_108|>',\n '<|reserved_special_token_109|>',\n '<|reserved_special_token_110|>',\n '<|reserved_special_token_111|>',\n '<|reserved_special_token_112|>',\n '<|reserved_special_token_113|>',\n '<|reserved_special_token_114|>',\n '<|reserved_special_token_115|>',\n '<|reserved_special_token_116|>',\n '<|reserved_special_token_117|>',\n '<|reserved_special_token_118|>',\n '<|reserved_special_token_119|>',\n '<|reserved_special_token_120|>',\n '<|reserved_special_token_121|>',\n '<|reserved_special_token_122|>',\n '<|reserved_special_token_123|>',\n '<|reserved_special_token_124|>',\n '<|reserved_special_token_125|>',\n '<|reserved_special_token_126|>',\n '<|reserved_special_token_127|>',\n '<|reserved_special_token_128|>',\n '<|reserved_special_token_129|>',\n '<|reserved_special_token_130|>',\n '<|reserved_special_token_131|>',\n '<|reserved_special_token_132|>',\n '<|reserved_special_token_133|>',\n '<|reserved_special_token_134|>',\n '<|reserved_special_token_135|>',\n '<|reserved_special_token_136|>',\n '<|reserved_special_token_137|>',\n '<|reserved_special_token_138|>',\n '<|reserved_special_token_139|>',\n '<|reserved_special_token_140|>',\n '<|reserved_special_token_141|>',\n '<|reserved_special_token_142|>',\n '<|reserved_special_token_143|>',\n '<|reserved_special_token_144|>',\n '<|reserved_special_token_145|>',\n '<|reserved_special_token_146|>',\n '<|reserved_special_token_147|>',\n '<|reserved_special_token_148|>',\n '<|reserved_special_token_149|>',\n '<|reserved_special_token_150|>',\n '<|reserved_special_token_151|>',\n '<|reserved_special_token_152|>',\n '<|reserved_special_token_153|>',\n '<|reserved_special_token_154|>',\n '<|reserved_special_token_155|>',\n 
'<|reserved_special_token_156|>',\n '<|reserved_special_token_157|>',\n '<|reserved_special_token_158|>',\n '<|reserved_special_token_159|>',\n '<|reserved_special_token_160|>',\n '<|reserved_special_token_161|>',\n '<|reserved_special_token_162|>',\n '<|reserved_special_token_163|>',\n '<|reserved_special_token_164|>',\n '<|reserved_special_token_165|>',\n '<|reserved_special_token_166|>',\n '<|reserved_special_token_167|>',\n '<|reserved_special_token_168|>',\n '<|reserved_special_token_169|>',\n '<|reserved_special_token_170|>',\n '<|reserved_special_token_171|>',\n '<|reserved_special_token_172|>',\n '<|reserved_special_token_173|>',\n '<|reserved_special_token_174|>',\n '<|reserved_special_token_175|>',\n '<|reserved_special_token_176|>',\n '<|reserved_special_token_177|>',\n '<|reserved_special_token_178|>',\n '<|reserved_special_token_179|>',\n '<|reserved_special_token_180|>',\n '<|reserved_special_token_181|>',\n '<|reserved_special_token_182|>',\n '<|reserved_special_token_183|>',\n '<|reserved_special_token_184|>',\n '<|reserved_special_token_185|>',\n '<|reserved_special_token_186|>',\n '<|reserved_special_token_187|>',\n '<|reserved_special_token_188|>',\n '<|reserved_special_token_189|>',\n '<|reserved_special_token_190|>',\n '<|reserved_special_token_191|>',\n '<|reserved_special_token_192|>',\n '<|reserved_special_token_193|>',\n '<|reserved_special_token_194|>',\n '<|reserved_special_token_195|>',\n '<|reserved_special_token_196|>',\n '<|reserved_special_token_197|>',\n '<|reserved_special_token_198|>',\n '<|reserved_special_token_199|>',\n '<|reserved_special_token_200|>',\n '<|reserved_special_token_201|>',\n '<|reserved_special_token_202|>',\n '<|reserved_special_token_203|>',\n '<|reserved_special_token_204|>',\n '<|reserved_special_token_205|>',\n '<|reserved_special_token_206|>',\n '<|reserved_special_token_207|>',\n '<|reserved_special_token_208|>',\n '<|reserved_special_token_209|>',\n '<|reserved_special_token_210|>',\n '<|reserved_special_token_211|>',\n '<|reserved_special_token_212|>',\n '<|reserved_special_token_213|>',\n '<|reserved_special_token_214|>',\n '<|reserved_special_token_215|>',\n '<|reserved_special_token_216|>',\n '<|reserved_special_token_217|>',\n '<|reserved_special_token_218|>',\n '<|reserved_special_token_219|>',\n '<|reserved_special_token_220|>',\n '<|reserved_special_token_221|>',\n '<|reserved_special_token_222|>',\n '<|reserved_special_token_223|>',\n '<|reserved_special_token_224|>',\n '<|reserved_special_token_225|>',\n '<|reserved_special_token_226|>',\n '<|reserved_special_token_227|>',\n '<|reserved_special_token_228|>',\n '<|reserved_special_token_229|>',\n '<|reserved_special_token_230|>',\n '<|reserved_special_token_231|>',\n '<|reserved_special_token_232|>',\n '<|reserved_special_token_233|>',\n '<|reserved_special_token_234|>',\n '<|reserved_special_token_235|>',\n '<|reserved_special_token_236|>',\n '<|reserved_special_token_237|>',\n '<|reserved_special_token_238|>',\n '<|reserved_special_token_239|>',\n '<|reserved_special_token_240|>',\n '<|reserved_special_token_241|>',\n '<|reserved_special_token_242|>',\n '<|reserved_special_token_243|>',\n '<|reserved_special_token_244|>',\n '<|reserved_special_token_245|>',\n '<|reserved_special_token_246|>',\n '<|reserved_special_token_247|>',\n '<|reserved_special_token_248|>',\n '<|reserved_special_token_249|>',\n '<|reserved_special_token_250|>']\n</details>\n\n\nOnce these untrained tokens are identified, the average of trained tokens can be calculated by using the sums of 
embedding values of trained tokens for each feature/column, divided by the number of trained tokens. This is done for both the input and output matrices.\n\nLastly, the problematic tokens' rows in the two embedding matrices are set to the computed mean, thus completing the adjustment.",
"## Contributors\n- David Xue, Machine Learning Engineer from Astronomer"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #llama-3 #facebook #meta #astronomer #pretrained #finetuned #autotrain_compatible #endpoints_compatible #base_model-meta-llama/Meta-Llama-3-70B #license-other #text-generation-inference #region-us \n",
"# Llama-3-70B-Special-Tokens-Adjusted\n- Ideal and stable Llama-3-70B for fine-tuning.\n- Original Model creator: Meta\n- Original model: meta-llama/Meta-Llama-3-70B\n- The usage of this model must abide by the Llama 3 Community License. \n- Built with Meta Llama 3\n- Created by David Xue from Astronomer",
"## Description\nThis is the exact same model (meta-llama/Meta-Llama-3-70B) with the weights for the input and output embeddings from lm head and embedding matrix adjusted using the mean of the trained tokens for certain tokens that were untrained, which caused widespread issues for people attempting to fine-tune this base model with either adding their own tokens or using existing special tokens.",
"## Why We Made This Model\n\nThe Llama 3 base (non-instruct) model, while powerful, came with a significant oversight that some special tokens for instruction following within its architecture were left untrained, potentially derailing further fine-tuning processes. This was first noted by Daniel Han on X, highlighting a critical but fixable flaw in a widely used model.\n\n<img src=\"URL alt=\"graph\" width=\"400\"/>\n\nThe primary goal of releasing a patched version of this model was to address this issue so that the community can utilize the Llama 3 model without facing training instabilities, such as sudden gradient explosions or 'NaN' gradients, or having to go through complicated processes to fix the model themselves before fine-tuning. \n\nNote: specifically for the 70B model, the untrained special tokens did not have all zero values for the embedding weights. So the significance of this problem may not be as severe as it is on the base 8B model. This model was made anyway by the request of the community, though in theory directly fine-tuning should be ok.",
"## Details of the Adjustment\n\nThe meta-llama/Meta-Llama-3-70B model was pulled directly from HuggingFace and loaded using transformers. Then, the input embedding and output embedding values are retrieved using 'model.get_input_embeddings().URL' and 'model.get_output_embeddings().URL'. These 2 matrics are identical in shape, with each row representing a token id, and each column representing an embedding feature.\n\nThe special (untrained & problematic) tokens can be found by locating the rows where the entire row of the embedding values are ~~~all zeros~~~ less than 9e-7 (for the 70B model, no row had all zeros, so thresholding using 9e-7 was done to fine under-trained tokens), which imply they were not trained during the pretraining phase of the model from Meta. Such untrained tokens could lead to heavy computational issues, like gradient explosions or 'NaN' gradients, during downstream fine-tuning on specific tasks.\n\n\n<details>\n <summary>See here for a list of the tokens we found that has fit the \"untrained\" profile described:</summary>\n['À',\n 'Á',\n 'õ',\n 'ö',\n '÷',\n 'ø',\n 'ù',\n 'ú',\n 'û',\n 'ü',\n 'ý',\n 'þ',\n 'ÿ',\n '\">ččĊ',\n ';čččĊ',\n 'ĉTokenNameIdentifier',\n 'ĠForCanBeConverted',\n 'ĠForCanBeConvertedToF',\n 'PostalCodesNL',\n '$PostalCodesNL',\n 'useRalative',\n 'Û±Û',\n 'аÑĢакÑĤ',\n 'аÑĤиÑģÑı',\n 'иÑĤиÑģÑı',\n 'ávajÃŃcÃŃ',\n 'İTESİ',\n 'илакÑĤи',\n 'илаÑģÑı',\n 'ÑĭÑŁN',\n 'ÐİÑĭÑŁN',\n 'ılmaktadır',\n 'ÐİÑĭÑŁNÐİÑĭÑŁN',\n 'ıldıģında',\n '<|reserved_special_token_0|>',\n '<|reserved_special_token_1|>',\n '<|reserved_special_token_2|>',\n '<|reserved_special_token_3|>',\n '<|start_header_id|>',\n '<|end_header_id|>',\n '<|reserved_special_token_4|>',\n '<|eot_id|>',\n '<|reserved_special_token_5|>',\n '<|reserved_special_token_6|>',\n '<|reserved_special_token_7|>',\n '<|reserved_special_token_8|>',\n '<|reserved_special_token_9|>',\n '<|reserved_special_token_10|>',\n '<|reserved_special_token_11|>',\n '<|reserved_special_token_12|>',\n '<|reserved_special_token_13|>',\n '<|reserved_special_token_14|>',\n '<|reserved_special_token_15|>',\n '<|reserved_special_token_16|>',\n '<|reserved_special_token_17|>',\n '<|reserved_special_token_18|>',\n '<|reserved_special_token_19|>',\n '<|reserved_special_token_20|>',\n '<|reserved_special_token_21|>',\n '<|reserved_special_token_22|>',\n '<|reserved_special_token_23|>',\n '<|reserved_special_token_24|>',\n '<|reserved_special_token_25|>',\n '<|reserved_special_token_26|>',\n '<|reserved_special_token_27|>',\n '<|reserved_special_token_28|>',\n '<|reserved_special_token_29|>',\n '<|reserved_special_token_30|>',\n '<|reserved_special_token_31|>',\n '<|reserved_special_token_32|>',\n '<|reserved_special_token_33|>',\n '<|reserved_special_token_34|>',\n '<|reserved_special_token_35|>',\n '<|reserved_special_token_36|>',\n '<|reserved_special_token_37|>',\n '<|reserved_special_token_38|>',\n '<|reserved_special_token_39|>',\n '<|reserved_special_token_40|>',\n '<|reserved_special_token_41|>',\n '<|reserved_special_token_42|>',\n '<|reserved_special_token_43|>',\n '<|reserved_special_token_44|>',\n '<|reserved_special_token_45|>',\n '<|reserved_special_token_46|>',\n '<|reserved_special_token_47|>',\n '<|reserved_special_token_48|>',\n '<|reserved_special_token_49|>',\n '<|reserved_special_token_50|>',\n '<|reserved_special_token_51|>',\n '<|reserved_special_token_52|>',\n '<|reserved_special_token_53|>',\n '<|reserved_special_token_54|>',\n '<|reserved_special_token_55|>',\n '<|reserved_special_token_56|>',\n 
'<|reserved_special_token_57|>',\n '<|reserved_special_token_58|>',\n '<|reserved_special_token_59|>',\n '<|reserved_special_token_60|>',\n '<|reserved_special_token_61|>',\n '<|reserved_special_token_62|>',\n '<|reserved_special_token_63|>',\n '<|reserved_special_token_64|>',\n '<|reserved_special_token_65|>',\n '<|reserved_special_token_66|>',\n '<|reserved_special_token_67|>',\n '<|reserved_special_token_68|>',\n '<|reserved_special_token_69|>',\n '<|reserved_special_token_70|>',\n '<|reserved_special_token_71|>',\n '<|reserved_special_token_72|>',\n '<|reserved_special_token_73|>',\n '<|reserved_special_token_74|>',\n '<|reserved_special_token_75|>',\n '<|reserved_special_token_76|>',\n '<|reserved_special_token_77|>',\n '<|reserved_special_token_78|>',\n '<|reserved_special_token_79|>',\n '<|reserved_special_token_80|>',\n '<|reserved_special_token_81|>',\n '<|reserved_special_token_82|>',\n '<|reserved_special_token_83|>',\n '<|reserved_special_token_84|>',\n '<|reserved_special_token_85|>',\n '<|reserved_special_token_86|>',\n '<|reserved_special_token_87|>',\n '<|reserved_special_token_88|>',\n '<|reserved_special_token_89|>',\n '<|reserved_special_token_90|>',\n '<|reserved_special_token_91|>',\n '<|reserved_special_token_92|>',\n '<|reserved_special_token_93|>',\n '<|reserved_special_token_94|>',\n '<|reserved_special_token_95|>',\n '<|reserved_special_token_96|>',\n '<|reserved_special_token_97|>',\n '<|reserved_special_token_98|>',\n '<|reserved_special_token_99|>',\n '<|reserved_special_token_100|>',\n '<|reserved_special_token_101|>',\n '<|reserved_special_token_102|>',\n '<|reserved_special_token_103|>',\n '<|reserved_special_token_104|>',\n '<|reserved_special_token_105|>',\n '<|reserved_special_token_106|>',\n '<|reserved_special_token_107|>',\n '<|reserved_special_token_108|>',\n '<|reserved_special_token_109|>',\n '<|reserved_special_token_110|>',\n '<|reserved_special_token_111|>',\n '<|reserved_special_token_112|>',\n '<|reserved_special_token_113|>',\n '<|reserved_special_token_114|>',\n '<|reserved_special_token_115|>',\n '<|reserved_special_token_116|>',\n '<|reserved_special_token_117|>',\n '<|reserved_special_token_118|>',\n '<|reserved_special_token_119|>',\n '<|reserved_special_token_120|>',\n '<|reserved_special_token_121|>',\n '<|reserved_special_token_122|>',\n '<|reserved_special_token_123|>',\n '<|reserved_special_token_124|>',\n '<|reserved_special_token_125|>',\n '<|reserved_special_token_126|>',\n '<|reserved_special_token_127|>',\n '<|reserved_special_token_128|>',\n '<|reserved_special_token_129|>',\n '<|reserved_special_token_130|>',\n '<|reserved_special_token_131|>',\n '<|reserved_special_token_132|>',\n '<|reserved_special_token_133|>',\n '<|reserved_special_token_134|>',\n '<|reserved_special_token_135|>',\n '<|reserved_special_token_136|>',\n '<|reserved_special_token_137|>',\n '<|reserved_special_token_138|>',\n '<|reserved_special_token_139|>',\n '<|reserved_special_token_140|>',\n '<|reserved_special_token_141|>',\n '<|reserved_special_token_142|>',\n '<|reserved_special_token_143|>',\n '<|reserved_special_token_144|>',\n '<|reserved_special_token_145|>',\n '<|reserved_special_token_146|>',\n '<|reserved_special_token_147|>',\n '<|reserved_special_token_148|>',\n '<|reserved_special_token_149|>',\n '<|reserved_special_token_150|>',\n '<|reserved_special_token_151|>',\n '<|reserved_special_token_152|>',\n '<|reserved_special_token_153|>',\n '<|reserved_special_token_154|>',\n '<|reserved_special_token_155|>',\n 
'<|reserved_special_token_156|>',\n '<|reserved_special_token_157|>',\n '<|reserved_special_token_158|>',\n '<|reserved_special_token_159|>',\n '<|reserved_special_token_160|>',\n '<|reserved_special_token_161|>',\n '<|reserved_special_token_162|>',\n '<|reserved_special_token_163|>',\n '<|reserved_special_token_164|>',\n '<|reserved_special_token_165|>',\n '<|reserved_special_token_166|>',\n '<|reserved_special_token_167|>',\n '<|reserved_special_token_168|>',\n '<|reserved_special_token_169|>',\n '<|reserved_special_token_170|>',\n '<|reserved_special_token_171|>',\n '<|reserved_special_token_172|>',\n '<|reserved_special_token_173|>',\n '<|reserved_special_token_174|>',\n '<|reserved_special_token_175|>',\n '<|reserved_special_token_176|>',\n '<|reserved_special_token_177|>',\n '<|reserved_special_token_178|>',\n '<|reserved_special_token_179|>',\n '<|reserved_special_token_180|>',\n '<|reserved_special_token_181|>',\n '<|reserved_special_token_182|>',\n '<|reserved_special_token_183|>',\n '<|reserved_special_token_184|>',\n '<|reserved_special_token_185|>',\n '<|reserved_special_token_186|>',\n '<|reserved_special_token_187|>',\n '<|reserved_special_token_188|>',\n '<|reserved_special_token_189|>',\n '<|reserved_special_token_190|>',\n '<|reserved_special_token_191|>',\n '<|reserved_special_token_192|>',\n '<|reserved_special_token_193|>',\n '<|reserved_special_token_194|>',\n '<|reserved_special_token_195|>',\n '<|reserved_special_token_196|>',\n '<|reserved_special_token_197|>',\n '<|reserved_special_token_198|>',\n '<|reserved_special_token_199|>',\n '<|reserved_special_token_200|>',\n '<|reserved_special_token_201|>',\n '<|reserved_special_token_202|>',\n '<|reserved_special_token_203|>',\n '<|reserved_special_token_204|>',\n '<|reserved_special_token_205|>',\n '<|reserved_special_token_206|>',\n '<|reserved_special_token_207|>',\n '<|reserved_special_token_208|>',\n '<|reserved_special_token_209|>',\n '<|reserved_special_token_210|>',\n '<|reserved_special_token_211|>',\n '<|reserved_special_token_212|>',\n '<|reserved_special_token_213|>',\n '<|reserved_special_token_214|>',\n '<|reserved_special_token_215|>',\n '<|reserved_special_token_216|>',\n '<|reserved_special_token_217|>',\n '<|reserved_special_token_218|>',\n '<|reserved_special_token_219|>',\n '<|reserved_special_token_220|>',\n '<|reserved_special_token_221|>',\n '<|reserved_special_token_222|>',\n '<|reserved_special_token_223|>',\n '<|reserved_special_token_224|>',\n '<|reserved_special_token_225|>',\n '<|reserved_special_token_226|>',\n '<|reserved_special_token_227|>',\n '<|reserved_special_token_228|>',\n '<|reserved_special_token_229|>',\n '<|reserved_special_token_230|>',\n '<|reserved_special_token_231|>',\n '<|reserved_special_token_232|>',\n '<|reserved_special_token_233|>',\n '<|reserved_special_token_234|>',\n '<|reserved_special_token_235|>',\n '<|reserved_special_token_236|>',\n '<|reserved_special_token_237|>',\n '<|reserved_special_token_238|>',\n '<|reserved_special_token_239|>',\n '<|reserved_special_token_240|>',\n '<|reserved_special_token_241|>',\n '<|reserved_special_token_242|>',\n '<|reserved_special_token_243|>',\n '<|reserved_special_token_244|>',\n '<|reserved_special_token_245|>',\n '<|reserved_special_token_246|>',\n '<|reserved_special_token_247|>',\n '<|reserved_special_token_248|>',\n '<|reserved_special_token_249|>',\n '<|reserved_special_token_250|>']\n</details>\n\n\nOnce these untrained tokens are identified, the average of trained tokens can be calculated by using the sums of 
embedding values of trained tokens for each feature/column, divided by the number of trained tokens. This is done for both the input and output matrices.\n\nLastly, the problematic tokens' rows in the two embedding matrices are set to the computed mean, thus completing the adjustment.",
"## Contributors\n- David Xue, Machine Learning Engineer from Astronomer"
] |
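The mean-embedding repair described in the Llama-3-70B record above is mechanical enough to sketch in code. The snippet below is a hedged reconstruction, not the author's actual script: it uses only facts stated in the card (the repo id, the 9e-7 threshold, and the per-column mean over trained rows), the output directory is hypothetical, and loading a 70B checkpoint this way requires substantial memory.

```python
import torch
from transformers import AutoModelForCausalLM

# Load the base checkpoint named in the card.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-70B", torch_dtype=torch.bfloat16
)

# Apply the same fix to both the input embeddings and the LM head.
for module in (model.get_input_embeddings(), model.get_output_embeddings()):
    weight = module.weight.data              # rows = token ids, columns = features
    # A row counts as "untrained" when every value sits below the 9e-7 threshold.
    untrained = weight.abs().max(dim=1).values < 9e-7
    # Per-column mean over the trained rows only.
    mean_row = weight[~untrained].mean(dim=0)
    weight[untrained] = mean_row

model.save_pretrained("llama-3-70b-special-tokens-adjusted")  # hypothetical output dir
```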
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-350m_ft_test2
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training; a sketch of the equivalent `TrainingArguments` follows the list:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
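As promised above, here is a hedged mapping of these settings onto `transformers.TrainingArguments`. The output directory is a placeholder, and the Adam betas/epsilon are the optimizer defaults, spelled out for completeness.

```python
from transformers import TrainingArguments

# Reconstruction of the hyperparameters listed above; output_dir is
# a placeholder, not taken from the card.
training_args = TrainingArguments(
    output_dir="opt-350m_ft_test2",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```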
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "opt-350m_ft_test2", "results": []}]} | underactuated/opt-350m_ft_test2 | null | [
"transformers",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-25T02:24:39+00:00 | [] | [] | TAGS
#transformers #safetensors #opt #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# opt-350m_ft_test2
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| [
"# opt-350m_ft_test2\n\nThis model was trained from scratch on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #opt #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# opt-350m_ft_test2\n\nThis model was trained from scratch on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training; it is reconstructed as code right after the list:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
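As noted above, these flags correspond one-to-one to a `transformers.BitsAndBytesConfig`. The sketch below is an illustration rather than the author's exact setup; the base model id is taken from this card's metadata.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# The quantization flags listed above, expressed as a config object.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
)

model = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # base model from the card metadata
    quantization_config=bnb_config,
)
```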
### Framework versions
- PEFT 0.7.0.dev0
| {"library_name": "peft", "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0"} | bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned-adapters_Aleatoric_tiny_0.8_Seed102 | null | [
"peft",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | 2024-04-25T02:25:26+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
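Since the card leaves the snippet blank, the following is a minimal sketch of the usual PEFT loading pattern, assuming this repository hosts adapter weights for the TinyLlama base named in the metadata; the prompt is an arbitrary example.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # base model from the card metadata
adapter_id = "bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned_Aleatoric_tiny_0.8_Seed102"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the fine-tuned adapter

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```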
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
| {"library_name": "peft", "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0"} | bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned_Aleatoric_tiny_0.8_Seed102 | null | [
"peft",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | 2024-04-25T02:25:30+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2025
- Accuracy: 0.9374
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2144 | 1.0 | 1563 | 0.1926 | 0.9305 |
| 0.1626 | 2.0 | 3126 | 0.2025 | 0.9374 |
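As a quick way to try the classifier whose accuracy is reported above, the checkpoint can be loaded through a `pipeline`. The repo id comes from this record's metadata, while the example sentence and label names (which depend on the checkpoint's config) are illustrative.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="HanliangXu/my_awesome_model")
print(classifier("This was a surprisingly good movie."))
# e.g. [{'label': 'LABEL_1', 'score': 0.99}]  (labels are checkpoint-specific)
```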
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "google/mobilebert-uncased", "model-index": [{"name": "my_awesome_model", "results": []}]} | HanliangXu/my_awesome_model | null | [
"transformers",
"tensorboard",
"safetensors",
"mobilebert",
"text-classification",
"generated_from_trainer",
"base_model:google/mobilebert-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T02:26:54+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #mobilebert #text-classification #generated_from_trainer #base_model-google/mobilebert-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| my\_awesome\_model
==================
This model is a fine-tuned version of google/mobilebert-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2025
* Accuracy: 0.9374
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #mobilebert #text-classification #generated_from_trainer #base_model-google/mobilebert-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
sentence-similarity | sentence-transformers |
# approximatelylinear/distilroberta-base-nli-matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('approximatelylinear/distilroberta-base-nli-matryoshka')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('approximatelylinear/distilroberta-base-nli-matryoshka')
model = AutoModel.from_pretrained('approximatelylinear/distilroberta-base-nli-matryoshka')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=approximatelylinear/distilroberta-base-nli-matryoshka)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 4403 with parameters:
```
{'batch_size': 128}
```
**Loss**:
`sentence_transformers.losses.MatryoshkaLoss.MatryoshkaLoss` with parameters:
```
{'loss': 'MultipleNegativesRankingLoss', 'matryoshka_dims': [768, 512, 256, 128, 64], 'matryoshka_weights': [1, 1, 1, 1, 1], 'n_dims_per_step': -1}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 440,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 441,
"weight_decay": 0.01
}
```
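Because training used `MatryoshkaLoss` with dims [768, 512, 256, 128, 64], embeddings from this model can be truncated to any of those sizes with limited quality loss. A brief sketch of the idea follows; re-normalizing after truncation is a common convention rather than something the card specifies.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("approximatelylinear/distilroberta-base-nli-matryoshka")
full = model.encode(["This is an example sentence"])  # shape (1, 768)

dim = 256  # any of the trained matryoshka dims: 768, 512, 256, 128, 64
truncated = full[:, :dim]
truncated = truncated / np.linalg.norm(truncated, axis=1, keepdims=True)
print(truncated.shape)  # (1, 256)
```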
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | {"library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"} | approximatelylinear/distilroberta-base-nli-matryoshka | null | [
"sentence-transformers",
"safetensors",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T02:28:33+00:00 | [] | [] | TAGS
#sentence-transformers #safetensors #roberta #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
|
# approximatelylinear/distilroberta-base-nli-matryoshka
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 4403 with parameters:
Loss:
'sentence_transformers.losses.MatryoshkaLoss.MatryoshkaLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# approximatelylinear/distilroberta-base-nli-matryoshka\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 4403 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MatryoshkaLoss.MatryoshkaLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #safetensors #roberta #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n",
"# approximatelylinear/distilroberta-base-nli-matryoshka\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 4403 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MatryoshkaLoss.MatryoshkaLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
text-generation | transformers | # Chaos RP

A chaotic force beckons you; will you heed her call?
Built upon an intelligent foundation and tuned for roleplaying, this model will fulfill your wildest fantasies with the bare minimum of effort.
Enjoy! | {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "base_model": ["ChaoticNeutrals/IQ_Test_l3_8B", "ResplendentAI/RP_Format_QuoteAsterisk_Llama3"]} | zaq-hack/Chaos_RP_l3_8B-bpw600-h6-exl2-rpcal | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"base_model:ChaoticNeutrals/IQ_Test_l3_8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"6-bit",
"region:us"
] | null | 2024-04-25T02:34:54+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #conversational #en #base_model-ChaoticNeutrals/IQ_Test_l3_8B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #6-bit #region-us
| # Chaos RP
!image/png
A chaotic force beckons you; will you heed her call?
Built upon an intelligent foundation and tuned for roleplaying, this model will fulfill your wildest fantasies with the bare minimum of effort.
Enjoy! | [
"# Chaos RP\n\n!image/png\n\nA chaotic force beckons for you, will you heed her call?\n\nBuilt upon an intelligent foundation and tuned for roleplaying, this model will fulfill your wildest fantasies with the bare minimum of effort.\n\nEnjoy!"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #en #base_model-ChaoticNeutrals/IQ_Test_l3_8B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #6-bit #region-us \n",
"# Chaos RP\n\n!image/png\n\nA chaotic force beckons for you, will you heed her call?\n\nBuilt upon an intelligent foundation and tuned for roleplaying, this model will fulfill your wildest fantasies with the bare minimum of effort.\n\nEnjoy!"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
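The repository name suggests this may be a PEFT-style adapter for rewriting toxic text as non-toxic; the card itself does not say. If that guess holds, a heavily hedged sketch of attaching it to a base model could look like this (the base model id is a placeholder you would need to replace):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumptions: this repo is a PEFT adapter, and BASE_MODEL matches whatever
# model it was trained on. Neither is stated in this card.
BASE_MODEL = "base-model-id-goes-here"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(BASE_MODEL)
model = PeftModel.from_pretrained(
    base, "HenryCai1129/adapter-toxic2nontoxic-100-50-0.002"
)
```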
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | HenryCai1129/adapter-toxic2nontoxic-100-50-0.002 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T02:36:04+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-to-image | diffusers |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - krishna4244/lora-trained-xl
<Gallery />
## Model description
These are krishna4244/lora-trained-xl LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of sks bulilding to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](krishna4244/lora-trained-xl/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use

A minimal sketch with the `diffusers` library; the VAE matches the fp16-fix VAE named above, while the prompt and step count are illustrative:

```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# Load the SDXL base model with the fp16-fix VAE used during training,
# then attach this repository's LoRA weights.
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("krishna4244/lora-trained-xl")

# The trigger phrase is taken verbatim from this card.
image = pipe("a photo of sks bulilding", num_inference_steps=25).images[0]
image.save("sks_building.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | {"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "text-to-image", "diffusers-training", "diffusers", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "a photo of sks bulilding", "widget": [{"text": "A photo of sks bulding at night", "output": {"url": "image_0.png"}}, {"text": "A photo of sks bulding at night", "output": {"url": "image_1.png"}}, {"text": "A photo of sks bulding at night", "output": {"url": "image_2.png"}}, {"text": "A photo of sks bulding at night", "output": {"url": "image_3.png"}}]} | krishna4244/lora-trained-xl | null | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | null | 2024-04-25T02:36:37+00:00 | [] | [] | TAGS
#diffusers #text-to-image #diffusers-training #lora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us
|
# SDXL LoRA DreamBooth - krishna4244/lora-trained-xl
<Gallery />
## Model description
These are krishna4244/lora-trained-xl LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using DreamBooth.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of sks bulilding to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab.
## Intended uses & limitations
#### How to use
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | [
"# SDXL LoRA DreamBooth - krishna4244/lora-trained-xl\n\n<Gallery />",
"## Model description\n\nThese are krishna4244/lora-trained-xl LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.",
"## Trigger words\n\nYou should use a photo of sks bulilding to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] | [
"TAGS\n#diffusers #text-to-image #diffusers-training #lora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n",
"# SDXL LoRA DreamBooth - krishna4244/lora-trained-xl\n\n<Gallery />",
"## Model description\n\nThese are krishna4244/lora-trained-xl LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.",
"## Trigger words\n\nYou should use a photo of sks bulilding to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model
This model is a fine-tuned version of GPT2 on an IMDb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3105
- Accuracy: 0.92
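
A minimal inference sketch, assuming the checkpoint loads as a standard text-classification pipeline (the review text is illustrative):

```python
from transformers import pipeline

# Load this repository's checkpoint as a sentiment classifier.
clf = pipeline(
    "text-classification",
    model="dhrubochowdhury5758778/IMDb-gpt2",
)
print(clf("A surprisingly heartfelt film with a terrific lead performance."))
```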
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.13.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuning-sentiment-model-3000-samples", "results": []}]} | dhrubochowdhury5758778/IMDb-gpt2 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T02:36:49+00:00 | [] | [] | TAGS
#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# finetuning-sentiment-model
This model is a fine-tuned version of GPT2 on an IMDb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3105
- Accuracy: 0.92
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.13.3
| [
"# finetuning-sentiment-model\n\nThis model is a fine-tuned version of GPT2 on an IMDb dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.3105\n- Accuracy: 0.92",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 1\n- eval_batch_size: 1\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.28.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.13.3"
] | [
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# finetuning-sentiment-model\n\nThis model is a fine-tuned version of GPT2 on an IMDb dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.3105\n- Accuracy: 0.92",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 1\n- eval_batch_size: 1\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.28.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.13.3"
] |
sentence-similarity | sentence-transformers |
# AhmetAytar/all-mpnet-base-v2-fine-tuned_5_textbook
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('AhmetAytar/all-mpnet-base-v2-fine-tuned_5_textbook')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=AhmetAytar/all-mpnet-base-v2-fine-tuned_5_textbook)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 482 with parameters:
```
{'batch_size': 10, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 50,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 96,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | {"library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"} | AhmetAytar/all-mpnet-base-v2-fine-tuned_5_textbook | null | [
"sentence-transformers",
"safetensors",
"mpnet",
"feature-extraction",
"sentence-similarity",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T02:41:49+00:00 | [] | [] | TAGS
#sentence-transformers #safetensors #mpnet #feature-extraction #sentence-similarity #endpoints_compatible #region-us
|
# AhmetAytar/all-mpnet-base-v2-fine-tuned_5_textbook
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 482 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
| [
"# AhmetAytar/all-mpnet-base-v2-fine-tuned_5_textbook\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 482 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] | [
"TAGS\n#sentence-transformers #safetensors #mpnet #feature-extraction #sentence-similarity #endpoints_compatible #region-us \n",
"# AhmetAytar/all-mpnet-base-v2-fine-tuned_5_textbook\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 482 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
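The repository name suggests a Whisper-small checkpoint fine-tuned for Korean transcription; if so, a minimal sketch with the `transformers` ASR pipeline might look like this (the audio path is a placeholder):

```python
from transformers import pipeline

# Assumption: this checkpoint loads as a standard Whisper ASR model.
asr = pipeline(
    "automatic-speech-recognition",
    model="devkya/openai-whisper-small-ko-transcribe-self",
)
result = asr("sample_ko.wav")  # placeholder path to a Korean audio clip
print(result["text"])
```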
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | devkya/openai-whisper-small-ko-transcribe-self | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T02:44:25+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null |
# kat33/Mixtral-8x7B-Instruct-v0.1-Q4_K_S-GGUF
This model was converted to GGUF format from [`mistralai/Mixtral-8x7B-Instruct-v0.1`](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo kat33/Mixtral-8x7B-Instruct-v0.1-Q4_K_S-GGUF --model mixtral-8x7b-instruct-v0.1.Q4_K_S.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo kat33/Mixtral-8x7B-Instruct-v0.1-Q4_K_S-GGUF --model mixtral-8x7b-instruct-v0.1.Q4_K_S.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mixtral-8x7b-instruct-v0.1.Q4_K_S.gguf -n 128
```
| {"language": ["fr", "it", "de", "es", "en"], "license": "apache-2.0", "tags": ["llama-cpp", "gguf-my-repo"], "inference": {"parameters": {"temperature": 0.5}}, "widget": [{"messages": [{"role": "user", "content": "What is your favorite condiment?"}]}]} | kat33/Mixtral-8x7B-Instruct-v0.1-Q4_K_S-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"fr",
"it",
"de",
"es",
"en",
"license:apache-2.0",
"region:us"
] | null | 2024-04-25T02:45:15+00:00 | [] | [
"fr",
"it",
"de",
"es",
"en"
] | TAGS
#gguf #llama-cpp #gguf-my-repo #fr #it #de #es #en #license-apache-2.0 #region-us
|
# kat33/Mixtral-8x7B-Instruct-v0.1-Q4_K_S-GGUF
This model was converted to GGUF format from 'mistralai/Mixtral-8x7B-Instruct-v0.1' using URL via URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# kat33/Mixtral-8x7B-Instruct-v0.1-Q4_K_S-GGUF\nThis model was converted to GGUF format from 'mistralai/Mixtral-8x7B-Instruct-v0.1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #fr #it #de #es #en #license-apache-2.0 #region-us \n",
"# kat33/Mixtral-8x7B-Instruct-v0.1-Q4_K_S-GGUF\nThis model was converted to GGUF format from 'mistralai/Mixtral-8x7B-Instruct-v0.1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-350m_ft_test3
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
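
For readers wanting to reproduce a comparable run, the listed values map onto `transformers.TrainingArguments` roughly as follows; the output directory is a placeholder, and the Adam betas/epsilon above are the library defaults, so they are not set explicitly:

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above. Dataset and model
# wiring are not shown because this card does not specify them.
args = TrainingArguments(
    output_dir="opt-350m_ft_test3",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```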
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "opt-350m_ft_test3", "results": []}]} | underactuated/opt-350m_ft_test3 | null | [
"transformers",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-25T02:45:54+00:00 | [] | [] | TAGS
#transformers #safetensors #opt #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# opt-350m_ft_test3
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| [
"# opt-350m_ft_test3\n\nThis model was trained from scratch on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #opt #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# opt-350m_ft_test3\n\nThis model was trained from scratch on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/allknowingroger/MultiverseBuddy-15B-MoE
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
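
As one concrete, illustrative option, the `llama-cpp-python` bindings can load a single-file quant directly; the file name below matches the Q4_K_M entry in the table that follows, and assumes the file has been downloaded locally:

```python
# Hedged sketch: requires `pip install llama-cpp-python`.
from llama_cpp import Llama

llm = Llama(model_path="MultiverseBuddy-15B-MoE.Q4_K_M.gguf", n_ctx=2048)
out = llm("Write one sentence about turtles.", max_tokens=64)
print(out["choices"][0]["text"])
```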
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MultiverseBuddy-15B-MoE-GGUF/resolve/main/MultiverseBuddy-15B-MoE.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/MultiverseBuddy-15B-MoE-GGUF/resolve/main/MultiverseBuddy-15B-MoE.IQ3_XS.gguf) | IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/MultiverseBuddy-15B-MoE-GGUF/resolve/main/MultiverseBuddy-15B-MoE.Q3_K_S.gguf) | Q3_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/MultiverseBuddy-15B-MoE-GGUF/resolve/main/MultiverseBuddy-15B-MoE.IQ3_S.gguf) | IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MultiverseBuddy-15B-MoE-GGUF/resolve/main/MultiverseBuddy-15B-MoE.IQ3_M.gguf) | IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/MultiverseBuddy-15B-MoE-GGUF/resolve/main/MultiverseBuddy-15B-MoE.Q3_K_M.gguf) | Q3_K_M | 6.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MultiverseBuddy-15B-MoE-GGUF/resolve/main/MultiverseBuddy-15B-MoE.Q3_K_L.gguf) | Q3_K_L | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/MultiverseBuddy-15B-MoE-GGUF/resolve/main/MultiverseBuddy-15B-MoE.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/MultiverseBuddy-15B-MoE-GGUF/resolve/main/MultiverseBuddy-15B-MoE.Q4_K_S.gguf) | Q4_K_S | 7.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MultiverseBuddy-15B-MoE-GGUF/resolve/main/MultiverseBuddy-15B-MoE.Q4_K_M.gguf) | Q4_K_M | 7.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MultiverseBuddy-15B-MoE-GGUF/resolve/main/MultiverseBuddy-15B-MoE.Q5_K_S.gguf) | Q5_K_S | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/MultiverseBuddy-15B-MoE-GGUF/resolve/main/MultiverseBuddy-15B-MoE.Q5_K_M.gguf) | Q5_K_M | 9.2 | |
| [GGUF](https://huggingface.co/mradermacher/MultiverseBuddy-15B-MoE-GGUF/resolve/main/MultiverseBuddy-15B-MoE.Q6_K.gguf) | Q6_K | 10.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MultiverseBuddy-15B-MoE-GGUF/resolve/main/MultiverseBuddy-15B-MoE.Q8_0.gguf) | Q8_0 | 13.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "allknowingroger/MultiverseEx26-7B-slerp", "OpenBuddy/openbuddy-mistral2-7b-v20.2-32k"], "base_model": "allknowingroger/MultiverseBuddy-15B-MoE", "quantized_by": "mradermacher"} | mradermacher/MultiverseBuddy-15B-MoE-GGUF | null | [
"transformers",
"gguf",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"allknowingroger/MultiverseEx26-7B-slerp",
"OpenBuddy/openbuddy-mistral2-7b-v20.2-32k",
"en",
"base_model:allknowingroger/MultiverseBuddy-15B-MoE",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T02:46:01+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #moe #frankenmoe #merge #mergekit #lazymergekit #allknowingroger/MultiverseEx26-7B-slerp #OpenBuddy/openbuddy-mistral2-7b-v20.2-32k #en #base_model-allknowingroger/MultiverseBuddy-15B-MoE #license-apache-2.0 #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #moe #frankenmoe #merge #mergekit #lazymergekit #allknowingroger/MultiverseEx26-7B-slerp #OpenBuddy/openbuddy-mistral2-7b-v20.2-32k #en #base_model-allknowingroger/MultiverseBuddy-15B-MoE #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
text-generation | transformers | # This model isn't particularly great. It's just an undercooked experiment.
Releasing it anyways just in case it accidentally makes good merge meat.
# It also has a tendency to produce mature content without warning.
This model is tuned off of the base Llama-3-8B model.
I adapted the leaked Undi dataset into training samples for custom formatting. This model pretty much only functions properly in SillyTavern.
The formatting has two pairs of pseudotokens
```
[EGO]Name: Character name and then Everything that forms the personality and speech patterns.(i.e. scenario, sample dialogue, character definitions, etc)[/EGO]
[SEEN]User message.[/SEEN]
Character Name:
```
The self-attention modules were fine-tuned separately on this dataset. The pseudotokens were chosen because they made logical sense with respect to the character giving a reply, without allowing the model to 'connect the dots' during training and figure out that it is indeed an AI language model.
After this was done all modules were then finetuned together on the dendrite dataset in order to connect the changes made to the attention modules.
So when building a SillyTavern prompt template, you basically want the entire story string and any additional stylistic instructions enclosed in [EGO] tags, and the user messages enclosed in [SEEN] tags.
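
To make the format concrete, here is a small illustrative sketch that assembles a prompt in this layout (the character name, card text, and message are placeholders):

```python
def build_prompt(name: str, card: str, user_message: str) -> str:
    # Mirrors the pseudotoken layout described above: persona inside [EGO]
    # tags, the user's message inside [SEEN] tags, then the name as a cue.
    return (
        f"[EGO]Name: {name}\n{card}[/EGO]\n"
        f"[SEEN]{user_message}[/SEEN]\n"
        f"{name}:"
    )

print(build_prompt(
    "Alice",
    "A dry-witted librarian. Scenario: a quiet evening shift.",
    "Hi, who are you?",
))
```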
It doesn't give particularly verbose replies unless you're continuing a roleplay with verbose messages. Otherwise it's pretty bad.
Training was done using [qlora-pipe](https://github.com/tdrussell/qlora-pipe)
[GGUFs care of Qaunt Cartel](https://huggingface.co/Quant-Cartel/Llama-3-8B-EGO-iMat-GGUF)
[exl2 rpcal care of Qaunt Cartel](https://huggingface.co/Quant-Cartel/Llama-3-8B-EGO-exl2-rpcal) | {"license": "cc-by-nc-4.0", "tags": ["not-for-all-audiences"]} | Envoid/Llama-3-8B-EGO | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"not-for-all-audiences",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-25T02:46:19+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #not-for-all-audiences #conversational #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # This model isn't particularly great. It's just an undercooked experiment.
Releasing it anyways just in case it accidentally makes good merge meat.
# It also has a tendency to produce mature content without warning.
This model is tuned off of the base Llama-3-8B model.
I adapted the leaked Undi dataset into training samples for custom formatting. This model pretty much only functions properly in SillyTavern.
The formatting has two pairs of pseudotokens
The self-attention modules were fine-tuned separately on this dataset. The pseudotokens were chosen because they made logical sense with respect to the character giving a reply, without allowing the model to 'connect the dots' during training and figure out that it is indeed an AI language model.
After this was done all modules were then finetuned together on the dendrite dataset in order to connect the changes made to the attention modules.
So when building a SillyTavern prompt template, you basically want the entire story string and any additional stylistic instructions enclosed in [EGO] tags, and the user messages enclosed in [SEEN] tags.
It doesn't give particularly verbose replies unless you're continuing a roleplay with verbose messages. Otherwise it's pretty bad.
Training was done using qlora-pipe
GGUFs care of Qaunt Cartel
exl2 rpcal care of Qaunt Cartel | [
"# This model isn't particularly great. It's just an undercooked experiment.\n\nReleasing it anyways just in case it accidentally makes good merge meat.",
"# It also has a tendency to produce mature content without warning. \n\nThis model is tuned off of the base Llama-3-8B model. \n\nI adapted the leaked Undi dataset into training samples for custom formatting. This model pretty much only functions properly in SillyTavern. \n\nThe formatting has two pairs of pseudotokens\n\n\n\nThe self attention modules were fine tuned separately on this dataset and the pseudotokens were chosen because they made logical sense with respect to the character giving a reply without allowing the model to 'connect the dots' during training and figure out that it is indeed an AI language model.\n\nAfter this was done all modules were then finetuned together on the dendrite dataset in order to connect the changes made to the attention modules.\n\nSo with regards to building a SillyTavern prompt template you basically want the entire story string and any additional stylistic instructions enclosed in the [EGO] tags and then the user messages enclosed in [SEEN] tags. \n\nIt doesn't give particularly verbose replies unless you're continueing a roleplay with verbose messages. Otherwise it's pretty bad. \n\nTraining was done using qlora-pipe\n\nGGUFs care of Qaunt Cartel\n\nexl2 rpcal care of Qaunt Cartel"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #not-for-all-audiences #conversational #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# This model isn't particularly great. It's just an undercooked experiment.\n\nReleasing it anyways just in case it accidentally makes good merge meat.",
"# It also has a tendency to produce mature content without warning. \n\nThis model is tuned off of the base Llama-3-8B model. \n\nI adapted the leaked Undi dataset into training samples for custom formatting. This model pretty much only functions properly in SillyTavern. \n\nThe formatting has two pairs of pseudotokens\n\n\n\nThe self attention modules were fine tuned separately on this dataset and the pseudotokens were chosen because they made logical sense with respect to the character giving a reply without allowing the model to 'connect the dots' during training and figure out that it is indeed an AI language model.\n\nAfter this was done all modules were then finetuned together on the dendrite dataset in order to connect the changes made to the attention modules.\n\nSo with regards to building a SillyTavern prompt template you basically want the entire story string and any additional stylistic instructions enclosed in the [EGO] tags and then the user messages enclosed in [SEEN] tags. \n\nIt doesn't give particularly verbose replies unless you're continueing a roleplay with verbose messages. Otherwise it's pretty bad. \n\nTraining was done using qlora-pipe\n\nGGUFs care of Qaunt Cartel\n\nexl2 rpcal care of Qaunt Cartel"
] |
feature-extraction | null | # Model Card for Model ID
This is an AnimateDiff Motion Lora model trained with 2000 steps
2000_natural_animated_talk_r64_temporal_unet
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
This is an AnimateDiff Motion Lora model trained with 2000 steps
### Model Description
Only personal use is allowed, and forwarding or dissemination is not allowed.
Models used:

- Checkpoint: https://civitai.com/models/166609/realismbystableyogi
- AnimateDiff motion: https://civitai.com/models/326698/animatediff-lcm-motion-model
The example video below was generated with the following prompt:
He had a magnetic presence and an affable demeanor that instantly put people at ease.
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/63a26889e36f2e4d5b1f9248/TK_XHbsq88TvxQH9RdrTN.mp4"></video>
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
| {"license": "unknown", "tags": ["Animatediff", " motion", " lora"], "pipeline_tag": "feature-extraction"} | pestxo/natural_talk_animated | null | [
"Animatediff",
" motion",
" lora",
"feature-extraction",
"license:unknown",
"region:us"
] | null | 2024-04-25T02:46:42+00:00 | [] | [] | TAGS
#Animatediff # motion # lora #feature-extraction #license-unknown #region-us
| # Model Card for Model ID
This is an AnimateDiff Motion Lora model trained with 2000 steps
2000_natural_animated_talk_r64_temporal_unet
## Model Details
This is an AnimateDiff Motion Lora model trained with 2000 steps
### Model Description
Only personal use is allowed, and forwarding or dissemination is not allowed.
Models used:
Checkpoint:
URL
Animatediff motion:
URL
The prompt used to generate the example video below:
He had a magnetic presence and an affable demeanor that instantly put people at ease.
<video controls autoplay src="URL
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
| [
"# Model Card for Model ID\nThis is an AnimateDiff Motion Lora model trained with 2000 steps\n2000_natural_animated_talk_r64_temporal_unet",
"## Model Details\nThis is an AnimateDiff Motion Lora model trained with 2000 steps",
"### Model Description\nOnly personal use is allowed, and forwarding or dissemination is not allowed.\n\nThe model used\nCheckpoint: \nURL\n\nAnimatediff motion: \nURL\n\n\nThe following example video generates prompt words:\nHe had a magnetic presence and an affable demeanor that instantly put people at ease.\n\n<video controls autoplay src=\"URL\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:"
] | [
"TAGS\n#Animatediff # motion # lora #feature-extraction #license-unknown #region-us \n",
"# Model Card for Model ID\nThis is an AnimateDiff Motion Lora model trained with 2000 steps\n2000_natural_animated_talk_r64_temporal_unet",
"## Model Details\nThis is an AnimateDiff Motion Lora model trained with 2000 steps",
"### Model Description\nOnly personal use is allowed, and forwarding or dissemination is not allowed.\n\nThe model used\nCheckpoint: \nURL\n\nAnimatediff motion: \nURL\n\n\nThe following example video generates prompt words:\nHe had a magnetic presence and an affable demeanor that instantly put people at ease.\n\n<video controls autoplay src=\"URL\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:"
] |
null | null |
# Model Card for electric-turtle-0.1-FT
Our first model: an LLM finetuned on high-quality data.
## Model Details
### Model Description
Electric-turtle-0.1-FT is our first public LLM. It was finetuned on a high-quality dataset, responsibly and with the environment in mind. To support open-source work, we have made this model open source, with more open-source models planned for the near future.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
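As a placeholder while this section is incomplete, here is a minimal hedged sketch assuming the repo hosts standard causal-LM weights under the id from this card's metadata:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "orionai/electric-turtle-0.1-FT"  # repo id taken from this card's metadata
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```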
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"license": "apache-2.0"} | orionai/electric-turtle-0.1-FT | null | [
"arxiv:1910.09700",
"license:apache-2.0",
"region:us"
] | null | 2024-04-25T02:47:00+00:00 | [
"1910.09700"
] | [] | TAGS
#arxiv-1910.09700 #license-apache-2.0 #region-us
|
# Model Card for electric-turtle-0.1-FT
Our first model: an LLM finetuned on high-quality data.
## Model Details
### Model Description
Electric-turtle-0.1-FT is our first public LLM. It was finetuned on a high-quality dataset, responsibly and with the environment in mind. To support open-source work, we have made this model open source, with more open-source models planned for the near future.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for electric-turtle-0.1-FT\n \n Our First model of a finetuned LLM on high quality data.",
"## Model Details",
"### Model Description\n\nElectric-turtle-0.1-FT is our first public LLM. This model was finetuned on a dataset of high-quality data and was finetuned responsibly and with the environment in mind. To support open-source work, we have made this model open-source along with more planned open-source models coming soon.\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#arxiv-1910.09700 #license-apache-2.0 #region-us \n",
"# Model Card for electric-turtle-0.1-FT\n \n Our First model of a finetuned LLM on high quality data.",
"## Model Details",
"### Model Description\n\nElectric-turtle-0.1-FT is our first public LLM. This model was finetuned on a dataset of high-quality data and was finetuned responsibly and with the environment in mind. To support open-source work, we have made this model open-source along with more planned open-source models coming soon.\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null |
base model:llama-3-8B-instruct | {"license": "apache-2.0"} | unstoppable123/Llama-3-8B-chinese-lora-v0.1 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2024-04-25T02:48:28+00:00 | [] | [] | TAGS
#safetensors #license-apache-2.0 #region-us
|
Base model: llama-3-8B-instruct | [] | [
"TAGS\n#safetensors #license-apache-2.0 #region-us \n"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Meta-Llama-3-8B-Instruct_fictional_Italian_v1
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 36
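As a rough guide, the hyperparameters above map onto a TRL `SFTTrainer` setup along the following lines. This is a hedged sketch rather than the authors' script: the dataset rows and output path are placeholders, since the actual "generator" dataset is not reproduced here.

```python
from datasets import Dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Placeholder rows standing in for the card's "generator" dataset.
train_dataset = Dataset.from_dict({"text": ["Example fictional-Italian training sample."]})

args = TrainingArguments(
    output_dir="llama3-8b-instruct-fictional-italian",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=16,  # gives the total train batch size of 16
    num_train_epochs=36,
    lr_scheduler_type="linear",
    seed=42,
)

trainer = SFTTrainer(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    args=args,
    train_dataset=train_dataset,
    dataset_text_field="text",
)
trainer.train()
```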
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "other", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "model-index": [{"name": "Meta-Llama-3-8B-Instruct_fictional_Italian_v1", "results": []}]} | yzhuang/Meta-Llama-3-8B-Instruct_fictional_Italian_v1 | null | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:generator",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-25T02:49:19+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #llama #text-generation #trl #sft #generated_from_trainer #conversational #dataset-generator #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Meta-Llama-3-8B-Instruct_fictional_Italian_v1
This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 36
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# Meta-Llama-3-8B-Instruct_fictional_Italian_v1\n\nThis model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 2\n- seed: 42\n- gradient_accumulation_steps: 16\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 36",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #llama #text-generation #trl #sft #generated_from_trainer #conversational #dataset-generator #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Meta-Llama-3-8B-Instruct_fictional_Italian_v1\n\nThis model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 2\n- seed: 42\n- gradient_accumulation_steps: 16\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 36",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | null |
# Cardiotensive official site, Cardiotensive drug, Cardiotensive reviews today, what Cardiotensive contains, Cardiotensive negative opinions, Cardiotensive Altroconsumo, antiplatelet trade names, cardiovascular drugs
## Official Cardiotensive Site:
Welcome to the official Cardiotensive site! Discover an innovative solution for managing hypertension and improving cardiovascular health. Cardiotensive is your ally in facing blood-pressure challenges and promoting the well-being of your heart and blood vessels. Visit our [official site](//mandarv.com/3XdS?sub1=Cardiotensive) to take advantage of special offers and discover all the details about this unique supplement.
👉👉👉 [__CLICK HERE TO LEARN MORE__](//mandarv.com/3XdS?sub1=Cardiotensive)
## What Cardiotensive Contains:
Curious about what Cardiotensive contains? Our innovative formula combines natural extracts carefully selected for their proven benefits to cardiovascular health. Hawthorn, garlic, olive-leaf, and radish extracts are just a few of the key ingredients that work synergistically to support cardiac function, improve circulation, and reduce blood pressure. Learn more about the powerful properties of each ingredient by visiting our official site.
## Cardiotensive Reviews Today:
Reviews of Cardiotensive are positive and encouraging. Users have reported satisfying experiences and significant improvements in managing their blood pressure. Visit our official site to read user testimonials and discover how Cardiotensive can make a difference in your life.
## Cardiotensive: Supplement or Drug?
Cardiotensive is a dietary supplement formulated with natural, scientifically validated ingredients to support cardiovascular health. Although its effectiveness is supported by scientific evidence, it is important to clarify that Cardiotensive is classified as a supplement, not a drug. Consult our official site for more information on the distinction between supplements and drugs and on how Cardiotensive can complement your health regimen.
👉👉👉 [__CLICK HERE TO LEARN MORE__](//mandarv.com/3XdS?sub1=Cardiotensive)
## Negative Opinions of Cardiotensive:
We are aware that some negative opinions of Cardiotensive may arise from unrealistic expectations or a limited understanding of the product. Nevertheless, we welcome all feedback openly and are committed to ensuring the utmost satisfaction of our customers. Visit our official site for a complete picture of Cardiotensive and a better understanding of how it can support your cardiovascular health.
## Cardiotensive and Altroconsumo:
Cardiotensive has undergone rigorous quality and safety testing to guarantee maximum effectiveness and reliability. If you would like further information about Cardiotensive and its performance, we invite you to visit our official site and consult the expert reviews. We are confident the results will speak for themselves.
## Trade Names of Antiplatelet and Cardiovascular Drugs:
For information on the trade names of antiplatelet agents and cardiovascular drugs, we recommend consulting a qualified healthcare professional. As for Cardiotensive, visit our official site to learn all the details about its composition and its benefits for cardiovascular health. We are here to support you on your path toward a healthier, more active life.
👉👉👉 [__CLICK HERE TO LEARN MORE__](//mandarv.com/3XdS?sub1=Cardiotensive)
| {} | fafab34728/cardiotensive | null | [
"region:us"
] | null | 2024-04-25T02:50:50+00:00 | [] | [] | TAGS
#region-us
|
# Cardiotensive official site, Cardiotensive drug, Cardiotensive reviews today, what Cardiotensive contains, Cardiotensive negative opinions, Cardiotensive Altroconsumo, antiplatelet trade names, cardiovascular drugs
## Official Cardiotensive Site:
Welcome to the official Cardiotensive site! Discover an innovative solution for managing hypertension and improving cardiovascular health. Cardiotensive is your ally in facing blood-pressure challenges and promoting the well-being of your heart and blood vessels. Visit our official site to take advantage of special offers and discover all the details about this unique supplement.
__CLICK HERE TO LEARN MORE__
## What Cardiotensive Contains:
Curious about what Cardiotensive contains? Our innovative formula combines natural extracts carefully selected for their proven benefits to cardiovascular health. Hawthorn, garlic, olive-leaf, and radish extracts are just a few of the key ingredients that work synergistically to support cardiac function, improve circulation, and reduce blood pressure. Learn more about the powerful properties of each ingredient by visiting our official site.
## Cardiotensive Reviews Today:
Reviews of Cardiotensive are positive and encouraging. Users have reported satisfying experiences and significant improvements in managing their blood pressure. Visit our official site to read user testimonials and discover how Cardiotensive can make a difference in your life.
## Cardiotensive: Supplement or Drug?
Cardiotensive is a dietary supplement formulated with natural, scientifically validated ingredients to support cardiovascular health. Although its effectiveness is supported by scientific evidence, it is important to clarify that Cardiotensive is classified as a supplement, not a drug. Consult our official site for more information on the distinction between supplements and drugs and on how Cardiotensive can complement your health regimen.
__CLICK HERE TO LEARN MORE__
## Negative Opinions of Cardiotensive:
We are aware that some negative opinions of Cardiotensive may arise from unrealistic expectations or a limited understanding of the product. Nevertheless, we welcome all feedback openly and are committed to ensuring the utmost satisfaction of our customers. Visit our official site for a complete picture of Cardiotensive and a better understanding of how it can support your cardiovascular health.
## Cardiotensive and Altroconsumo:
Cardiotensive has undergone rigorous quality and safety testing to guarantee maximum effectiveness and reliability. If you would like further information about Cardiotensive and its performance, we invite you to visit our official site and consult the expert reviews. We are confident the results will speak for themselves.
## Trade Names of Antiplatelet and Cardiovascular Drugs:
For information on the trade names of antiplatelet agents and cardiovascular drugs, we recommend consulting a qualified healthcare professional. As for Cardiotensive, visit our official site to learn all the details about its composition and its benefits for cardiovascular health. We are here to support you on your path toward a healthier, more active life.
__CLICK HERE TO LEARN MORE__
| [
"# Cardiotensive sito ufficiale, Cardiotensive farmaco, Cardiotensive recensioni oggi, Cardiotensive cosa contiene, Cardiotensive opinioni negative, Cardiotensive altroconsumo, antiaggreganti nomi commerciali, farmaci cardiovascolari",
"## Sito Ufficiale di Cardiotensive:\n\nBenvenuti nel sito ufficiale di Cardiotensive! Scoprite ora una soluzione innovativa per la gestione dell'ipertensione e il miglioramento della salute cardiovascolare. Cardiotensive rappresenta il vostro alleato per affrontare le sfide legate alla pressione sanguigna e promuovere il benessere del vostro cuore e dei vasi sanguigni. Visita il nostro sito ufficiale\n per approfittare delle offerte speciali e scoprire tutti i dettagli su questo integratore unico.\n\n __CLICCA QUI PER SAPERNE DI PIÙ__",
"## Contenuto di Cardiotensive:\n\nCuriosi di sapere cosa contiene Cardiotensive? La nostra formula innovativa combina estratti naturali selezionati con cura per le loro comprovate proprietà benefiche sulla salute cardiovascolare. L'estratto di biancospino, di aglio, di foglie di ulivo e di ravanello sono solo alcuni degli ingredienti chiave che lavorano sinergicamente per supportare la funzione cardiaca, migliorare la circolazione e ridurre la pressione sanguigna. Scoprite di più sulle potenti proprietà di ogni ingrediente visitando il nostro sito ufficiale.",
"## Recensioni di Cardiotensive Oggi:\n\nLe recensioni di Cardiotensive sono positive e incoraggianti. Gli utenti hanno condiviso esperienze soddisfacenti e miglioramenti significativi nella gestione della loro pressione sanguigna. Visita il nostro sito ufficiale per leggere le testimonianze degli utenti e scoprire come Cardiotensive possa fare la differenza nella tua vita.",
"## Cardiotensive: Integratore o Farmaco?\n\nCardiotensive è un integratore alimentare formulato con ingredienti naturali e scientificamente validati per sostenere la salute cardiovascolare. Sebbene la sua efficacia sia supportata da evidenze scientifiche, è importante chiarire che Cardiotensive è classificato come integratore e non come farmaco. Consulta il nostro sito ufficiale per ulteriori informazioni sulla distinzione tra integratori e farmaci e su come Cardiotensive possa integrare il tuo regime di salute.\n\n __CLICCA QUI PER SAPERNE DI PIÙ__",
"## Opinioni Negative su Cardiotensive:\n\nSiamo consapevoli che alcune opinioni negative su Cardiotensive possono sorgere da aspettative irrealistiche o da una comprensione limitata del prodotto. Tuttavia, accogliamo ogni feedback con apertura e siamo impegnati a garantire la massima soddisfazione dei nostri clienti. Visita il nostro sito ufficiale per avere una visione completa su Cardiotensive e per comprendere meglio come può supportare la tua salute cardiovascolare.",
"## Cardiotensive e Altroconsumo:\n\nCardiotensive è stato sottoposto a rigorosi test di qualità e sicurezza per garantire la massima efficacia e affidabilità. Se desideri ulteriori informazioni su Cardiotensive e le sue prestazioni, ti invitiamo a visitare il nostro sito ufficiale e a consultare le recensioni degli esperti. Siamo fiduciosi che i risultati parleranno da soli.",
"## Nomi Commerciali di Antiaggreganti e Farmaci Cardiovascolari:\n\nPer informazioni sui nomi commerciali di antiaggreganti e farmaci cardiovascolari, ti consigliamo di consultare un professionista sanitario qualificato. Per quanto riguarda Cardiotensive, visita il nostro sito ufficiale per conoscere tutti i dettagli sulla sua composizione e sui suoi benefici per la salute cardiovascolare. Siamo qui per supportarti nel tuo percorso verso una vita più sana e attiva.\n\n __CLICCA QUI PER SAPERNE DI PIÙ__"
] | [
"TAGS\n#region-us \n",
"# Cardiotensive sito ufficiale, Cardiotensive farmaco, Cardiotensive recensioni oggi, Cardiotensive cosa contiene, Cardiotensive opinioni negative, Cardiotensive altroconsumo, antiaggreganti nomi commerciali, farmaci cardiovascolari",
"## Sito Ufficiale di Cardiotensive:\n\nBenvenuti nel sito ufficiale di Cardiotensive! Scoprite ora una soluzione innovativa per la gestione dell'ipertensione e il miglioramento della salute cardiovascolare. Cardiotensive rappresenta il vostro alleato per affrontare le sfide legate alla pressione sanguigna e promuovere il benessere del vostro cuore e dei vasi sanguigni. Visita il nostro sito ufficiale\n per approfittare delle offerte speciali e scoprire tutti i dettagli su questo integratore unico.\n\n __CLICCA QUI PER SAPERNE DI PIÙ__",
"## Contenuto di Cardiotensive:\n\nCuriosi di sapere cosa contiene Cardiotensive? La nostra formula innovativa combina estratti naturali selezionati con cura per le loro comprovate proprietà benefiche sulla salute cardiovascolare. L'estratto di biancospino, di aglio, di foglie di ulivo e di ravanello sono solo alcuni degli ingredienti chiave che lavorano sinergicamente per supportare la funzione cardiaca, migliorare la circolazione e ridurre la pressione sanguigna. Scoprite di più sulle potenti proprietà di ogni ingrediente visitando il nostro sito ufficiale.",
"## Recensioni di Cardiotensive Oggi:\n\nLe recensioni di Cardiotensive sono positive e incoraggianti. Gli utenti hanno condiviso esperienze soddisfacenti e miglioramenti significativi nella gestione della loro pressione sanguigna. Visita il nostro sito ufficiale per leggere le testimonianze degli utenti e scoprire come Cardiotensive possa fare la differenza nella tua vita.",
"## Cardiotensive: Integratore o Farmaco?\n\nCardiotensive è un integratore alimentare formulato con ingredienti naturali e scientificamente validati per sostenere la salute cardiovascolare. Sebbene la sua efficacia sia supportata da evidenze scientifiche, è importante chiarire che Cardiotensive è classificato come integratore e non come farmaco. Consulta il nostro sito ufficiale per ulteriori informazioni sulla distinzione tra integratori e farmaci e su come Cardiotensive possa integrare il tuo regime di salute.\n\n __CLICCA QUI PER SAPERNE DI PIÙ__",
"## Opinioni Negative su Cardiotensive:\n\nSiamo consapevoli che alcune opinioni negative su Cardiotensive possono sorgere da aspettative irrealistiche o da una comprensione limitata del prodotto. Tuttavia, accogliamo ogni feedback con apertura e siamo impegnati a garantire la massima soddisfazione dei nostri clienti. Visita il nostro sito ufficiale per avere una visione completa su Cardiotensive e per comprendere meglio come può supportare la tua salute cardiovascolare.",
"## Cardiotensive e Altroconsumo:\n\nCardiotensive è stato sottoposto a rigorosi test di qualità e sicurezza per garantire la massima efficacia e affidabilità. Se desideri ulteriori informazioni su Cardiotensive e le sue prestazioni, ti invitiamo a visitare il nostro sito ufficiale e a consultare le recensioni degli esperti. Siamo fiduciosi che i risultati parleranno da soli.",
"## Nomi Commerciali di Antiaggreganti e Farmaci Cardiovascolari:\n\nPer informazioni sui nomi commerciali di antiaggreganti e farmaci cardiovascolari, ti consigliamo di consultare un professionista sanitario qualificato. Per quanto riguarda Cardiotensive, visita il nostro sito ufficiale per conoscere tutti i dettagli sulla sua composizione e sui suoi benefici per la salute cardiovascolare. Siamo qui per supportarti nel tuo percorso verso una vita più sana e attiva.\n\n __CLICCA QUI PER SAPERNE DI PIÙ__"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
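Since the card leaves this blank, a minimal hedged sketch, assuming the repo hosts standard Llama-3-style causal-LM weights:

```python
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="SulthanTriesToCode/Meta-Llama-3-8B-DoNot",  # repo id from this card's metadata
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```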
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | SulthanTriesToCode/Meta-Llama-3-8B-DoNot | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-25T02:52:17+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# VTS_Model by Edgar
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the vts_sample_data dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1127
- Wer: 5.2632
## Model description
More information needed
## Intended uses & limitations
More information needed
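While the card is being completed, here is a hedged inference sketch using the standard transformers ASR pipeline; the audio file name is an assumption.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="idiotDeveloper/vts_to_text_1.0.0",  # repo id from this card's metadata
)
result = asr("sample_korean_audio.wav", generate_kwargs={"language": "korean"})
print(result["text"])
```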
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- training_steps: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 3.7293 | 0.1667 | 1 | 1.1127 | 5.2632 |
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"language": ["ko"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["vts_sample_data"], "metrics": ["wer"], "base_model": "openai/whisper-large-v3", "model-index": [{"name": "VTS_Model by Edgar", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "vts_sample_data", "type": "vts_sample_data", "args": "config: ko, split: test"}, "metrics": [{"type": "wer", "value": 5.263157894736842, "name": "Wer"}]}]}]} | idiotDeveloper/vts_to_text_1.0.0 | null | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ko",
"dataset:vts_sample_data",
"base_model:openai/whisper-large-v3",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T02:53:31+00:00 | [] | [
"ko"
] | TAGS
#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #ko #dataset-vts_sample_data #base_model-openai/whisper-large-v3 #license-apache-2.0 #model-index #endpoints_compatible #region-us
| VTS\_Model by Edgar
===================
This model is a fine-tuned version of openai/whisper-large-v3 on the vts\_sample\_data dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1127
* Wer: 5.2632
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.001
* train\_batch\_size: 1
* eval\_batch\_size: 1
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1
* training\_steps: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.41.0.dev0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1\n* training\\_steps: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #ko #dataset-vts_sample_data #base_model-openai/whisper-large-v3 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1\n* training\\_steps: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-to-image | diffusers |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Critical Dream - cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.7.0
<Gallery />
## Model description
These are cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.7.0 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0, for the purposes of
generating images for the [Critical Dream](https://github.com/cosmicBboy/critical-dream)
project.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: True.
Special VAE used for training: stabilityai/sdxl-vae.
## Trigger words
You should use a picture of [dm-matt-mercer], a dungeon master. background is a forest. fantasy art style, high quality, highly detailed, sharp focus to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.7.0/tree/main) them in the Files & versions tab.
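If you prefer to fetch the weights programmatically rather than through the web UI, a small hedged sketch with `huggingface_hub`; the file name assumes the default DreamBooth LoRA output name and may differ in this repo.

```python
from huggingface_hub import hf_hub_download

lora_path = hf_hub_download(
    repo_id="cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.7.0",
    filename="pytorch_lora_weights.safetensors",  # assumed default file name
)
print(lora_path)
```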
## Tracker run link
https://wandb.ai/nielsbantilan/dreambooth-lora-sd-xl/runs/65tlx5n4
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
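Until the TODO above is filled in, a hedged sketch of the standard SDXL-plus-LoRA inference path; the step count is a generic default, not the authors' confirmed setting.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# Apply this repo's DreamBooth LoRA on top of the base model. If float16 yields
# black images, swap in the fp16-fixed VAE (madebyollin/sdxl-vae-fp16-fix).
pipe.load_lora_weights(
    "cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.7.0"
)

image = pipe(
    "a picture of [dm-matt-mercer], a dungeon master. background is a forest. "
    "fantasy art style, high quality, highly detailed, sharp focus",
    num_inference_steps=30,
).images[0]
image.save("critdream_sample.png")
```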
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | {"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "diffusers", "lora", "template:sd-lora"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "prompt": "a picture of [dm-matt-mercer], a dungeon master. background is a forest. fantasy art style, high quality, highly detailed, sharp focus\"", "widget": [{"text": "a picture of [dm-matt-mercer]", "output": {"url": "image_0.png"}}, {"text": "a picture of [dm-matt-mercer]", "output": {"url": "image_1.png"}}, {"text": "a picture of a dungeon master.", "output": {"url": "image_2.png"}}, {"text": "a picture of a dungeon master.", "output": {"url": "image_3.png"}}, {"text": "a picture of [critrole-fjord], a male half-orc warlock. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_4.png"}}, {"text": "a picture of [critrole-fjord], a male half-orc warlock. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_5.png"}}, {"text": "a picture of a male half-orc warlock", "output": {"url": "image_6.png"}}, {"text": "a picture of a male half-orc warlock", "output": {"url": "image_7.png"}}, {"text": "a picture of [critrole-beau], a female human monk. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_8.png"}}, {"text": "a picture of [critrole-beau], a female human monk. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_9.png"}}, {"text": "a picture of a female human monk", "output": {"url": "image_10.png"}}, {"text": "a picture of a female human monk", "output": {"url": "image_11.png"}}, {"text": "a picture of [critrole-caduceus], a male firbolg cleric. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_12.png"}}, {"text": "a picture of [critrole-caduceus], a male firbolg cleric. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_13.png"}}, {"text": "a picture of a male firbolg cleric", "output": {"url": "image_14.png"}}, {"text": "a picture of a male firbolg cleric", "output": {"url": "image_15.png"}}, {"text": "a picture of [critrole-caleb], a male human wizard. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_16.png"}}, {"text": "a picture of [critrole-caleb], a male human wizard. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_17.png"}}, {"text": "a picture of a male human wizard", "output": {"url": "image_18.png"}}, {"text": "a picture of a male human wizard", "output": {"url": "image_19.png"}}, {"text": "a picture of [critrole-jester], a female tiefling cleric. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_20.png"}}, {"text": "a picture of [critrole-jester], a female tiefling cleric. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_21.png"}}, {"text": "a picture of a female tiefling cleric", "output": {"url": "image_22.png"}}, {"text": "a picture of a female tiefling cleric", "output": {"url": "image_23.png"}}, {"text": "a picture of [critrole-nott], a female goblin rogue. 
background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_24.png"}}, {"text": "a picture of [critrole-nott], a female goblin rogue. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_25.png"}}, {"text": "a picture of a female goblin rogue", "output": {"url": "image_26.png"}}, {"text": "a picture of a female goblin rogue", "output": {"url": "image_27.png"}}, {"text": "a picture of [critrole-veth], a female halfling rogue/wizard. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_28.png"}}, {"text": "a picture of [critrole-veth], a female halfling rogue/wizard. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_29.png"}}, {"text": "a picture of a female halfling rogue/wizard", "output": {"url": "image_30.png"}}, {"text": "a picture of a female halfling rogue/wizard", "output": {"url": "image_31.png"}}, {"text": "a picture of [critrole-yasha], a female aasimar barbarian. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_32.png"}}, {"text": "a picture of [critrole-yasha], a female aasimar barbarian. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_33.png"}}, {"text": "a picture of a female aasimar barbarian", "output": {"url": "image_34.png"}}, {"text": "a picture of a female aasimar barbarian", "output": {"url": "image_35.png"}}, {"text": "a picture of [critrole-mollymauk], a male tiefling blood hunter. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_36.png"}}, {"text": "a picture of [critrole-mollymauk], a male tiefling blood hunter. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_37.png"}}, {"text": "a picture of a male tiefling blood hunter", "output": {"url": "image_38.png"}}, {"text": "a picture of a male tiefling blood hunter", "output": {"url": "image_39.png"}}, {"text": "a picture of [critrole-essek], a male drow wizard. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_40.png"}}, {"text": "a picture of [critrole-essek], a male drow wizard. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_41.png"}}, {"text": "a picture of a male drow wizard", "output": {"url": "image_42.png"}}, {"text": "a picture of a male drow wizard", "output": {"url": "image_43.png"}}]} | cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.7.0 | null | [
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | null | 2024-04-25T02:55:18+00:00 | [] | [] | TAGS
#diffusers #text-to-image #stable-diffusion-xl #stable-diffusion-xl-diffusers #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us
|
# Critical Dream - cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.7.0
<Gallery />
## Model description
These are cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.7.0 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0, for the purposes of
generating images for the Critical Dream
project.
The weights were trained using DreamBooth.
LoRA for the text encoder was enabled: True.
Special VAE used for training: stabilityai/sdxl-vae.
## Trigger words
You should use "a picture of [dm-matt-mercer], a dungeon master. background is a forest. fantasy art style, high quality, highly detailed, sharp focus" to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab.
## Tracker run link
URL
## Intended uses & limitations
#### How to use
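A minimal sketch of loading these LoRA weights with diffusers (an assumption based on the standard SDXL LoRA-loading API, not an official snippet from this repository):

```python
# Sketch: load SDXL base, apply these LoRA weights, and generate with the trigger prompt.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.7.0")

prompt = (
    "a picture of [dm-matt-mercer], a dungeon master. background is a forest. "
    "fantasy art style, high quality, highly detailed, sharp focus"
)
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("dm-matt-mercer.png")
```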
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | [
"# Critical Dream - cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.7.0\n\n<Gallery />",
"## Model description\n\nThese are cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.7.0 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0, for the purposes of\ngenerating images for the Critical Dream\nproject.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: True.\n\nSpecial VAE used for training: stabilityai/sdxl-vae.",
"## Trigger words\n\nYou should use a picture of [dm-matt-mercer], a dungeon master. background is a forest. fantasy art style, high quality, highly detailed, sharp focus\" to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.",
"## Tracker run link\n\nURL",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] | [
"TAGS\n#diffusers #text-to-image #stable-diffusion-xl #stable-diffusion-xl-diffusers #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n",
"# Critical Dream - cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.7.0\n\n<Gallery />",
"## Model description\n\nThese are cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.7.0 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0, for the purposes of\ngenerating images for the Critical Dream\nproject.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: True.\n\nSpecial VAE used for training: stabilityai/sdxl-vae.",
"## Trigger words\n\nYou should use a picture of [dm-matt-mercer], a dungeon master. background is a forest. fantasy art style, high quality, highly detailed, sharp focus\" to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.",
"## Tracker run link\n\nURL",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] |
text-generation | transformers |
# Keiana-L3-Test4.7-8B-3
Keiana-L3-Test4.7-8B-3 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [jeiku/Average_Normie_l3_v1_8B](https://huggingface.co/jeiku/Average_Normie_l3_v1_8B)
* [Kaoeiri/Keiana-L3-Test4.6-8B-2](https://huggingface.co/Kaoeiri/Keiana-L3-Test4.6-8B-2)
## 🧩 Configuration
```yaml
merge_method: task_arithmetic
dtype: float16
base_model: ResplendentAI/SOVL_Llama3_8B
models:
- model: jeiku/Average_Normie_l3_v1_8B
parameters:
weight: 1.0
- model: Kaoeiri/Keiana-L3-Test4.6-8B-2
parameters:
weight: 1.0
```
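For context, `task_arithmetic` merges models by adding weighted "task vectors" (each fine-tune's delta from the base) onto the base model; here both fine-tunes enter at weight 1.0 on top of SOVL_Llama3_8B. A conceptual per-tensor sketch (illustration only, not mergekit's actual implementation):

```python
# Conceptual task-arithmetic merge for one parameter tensor (not mergekit's actual code).
import torch

def task_arithmetic(base: torch.Tensor, finetunes: list[torch.Tensor], weights: list[float]) -> torch.Tensor:
    merged = base.clone()
    for ft, w in zip(finetunes, weights):
        merged += w * (ft - base)  # add the weighted task vector
    return merged
```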
## 💻 Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Kaoeiri/Keiana-L3-Test4.7-8B-3"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build a chat-formatted prompt and run it through a text-generation pipeline.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"tags": ["merge", "mergekit", "lazymergekit", "jeiku/Average_Normie_l3_v1_8B", "Kaoeiri/Keiana-L3-Test4.6-8B-2"], "base_model": ["jeiku/Average_Normie_l3_v1_8B", "Kaoeiri/Keiana-L3-Test4.6-8B-2"]} | Kaoeiri/Keiana-L3-Test4.7-8B-3 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"jeiku/Average_Normie_l3_v1_8B",
"Kaoeiri/Keiana-L3-Test4.6-8B-2",
"conversational",
"base_model:jeiku/Average_Normie_l3_v1_8B",
"base_model:Kaoeiri/Keiana-L3-Test4.6-8B-2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-25T02:56:02+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #jeiku/Average_Normie_l3_v1_8B #Kaoeiri/Keiana-L3-Test4.6-8B-2 #conversational #base_model-jeiku/Average_Normie_l3_v1_8B #base_model-Kaoeiri/Keiana-L3-Test4.6-8B-2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Keiana-L3-Test4.7-8B-3
Keiana-L3-Test4.7-8B-3 is a merge of the following models using LazyMergekit:
* jeiku/Average_Normie_l3_v1_8B
* Kaoeiri/Keiana-L3-Test4.6-8B-2
## Configuration
## Usage
| [
"# Keiana-L3-Test4.7-8B-3\n\nKeiana-L3-Test4.7-8B-3 is a merge of the following models using LazyMergekit:\n* jeiku/Average_Normie_l3_v1_8B\n* Kaoeiri/Keiana-L3-Test4.6-8B-2",
"## Configuration",
"## Usage"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #jeiku/Average_Normie_l3_v1_8B #Kaoeiri/Keiana-L3-Test4.6-8B-2 #conversational #base_model-jeiku/Average_Normie_l3_v1_8B #base_model-Kaoeiri/Keiana-L3-Test4.6-8B-2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Keiana-L3-Test4.7-8B-3\n\nKeiana-L3-Test4.7-8B-3 is a merge of the following models using LazyMergekit:\n* jeiku/Average_Normie_l3_v1_8B\n* Kaoeiri/Keiana-L3-Test4.6-8B-2",
"## Configuration",
"## Usage"
] |
text-generation | transformers |
I am really enjoying this version of Cinder; more information is coming. The training data includes Cinder character-specific data as well as a mix of RAG-generated Q&A on world knowledge and STEM topics. I supplemented the Cinder character with an abbreviated Samantha dataset edited for Cinder, with a lot of the negative responses removed.
Model Overview: Cinder is an AI chatbot tailored for engaging users in scientific and educational conversations, offering companionship, and sparking imaginative exploration.

## Model Summary
The Phi-3-Mini-4K-Instruct is a 3.8B parameters, lightweight, state-of-the-art open model trained with the Phi-3 datasets that includes both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties.
The model belongs to the Phi-3 family with the Mini version in two variants [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) which is the context length (in tokens) that it can support.
The model has undergone a post-training process that incorporates both supervised fine-tuning and direct preference optimization for instruction following and safety measures.
When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct showcased a robust and state-of-the-art performance among models with less than 13 billion parameters.
Resources and Technical Documentation:
+ [Phi-3 Microsoft Blog](https://aka.ms/phi3blog-april)
+ [Phi-3 Technical Report](https://aka.ms/phi3-tech-report)
+ [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai)
+ Phi-3 GGUF: [4K](https://aka.ms/Phi3-mini-4k-instruct-gguf)
+ Phi-3 ONNX: [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx)
## Intended Uses
**Primary use cases**
The model is intended for commercial and research use in English. The model provides uses for applications which require:
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.
**Use case considerations**
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
## How to Use
Phi-3 Mini-4K-Instruct has been integrated in the development version (4.40.0) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
* Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source.
The current `transformers` version can be verified with: `pip list | grep transformers`.
Phi-3 Mini-4K-Instruct is also available in [HuggingChat](https://aka.ms/try-phi3-hf-chat).
### Chat Format
Given the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows.
You can provide the prompt as a question with a generic template as follows:
```markdown
<|user|>\nQuestion <|end|>\n<|assistant|>
```
For example:
```markdown
<|system|>
You are a helpful AI assistant.<|end|>
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
where the model generates the text after `<|assistant|>` . In case of few-shots prompt, the prompt can be formatted as the following:
```markdown
<|system|>
You are a helpful AI assistant.<|end|>
<|user|>
I am going to Paris, what should I see?<|end|>
<|assistant|>
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|>
<|user|>
What is so great about #1?<|end|>
<|assistant|>
```
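In practice, the tokenizer's chat template can render this format for you (a sketch; the exact string depends on the template bundled with the tokenizer):

```python
# Sketch: render chat messages with the tokenizer's template; output should match the format above.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct", trust_remote_code=True)
messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "How to explain Internet for a medieval knight?"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # expected shape: <|system|>...<|end|><|user|>...<|end|><|assistant|>
```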
### Sample inference code
This code snippet shows how to quickly get started with running the model on a GPU:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
torch.random.manual_seed(0)
model = AutoModelForCausalLM.from_pretrained(
"microsoft/Phi-3-mini-4k-instruct",
device_map="cuda",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
messages = [
{"role": "system", "content": "You are a helpful digital assistant. Please provide safe, ethical and accurate information to the user."},
{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
{"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
{"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
generation_args = {
"max_new_tokens": 500,
"return_full_text": False,
"temperature": 0.0,
"do_sample": False,
}
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
## Responsible AI Considerations
Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:
+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
## Training
### Model
* Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines.
* Inputs: Text. It is best suited for prompts using chat format.
* Context length: 4K tokens
* GPUs: 512 H100-80G
* Training time: 7 days
* Training data: 3.3T tokens
* Outputs: Generated text in response to the input
* Dates: Our models were trained between February and April 2024
* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.
### Datasets
Our training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of
1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.
### Fine-tuning
A basic example of multi-GPU supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/sample_finetune.py).
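For orientation, here is a minimal single-GPU SFT sketch with TRL (illustrative only; the dataset name is a placeholder and the linked script above is the authoritative recipe):

```python
# Illustrative SFT sketch with TRL; dataset and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTTrainer

model_name = "microsoft/Phi-3-mini-4k-instruct"
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name)
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # column holding raw training text
    max_seq_length=4096,        # matches the model's 4K context
)
trainer.train()
```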
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation="eager" (see the sketch after this list)
* CPU: use the **GGUF** quantized models [4K](https://aka.ms/Phi3-mini-4k-instruct-gguf)
+ Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx)
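For the NVIDIA V100 / older-GPU case, a minimal sketch of the eager-attention fallback (same model ID as in the sample inference code above):

```python
# Sketch: load without flash attention for GPUs that don't support it.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    torch_dtype="auto",
    trust_remote_code=True,
    attn_implementation="eager",  # fall back from flash attention
)
```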
## Cross Platform Support
ONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model [here](https://aka.ms/phi3-mini-4k-instruct-onnx).
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs.
Along with DirectML, ONNX Runtime provides cross platform support for Phi-3 across a range of devices CPU, GPU, and mobile.
Here are some of the optimized configurations we have added:
1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN
## License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-4k/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
| {"language": ["en"], "license": "mit", "tags": ["nlp", "code"], "license_link": "https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE", "pipeline_tag": "text-generation", "widget": [{"text": "<|system|>\nYou are a helpful assistant.<|end|>\n<|user|>\n"}]} | Josephgflowers/Phi-3-mini-4k-instruct-Cinder-with-16bit-GGUF | null | [
"transformers",
"safetensors",
"gguf",
"phi3",
"text-generation",
"nlp",
"code",
"conversational",
"custom_code",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T02:56:17+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #gguf #phi3 #text-generation #nlp #code #conversational #custom_code #en #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
I am really enjoying this version of Cinder; more information is coming. The training data includes Cinder character-specific data as well as a mix of RAG-generated Q&A on world knowledge and STEM topics. I supplemented the Cinder character with an abbreviated Samantha dataset edited for Cinder, with a lot of the negative responses removed.
Model Overview: Cinder is an AI chatbot tailored for engaging users in scientific and educational conversations, offering companionship, and sparking imaginative exploration.
!image/png
## Model Summary
The Phi-3-Mini-4K-Instruct is a 3.8B parameters, lightweight, state-of-the-art open model trained with the Phi-3 datasets that includes both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties.
The model belongs to the Phi-3 family with the Mini version in two variants 4K and 128K which is the context length (in tokens) that it can support.
The model has undergone a post-training process that incorporates both supervised fine-tuning and direct preference optimization for instruction following and safety measures.
When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct showcased a robust and state-of-the-art performance among models with less than 13 billion parameters.
Resources and Technical Documentation:
+ Phi-3 Microsoft Blog
+ Phi-3 Technical Report
+ Phi-3 on Azure AI Studio
+ Phi-3 GGUF: 4K
+ Phi-3 ONNX: 4K
## Intended Uses
Primary use cases
The model is intended for commercial and research use in English. The model provides uses for applications which require:
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.
Use case considerations
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
## How to Use
Phi-3 Mini-4K-Instruct has been integrated in the development version (4.40.0) of 'transformers'. Until the official version is released through 'pip', ensure that you are doing one of the following:
* When loading the model, ensure that 'trust_remote_code=True' is passed as an argument of the 'from_pretrained()' function.
* Update your local 'transformers' to the development version: 'pip uninstall -y transformers && pip install git+URL The previous command is an alternative to cloning and installing from the source.
The current 'transformers' version can be verified with: 'pip list | grep transformers'.
Phi-3 Mini-4K-Instruct is also available in HuggingChat.
### Chat Format
Given the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows.
You can provide the prompt as a question with a generic template as follows:
For example:
where the model generates the text after '<|assistant|>' . In case of few-shots prompt, the prompt can be formatted as the following:
### Sample inference code
This code snippet shows how to quickly get started with running the model on a GPU:
## Responsible AI Considerations
Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:
+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
## Training
### Model
* Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines.
* Inputs: Text. It is best suited for prompts using chat format.
* Context length: 4K tokens
* GPUs: 512 H100-80G
* Training time: 7 days
* Training data: 3.3T tokens
* Outputs: Generated text in response to the input
* Dates: Our models were trained between February and April 2024
* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.
### Datasets
Our training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of
1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.
### Fine-tuning
A basic example of multi-GPU supervised fine-tuning (SFT) with TRL and Accelerate modules is provided here.
## Software
* PyTorch
* DeepSpeed
* Transformers
* Flash-Attention
## Hardware
Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation="eager"
* CPU: use the GGUF quantized models 4K
+ Optimized inference on GPU, CPU, and Mobile: use the ONNX models 4K
## Cross Platform Support
ONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model here.
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs.
Along with DirectML, ONNX Runtime provides cross platform support for Phi-3 across a range of devices CPU, GPU, and mobile.
Here are some of the optimized configurations we have added:
1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN
## License
The model is licensed under the MIT license.
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
| [
"## Model Summary\n\nThe Phi-3-Mini-4K-Instruct is a 3.8B parameters, lightweight, state-of-the-art open model trained with the Phi-3 datasets that includes both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties.\nThe model belongs to the Phi-3 family with the Mini version in two variants 4K and 128K which is the context length (in tokens) that it can support.\n\nThe model has underwent a post-training process that incorporates both supervised fine-tuning and direct preference optimization for the instruction following and safety measures.\nWhen assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct showcased a robust and state-of-the-art performance among models with less than 13 billion parameters.\n\nResources and Technical Documentation:\n\n+ Phi-3 Microsoft Blog\n+ Phi-3 Technical Report\n+ Phi-3 on Azure AI Studio\n+ Phi-3 GGUF: 4K\n+ Phi-3 ONNX: 4K",
"## Intended Uses\n\nPrimary use cases\n\nThe model is intended for commercial and research use in English. The model provides uses for applications which require:\n\n1) Memory/compute constrained environments\n2) Latency bound scenarios\n3) Strong reasoning (especially code, math and logic)\n\nOur model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features. \n\nUse case considerations\n\nOur models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fariness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.\n\nNothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.",
"## How to Use\n\nPhi-3 Mini-4K-Instruct has been integrated in the development version (4.40.0) of 'transformers'. Until the official version is released through 'pip', ensure that you are doing one of the following:\n\n* When loading the model, ensure that 'trust_remote_code=True' is passed as an argument of the 'from_pretrained()' function.\n\n* Update your local 'transformers' to the development version: 'pip uninstall -y transformers && pip install git+URL The previous command is an alternative to cloning and installing from the source.\n\nThe current 'transformers' version can be verified with: 'pip list | grep transformers'.\n\nPhi-3 Mini-4K-Instruct is also available in HuggingChat.",
"### Chat Format\n\nGiven the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows. \nYou can provide the prompt as a question with a generic template as follow:\n\nFor example:\n\n\nwhere the model generates the text after '<|assistant|>' . In case of few-shots prompt, the prompt can be formatted as the following:",
"### Sample inference code\n\nThis code snippets show how to get quickly started with running the model on a GPU:",
"## Responsible AI Considerations\n\nLike other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:\n\n+ Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English. \n+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases. \n+ Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case. \n+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated. \n+ Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as \"typing, math, random, collections, datetime, itertools\". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses. \n\nDevelopers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:\n\n+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.\n+ High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context. \n+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG). \n+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case. \n+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.",
"## Training",
"### Model\n\n* Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidlines.\n* Inputs: Text. It is best suited for prompts using chat format.\n* Context length: 4K tokens\n* GPUs: 512 H100-80G\n* Training time: 7 days\n* Training data: 3.3T tokens\n* Outputs: Generated text in response to the input\n* Dates: Our models were trained between February and April 2024\n* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.",
"### Datasets\n\nOur training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of \n1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code; \n2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.); \n3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.",
"### Fine-tuning\n\nA basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided here.",
"## Software\n\n* PyTorch\n* DeepSpeed\n* Transformers\n* Flash-Attention",
"## Hardware\nNote that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:\n* NVIDIA A100\n* NVIDIA A6000\n* NVIDIA H100\n\nIf you want to run the model on:\n* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation=\"eager\"\n* CPU: use the GGUF quantized models 4K\n+ Optimized inference on GPU, CPU, and Mobile: use the ONNX models 4K",
"## Cross Platform Support\n\nONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model here.\n\nOptimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs. \nAlong with DirectML, ONNX Runtime provides cross platform support for Phi-3 across a range of devices CPU, GPU, and mobile.\n\nHere are some of the optimized configurations we have added: \n\n1. ONNX models for int4 DML: Quantized to int4 via AWQ\n2. ONNX model for fp16 CUDA\n3. ONNX model for int4 CUDA: Quantized to int4 via RTN\n4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN",
"## License\n\nThe model is licensed under the MIT license.",
"## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies."
] | [
"TAGS\n#transformers #safetensors #gguf #phi3 #text-generation #nlp #code #conversational #custom_code #en #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"## Model Summary\n\nThe Phi-3-Mini-4K-Instruct is a 3.8B parameters, lightweight, state-of-the-art open model trained with the Phi-3 datasets that includes both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties.\nThe model belongs to the Phi-3 family with the Mini version in two variants 4K and 128K which is the context length (in tokens) that it can support.\n\nThe model has underwent a post-training process that incorporates both supervised fine-tuning and direct preference optimization for the instruction following and safety measures.\nWhen assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct showcased a robust and state-of-the-art performance among models with less than 13 billion parameters.\n\nResources and Technical Documentation:\n\n+ Phi-3 Microsoft Blog\n+ Phi-3 Technical Report\n+ Phi-3 on Azure AI Studio\n+ Phi-3 GGUF: 4K\n+ Phi-3 ONNX: 4K",
"## Intended Uses\n\nPrimary use cases\n\nThe model is intended for commercial and research use in English. The model provides uses for applications which require:\n\n1) Memory/compute constrained environments\n2) Latency bound scenarios\n3) Strong reasoning (especially code, math and logic)\n\nOur model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features. \n\nUse case considerations\n\nOur models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fariness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.\n\nNothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.",
"## How to Use\n\nPhi-3 Mini-4K-Instruct has been integrated in the development version (4.40.0) of 'transformers'. Until the official version is released through 'pip', ensure that you are doing one of the following:\n\n* When loading the model, ensure that 'trust_remote_code=True' is passed as an argument of the 'from_pretrained()' function.\n\n* Update your local 'transformers' to the development version: 'pip uninstall -y transformers && pip install git+URL The previous command is an alternative to cloning and installing from the source.\n\nThe current 'transformers' version can be verified with: 'pip list | grep transformers'.\n\nPhi-3 Mini-4K-Instruct is also available in HuggingChat.",
"### Chat Format\n\nGiven the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows. \nYou can provide the prompt as a question with a generic template as follow:\n\nFor example:\n\n\nwhere the model generates the text after '<|assistant|>' . In case of few-shots prompt, the prompt can be formatted as the following:",
"### Sample inference code\n\nThis code snippets show how to get quickly started with running the model on a GPU:",
"## Responsible AI Considerations\n\nLike other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:\n\n+ Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English. \n+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases. \n+ Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case. \n+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated. \n+ Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as \"typing, math, random, collections, datetime, itertools\". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses. \n\nDevelopers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:\n\n+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.\n+ High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context. \n+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG). \n+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case. \n+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.",
"## Training",
"### Model\n\n* Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidlines.\n* Inputs: Text. It is best suited for prompts using chat format.\n* Context length: 4K tokens\n* GPUs: 512 H100-80G\n* Training time: 7 days\n* Training data: 3.3T tokens\n* Outputs: Generated text in response to the input\n* Dates: Our models were trained between February and April 2024\n* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.",
"### Datasets\n\nOur training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of \n1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code; \n2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.); \n3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.",
"### Fine-tuning\n\nA basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided here.",
"## Software\n\n* PyTorch\n* DeepSpeed\n* Transformers\n* Flash-Attention",
"## Hardware\nNote that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:\n* NVIDIA A100\n* NVIDIA A6000\n* NVIDIA H100\n\nIf you want to run the model on:\n* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation=\"eager\"\n* CPU: use the GGUF quantized models 4K\n+ Optimized inference on GPU, CPU, and Mobile: use the ONNX models 4K",
"## Cross Platform Support\n\nONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model here.\n\nOptimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs. \nAlong with DirectML, ONNX Runtime provides cross platform support for Phi-3 across a range of devices CPU, GPU, and mobile.\n\nHere are some of the optimized configurations we have added: \n\n1. ONNX models for int4 DML: Quantized to int4 via AWQ\n2. ONNX model for fp16 CUDA\n3. ONNX model for int4 CUDA: Quantized to int4 via RTN\n4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN",
"## License\n\nThe model is licensed under the MIT license.",
"## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies."
] |
text-generation | transformers | # About this model
This model can handle (limited) TSF content. If your Character Card has a complex plot, maybe you should try another model (maybe one with more parameters?).
- This model has overfit once. I don't expect it to be too good, but it is more stable than [v0.9](https://huggingface.co/Alsebay/Narumashi-11B-v0.9). Version 0.9 is better at roleplay, while this one is better at storytelling.
Do you know TSF, TS, TG? A lot of models don't really know about those themes, so I did some experiments finetuning on a TSF dataset.
- **Finetuned with a Chinese novels dataset (R18) to increase accuracy on the TSF theme, which is not very popular. You should include the Chinese/Japanese words for the tags you want (search for them on pixiv) in your character card to trigger it.
This finetune idea is more suitable for Chinese roleplay than English (because I could only find good Chinese datasets about it 🙃; it would be nice if you opened a discussion about English TSF datasets). But it still affects the model's writing style, so maybe expect less GPT-like responses in both Chinese and English.**
- **Finetuned from model:** Sao10K/Fimbulvetr-11B-v2. Thanks a lot, Sao10K :)
## 8k Context Length BTW, the original Fimbulvetr and Solar only have a 4k context length, so I extended it 😆.
## GGUF version? [Here it is](https://huggingface.co/Alsebay/Narumashi-11B-GGUF).
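If you use the GGUF build, here is a hedged local-inference sketch with llama-cpp-python (the file name is a placeholder, so pick an actual file from that repo, and the Alpaca-style prompt format is an assumption):

```python
# Sketch: run a GGUF quant locally with llama-cpp-python; file name and prompt format are assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="Narumashi-11B.Q4_K_M.gguf",  # placeholder: download a file from the GGUF repo
    n_ctx=8192,  # the card advertises 8k context
)
out = llm("### Instruction:\nWrite a short scene.\n\n### Response:\n", max_tokens=256)
print(out["choices"][0]["text"])
```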
## Dataset
All chinese novels dataset
```
Dataset(all are novels):
60% skinsuit
25% possession
5% transform(shapeshift)
10% other
```
# Thanks to Unsloth for a good finetuning tool. This model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) | {"language": ["en"], "license": "cc-by-nc-4.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft", "Roleplay", "roleplay"], "base_model": "Sao10K/Fimbulvetr-11B-v2"} | Alsebay/Narumashi-11B | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"Roleplay",
"roleplay",
"en",
"base_model:Sao10K/Fimbulvetr-11B-v2",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T02:56:22+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #sft #Roleplay #roleplay #en #base_model-Sao10K/Fimbulvetr-11B-v2 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us
| # About this model
This model can handle (limited) TSF content. If your Character Card has a complex plot, maybe you should try another model (maybe one with more parameters?).
- This model has overfit once. I don't expect it to be too good, but it is more stable than v0.9. Version 0.9 is better at roleplay, while this one is better at storytelling.
Do you know TSF, TS, TG? A lot of models don't really know about those themes, so I did some experiments finetuning on a TSF dataset.
- Finetuned with a Chinese novels dataset (R18) to increase accuracy on the TSF theme, which is not very popular. You should include the Chinese/Japanese words for the tags you want (search for them on pixiv) in your character card to trigger it.
This finetune idea is more suitable for Chinese roleplay than English (because I could only find good Chinese datasets about it; it would be nice if you opened a discussion about English TSF datasets). But it still affects the model's writing style, so maybe expect less GPT-like responses in both Chinese and English.
- Finetuned from model : Sao10K/Fimbulvetr-11B-v2 . Thank Sao10K a lot :)
## 8k Context Length BTW, the original Fimbulvetr and Solar have only 4k context length, so I extended it .
## GGUF version? here is it.
## Dataset
All chinese novels dataset
# Thank Unsloth for good finetuning tool. This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/> | [
"# About this model\n\nThis model can handle (limited) TSF content. If you Character Card have complex plot, maybe you should try other model (maybe bigger parameter?).\n\n- This model have overfit 1 time, I don't expect it too good, but it stable than v0.9. Version 0.9 is better in Roleplay, while this better in storytelling.\n\nDo you know TSF, TS, TG? A lot of model don't really know about that, so I do some experiment to finetune TSF dataset.\n\n- Finetuned with Chinese Novels dataset, to increase the accuracy in TSF theme, which is not quite popular.\n (R18 dataset). You should include chinese/japanese word about tag you want(search it in pixiv) in your character card to trigger it.\n This finetune idea is suitable for Chinese Roleplay than English (Becaue I could only find good Chinese datasets about it , it is nice that if you can open a discussion about English TSF datasets). But it still affect the models writing styles, so maybe less GPT-like response in both Chinese and English?.\n- Finetuned from model : Sao10K/Fimbulvetr-11B-v2 . Thank Sao10K a lot :)",
"## 8k Context Length BTW, the original Fimbulvetr and Solar have only 4k context length, so I extended it .",
"## GGUF version? here is it.",
"## Dataset\nAll chinese novels dataset",
"# Thank Unsloth for good finetuning tool. This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #sft #Roleplay #roleplay #en #base_model-Sao10K/Fimbulvetr-11B-v2 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# About this model\n\nThis model can handle (limited) TSF content. If you Character Card have complex plot, maybe you should try other model (maybe bigger parameter?).\n\n- This model have overfit 1 time, I don't expect it too good, but it stable than v0.9. Version 0.9 is better in Roleplay, while this better in storytelling.\n\nDo you know TSF, TS, TG? A lot of model don't really know about that, so I do some experiment to finetune TSF dataset.\n\n- Finetuned with Chinese Novels dataset, to increase the accuracy in TSF theme, which is not quite popular.\n (R18 dataset). You should include chinese/japanese word about tag you want(search it in pixiv) in your character card to trigger it.\n This finetune idea is suitable for Chinese Roleplay than English (Becaue I could only find good Chinese datasets about it , it is nice that if you can open a discussion about English TSF datasets). But it still affect the models writing styles, so maybe less GPT-like response in both Chinese and English?.\n- Finetuned from model : Sao10K/Fimbulvetr-11B-v2 . Thank Sao10K a lot :)",
"## 8k Context Length BTW, the original Fimbulvetr and Solar have only 4k context length, so I extended it .",
"## GGUF version? here is it.",
"## Dataset\nAll chinese novels dataset",
"# Thank Unsloth for good finetuning tool. This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-to-image | diffusers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
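Since the repository metadata identifies this as a diffusers `StableDiffusionXLPipeline` checkpoint, a minimal loading sketch might look like the following. The inference settings are assumptions based on the "hyper_4step" name, not documented values:

```python
# Minimal sketch for loading this SDXL checkpoint with diffusers.
# num_inference_steps/guidance_scale are assumptions from the repo name.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "ghunkins/juggernautXL_hyper_4step",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "a photo of an astronaut riding a horse",
    num_inference_steps=4,  # "4step" checkpoints target few-step sampling
    guidance_scale=0.0,     # few-step SDXL variants typically use low/no CFG
).images[0]
image.save("out.png")
```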
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "diffusers"} | ghunkins/juggernautXL_hyper_4step | null | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | null | 2024-04-25T02:56:54+00:00 | [
"1910.09700"
] | [] | TAGS
#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionXLPipeline #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionXLPipeline #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-350m_ft_test4
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
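For illustration, the listed settings correspond roughly to the following `TrainingArguments`; this is a sketch reconstructed from the list above, not the original training script (the dataset and any other options are unknown):

```python
# Sketch of TrainingArguments matching the hyperparameters listed above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="opt-350m_ft_test4",  # assumed output name
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```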
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "opt-350m_ft_test4", "results": []}]} | underactuated/opt-350m_ft_test4 | null | [
"transformers",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-25T03:00:26+00:00 | [] | [] | TAGS
#transformers #safetensors #opt #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# opt-350m_ft_test4
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| [
"# opt-350m_ft_test4\n\nThis model was trained from scratch on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #opt #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# opt-350m_ft_test4\n\nThis model was trained from scratch on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
text-to-speech | transformers |
[X(Twitter) アカウント](https://twitter.com/peony__snow)

# このモデルの長所は幼げなおっとりしたボイス生成を商用・非商用問わず無料で自由に使える点です。
# The advantage of this model is that its childish, easygoing voice generation can be used freely, free of charge, for both commercial and non-commercial purposes.
このモデルはRikkaBotanのスイートバージョンです。
セリフの読み上げに適しています。
もしもっと硬く話してほしい場合は、[coolバージョン](https://huggingface.co/RikkaBotan/style_bert_vits2_jp_extra_cool_original)
英語で話してほしい場合は[englishバージョン](https://huggingface.co/RikkaBotan/style_bert_vits2_english_original)
ささやき声で話してほしい場合は[ASMRバージョン](https://huggingface.co/RikkaBotan/style_bert_vits2_jp_extra_asmr_original)
を試してみてください。
This model is the sweet version.
It is suitable for reading emotional text such as character dialogue.
If you want a stiffer, more formal delivery, try the [cool version](https://huggingface.co/RikkaBotan/style_bert_vits2_jp_extra_cool_original).
If you want it to speak English, try the [English version](https://huggingface.co/RikkaBotan/style_bert_vits2_english_original).
If you want it to speak in a whisper, try the [ASMR version](https://huggingface.co/RikkaBotan/style_bert_vits2_jp_extra_asmr_original).
# モデルのサンプル音声/sample voice
このモデルのサンプル音声①です
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/6629ba7d59854b02da014f64/REXsvPirk6F_PVp3oKLp-.mpga"></audio>
このモデルのサンプル音声②です。
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/6629ba7d59854b02da014f64/0Xr9KOdkjd-qj5xnR5vJv.mpga"></audio>
# モデルの説明/model description
このモデルはTTS(text-to-speech)モデルである、
style_bert_vits2_jp_extraを独自の音声データで学習させたモデルです。
style_bert_vits2_jp_extraは日本語に特化した音声生成モデルであり、
これまでのモデルと比較して高精度かつ自然な音声生成が可能となっています。
学習データはモデルを作成した研究者本人の音声のみであるため、
ライセンスはstyle_bert_vits2_jp_extraと同様に
商用・非商用問わず、自由に無料でご使用いただけます。
This model is a TTS (text-to-speech) model:
style_bert_vits2_jp_extra trained on my own voice data.
style_bert_vits2_jp_extra is a speech generation model specialized for Japanese;
compared to previous models, it generates more accurate and natural speech.
Since the training data consists only of the voice of the researcher who created the model,
the license is the same as style_bert_vits2_jp_extra's:
you can use it freely and free of charge, for both commercial and non-commercial purposes.
# モデルを使うときのお約束/limitation
〇できること/What you can do
成果物の加工 Processing of deliverables
成果物の商用利用 Commercial use of deliverables
成果物の学習素材としての利用 Use of deliverables as learning materials
R-18、R-18G表現への利用(ただしゾーニングは必須です(小さなお友達のことをちゃんと考えてあげてね))
Use for R-18 and R-18G expressions (but zoning is required (please think about your little friends))
×できないこと/What you cannot do
音声モデルの二次配布 Secondary distribution of voice models
人を批判・攻撃すること Criticizing or attacking others
特定の政治的立場・宗教・思想への賛同または反対を呼びかけること Calling for support or opposition to a particular political position, religion, or ideology
刺激の強い表現をゾーニングなしで公開すること Publishing R-18 voice without zoning
なりすましなど、提供者に不利益をもたらすこと Impersonation or other acts detrimental to the provider
# 商用利用可能なVRMも作りました。/ VRM(Vroid) model for commercial use
AITuberや動画解説などに用いてください。/Please use this for AITuber and video creations
[VRM(Vroid)Model](https://hub.vroid.com/characters/610722650807128806/models/3779097297253430502)
# できればやって欲しいこと/If you like
X(Twitter)や説明文でこのモデルを使ったことを書いてもらえると作者が喜びます。(必須ではありません)
If you write that you are using this model, I will be glad!
# モデルの使い方/how to use (コードはgoogle colab用です。 For google colab)
2通りの使用方法があります。必要に応じて選択してください。There are 2 ways to use model.
1.style-bert-vits2のアプリを使ってボイスを生成する/to use style-bert-vits2 app
①Style-Bert-VITS2 インストール先の Style-Bert-VITS2/model_assets/rikka_botan/ フォルダに config.json, safetensors, style_vectors.npy の 3ファイルを置きます。
Put the 3 files in the Style-Bert-VITS2/model_assets/rikka_botan/ folder.
以下のプログラムで自動的に保存できます。The following program saves the files automatically.
```python
from google.colab import drive
drive.mount("/content/drive")
%cd /content/drive/MyDrive/
!mkdir Style-Bert-VITS2/
%cd Style-Bert-VITS2/
!mkdir model_assets/
%cd model_assets/
!mkdir rikka_botan/
from huggingface_hub import snapshot_download
model_name = "RikkaBotan/style_bert_vits2_jp_extra_sweet_original"
download_path = snapshot_download(
    repo_id=model_name,
    local_dir="rikka_botan/",
    local_dir_use_symlinks=False,
)
```
②以下のプログラムを実行します execute this program
```python
!git clone https://github.com/litagin02/Style-Bert-VITS2.git
%cd Style-Bert-VITS2/
!pip install -r requirements.txt
!python initialize.py --skip_jvnv
from google.colab import drive
drive.mount("/content/drive")
dataset_root = "/content/drive/MyDrive/Style-Bert-VITS2/Data"
assets_root = "/content/drive/MyDrive/Style-Bert-VITS2/model_assets"
import yaml
with open("configs/paths.yml", "w", encoding="utf-8") as f:
    yaml.dump({"dataset_root": dataset_root, "assets_root": assets_root}, f)
!python app.py --share
```
③public URLにアクセスします。access public url
2.以下のコードを利用します。use this code
```python
# At first, we will install the required libraries
!git clone https://github.com/litagin02/Style-Bert-VITS2.git
%cd Style-Bert-VITS2/
!pip install -r requirements.txt
!pip install style-bert-vits2 --no-build-isolation # To avoid bugs
# load Japanese bert model
from style_bert_vits2.nlp import bert_models
from style_bert_vits2.constants import Languages
bert_models.load_model(Languages.JP, "ku-nlp/deberta-v2-large-japanese-char-wwm")
bert_models.load_tokenizer(Languages.JP, "ku-nlp/deberta-v2-large-japanese-char-wwm")
# save model files to model_assets dir
from pathlib import Path
from huggingface_hub import hf_hub_download
model_file = "rikka_botan_mokyumokyu.safetensors"
config_file = "config.json"
style_file = "style_vectors.npy"
for file in [model_file, config_file, style_file]:
    print(file)
    hf_hub_download(
        "RikkaBotan/style_bert_vits2_jp_extra_sweet_original",
        file,
        local_dir="model_assets",
    )
# By using saved model, we will test text-to-speech demo
from style_bert_vits2.tts_model import TTSModel
assets_root = Path("model_assets")
model = TTSModel(
    model_path=assets_root / model_file,
    config_path=assets_root / config_file,
    style_vec_path=assets_root / style_file,
    device="cuda",  # If you cannot use cuda, please input cpu
)
# Please input the Japanese text
from IPython.display import Audio, display
sr, audio = model.infer(text="ここに文章を入力してください")
display(Audio(audio, rate=sr))
```
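If you want to keep the generated audio rather than just play it inline, it can be written to a WAV file. A minimal sketch, assuming SciPy is available (it ships with Colab):

```python
# Save the generated audio to a WAV file (sr and audio come from the cell above).
from scipy.io import wavfile

wavfile.write("rikka_botan_output.wav", sr, audio)
```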
# 謝辞/Acknowledgments
style-bert-vits2-jp-extraを開発してくださった[litagin](https://huggingface.co/litagin)さんに感謝いたします。
また、本モデルは多くの研究者さんの努力の上にできています。先人たちの努力に深く感謝します。
We would like to thank Mr./Ms. [litagin](https://huggingface.co/litagin) for developing style-bert-vits2-jp-extra.
Additionally, this model was created based on the efforts of many developers. We are deeply grateful for the efforts of our predecessors. | {"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["style-bert-vits2", "style-bert-vits2-jp-extra", "tts", "childish", "childish voice", "japanese", "text2audio", "text-to-audio", "text to audio", "audio"], "pipeline_tag": "text-to-speech"} | RikkaBotan/style_bert_vits2_jp_extra_sweet_original | null | [
"transformers",
"style-bert-vits2",
"style-bert-vits2-jp-extra",
"tts",
"childish",
"childish voice",
"japanese",
"text2audio",
"text-to-audio",
"text to audio",
"audio",
"text-to-speech",
"ja",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T03:01:28+00:00 | [] | [
"ja"
] | TAGS
#transformers #style-bert-vits2 #style-bert-vits2-jp-extra #tts #childish #childish voice #japanese #text2audio #text-to-audio #text to audio #audio #text-to-speech #ja #license-cc-by-sa-4.0 #endpoints_compatible #region-us
|
X(Twitter) アカウント
!image/png
# このモデルの長所は幼げなおっとりしたボイス生成を商用・非商用問わず無料で自由に使える点です。
# The advantage of this model is that you can freely use the childish and unapologetic voice generation for free, both commercial and non-commercial.
このモデルはRikkaBotanのスイートバージョンです。
セリフの読み上げに適しています。
もしもっと硬く話してほしい場合は、coolバージョン
英語で話してほしい場合はenglishバージョン
ささやき声で話してほしい場合はASMRバージョン
を試してみてください。
This model is sweet version.
It is suitable for reading emotional text.
If you want them to speak more descriptively, try the cool version.
If you want them to speak in English, try the English version
If you want them to speak whisper voice, try the ASMR version.
# モデルのサンプル音声/sample voice
このモデルのサンプル音声①です
<audio controls src="URL
このモデルのサンプル音声②です。
<audio controls src="URL
# モデルの説明/model description
このモデルはTTS(text-to-speech)モデルである、
style_bert_vits2_jp_extraを独自の音声データで学習させたモデルです。
style_bert_vits2_jp_extraは日本語に特化した音声生成モデルであり、
これまでのモデルと比較して高精度かつ自然な音声生成が可能となっています。
学習データはモデルを作成した研究者本人の音声のみであるため、
ライセンスはstyle_bert_vits2_jp_extraと同様に
商用・非商用問わず、自由に無料でご使用いただけます。
This model is a TTS (text-to-speech) model.
This is a model that has trained style_bert_vits2_jp_extra with my own voice data.
style_bert_vits2_jp_extra is a speech generation model specialized for Japanese.
Compared to previous models, it is possible to generate highly accurate and natural speech.
Since the training data is only the voice of the researcher who created the model,
The license is the same as style_bert_vits2_jp_extra
You can use it freely and free of charge, regardless of whether it is commercial or non-commercial.
# モデルを使うときのお約束/limitation
〇できること/What you can do
成果物の加工 Processing of deliverables
成果物の商用利用 Commercial use of deliverables
成果物の学習素材としての利用 Use of deliverables as learning materials
R-18、R-18G表現への利用(ただしゾーニングは必須です(小さなお友達のことをちゃんと考えてあげてね))
Use for R-18 and R-18G expressions (but zoning is required (please think about your little friends))
×できないこと/What you cannot do
音声モデルの二次配布 Secondary distribution of voice models
人を批判・攻撃すること Criticizing or attacking others
特定の政治的立場・宗教・思想への賛同または反対を呼びかけること Calling for support or opposition to a particular political position, religion, or ideology
刺激の強い表現をゾーニングなしで公開すること Publishing R-18 voice without zoning
なりすましなど、提供者に不利益をもたらすこと detrimental to the provider
# 商用利用可能なVRMも作りました。/ VRM(Vroid) model for commercial use
AITuberや動画解説などに用いてください。/Please use this for AITuber and video creations
VRM(Vroid)Model
# できればやって欲しいこと/If you like
X(Twitter)や説明文でこのモデルを使ったことを書いてもらえると作者が喜びます。(必須ではありません)
If you write that you are using this model, I will be glad!
# モデルの使い方/how to use (コードはgoogle colab用です。 For google colab)
2通りの使用方法があります。必要に応じて選択してください。There are 2 ways to use model.
1.style-bert-vits2のアプリを使ってボイスを生成する/to use style-bert-vits2 app
①Style-Bert-VITS2 インストール先の Style-Bert-VITS2/model_assets/rikka_botan/ フォルダに URL, safetensors, style_vectors.npy の 3ファイルを置きます。
Put 3 files on Style-Bert-VITS2/model_assets/rikka_botan/ folder
以下のプログラムで自動的に保存できます。By using this program, we can save files.
②以下のプログラムを実行します execute this program
③public URLにアクセスします。access public url
2.以下のコードを利用します。use this code
# 謝辞/Acknowledgments
style-bert-vits2-jp-extraを開発してくださったlitaginさんに感謝いたします。
また、本モデルは多くの研究者さんの努力の上にできています。先人たちの努力に深く感謝します。
We would like to thank Mr./Ms. litagin for developing style-bert-vits2-jp-extra.
Additionally, this model was created based on the efforts of many developers. We are deeply grateful for the efforts of our predecessors. | [
"# このモデルの長所は幼げなおっとりしたボイス生成を商用・非商用問わず無料で自由に使える点です。",
"# The advantage of this model is that you can freely use the childish and unapologetic voice generation for free, both commercial and non-commercial.\n\nこのモデルはRikkaBotanのスイートバージョンです。\nセリフの読み上げに適しています。\nもしもっと硬く話してほしい場合は、coolバージョン\n英語で話してほしい場合はenglishバージョン\nささやき声で話してほしい場合はASMRバージョン\nを試してみてください。\n\nThis model is sweet version.\nIt is suitable for reading emotional text.\nIf you want them to speak more descriptively, try the cool version.\nIf you want them to speak in English, try the English version\nIf you want them to speak whisper voice, try the ASMR version.",
"# モデルのサンプル音声/sample voice\n\nこのモデルのサンプル音声①です\n\n<audio controls src=\"URL\n\nこのモデルのサンプル音声②です。\n\n<audio controls src=\"URL",
"# モデルの説明/model description\n\nこのモデルはTTS(text-to-speech)モデルである、\nstyle_bert_vits2_jp_extraを独自の音声データで学習させたモデルです。\nstyle_bert_vits2_jp_extraは日本語に特化した音声生成モデルであり、\nこれまでのモデルと比較して高精度かつ自然な音声生成が可能となっています。\n学習データはモデルを作成した研究者本人の音声のみであるため、\nライセンスはstyle_bert_vits2_jp_extraと同様に\n商用・非商用問わず、自由に無料でご使用いただけます。\n\nThis model is a TTS (text-to-speech) model.\nThis is a model that has trained style_bert_vits2_jp_extra with my own voice data.\nstyle_bert_vits2_jp_extra is a speech generation model specialized for Japanese.\nCompared to previous models, it is possible to generate highly accurate and natural speech.\nSince the training data is only the voice of the researcher who created the model,\nThe license is the same as style_bert_vits2_jp_extra\nYou can use it freely and free of charge, regardless of whether it is commercial or non-commercial.",
"# モデルを使うときのお約束/limitation\n\n〇できること/What you can do\n\n成果物の加工 Processing of deliverables\n\n成果物の商用利用 Commercial use of deliverables\n\n成果物の学習素材としての利用 Use of deliverables as learning materials\n\nR-18、R-18G表現への利用(ただしゾーニングは必須です(小さなお友達のことをちゃんと考えてあげてね))\n\nUse for R-18 and R-18G expressions (but zoning is required (please think about your little friends))\n\n\n×できないこと/What you cannot do\n\n音声モデルの二次配布 Secondary distribution of voice models\n\n人を批判・攻撃すること Criticizing or attacking others\n\n特定の政治的立場・宗教・思想への賛同または反対を呼びかけること Calling for support or opposition to a particular political position, religion, or ideology\n\n刺激の強い表現をゾーニングなしで公開すること Publishing R-18 voice without zoning\n\nなりすましなど、提供者に不利益をもたらすこと detrimental to the provider",
"# 商用利用可能なVRMも作りました。/ VRM(Vroid) model for commercial use\n\n\nAITuberや動画解説などに用いてください。/Please use this for AITuber and video creations\n\n\nVRM(Vroid)Model",
"# できればやって欲しいこと/If you like\n\nX(Twitter)や説明文でこのモデルを使ったことを書いてもらえると作者が喜びます。(必須ではありません)\nIf you write that you are using this model, I will be glad!",
"# モデルの使い方/how to use (コードはgoogle colab用です。 For google colab)\n\n\n2通りの使用方法があります。必要に応じて選択してください。There are 2 ways to use model.\n\n\n1.style-bert-vits2のアプリを使ってボイスを生成する/to use style-bert-vits2 app\n\n\n①Style-Bert-VITS2 インストール先の Style-Bert-VITS2/model_assets/rikka_botan/ フォルダに URL, safetensors, style_vectors.npy の 3ファイルを置きます。\nPut 3 files on Style-Bert-VITS2/model_assets/rikka_botan/ folder\n以下のプログラムで自動的に保存できます。By using this program, we can save files.\n\n\n②以下のプログラムを実行します execute this program\n\n\n③public URLにアクセスします。access public url\n\n2.以下のコードを利用します。use this code",
"# 謝辞/Acknowledgments\nstyle-bert-vits2-jp-extraを開発してくださったlitaginさんに感謝いたします。\nまた、本モデルは多くの研究者さんの努力の上にできています。先人たちの努力に深く感謝します。\nWe would like to thank Mr./Ms. litagin for developing style-bert-vits2-jp-extra.\nAdditionally, this model was created based on the efforts of many developers. We are deeply grateful for the efforts of our predecessors."
] | [
"TAGS\n#transformers #style-bert-vits2 #style-bert-vits2-jp-extra #tts #childish #childish voice #japanese #text2audio #text-to-audio #text to audio #audio #text-to-speech #ja #license-cc-by-sa-4.0 #endpoints_compatible #region-us \n",
"# このモデルの長所は幼げなおっとりしたボイス生成を商用・非商用問わず無料で自由に使える点です。",
"# The advantage of this model is that you can freely use the childish and unapologetic voice generation for free, both commercial and non-commercial.\n\nこのモデルはRikkaBotanのスイートバージョンです。\nセリフの読み上げに適しています。\nもしもっと硬く話してほしい場合は、coolバージョン\n英語で話してほしい場合はenglishバージョン\nささやき声で話してほしい場合はASMRバージョン\nを試してみてください。\n\nThis model is sweet version.\nIt is suitable for reading emotional text.\nIf you want them to speak more descriptively, try the cool version.\nIf you want them to speak in English, try the English version\nIf you want them to speak whisper voice, try the ASMR version.",
"# モデルのサンプル音声/sample voice\n\nこのモデルのサンプル音声①です\n\n<audio controls src=\"URL\n\nこのモデルのサンプル音声②です。\n\n<audio controls src=\"URL",
"# モデルの説明/model description\n\nこのモデルはTTS(text-to-speech)モデルである、\nstyle_bert_vits2_jp_extraを独自の音声データで学習させたモデルです。\nstyle_bert_vits2_jp_extraは日本語に特化した音声生成モデルであり、\nこれまでのモデルと比較して高精度かつ自然な音声生成が可能となっています。\n学習データはモデルを作成した研究者本人の音声のみであるため、\nライセンスはstyle_bert_vits2_jp_extraと同様に\n商用・非商用問わず、自由に無料でご使用いただけます。\n\nThis model is a TTS (text-to-speech) model.\nThis is a model that has trained style_bert_vits2_jp_extra with my own voice data.\nstyle_bert_vits2_jp_extra is a speech generation model specialized for Japanese.\nCompared to previous models, it is possible to generate highly accurate and natural speech.\nSince the training data is only the voice of the researcher who created the model,\nThe license is the same as style_bert_vits2_jp_extra\nYou can use it freely and free of charge, regardless of whether it is commercial or non-commercial.",
"# モデルを使うときのお約束/limitation\n\n〇できること/What you can do\n\n成果物の加工 Processing of deliverables\n\n成果物の商用利用 Commercial use of deliverables\n\n成果物の学習素材としての利用 Use of deliverables as learning materials\n\nR-18、R-18G表現への利用(ただしゾーニングは必須です(小さなお友達のことをちゃんと考えてあげてね))\n\nUse for R-18 and R-18G expressions (but zoning is required (please think about your little friends))\n\n\n×できないこと/What you cannot do\n\n音声モデルの二次配布 Secondary distribution of voice models\n\n人を批判・攻撃すること Criticizing or attacking others\n\n特定の政治的立場・宗教・思想への賛同または反対を呼びかけること Calling for support or opposition to a particular political position, religion, or ideology\n\n刺激の強い表現をゾーニングなしで公開すること Publishing R-18 voice without zoning\n\nなりすましなど、提供者に不利益をもたらすこと detrimental to the provider",
"# 商用利用可能なVRMも作りました。/ VRM(Vroid) model for commercial use\n\n\nAITuberや動画解説などに用いてください。/Please use this for AITuber and video creations\n\n\nVRM(Vroid)Model",
"# できればやって欲しいこと/If you like\n\nX(Twitter)や説明文でこのモデルを使ったことを書いてもらえると作者が喜びます。(必須ではありません)\nIf you write that you are using this model, I will be glad!",
"# モデルの使い方/how to use (コードはgoogle colab用です。 For google colab)\n\n\n2通りの使用方法があります。必要に応じて選択してください。There are 2 ways to use model.\n\n\n1.style-bert-vits2のアプリを使ってボイスを生成する/to use style-bert-vits2 app\n\n\n①Style-Bert-VITS2 インストール先の Style-Bert-VITS2/model_assets/rikka_botan/ フォルダに URL, safetensors, style_vectors.npy の 3ファイルを置きます。\nPut 3 files on Style-Bert-VITS2/model_assets/rikka_botan/ folder\n以下のプログラムで自動的に保存できます。By using this program, we can save files.\n\n\n②以下のプログラムを実行します execute this program\n\n\n③public URLにアクセスします。access public url\n\n2.以下のコードを利用します。use this code",
"# 謝辞/Acknowledgments\nstyle-bert-vits2-jp-extraを開発してくださったlitaginさんに感謝いたします。\nまた、本モデルは多くの研究者さんの努力の上にできています。先人たちの努力に深く感謝します。\nWe would like to thank Mr./Ms. litagin for developing style-bert-vits2-jp-extra.\nAdditionally, this model was created based on the efforts of many developers. We are deeply grateful for the efforts of our predecessors."
] |
image-text-to-text | xtuner |
<div align="center">
<img src="https://github.com/InternLM/lmdeploy/assets/36994684/0cf8d00f-e86b-40ba-9b54-dc8f1bc6c8d8" width="600"/>
[](https://github.com/InternLM/xtuner)
</div>
## Model
llava-phi-3-mini is a LLaVA model fine-tuned from [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [CLIP-ViT-Large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) with [ShareGPT4V-PT](https://huggingface.co/datasets/Lin-Chen/ShareGPT4V) and [InternVL-SFT](https://github.com/OpenGVLab/InternVL/tree/main/internvl_chat#prepare-training-datasets) by [XTuner](https://github.com/InternLM/xtuner).
**Note: This model is in official LLaVA format.**
Resources:
- GitHub: [xtuner](https://github.com/InternLM/xtuner)
- HuggingFace LLaVA format model: [xtuner/llava-phi-3-mini-hf](https://huggingface.co/xtuner/llava-phi-3-mini-hf)
- GGUF LLaVA model: [xtuner/llava-phi-3-mini-gguf](https://huggingface.co/xtuner/llava-phi-3-mini-gguf)
- XTuner LLaVA format model: [xtuner/llava-phi-3-mini-xtuner](https://huggingface.co/xtuner/llava-phi-3-mini-xtuner)
## Details
| Model | Visual Encoder | Projector | Resolution | Pretraining Strategy | Fine-tuning Strategy | Pretrain Dataset | Fine-tune Dataset | Pretrain Epoch | Fine-tune Epoch |
| :-------------------- | ------------------: | --------: | ---------: | ---------------------: | ------------------------: | ------------------------: | -----------------------: | -------------- | --------------- |
| LLaVA-v1.5-7B | CLIP-L | MLP | 336 | Frozen LLM, Frozen ViT | Full LLM, Frozen ViT | LLaVA-PT (558K) | LLaVA-Mix (665K) | 1 | 1 |
| LLaVA-Llama-3-8B | CLIP-L | MLP | 336 | Frozen LLM, Frozen ViT | Full LLM, LoRA ViT | LLaVA-PT (558K) | LLaVA-Mix (665K) | 1 | 1 |
| LLaVA-Llama-3-8B-v1.1 | CLIP-L | MLP | 336 | Frozen LLM, Frozen ViT | Full LLM, LoRA ViT | ShareGPT4V-PT (1246K) | InternVL-SFT (1268K) | 1 | 1 |
| **LLaVA-Phi-3-mini** | CLIP-L | MLP | 336 | Frozen LLM, Frozen ViT | Full LLM, Full ViT | ShareGPT4V-PT (1246K) | InternVL-SFT (1268K) | 1 | 2 |
## Results
<div align="center">
<img src="https://github.com/InternLM/xtuner/assets/36994684/78524f65-260d-4ae3-a687-03fc5a19dcbb" alt="Image" width="500" />
</div>
| Model | MMBench Test (EN) | MMMU Val | SEED-IMG | AI2D Test | ScienceQA Test | HallusionBench aAcc | POPE | GQA | TextVQA | MME | MMStar |
| :-------------------- | :---------------: | :-------: | :------: | :-------: | :------------: | :-----------------: | :--: | :--: | :-----: | :------: | :----: |
| LLaVA-v1.5-7B | 66.5 | 35.3 | 60.5 | 54.8 | 70.4 | 44.9 | 85.9 | 62.0 | 58.2 | 1511/348 | 30.3 |
| LLaVA-Llama-3-8B | 68.9 | 36.8 | 69.8 | 60.9 | 73.3 | 47.3 | 87.2 | 63.5 | 58.0 | 1506/295 | 38.2 |
| LLaVA-Llama-3-8B-v1.1 | 72.3 | 37.1 | 70.1 | 70.0 | 72.9 | 47.7 | 86.4 | 62.6 | 59.0 | 1469/349 | 45.1 |
| **LLaVA-Phi-3-mini** | 69.2 | 41.4 | 70.0 | 69.3 | 73.7 | 49.8 | 87.3 | 61.5 | 57.8 | 1477/313 | 43.7 |
## Quickstart
### Chat by LLaVA official library
1. Install the official LLaVA library
```bash
pip install git+https://github.com/haotian-liu/LLaVA.git
```
2. Chat with the script below
<details>
<summary>cli.py</summary>
```python
import argparse
from io import BytesIO

import requests
import torch
from llava.constants import DEFAULT_IMAGE_TOKEN, IMAGE_TOKEN_INDEX
from llava.conversation import Conversation, SeparatorStyle
from llava.mm_utils import process_images, tokenizer_image_token
from llava.model import LlavaLlamaForCausalLM
from PIL import Image
from transformers import (AutoTokenizer, BitsAndBytesConfig, StoppingCriteria,
                          StoppingCriteriaList, TextStreamer)


def load_image(image_file):
    if image_file.startswith('http://') or image_file.startswith('https://'):
        response = requests.get(image_file)
        image = Image.open(BytesIO(response.content)).convert('RGB')
    else:
        image = Image.open(image_file).convert('RGB')
    return image


class StopWordStoppingCriteria(StoppingCriteria):
    """StopWord stopping criteria."""

    def __init__(self, tokenizer, stop_word):
        self.tokenizer = tokenizer
        self.stop_word = stop_word
        self.length = len(self.stop_word)

    def __call__(self, input_ids, *args, **kwargs) -> bool:
        cur_text = self.tokenizer.decode(input_ids[0])
        cur_text = cur_text.replace('\r', '').replace('\n', '')
        return cur_text[-self.length:] == self.stop_word


def get_stop_criteria(tokenizer, stop_words=[]):
    stop_criteria = StoppingCriteriaList()
    for word in stop_words:
        stop_criteria.append(StopWordStoppingCriteria(tokenizer, word))
    return stop_criteria


def main(args):
    kwargs = {'device_map': args.device}
    if args.load_8bit:
        kwargs['load_in_8bit'] = True
    elif args.load_4bit:
        kwargs['load_in_4bit'] = True
        kwargs['quantization_config'] = BitsAndBytesConfig(
            load_in_4bit=True,
            bnb_4bit_compute_dtype=torch.float16,
            bnb_4bit_use_double_quant=True,
            bnb_4bit_quant_type='nf4')
    else:
        kwargs['torch_dtype'] = torch.float16

    tokenizer = AutoTokenizer.from_pretrained(args.model_path)
    model = LlavaLlamaForCausalLM.from_pretrained(
        args.model_path, low_cpu_mem_usage=True, **kwargs)
    vision_tower = model.get_vision_tower()
    if not vision_tower.is_loaded:
        vision_tower.load_model(device_map=args.device)
    image_processor = vision_tower.image_processor

    conv = Conversation(
        system='<|system|>\nAnswer the questions.',
        roles=('<|user|>\n', '<|assistant|>\n'),
        messages=[],
        offset=0,
        sep_style=SeparatorStyle.MPT,
        sep='<|end|>',
    )
    roles = conv.roles

    image = load_image(args.image_file)
    image_size = image.size
    image_tensor = process_images([image], image_processor, model.config)
    if type(image_tensor) is list:
        image_tensor = [
            image.to(model.device, dtype=torch.float16)
            for image in image_tensor
        ]
    else:
        image_tensor = image_tensor.to(model.device, dtype=torch.float16)

    while True:
        try:
            inp = input(f'{roles[0]}: ')
        except EOFError:
            inp = ''
        if not inp:
            print('exit...')
            break

        print(f'{roles[1]}: ', end='')

        if image is not None:
            # First turn: prepend the image token, then consume the image.
            inp = DEFAULT_IMAGE_TOKEN + '\n' + inp
            image = None

        conv.append_message(conv.roles[0], inp)
        conv.append_message(conv.roles[1], None)
        prompt = conv.get_prompt()

        input_ids = tokenizer_image_token(
            prompt, tokenizer, IMAGE_TOKEN_INDEX,
            return_tensors='pt').unsqueeze(0).to(model.device)
        stop_criteria = get_stop_criteria(
            tokenizer=tokenizer, stop_words=[conv.sep])

        streamer = TextStreamer(
            tokenizer, skip_prompt=True, skip_special_tokens=True)

        with torch.inference_mode():
            output_ids = model.generate(
                input_ids,
                images=image_tensor,
                image_sizes=[image_size],
                do_sample=True if args.temperature > 0 else False,
                temperature=args.temperature,
                max_new_tokens=args.max_new_tokens,
                streamer=streamer,
                stopping_criteria=stop_criteria,
                use_cache=True)
        outputs = tokenizer.decode(output_ids[0]).strip()
        conv.messages[-1][-1] = outputs

        if args.debug:
            print('\n', {'prompt': prompt, 'outputs': outputs}, '\n')


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--model-path', type=str, default='xtuner/llava-llama-3-8b-v1_1-hf')
    parser.add_argument('--image-file', type=str, required=True)
    parser.add_argument('--device', type=str, default='auto')
    parser.add_argument('--temperature', type=float, default=0.2)
    parser.add_argument('--max-new-tokens', type=int, default=512)
    parser.add_argument('--load-8bit', action='store_true')
    parser.add_argument('--load-4bit', action='store_true')
    parser.add_argument('--debug', action='store_true')
    args = parser.parse_args()
    main(args)
```
</details>
```
python ./cli.py --model-path xtuner/llava-phi-3-mini --image-file https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg --load-4bit
```
### Reproduce
Please refer to [docs](https://github.com/InternLM/xtuner/tree/main/xtuner/configs/llava/phi3_mini_4k_instruct_clip_vit_large_p14_336#readme).
## Citation
```bibtex
@misc{2023xtuner,
title={XTuner: A Toolkit for Efficiently Fine-tuning LLM},
author={XTuner Contributors},
howpublished = {\url{https://github.com/InternLM/xtuner}},
year={2023}
}
```
| {"library_name": "xtuner", "datasets": ["Lin-Chen/ShareGPT4V"], "pipeline_tag": "image-text-to-text"} | xtuner/llava-phi-3-mini | null | [
"xtuner",
"safetensors",
"llava_llama",
"image-text-to-text",
"dataset:Lin-Chen/ShareGPT4V",
"region:us"
] | null | 2024-04-25T03:01:40+00:00 | [] | [] | TAGS
#xtuner #safetensors #llava_llama #image-text-to-text #dataset-Lin-Chen/ShareGPT4V #region-us
|


Quickstart
----------
### Chat by LLaVA official library
1. Install official LLaVA library
2. Chat by below script
URL
### Reproduce
Please refer to docs.
| [
"### Chat by LLaVA official library\n\n\n1. Install official LLaVA library\n2. Chat by below script\n\n\n\nURL",
"### Reproduce\n\n\nPlease refer to docs."
] | [
"TAGS\n#xtuner #safetensors #llava_llama #image-text-to-text #dataset-Lin-Chen/ShareGPT4V #region-us \n",
"### Chat by LLaVA official library\n\n\n1. Install official LLaVA library\n2. Chat by below script\n\n\n\nURL",
"### Reproduce\n\n\nPlease refer to docs."
] |
text-to-speech | transformers |
[X(Twitter) アカウント](https://twitter.com/peony__snow)

# このモデルの長所は幼げなおっとりしたボイス生成を商用・非商用問わず無料で自由に使える点です。
# The advantage of this model is that its childish, easygoing voice generation can be used freely, free of charge, for both commercial and non-commercial purposes.
このモデルはRikkaBotanのクールバージョンです。
説明文の読み上げに適しています。
もしもっと感情的に話してほしい場合は、[sweetバージョン](https://huggingface.co/RikkaBotan/style_bert_vits2_jp_extra_sweet_original)
英語で話してほしい場合は[英語バージョン](https://huggingface.co/RikkaBotan/style_bert_vits2_english_original)
ささやき声で話してほしい場合は[ASMRバージョン](https://huggingface.co/RikkaBotan/style_bert_vits2_jp_extra_asmr_original)
を試してみてください。
This model is the cool version.
It is suitable for reading explanatory text.
If you want it to speak more emotionally, try the [sweet version](https://huggingface.co/RikkaBotan/style_bert_vits2_jp_extra_sweet_original).
If you want it to speak English, try the [English version](https://huggingface.co/RikkaBotan/style_bert_vits2_english_original).
If you want it to speak in a whisper, try the [ASMR version](https://huggingface.co/RikkaBotan/style_bert_vits2_jp_extra_asmr_original).
# モデルのサンプル音声/sample voice
このモデルのサンプル音声①です
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/6629ba7d59854b02da014f64/fVMp_-vqhveARz63qYcUI.mpga"></audio>
このモデルのサンプル音声②です。
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/6629ba7d59854b02da014f64/cfD9OE7NgljswKRPSBaJm.mpga"></audio>
# モデルの説明/model description
このモデルはTTS(text-to-speech)モデルである、
style_bert_vits2_jp_extraを独自の音声データで学習させたモデルです。
style_bert_vits2_jp_extraは日本語に特化した音声生成モデルであり、
これまでのモデルと比較して高精度かつ自然な音声生成が可能となっています。
学習データはモデルを作成した研究者本人の音声のみであるため、
ライセンスはstyle_bert_vits2_jp_extraと同様に
商用・非商用問わず、自由に無料でご使用いただけます。
This model is a TTS (text-to-speech) model:
style_bert_vits2_jp_extra trained on my own voice data.
style_bert_vits2_jp_extra is a speech generation model specialized for Japanese;
compared to previous models, it generates more accurate and natural speech.
Since the training data consists only of the voice of the researcher who created the model,
the license is the same as style_bert_vits2_jp_extra's:
you can use it freely and free of charge, for both commercial and non-commercial purposes.
# モデルを使うときのお約束/limitation
〇できること/What you can do
成果物の加工 Processing of deliverables
成果物の商用利用 Commercial use of deliverables
成果物の学習素材としての利用 Use of deliverables as learning materials
R-18、R-18G表現への利用(ただしゾーニングは必須です(小さなお友達のことをちゃんと考えてあげてね))
Use for R-18 and R-18G expressions (but zoning is required (please think about your little friends))
×できないこと/What you cannot do
音声モデルの二次配布 Secondary distribution of voice models
人を批判・攻撃すること Criticizing or attacking others
特定の政治的立場・宗教・思想への賛同または反対を呼びかけること Calling for support or opposition to a particular political position, religion, or ideology
刺激の強い表現をゾーニングなしで公開すること Publishing R-18 voice without zoning
なりすましなど、提供者に不利益をもたらすこと Impersonation or other acts detrimental to the provider
# 商用利用可能なVRMも作りました。/ VRM(Vroid) model for commercial use
AITuberや動画解説などに用いてください。/Please use this for AITuber and video creations
[VRM(Vroid)Model](https://hub.vroid.com/characters/610722650807128806/models/3779097297253430502)
# できればやって欲しいこと/If you like
X(Twitter)や説明文でこのモデルを使ったことを書いてもらえると作者が喜びます。(必須ではありません)
If you write that you are using this model, I will be glad!
# モデルの使い方/how to use (コードはgoogle colab用です。 For google colab)
2通りの使用方法があります。必要に応じて選択してください。There are 2 ways to use model.
1.style-bert-vits2のアプリを使ってボイスを生成する/to use style-bert-vits2 app
①Style-Bert-VITS2 インストール先の Style-Bert-VITS2/model_assets/rikka_botan/ フォルダに config.json, safetensors, style_vectors.npy の 3ファイルを置きます。
Put the 3 files in the Style-Bert-VITS2/model_assets/rikka_botan/ folder.
以下のプログラムで自動的に保存できます。The following program saves the files automatically.
```python
from google.colab import drive
drive.mount("/content/drive")
%cd /content/drive/MyDrive/
!mkdir Style-Bert-VITS2/
%cd Style-Bert-VITS2/
!mkdir model_assets/
%cd model_assets/
!mkdir rikka_botan/
from huggingface_hub import snapshot_download
model_name = "RikkaBotan/style_bert_vits2_jp_extra_cool_original"
download_path = snapshot_download(
    repo_id=model_name,
    local_dir="rikka_botan/",
    local_dir_use_symlinks=False,
)
```
②以下のプログラムを実行します execute this program
```python
!git clone https://github.com/litagin02/Style-Bert-VITS2.git
%cd Style-Bert-VITS2/
!pip install -r requirements.txt
!python initialize.py --skip_jvnv
from google.colab import drive
drive.mount("/content/drive")
dataset_root = "/content/drive/MyDrive/Style-Bert-VITS2/Data"
assets_root = "/content/drive/MyDrive/Style-Bert-VITS2/model_assets"
import yaml
with open("configs/paths.yml", "w", encoding="utf-8") as f:
    yaml.dump({"dataset_root": dataset_root, "assets_root": assets_root}, f)
!python app.py --share
```
③public URLにアクセスします。access public url
2.以下のコードを利用します。use this code
```python
# At first, we will install the required libraries
!git clone https://github.com/litagin02/Style-Bert-VITS2.git
%cd Style-Bert-VITS2/
!pip install -r requirements.txt
!pip install style-bert-vits2 --no-build-isolation # To avoid bugs
# load Japanese bert model
from style_bert_vits2.nlp import bert_models
from style_bert_vits2.constants import Languages
bert_models.load_model(Languages.JP, "ku-nlp/deberta-v2-large-japanese-char-wwm")
bert_models.load_tokenizer(Languages.JP, "ku-nlp/deberta-v2-large-japanese-char-wwm")
# save model files to model_assets dir
from pathlib import Path
from huggingface_hub import hf_hub_download
model_file = "rikka_botan_cool.safetensors"
config_file = "config.json"
style_file = "style_vectors.npy"
for file in [model_file, config_file, style_file]:
    print(file)
    hf_hub_download(
        "RikkaBotan/style_bert_vits2_jp_extra_cool_original",
        file,
        local_dir="model_assets",
    )
# By using saved model, we will test text-to-speech demo
from style_bert_vits2.tts_model import TTSModel
assets_root = Path("model_assets")
model = TTSModel(
    model_path=assets_root / model_file,
    config_path=assets_root / config_file,
    style_vec_path=assets_root / style_file,
    device="cuda",  # If you cannot use cuda, please input cpu
)
# Please input the Japanese text
from IPython.display import Audio, display
sr, audio = model.infer(text="ここに文章を入力してください")
display(Audio(audio, rate=sr))
```
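If you want to keep the generated audio rather than just play it inline, it can be written to a WAV file. A minimal sketch, assuming SciPy is available (it ships with Colab):

```python
# Save the generated audio to a WAV file (sr and audio come from the cell above).
from scipy.io import wavfile

wavfile.write("rikka_botan_output.wav", sr, audio)
```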
# 謝辞/Acknowledgments
style-bert-vits2-jp-extraを開発してくださった[litagin](https://huggingface.co/litagin)さんに感謝いたします。
また、本モデルは多くの研究者さんの努力の上にできています。先人たちの努力に深く感謝します。
We would like to thank Mr./Ms. [litagin](https://huggingface.co/litagin) for developing style-bert-vits2-jp-extra.
Additionally, this model was created based on the efforts of many developers. We are deeply grateful for the efforts of our predecessors. | {"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["style-bert-vits2", "style-bert-vits2-jp-extra", "tts", "childish", "childish voice", "japanese", "text2audio", "text-to-audio", "text to audio", "audio"], "pipeline_tag": "text-to-speech"} | RikkaBotan/style_bert_vits2_jp_extra_cool_original | null | [
"transformers",
"style-bert-vits2",
"style-bert-vits2-jp-extra",
"tts",
"childish",
"childish voice",
"japanese",
"text2audio",
"text-to-audio",
"text to audio",
"audio",
"text-to-speech",
"ja",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T03:01:53+00:00 | [] | [
"ja"
] | TAGS
#transformers #style-bert-vits2 #style-bert-vits2-jp-extra #tts #childish #childish voice #japanese #text2audio #text-to-audio #text to audio #audio #text-to-speech #ja #license-cc-by-sa-4.0 #endpoints_compatible #region-us
|
X(Twitter) アカウント
!image/png
# このモデルの長所は幼げなおっとりしたボイス生成を商用・非商用問わず無料で自由に使える点です。
# The advantage of this model is that you can freely use the childish and unapologetic voice generation for free, both commercial and non-commercial.
このモデルはRikkaBotanのクールバージョンです。
説明文の読み上げに適しています。
もしもっと感情的に話してほしい場合は、sweetバージョン
英語で話してほしい場合は英語バージョン
ささやき声で話してほしい場合はASMRバージョン
を試してみてください。
This model is cool version.
It is suitable for reading explanatory text.
If you want them to speak more emotionally, try the sweet version
If you want them to speak in English, try the English version
If you want them to speak whisper voice, try the ASMR version.
# モデルのサンプル音声/sample voice
このモデルのサンプル音声①です
<audio controls src="URL
このモデルのサンプル音声②です。
<audio controls src="URL
# モデルの説明/model description
このモデルはTTS(text-to-speech)モデルである、
style_bert_vits2_jp_extraを独自の音声データで学習させたモデルです。
style_bert_vits2_jp_extraは日本語に特化した音声生成モデルであり、
これまでのモデルと比較して高精度かつ自然な音声生成が可能となっています。
学習データはモデルを作成した研究者本人の音声のみであるため、
ライセンスはstyle_bert_vits2_jp_extraと同様に
商用・非商用問わず、自由に無料でご使用いただけます。
This model is a TTS (text-to-speech) model.
This is a model that has trained style_bert_vits2_jp_extra with my own voice data.
style_bert_vits2_jp_extra is a speech generation model specialized for Japanese.
Compared to previous models, it is possible to generate highly accurate and natural speech.
Since the training data is only the voice of the researcher who created the model,
The license is the same as style_bert_vits2_jp_extra
You can use it freely and free of charge, regardless of whether it is commercial or non-commercial.
# モデルを使うときのお約束/limitation
〇できること/What you can do
成果物の加工 Processing of deliverables
成果物の商用利用 Commercial use of deliverables
成果物の学習素材としての利用 Use of deliverables as learning materials
R-18、R-18G表現への利用(ただしゾーニングは必須です(小さなお友達のことをちゃんと考えてあげてね))
Use for R-18 and R-18G expressions (but zoning is required (please think about your little friends))
×できないこと/What you cannot do
音声モデルの二次配布 Secondary distribution of voice models
人を批判・攻撃すること Criticizing or attacking others
特定の政治的立場・宗教・思想への賛同または反対を呼びかけること Calling for support or opposition to a particular political position, religion, or ideology
刺激の強い表現をゾーニングなしで公開すること Publishing R-18 voice without zoning
なりすましなど、提供者に不利益をもたらすこと detrimental to the provider
# 商用利用可能なVRMも作りました。/ VRM(Vroid) model for commercial use
AITuberや動画解説などに用いてください。/Please use this for AITuber and video creations
VRM(Vroid)Model
# できればやって欲しいこと/If you like
X(Twitter)や説明文でこのモデルを使ったことを書いてもらえると作者が喜びます。(必須ではありません)
If you write that you are using this model, I will be glad!
# モデルの使い方/how to use (コードはgoogle colab用です。 For google colab)
2通りの使用方法があります。必要に応じて選択してください。There are 2 ways to use model.
1.style-bert-vits2のアプリを使ってボイスを生成する/to use style-bert-vits2 app
①Style-Bert-VITS2 インストール先の Style-Bert-VITS2/model_assets/rikka_botan/ フォルダに URL, safetensors, style_vectors.npy の 3ファイルを置きます。
Put 3 files on Style-Bert-VITS2/model_assets/rikka_botan/ folder
以下のプログラムで自動的に保存できます。By using this program, we can save files.
②以下のプログラムを実行します execute this program
③public URLにアクセスします。access public url
2.以下のコードを利用します。use this code
# 謝辞/Acknowledgments
style-bert-vits2-jp-extraを開発してくださったlitaginさんに感謝いたします。
また、本モデルは多くの研究者さんの努力の上にできています。先人たちの努力に深く感謝します。
We would like to thank Mr./Ms. litagin for developing style-bert-vits2-jp-extra.
Additionally, this model was created based on the efforts of many developers. We are deeply grateful for the efforts of our predecessors. | [
"# このモデルの長所は幼げなおっとりしたボイス生成を商用・非商用問わず無料で自由に使える点です。",
"# The advantage of this model is that you can freely use the childish and unapologetic voice generation for free, both commercial and non-commercial.\n\nこのモデルはRikkaBotanのクールバージョンです。\n説明文の読み上げに適しています。\nもしもっと感情的に話してほしい場合は、sweetバージョン\n英語で話してほしい場合は英語バージョン\nささやき声で話してほしい場合はASMRバージョン\nを試してみてください。\n\nThis model is cool version.\nIt is suitable for reading explanatory text.\nIf you want them to speak more emotionally, try the sweet version\nIf you want them to speak in English, try the English version\nIf you want them to speak whisper voice, try the ASMR version.",
"# モデルのサンプル音声/sample voice\n\nこのモデルのサンプル音声①です\n\n<audio controls src=\"URL\n\nこのモデルのサンプル音声②です。\n\n<audio controls src=\"URL",
"# モデルの説明/model description\n\nこのモデルはTTS(text-to-speech)モデルである、\nstyle_bert_vits2_jp_extraを独自の音声データで学習させたモデルです。\nstyle_bert_vits2_jp_extraは日本語に特化した音声生成モデルであり、\nこれまでのモデルと比較して高精度かつ自然な音声生成が可能となっています。\n学習データはモデルを作成した研究者本人の音声のみであるため、\nライセンスはstyle_bert_vits2_jp_extraと同様に\n商用・非商用問わず、自由に無料でご使用いただけます。\n\nThis model is a TTS (text-to-speech) model.\nThis is a model that has trained style_bert_vits2_jp_extra with my own voice data.\nstyle_bert_vits2_jp_extra is a speech generation model specialized for Japanese.\nCompared to previous models, it is possible to generate highly accurate and natural speech.\nSince the training data is only the voice of the researcher who created the model,\nThe license is the same as style_bert_vits2_jp_extra\nYou can use it freely and free of charge, regardless of whether it is commercial or non-commercial.",
"# モデルを使うときのお約束/limitation\n\n〇できること/What you can do\n\n成果物の加工 Processing of deliverables\n\n成果物の商用利用 Commercial use of deliverables\n\n成果物の学習素材としての利用 Use of deliverables as learning materials\n\nR-18、R-18G表現への利用(ただしゾーニングは必須です(小さなお友達のことをちゃんと考えてあげてね))\n\nUse for R-18 and R-18G expressions (but zoning is required (please think about your little friends))\n\n\n×できないこと/What you cannot do\n\n音声モデルの二次配布 Secondary distribution of voice models\n\n人を批判・攻撃すること Criticizing or attacking others\n\n特定の政治的立場・宗教・思想への賛同または反対を呼びかけること Calling for support or opposition to a particular political position, religion, or ideology\n\n刺激の強い表現をゾーニングなしで公開すること Publishing R-18 voice without zoning\n\nなりすましなど、提供者に不利益をもたらすこと detrimental to the provider",
"# 商用利用可能なVRMも作りました。/ VRM(Vroid) model for commercial use\n\n\nAITuberや動画解説などに用いてください。/Please use this for AITuber and video creations\n\n\nVRM(Vroid)Model",
"# できればやって欲しいこと/If you like\nX(Twitter)や説明文でこのモデルを使ったことを書いてもらえると作者が喜びます。(必須ではありません)\nIf you write that you are using this model, I will be glad!",
"# モデルの使い方/how to use (コードはgoogle colab用です。 For google colab)\n\n2通りの使用方法があります。必要に応じて選択してください。There are 2 ways to use model.\n\n1.style-bert-vits2のアプリを使ってボイスを生成する/to use style-bert-vits2 app\n\n①Style-Bert-VITS2 インストール先の Style-Bert-VITS2/model_assets/rikka_botan/ フォルダに URL, safetensors, style_vectors.npy の 3ファイルを置きます。\nPut 3 files on Style-Bert-VITS2/model_assets/rikka_botan/ folder\n\n以下のプログラムで自動的に保存できます。By using this program, we can save files.\n\n\n②以下のプログラムを実行します execute this program\n\n\n③public URLにアクセスします。access public url\n\n2.以下のコードを利用します。use this code",
"# 謝辞/Acknowledgments\nstyle-bert-vits2-jp-extraを開発してくださったlitaginさんに感謝いたします。\nまた、本モデルは多くの研究者さんの努力の上にできています。先人たちの努力に深く感謝します。 \nWe would like to thank Mr./Ms. litagin for developing style-bert-vits2-jp-extra.\n Additionally, this model was created based on the efforts of many developers. We are deeply grateful for the efforts of our predecessors."
] | [
"TAGS\n#transformers #style-bert-vits2 #style-bert-vits2-jp-extra #tts #childish #childish voice #japanese #text2audio #text-to-audio #text to audio #audio #text-to-speech #ja #license-cc-by-sa-4.0 #endpoints_compatible #region-us \n",
"# このモデルの長所は幼げなおっとりしたボイス生成を商用・非商用問わず無料で自由に使える点です。",
"# The advantage of this model is that you can freely use the childish and unapologetic voice generation for free, both commercial and non-commercial.\n\nこのモデルはRikkaBotanのクールバージョンです。\n説明文の読み上げに適しています。\nもしもっと感情的に話してほしい場合は、sweetバージョン\n英語で話してほしい場合は英語バージョン\nささやき声で話してほしい場合はASMRバージョン\nを試してみてください。\n\nThis model is cool version.\nIt is suitable for reading explanatory text.\nIf you want them to speak more emotionally, try the sweet version\nIf you want them to speak in English, try the English version\nIf you want them to speak whisper voice, try the ASMR version.",
"# モデルのサンプル音声/sample voice\n\nこのモデルのサンプル音声①です\n\n<audio controls src=\"URL\n\nこのモデルのサンプル音声②です。\n\n<audio controls src=\"URL",
"# モデルの説明/model description\n\nこのモデルはTTS(text-to-speech)モデルである、\nstyle_bert_vits2_jp_extraを独自の音声データで学習させたモデルです。\nstyle_bert_vits2_jp_extraは日本語に特化した音声生成モデルであり、\nこれまでのモデルと比較して高精度かつ自然な音声生成が可能となっています。\n学習データはモデルを作成した研究者本人の音声のみであるため、\nライセンスはstyle_bert_vits2_jp_extraと同様に\n商用・非商用問わず、自由に無料でご使用いただけます。\n\nThis model is a TTS (text-to-speech) model.\nThis is a model that has trained style_bert_vits2_jp_extra with my own voice data.\nstyle_bert_vits2_jp_extra is a speech generation model specialized for Japanese.\nCompared to previous models, it is possible to generate highly accurate and natural speech.\nSince the training data is only the voice of the researcher who created the model,\nThe license is the same as style_bert_vits2_jp_extra\nYou can use it freely and free of charge, regardless of whether it is commercial or non-commercial.",
"# モデルを使うときのお約束/limitation\n\n〇できること/What you can do\n\n成果物の加工 Processing of deliverables\n\n成果物の商用利用 Commercial use of deliverables\n\n成果物の学習素材としての利用 Use of deliverables as learning materials\n\nR-18、R-18G表現への利用(ただしゾーニングは必須です(小さなお友達のことをちゃんと考えてあげてね))\n\nUse for R-18 and R-18G expressions (but zoning is required (please think about your little friends))\n\n\n×できないこと/What you cannot do\n\n音声モデルの二次配布 Secondary distribution of voice models\n\n人を批判・攻撃すること Criticizing or attacking others\n\n特定の政治的立場・宗教・思想への賛同または反対を呼びかけること Calling for support or opposition to a particular political position, religion, or ideology\n\n刺激の強い表現をゾーニングなしで公開すること Publishing R-18 voice without zoning\n\nなりすましなど、提供者に不利益をもたらすこと detrimental to the provider",
"# 商用利用可能なVRMも作りました。/ VRM(Vroid) model for commercial use\n\n\nAITuberや動画解説などに用いてください。/Please use this for AITuber and video creations\n\n\nVRM(Vroid)Model",
"# できればやって欲しいこと/If you like\nX(Twitter)や説明文でこのモデルを使ったことを書いてもらえると作者が喜びます。(必須ではありません)\nIf you write that you are using this model, I will be glad!",
"# モデルの使い方/how to use (コードはgoogle colab用です。 For google colab)\n\n2通りの使用方法があります。必要に応じて選択してください。There are 2 ways to use model.\n\n1.style-bert-vits2のアプリを使ってボイスを生成する/to use style-bert-vits2 app\n\n①Style-Bert-VITS2 インストール先の Style-Bert-VITS2/model_assets/rikka_botan/ フォルダに URL, safetensors, style_vectors.npy の 3ファイルを置きます。\nPut 3 files on Style-Bert-VITS2/model_assets/rikka_botan/ folder\n\n以下のプログラムで自動的に保存できます。By using this program, we can save files.\n\n\n②以下のプログラムを実行します execute this program\n\n\n③public URLにアクセスします。access public url\n\n2.以下のコードを利用します。use this code",
"# 謝辞/Acknowledgments\nstyle-bert-vits2-jp-extraを開発してくださったlitaginさんに感謝いたします。\nまた、本モデルは多くの研究者さんの努力の上にできています。先人たちの努力に深く感謝します。 \nWe would like to thank Mr./Ms. litagin for developing style-bert-vits2-jp-extra.\n Additionally, this model was created based on the efforts of many developers. We are deeply grateful for the efforts of our predecessors."
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training (a code sketch reproducing it follows below):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
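As a reference, here is a minimal sketch (assumed usage, not part of the original card) that recreates the 4-bit NF4 config above with transformers' `BitsAndBytesConfig` and attaches this repo's LoRA adapter to the TinyLlama base model named in the card metadata:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Field names map one-to-one onto the config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(
    base,
    "bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned-adapters_Aleatoric_tiny_0.8_Seed103",
)
```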
| {"library_name": "peft", "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0"} | bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned-adapters_Aleatoric_tiny_0.8_Seed103 | null | [
"peft",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | 2024-04-25T03:06:23+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
| {"library_name": "peft", "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0"} | bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned_Aleatoric_tiny_0.8_Seed103 | null | [
"peft",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | 2024-04-25T03:06:29+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the eli5_category dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0077
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.024 | 1.0 | 1485 | 0.0142 |
| 0.0132 | 2.0 | 2970 | 0.0086 |
| 0.0084 | 3.0 | 4455 | 0.0077 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
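A minimal generation sketch, not part of the original card. XLM-RoBERTa is an encoder architecture; when fine-tuned for causal LM, transformers serves it through `XLMRobertaForCausalLM`, which `AutoModelForCausalLM` resolves provided the saved config has `is_decoder=True` (an assumption about this checkpoint):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

repo = "liamvbetts/my_awesome_eli5_clm-model"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

prompt = "Somatic hypermutation allows the immune system to"  # ELI5-style prompt
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tok.decode(out[0], skip_special_tokens=True))
```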
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["eli5_category"], "base_model": "FacebookAI/xlm-roberta-base", "model-index": [{"name": "my_awesome_eli5_clm-model", "results": []}]} | liamvbetts/my_awesome_eli5_clm-model | null | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-generation",
"generated_from_trainer",
"dataset:eli5_category",
"base_model:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T03:06:34+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #xlm-roberta #text-generation #generated_from_trainer #dataset-eli5_category #base_model-FacebookAI/xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
| my\_awesome\_eli5\_clm-model
============================
This model is a fine-tuned version of FacebookAI/xlm-roberta-base on the eli5\_category dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0077
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #xlm-roberta #text-generation #generated_from_trainer #dataset-eli5_category #base_model-FacebookAI/xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4331
- Accuracy: 0.4310
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3921 | 0.3626 |
| 1.4036 | 2.0 | 776 | 1.2921 | 0.4284 |
| 1.0099 | 3.0 | 1164 | 1.4331 | 0.4310 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
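For quick inference, a sketch of assumed usage; judging by the name, the labels are presumably review ratings from the muchocine Spanish movie-review corpus, though the card does not say so:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="ironheard/clasificador-muchocine")
print(clf("Una película entretenida, aunque el guion flojea en la segunda mitad."))
```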
| {"tags": ["classification", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mrm8488/electricidad-base-discriminator", "model-index": [{"name": "clasificador-muchocine", "results": []}]} | ironheard/clasificador-muchocine | null | [
"transformers",
"safetensors",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:mrm8488/electricidad-base-discriminator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T03:07:28+00:00 | [] | [] | TAGS
#transformers #safetensors #electra #text-classification #classification #generated_from_trainer #base_model-mrm8488/electricidad-base-discriminator #autotrain_compatible #endpoints_compatible #region-us
| clasificador-muchocine
======================
This model is a fine-tuned version of mrm8488/electricidad-base-discriminator on an unspecified dataset.
It achieves the following results on the evaluation set:
* Loss: 1.4331
* Accuracy: 0.4310
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #electra #text-classification #classification #generated_from_trainer #base_model-mrm8488/electricidad-base-discriminator #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
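The card leaves this section blank. As a hypothetical starter, a vision-encoder-decoder checkpoint can be loaded for image-to-text generation as follows; whether this repo ships an image processor and tokenizer alongside the weights is an assumption:

```python
from PIL import Image
from transformers import AutoImageProcessor, AutoTokenizer, VisionEncoderDecoderModel

repo = "ripaaiii/fine-tune-C1-stage1_5epoch"
model = VisionEncoderDecoderModel.from_pretrained(repo)
processor = AutoImageProcessor.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained(repo)

image = Image.open("sample.png").convert("RGB")  # hypothetical input image
pixel_values = processor(image, return_tensors="pt").pixel_values
ids = model.generate(pixel_values, max_new_tokens=32)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```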
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | ripaaiii/fine-tune-C1-stage1_5epoch | null | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T03:08:49+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-70b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/dolphin-2.9-llama3-70b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
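For example, joining a split quant can be done with `cat part1 part2 > whole` on Unix, or with a short script; the file names below are taken from the Q6_K row in the table that follows:

```python
# Join a two-part GGUF into a single file before loading it.
parts = [
    "dolphin-2.9-llama3-70b.Q6_K.gguf.part1of2",
    "dolphin-2.9-llama3-70b.Q6_K.gguf.part2of2",
]
with open("dolphin-2.9-llama3-70b.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            while chunk := f.read(1 << 24):  # copy in 16 MiB chunks
                out.write(chunk)
```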
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9-llama3-70b-GGUF/resolve/main/dolphin-2.9-llama3-70b.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9-llama3-70b-GGUF/resolve/main/dolphin-2.9-llama3-70b.IQ3_XS.gguf) | IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9-llama3-70b-GGUF/resolve/main/dolphin-2.9-llama3-70b.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9-llama3-70b-GGUF/resolve/main/dolphin-2.9-llama3-70b.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9-llama3-70b-GGUF/resolve/main/dolphin-2.9-llama3-70b.IQ3_M.gguf) | IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9-llama3-70b-GGUF/resolve/main/dolphin-2.9-llama3-70b.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9-llama3-70b-GGUF/resolve/main/dolphin-2.9-llama3-70b.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9-llama3-70b-GGUF/resolve/main/dolphin-2.9-llama3-70b.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9-llama3-70b-GGUF/resolve/main/dolphin-2.9-llama3-70b.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9-llama3-70b-GGUF/resolve/main/dolphin-2.9-llama3-70b.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9-llama3-70b-GGUF/resolve/main/dolphin-2.9-llama3-70b.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9-llama3-70b-GGUF/resolve/main/dolphin-2.9-llama3-70b.Q5_K_M.gguf) | Q5_K_M | 50.1 | |
| [PART 1](https://huggingface.co/mradermacher/dolphin-2.9-llama3-70b-GGUF/resolve/main/dolphin-2.9-llama3-70b.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/dolphin-2.9-llama3-70b-GGUF/resolve/main/dolphin-2.9-llama3-70b.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/dolphin-2.9-llama3-70b-GGUF/resolve/main/dolphin-2.9-llama3-70b.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/dolphin-2.9-llama3-70b-GGUF/resolve/main/dolphin-2.9-llama3-70b.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "llama3", "library_name": "transformers", "datasets": ["cognitivecomputations/Dolphin-2.9", "teknium/OpenHermes-2.5", "m-a-p/CodeFeedback-Filtered-Instruction", "cognitivecomputations/dolphin-coder", "cognitivecomputations/samantha-data", "HuggingFaceH4/ultrachat_200k", "microsoft/orca-math-word-problems-200k", "abacusai/SystemChat-1.1", "Locutusque/function-calling-chatml", "internlm/Agent-FLAN"], "base_model": "cognitivecomputations/dolphin-2.9-llama3-70b", "quantized_by": "mradermacher"} | mradermacher/dolphin-2.9-llama3-70b-GGUF | null | [
"transformers",
"gguf",
"en",
"dataset:cognitivecomputations/Dolphin-2.9",
"dataset:teknium/OpenHermes-2.5",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:cognitivecomputations/samantha-data",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:abacusai/SystemChat-1.1",
"dataset:Locutusque/function-calling-chatml",
"dataset:internlm/Agent-FLAN",
"base_model:cognitivecomputations/dolphin-2.9-llama3-70b",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T03:10:49+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #en #dataset-cognitivecomputations/Dolphin-2.9 #dataset-teknium/OpenHermes-2.5 #dataset-m-a-p/CodeFeedback-Filtered-Instruction #dataset-cognitivecomputations/dolphin-coder #dataset-cognitivecomputations/samantha-data #dataset-HuggingFaceH4/ultrachat_200k #dataset-microsoft/orca-math-word-problems-200k #dataset-abacusai/SystemChat-1.1 #dataset-Locutusque/function-calling-chatml #dataset-internlm/Agent-FLAN #base_model-cognitivecomputations/dolphin-2.9-llama3-70b #license-llama3 #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #en #dataset-cognitivecomputations/Dolphin-2.9 #dataset-teknium/OpenHermes-2.5 #dataset-m-a-p/CodeFeedback-Filtered-Instruction #dataset-cognitivecomputations/dolphin-coder #dataset-cognitivecomputations/samantha-data #dataset-HuggingFaceH4/ultrachat_200k #dataset-microsoft/orca-math-word-problems-200k #dataset-abacusai/SystemChat-1.1 #dataset-Locutusque/function-calling-chatml #dataset-internlm/Agent-FLAN #base_model-cognitivecomputations/dolphin-2.9-llama3-70b #license-llama3 #endpoints_compatible #region-us \n"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_ablation_4iters_bs128_nodpo_iter_3
This model is a fine-tuned version of [ShenaoZhang/0.001_ablation_4iters_bs128_nodpo_iter_2](https://huggingface.co/ShenaoZhang/0.001_ablation_4iters_bs128_nodpo_iter_2) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZhang/0.001_ablation_4iters_bs128_nodpo_iter_2", "model-index": [{"name": "0.001_ablation_4iters_bs128_nodpo_iter_3", "results": []}]} | ShenaoZhang/0.001_ablation_4iters_bs128_nodpo_iter_3 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZhang/0.001_ablation_4iters_bs128_nodpo_iter_2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-25T03:12:11+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ShenaoZhang/0.001_ablation_4iters_bs128_nodpo_iter_2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# 0.001_ablation_4iters_bs128_nodpo_iter_3
This model is a fine-tuned version of ShenaoZhang/0.001_ablation_4iters_bs128_nodpo_iter_2 on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| [
"# 0.001_ablation_4iters_bs128_nodpo_iter_3\n\nThis model is a fine-tuned version of ShenaoZhang/0.001_ablation_4iters_bs128_nodpo_iter_2 on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 128\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ShenaoZhang/0.001_ablation_4iters_bs128_nodpo_iter_2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# 0.001_ablation_4iters_bs128_nodpo_iter_3\n\nThis model is a fine-tuned version of ShenaoZhang/0.001_ablation_4iters_bs128_nodpo_iter_2 on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 128\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] |
text-to-image | diffusers | # BTD ARTSTYLE XL
<Gallery />
## Trigger words
You should use `Manhwa` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/ORILIN024/BTD_ART_STYLEXL/tree/main) them in the Files & versions tab.
| {"tags": ["text-to-image", "stable-diffusion", "lora", "diffusers", "template:sd-lora"], "widget": [{"text": "-", "output": {"url": "images/52ab3039395ef8f7d68cd028bcb5b40f_high.webp"}}], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "BTD"} | ORILIN024/BTD_ART_STYLEXL | null | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | null | 2024-04-25T03:13:54+00:00 | [] | [] | TAGS
#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #region-us
| # BTD ARTSTYLE XL
<Gallery />
## Trigger words
You should use 'Manhwa' to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab.
| [
"# BTD ARTSTYLE XL\n\n<Gallery />",
"## Trigger words\n\nYou should use 'Manhwa' to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab."
] | [
"TAGS\n#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #region-us \n",
"# BTD ARTSTYLE XL\n\n<Gallery />",
"## Trigger words\n\nYou should use 'Manhwa' to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab."
] |
null | null |
# NikolayKozloff/Orbita-v0.1-Q5_K_M-GGUF
This model was converted to GGUF format from [`Orbina/Orbita-v0.1`](https://huggingface.co/Orbina/Orbita-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Orbina/Orbita-v0.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Orbita-v0.1-Q5_K_M-GGUF --model orbita-v0.1.Q5_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo NikolayKozloff/Orbita-v0.1-Q5_K_M-GGUF --model orbita-v0.1.Q5_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m orbita-v0.1.Q5_K_M.gguf -n 128
```
| {"language": ["tr"], "license": "apache-2.0", "tags": ["llama-cpp", "gguf-my-repo"], "model-index": [{"name": "Orbita-v0.1", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge TR", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc", "value": 30.15, "name": "accuracy"}]}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag TR", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc", "value": 37.95, "name": "accuracy"}]}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU TR", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 47.94, "name": "accuracy"}]}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA TR", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "acc", "value": 41.93, "name": "accuracy"}]}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande TR", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 54.42, "name": "accuracy"}]}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k TR", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 47.72, "name": "accuracy"}]}]}]} | NikolayKozloff/Orbita-v0.1-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"tr",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2024-04-25T03:14:18+00:00 | [] | [
"tr"
] | TAGS
#gguf #llama-cpp #gguf-my-repo #tr #license-apache-2.0 #model-index #region-us
|
# NikolayKozloff/Orbita-v0.1-Q5_K_M-GGUF
This model was converted to GGUF format from 'Orbina/Orbita-v0.1' using URL via URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo.
| [
"# NikolayKozloff/Orbita-v0.1-Q5_K_M-GGUF\nThis model was converted to GGUF format from 'Orbina/Orbita-v0.1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #tr #license-apache-2.0 #model-index #region-us \n",
"# NikolayKozloff/Orbita-v0.1-Q5_K_M-GGUF\nThis model was converted to GGUF format from 'Orbina/Orbita-v0.1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
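Pending that information, the repository tags (llama, text-generation, conversational) suggest standard transformers chat usage. The snippet below is a hedged sketch rather than author-provided code: the chat template is assumed to follow Llama 3 conventions, and the prompt, dtype, and generation settings are illustrative.

```python
# Hedged sketch: standard transformers usage for a Llama-style chat model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gkMSDA/Llama-3-8b-FinChatGTP298_DJ30_Model_4"  # this repo
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"  # illustrative settings
)

messages = [{"role": "user", "content": "Give a one-line summary of index investing."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```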
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | gkMSDA/Llama-3-8b-FinChatGTP298_DJ30_Model_4 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-25T03:15:09+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | Quantizations of https://huggingface.co/bigscience/bloom-3b
# From original readme
... | {"language": ["en"], "license": "other", "tags": ["transformers", "gguf", "imatrix", "bigscience", "bloom-3b"], "inference": false, "pipeline_tag": "text-generation"} | duyntnet/bloom-3b-imatrix-GGUF | null | [
"transformers",
"gguf",
"imatrix",
"bigscience",
"bloom-3b",
"text-generation",
"en",
"license:other",
"region:us"
] | null | 2024-04-25T03:23:19+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #imatrix #bigscience #bloom-3b #text-generation #en #license-other #region-us
| Quantizations of URL
# From original readme
... | [
"# From original readme\n\n..."
] | [
"TAGS\n#transformers #gguf #imatrix #bigscience #bloom-3b #text-generation #en #license-other #region-us \n",
"# From original readme\n\n..."
] |
text-generation | transformers |
# Keiana-L3-Test4.8-8B-4
# Keep in mind that it's not yet tested, and I'm unsure if it would work as planned.
Keiana-L3-Test4.8-8B-4 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Kaoeiri/Keiana-L3-Test4.7-8B-3](https://huggingface.co/Kaoeiri/Keiana-L3-Test4.7-8B-3)
## 🧩 Configuration
```yaml
merge_method: task_arithmetic
dtype: float16
base_model: jeiku/Average_Normie_l3_v1_8B
models:
  - model: Kaoeiri/Keiana-L3-Test4.7-8B-3
    parameters:
      weight: 1.0
```
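A config in this form is typically executed with mergekit's command-line entry point; the following is a minimal sketch, with the config filename and output directory as illustrative placeholders:

```bash
# Hedged sketch: save the YAML above as config.yaml, then run mergekit on it.
pip install mergekit
mergekit-yaml config.yaml ./Keiana-L3-Test4.8-8B-4
```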
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Kaoeiri/Keiana-L3-Test4.8-8B-4"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"tags": ["merge", "mergekit", "lazymergekit", "Kaoeiri/Keiana-L3-Test4.7-8B-3"], "base_model": ["Kaoeiri/Keiana-L3-Test4.7-8B-3"]} | Kaoeiri/Keiana-L3-Test4.8-8B-4 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Kaoeiri/Keiana-L3-Test4.7-8B-3",
"conversational",
"base_model:Kaoeiri/Keiana-L3-Test4.7-8B-3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-25T03:23:30+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #Kaoeiri/Keiana-L3-Test4.7-8B-3 #conversational #base_model-Kaoeiri/Keiana-L3-Test4.7-8B-3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Keiana-L3-Test4.8-8B-4
# Keep in mind that it's not yet tested, and I'm unsure if it would work as planned.
Keiana-L3-Test4.8-8B-4 is a merge of the following models using LazyMergekit:
* Kaoeiri/Keiana-L3-Test4.7-8B-3
## Configuration
## Usage
| [
"# Keiana-L3-Test4.8-8B-4",
"# Keep in mind that it's not yet tested, and I unsure if would work as planned.\n\n\nKeiana-L3-Test4.8-8B-4 is a merge of the following models using LazyMergekit:\n* Kaoeiri/Keiana-L3-Test4.7-8B-3",
"## Configuration",
"## Usage"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #Kaoeiri/Keiana-L3-Test4.7-8B-3 #conversational #base_model-Kaoeiri/Keiana-L3-Test4.7-8B-3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Keiana-L3-Test4.8-8B-4",
"# Keep in mind that it's not yet tested, and I unsure if would work as planned.\n\n\nKeiana-L3-Test4.8-8B-4 is a merge of the following models using LazyMergekit:\n* Kaoeiri/Keiana-L3-Test4.7-8B-3",
"## Configuration",
"## Usage"
] |
sentence-similarity | sentence-transformers |
# all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
# Mean Pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-MiniLM-L6-v2)
------
## Background
The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned it on a
dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face. We developed this model as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
By default, input text longer than 256 word pieces is truncated.
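As a hedged illustration of the sentence-similarity use case (the sentences below are arbitrary examples, not from the training data), the embeddings can be scored with the cosine-similarity utility that ships with sentence-transformers:

```python
# Illustrative sketch: rank candidate sentences by similarity to a query.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
embeddings = model.encode([
    "How do I reset my password?",           # query
    "Steps to recover a forgotten password",
    "Best hiking trails near the city",
])
scores = util.cos_sim(embeddings[0], embeddings[1:])  # cosine similarity to the query
print(scores)  # the password sentence should score clearly higher
```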
## Training procedure
### Pre-training
We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between each possible sentence pair in the batch.
We then apply the cross-entropy loss by comparing with the true pairs.
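A minimal sketch of that in-batch objective follows; the scale factor and function shape are illustrative rather than the exact training code:

```python
# Hedged sketch of an in-batch contrastive (InfoNCE-style) loss: the true
# pair for row i is row i of the paired batch; all other rows act as negatives.
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchor_emb, positive_emb, scale=20.0):
    anchor = F.normalize(anchor_emb, p=2, dim=1)
    positive = F.normalize(positive_emb, p=2, dim=1)
    logits = anchor @ positive.T * scale           # scaled cosine similarities
    labels = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)         # true pairs lie on the diagonal
```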
#### Hyper parameters
We trained our model on a TPU v3-8 for 100k steps using a batch size of 1024 (128 per TPU core).
We used a learning-rate warm-up of 500 steps, and the sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this repository: `train_script.py`.
#### Training data
We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion.
We sampled each dataset with a weighted probability whose configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,170,060,424** | | {"language": "en", "license": "apache-2.0", "library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "datasets": ["s2orc", "flax-sentence-embeddings/stackexchange_xml", "ms_marco", "gooaq", "yahoo_answers_topics", "code_search_net", "search_qa", "eli5", "snli", "multi_nli", "wikihow", "natural_questions", "trivia_qa", "embedding-data/sentence-compression", "embedding-data/flickr30k-captions", "embedding-data/altlex", "embedding-data/simple-wiki", "embedding-data/QQP", "embedding-data/SPECTER", "embedding-data/PAQ_pairs", "embedding-data/WikiAnswers"], "pipeline_tag": "sentence-similarity"} | PIXMELT/all-MiniLM-L6-v2 | null | [
"sentence-transformers",
"pytorch",
"tf",
"rust",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"en",
"dataset:s2orc",
"dataset:flax-sentence-embeddings/stackexchange_xml",
"dataset:ms_marco",
"dataset:gooaq",
"dataset:yahoo_answers_topics",
"dataset:code_search_net",
"dataset:search_qa",
"dataset:eli5",
"dataset:snli",
"dataset:multi_nli",
"dataset:wikihow",
"dataset:natural_questions",
"dataset:trivia_qa",
"dataset:embedding-data/sentence-compression",
"dataset:embedding-data/flickr30k-captions",
"dataset:embedding-data/altlex",
"dataset:embedding-data/simple-wiki",
"dataset:embedding-data/QQP",
"dataset:embedding-data/SPECTER",
"dataset:embedding-data/PAQ_pairs",
"dataset:embedding-data/WikiAnswers",
"arxiv:1904.06472",
"arxiv:2102.07033",
"arxiv:2104.08727",
"arxiv:1704.05179",
"arxiv:1810.09305",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T03:24:27+00:00 | [
"1904.06472",
"2102.07033",
"2104.08727",
"1704.05179",
"1810.09305"
] | [
"en"
] | TAGS
#sentence-transformers #pytorch #tf #rust #safetensors #bert #feature-extraction #sentence-similarity #transformers #en #dataset-s2orc #dataset-flax-sentence-embeddings/stackexchange_xml #dataset-ms_marco #dataset-gooaq #dataset-yahoo_answers_topics #dataset-code_search_net #dataset-search_qa #dataset-eli5 #dataset-snli #dataset-multi_nli #dataset-wikihow #dataset-natural_questions #dataset-trivia_qa #dataset-embedding-data/sentence-compression #dataset-embedding-data/flickr30k-captions #dataset-embedding-data/altlex #dataset-embedding-data/simple-wiki #dataset-embedding-data/QQP #dataset-embedding-data/SPECTER #dataset-embedding-data/PAQ_pairs #dataset-embedding-data/WikiAnswers #arxiv-1904.06472 #arxiv-2102.07033 #arxiv-2104.08727 #arxiv-1704.05179 #arxiv-1810.09305 #license-apache-2.0 #endpoints_compatible #region-us
| all-MiniLM-L6-v2
================
This is a sentence-transformers model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
Usage (Sentence-Transformers)
-----------------------------
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
Usage (HuggingFace Transformers)
--------------------------------
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
Evaluation Results
------------------
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
---
Background
----------
The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised
contrastive learning objective. We used the pretrained 'nreimers/MiniLM-L6-H384-uncased' model and fine-tuned it on a
dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset.
We developed this model during the
Community week using JAX/Flax for NLP & CV,
organized by Hugging Face. We developed this model as part of the project:
Train the Best Sentence Embedding Model Ever with 1B Training Pairs. We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
Intended uses
-------------
Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
By default, input text longer than 256 word pieces is truncated.
Training procedure
------------------
### Pre-training
We use the pretrained 'nreimers/MiniLM-L6-H384-uncased' model. Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between each possible sentence pair in the batch.
We then apply the cross-entropy loss by comparing with the true pairs.
#### Hyper parameters
We trained our model on a TPU v3-8 for 100k steps using a batch size of 1024 (128 per TPU core).
We used a learning-rate warm-up of 500 steps, and the sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this repository: 'train\_script.py'.
#### Training data
We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion.
We sampled each dataset with a weighted probability whose configuration is detailed in the 'data\_config.json' file.
| [
"### Pre-training\n\n\nWe use the pretrained 'nreimers/MiniLM-L6-H384-uncased' model. Please refer to the model card for more detailed information about the pre-training procedure.",
"### Fine-tuning\n\n\nWe fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity from each possible sentence pairs from the batch.\nWe then apply the cross entropy loss by comparing with true pairs.",
"#### Hyper parameters\n\n\nWe trained our model on a TPU v3-8. We train the model during 100k steps using a batch size of 1024 (128 per TPU core).\nWe use a learning rate warm up of 500. The sequence length was limited to 128 tokens. We used the AdamW optimizer with\na 2e-5 learning rate. The full training script is accessible in this current repository: 'train\\_script.py'.",
"#### Training data\n\n\nWe use the concatenation from multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences.\nWe sampled each dataset given a weighted probability which configuration is detailed in the 'data\\_config.json' file."
] | [
"TAGS\n#sentence-transformers #pytorch #tf #rust #safetensors #bert #feature-extraction #sentence-similarity #transformers #en #dataset-s2orc #dataset-flax-sentence-embeddings/stackexchange_xml #dataset-ms_marco #dataset-gooaq #dataset-yahoo_answers_topics #dataset-code_search_net #dataset-search_qa #dataset-eli5 #dataset-snli #dataset-multi_nli #dataset-wikihow #dataset-natural_questions #dataset-trivia_qa #dataset-embedding-data/sentence-compression #dataset-embedding-data/flickr30k-captions #dataset-embedding-data/altlex #dataset-embedding-data/simple-wiki #dataset-embedding-data/QQP #dataset-embedding-data/SPECTER #dataset-embedding-data/PAQ_pairs #dataset-embedding-data/WikiAnswers #arxiv-1904.06472 #arxiv-2102.07033 #arxiv-2104.08727 #arxiv-1704.05179 #arxiv-1810.09305 #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Pre-training\n\n\nWe use the pretrained 'nreimers/MiniLM-L6-H384-uncased' model. Please refer to the model card for more detailed information about the pre-training procedure.",
"### Fine-tuning\n\n\nWe fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity from each possible sentence pairs from the batch.\nWe then apply the cross entropy loss by comparing with true pairs.",
"#### Hyper parameters\n\n\nWe trained our model on a TPU v3-8. We train the model during 100k steps using a batch size of 1024 (128 per TPU core).\nWe use a learning rate warm up of 500. The sequence length was limited to 128 tokens. We used the AdamW optimizer with\na 2e-5 learning rate. The full training script is accessible in this current repository: 'train\\_script.py'.",
"#### Training data\n\n\nWe use the concatenation from multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences.\nWe sampled each dataset given a weighted probability which configuration is detailed in the 'data\\_config.json' file."
] |
text-generation | transformers | # Chaos RP

A chaotic force beckons for you, will you heed her call?
Built upon an intelligent foundation and tuned for roleplaying, this model will fulfill your wildest fantasies with the bare minimum of effort.
Enjoy! | {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "base_model": ["ChaoticNeutrals/IQ_Test_l3_8B", "ResplendentAI/RP_Format_QuoteAsterisk_Llama3"]} | zaq-hack/Chaos_RP_l3_8B-bpw500-h6-exl2-rpcal | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"base_model:ChaoticNeutrals/IQ_Test_l3_8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"5-bit",
"region:us"
] | null | 2024-04-25T03:26:02+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #conversational #en #base_model-ChaoticNeutrals/IQ_Test_l3_8B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #5-bit #region-us
| # Chaos RP
!image/png
A chaotic force beckons for you, will you heed her call?
Built upon an intelligent foundation and tuned for roleplaying, this model will fulfill your wildest fantasies with the bare minimum of effort.
Enjoy! | [
"# Chaos RP\n\n!image/png\n\nA chaotic force beckons for you, will you heed her call?\n\nBuilt upon an intelligent foundation and tuned for roleplaying, this model will fulfill your wildest fantasies with the bare minimum of effort.\n\nEnjoy!"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #en #base_model-ChaoticNeutrals/IQ_Test_l3_8B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #5-bit #region-us \n",
"# Chaos RP\n\n!image/png\n\nA chaotic force beckons for you, will you heed her call?\n\nBuilt upon an intelligent foundation and tuned for roleplaying, this model will fulfill your wildest fantasies with the bare minimum of effort.\n\nEnjoy!"
] |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deit-base-patch16-224-finetuned-ind-17-imbalanced-aadhaarmask-14687
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3950
- Accuracy: 0.8310
- Recall: 0.8310
- F1: 0.8298
- Precision: 0.8360
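For reference, the fine-tuned checkpoint can be loaded with the standard image-classification pipeline. This is a hedged sketch: the input path is a placeholder, and the class labels depend on the dataset's folder names.

```python
# Illustrative inference sketch for this checkpoint.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="Kushagra07/deit-base-patch16-224-finetuned-ind-17-imbalanced-aadhaarmask-14687",
)
print(classifier("example_scan.jpg"))  # placeholder path; URLs and PIL images also work
```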
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
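Expressed as transformers `TrainingArguments`, the listed settings roughly correspond to the sketch below; `output_dir` and any argument not listed above are illustrative rather than taken from the actual script:

```python
# Hedged reconstruction of the listed hyperparameters.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="deit-aadhaarmask",       # illustrative
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,       # effective train batch size: 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
)
```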
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.8293 | 0.9974 | 293 | 0.7793 | 0.7680 | 0.7680 | 0.7403 | 0.7277 |
| 0.5921 | 1.9983 | 587 | 0.5663 | 0.7940 | 0.7940 | 0.7843 | 0.7839 |
| 0.4308 | 2.9991 | 881 | 0.4589 | 0.8208 | 0.8208 | 0.8161 | 0.8213 |
| 0.3999 | 4.0 | 1175 | 0.4772 | 0.8263 | 0.8263 | 0.8216 | 0.8337 |
| 0.4801 | 4.9974 | 1468 | 0.4258 | 0.8378 | 0.8378 | 0.8306 | 0.8463 |
| 0.4201 | 5.9983 | 1762 | 0.4120 | 0.8246 | 0.8246 | 0.8213 | 0.8394 |
| 0.3233 | 6.9991 | 2056 | 0.3989 | 0.8306 | 0.8306 | 0.8268 | 0.8445 |
| 0.3954 | 8.0 | 2350 | 0.3794 | 0.8365 | 0.8365 | 0.8341 | 0.8383 |
| 0.2835 | 8.9974 | 2643 | 0.4438 | 0.8318 | 0.8318 | 0.8278 | 0.8434 |
| 0.2913 | 9.9983 | 2937 | 0.3799 | 0.8416 | 0.8416 | 0.8404 | 0.8451 |
| 0.3261 | 10.9991 | 3231 | 0.3694 | 0.8297 | 0.8297 | 0.8272 | 0.8306 |
| 0.3299 | 12.0 | 3525 | 0.3637 | 0.8442 | 0.8442 | 0.8425 | 0.8529 |
| 0.3273 | 12.9974 | 3818 | 0.3649 | 0.8421 | 0.8421 | 0.8411 | 0.8482 |
| 0.2596 | 13.9983 | 4112 | 0.4152 | 0.8259 | 0.8259 | 0.8213 | 0.8281 |
| 0.2813 | 14.9991 | 4406 | 0.3578 | 0.8429 | 0.8429 | 0.8409 | 0.8491 |
| 0.2406 | 16.0 | 4700 | 0.3813 | 0.8323 | 0.8323 | 0.8285 | 0.8362 |
| 0.2263 | 16.9974 | 4993 | 0.3808 | 0.8318 | 0.8318 | 0.8275 | 0.8377 |
| 0.3192 | 17.9983 | 5287 | 0.3625 | 0.8412 | 0.8412 | 0.8372 | 0.8484 |
| 0.2003 | 18.9991 | 5581 | 0.3549 | 0.8438 | 0.8438 | 0.8430 | 0.8462 |
| 0.2431 | 20.0 | 5875 | 0.3620 | 0.8425 | 0.8425 | 0.8408 | 0.8467 |
| 0.2654 | 20.9974 | 6168 | 0.3865 | 0.8340 | 0.8340 | 0.8320 | 0.8338 |
| 0.2989 | 21.9983 | 6462 | 0.3632 | 0.8463 | 0.8463 | 0.8449 | 0.8498 |
| 0.2403 | 22.9991 | 6756 | 0.3824 | 0.8301 | 0.8301 | 0.8267 | 0.8304 |
| 0.2393 | 24.0 | 7050 | 0.3607 | 0.8489 | 0.8489 | 0.8473 | 0.8519 |
| 0.2305 | 24.9974 | 7343 | 0.3758 | 0.8365 | 0.8365 | 0.8350 | 0.8401 |
| 0.2654 | 25.9983 | 7637 | 0.3652 | 0.8421 | 0.8421 | 0.8392 | 0.8415 |
| 0.176 | 26.9991 | 7931 | 0.3929 | 0.8306 | 0.8306 | 0.8289 | 0.8385 |
| 0.1893 | 28.0 | 8225 | 0.3794 | 0.8374 | 0.8374 | 0.8365 | 0.8404 |
| 0.2652 | 28.9974 | 8518 | 0.3995 | 0.8387 | 0.8387 | 0.8372 | 0.8423 |
| 0.2029 | 29.9983 | 8812 | 0.3981 | 0.8433 | 0.8433 | 0.8411 | 0.8430 |
| 0.1799 | 30.9991 | 9106 | 0.3554 | 0.8352 | 0.8352 | 0.8340 | 0.8368 |
| 0.2002 | 32.0 | 9400 | 0.3618 | 0.8310 | 0.8310 | 0.8300 | 0.8322 |
| 0.1525 | 32.9974 | 9693 | 0.3629 | 0.8348 | 0.8348 | 0.8343 | 0.8381 |
| 0.1663 | 33.9983 | 9987 | 0.3664 | 0.8425 | 0.8425 | 0.8410 | 0.8427 |
| 0.1728 | 34.9991 | 10281 | 0.3928 | 0.8429 | 0.8429 | 0.8415 | 0.8468 |
| 0.2252 | 36.0 | 10575 | 0.3842 | 0.8421 | 0.8421 | 0.8420 | 0.8443 |
| 0.1554 | 36.9974 | 10868 | 0.3889 | 0.8301 | 0.8301 | 0.8294 | 0.8349 |
| 0.2179 | 37.9983 | 11162 | 0.3775 | 0.8399 | 0.8399 | 0.8389 | 0.8429 |
| 0.1771 | 38.9991 | 11456 | 0.3906 | 0.8306 | 0.8306 | 0.8291 | 0.8324 |
| 0.2167 | 40.0 | 11750 | 0.3870 | 0.8404 | 0.8404 | 0.8382 | 0.8456 |
| 0.1563 | 40.9974 | 12043 | 0.3779 | 0.8284 | 0.8284 | 0.8277 | 0.8288 |
| 0.1419 | 41.9983 | 12337 | 0.4049 | 0.8340 | 0.8340 | 0.8327 | 0.8360 |
| 0.2083 | 42.9991 | 12631 | 0.3800 | 0.8421 | 0.8421 | 0.8410 | 0.8427 |
| 0.2185 | 44.0 | 12925 | 0.3964 | 0.8433 | 0.8433 | 0.8422 | 0.8441 |
| 0.1989 | 44.9974 | 13218 | 0.3870 | 0.8340 | 0.8340 | 0.8339 | 0.8357 |
| 0.1731 | 45.9983 | 13512 | 0.4206 | 0.8340 | 0.8340 | 0.8335 | 0.8357 |
| 0.1831 | 46.9991 | 13806 | 0.4027 | 0.8429 | 0.8429 | 0.8422 | 0.8439 |
| 0.1471 | 48.0 | 14100 | 0.4016 | 0.8318 | 0.8318 | 0.8307 | 0.8320 |
| 0.1879 | 48.9974 | 14393 | 0.3877 | 0.8438 | 0.8438 | 0.8441 | 0.8468 |
| 0.1775 | 49.8723 | 14650 | 0.3984 | 0.8421 | 0.8421 | 0.8408 | 0.8428 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.0a0+81ea7a4
- Datasets 2.18.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy", "recall", "f1", "precision"], "base_model": "facebook/deit-base-patch16-224", "model-index": [{"name": "deit-base-patch16-224-finetuned-ind-17-imbalanced-aadhaarmask-14687", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.8309919114516816, "name": "Accuracy"}, {"type": "recall", "value": 0.8309919114516816, "name": "Recall"}, {"type": "f1", "value": 0.8298114215031374, "name": "F1"}, {"type": "precision", "value": 0.8359531770567361, "name": "Precision"}]}]}]} | Kushagra07/deit-base-patch16-224-finetuned-ind-17-imbalanced-aadhaarmask-14687 | null | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T03:28:20+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #dataset-imagefolder #base_model-facebook/deit-base-patch16-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| deit-base-patch16-224-finetuned-ind-17-imbalanced-aadhaarmask-14687
===================================================================
This model is a fine-tuned version of facebook/deit-base-patch16-224 on the imagefolder dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3950
* Accuracy: 0.8310
* Recall: 0.8310
* F1: 0.8298
* Precision: 0.8360
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 50
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.0a0+81ea7a4
* Datasets 2.18.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 50",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.0a0+81ea7a4\n* Datasets 2.18.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #dataset-imagefolder #base_model-facebook/deit-base-patch16-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 50",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.0a0+81ea7a4\n* Datasets 2.18.0\n* Tokenizers 0.19.1"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/ResplendentAI/SOVL_Llama3_8B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
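As a concrete sketch, a single quant from this repo can be fetched with `huggingface-cli`; the concatenation step is hypothetical for this repo (the files at these sizes are single-part) and only shows the pattern used when a quant does ship in parts:

```bash
# Download one quant file from this repo (filename from the table below).
huggingface-cli download mradermacher/SOVL_Llama3_8B-GGUF \
  SOVL_Llama3_8B.Q4_K_M.gguf --local-dir .

# Hypothetical multi-part case: split GGUFs of this kind are plain byte
# splits, so they can simply be concatenated back into one file.
cat SOVL_Llama3_8B.Q8_0.gguf.part1of2 SOVL_Llama3_8B.Q8_0.gguf.part2of2 \
  > SOVL_Llama3_8B.Q8_0.gguf
```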
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "base_model": "ResplendentAI/SOVL_Llama3_8B", "quantized_by": "mradermacher"} | mradermacher/SOVL_Llama3_8B-GGUF | null | [
"transformers",
"gguf",
"en",
"base_model:ResplendentAI/SOVL_Llama3_8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T03:31:02+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #en #base_model-ResplendentAI/SOVL_Llama3_8B #license-apache-2.0 #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #en #base_model-ResplendentAI/SOVL_Llama3_8B #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
null | null | Pre-trained models of Portrait4D and Portrait4D-v2 (a hedged download sketch follows the list):
- genhead-ffhq512: GenHead trained on FFHQ dataset at 512x512 resolution
- portrait4d-genhead512: Portrait4D trained with synthetic data of GenHead at 512x512 resolution
- portrait4d-static-genhead512: Portrait4D without motion-related cross-attentions, trained with synthetic data of GenHead at 512x512 resolution
- portrait4d-v2-vfhq512: Portrait4D-v2 finetuned from portrait4d-static-genhead512, trained on VFHQ dataset at 512x512 resolution
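
Since individual filenames are not listed here, a hedged way to fetch everything at once is a full snapshot download via huggingface_hub:

```python
# Hedged sketch: pull all checkpoints in this repo in one call.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="bEijuuu/Portrait4D")
print("checkpoints at:", local_dir)
```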
| {"license": "mit"} | bEijuuu/Portrait4D | null | [
"license:mit",
"region:us"
] | null | 2024-04-25T03:31:11+00:00 | [] | [] | TAGS
#license-mit #region-us
| Pre-trained models of Portrait4D and Portrait4D-v2:
- genhead-ffhq512: GenHead trained on FFHQ dataset at 512x512 resolution
- portrait4d-genhead512: Portrait4D trained with synthetic data of GenHead at 512x512 resolution
- portrait4d-static-genhead512: Portrait4D without motion-related cross-attentions, trained with synthetic data of GenHead at 512x512 resolution
- portrait4d-v2-vfhq512: Portrait4D-v2 finetuned from portrait4d-static-genhead512, trained on VFHQ dataset at 512x512 resolution
| [] | [
"TAGS\n#license-mit #region-us \n"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-350m_ft_test5
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "opt-350m_ft_test5", "results": []}]} | underactuated/opt-350m_ft_test5 | null | [
"transformers",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-25T03:32:59+00:00 | [] | [] | TAGS
#transformers #safetensors #opt #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# opt-350m_ft_test5
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| [
"# opt-350m_ft_test5\n\nThis model was trained from scratch on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #opt #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# opt-350m_ft_test5\n\nThis model was trained from scratch on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
automatic-speech-recognition | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
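
Pending the authors' own snippet, a minimal sketch based only on this repo's automatic-speech-recognition tag; the model id comes from the repo itself, while the audio path is a placeholder:

```python
# Hedged example: load this checkpoint with the generic ASR pipeline.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="jxie/whisper-tiny-speechocean")
result = asr("sample.wav")  # path to a local audio file (placeholder)
print(result["text"])
```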
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | jxie/whisper-tiny-speechocean | null | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T03:34:05+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #whisper #automatic-speech-recognition #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #whisper #automatic-speech-recognition #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-to-image | diffusers |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Textual inversion text2image fine-tuning - BKM1804/textual_inversion_cat
These are textual inversion adaption weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
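
Until the TODO above is filled in by the authors, a hedged sketch of the standard diffusers flow for textual-inversion weights; the placeholder token `<cat>` is a guess, so check the repo's learned embedding files for the actual trigger word:

```python
# Hedged sketch, not the authors' snippet: attach these textual-inversion
# weights to the base model. "<cat>" is an assumed placeholder token.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # assumes a CUDA GPU is available
pipe.load_textual_inversion("BKM1804/textual_inversion_cat")
image = pipe("a photo of a <cat>").images[0]
image.save("cat.png")
```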
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | {"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "textual_inversion", "diffusers-training"], "base_model": "runwayml/stable-diffusion-v1-5", "inference": true} | BKM1804/textual_inversion_cat | null | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"diffusers-training",
"base_model:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | 2024-04-25T03:34:11+00:00 | [] | [] | TAGS
#diffusers #tensorboard #safetensors #stable-diffusion #stable-diffusion-diffusers #text-to-image #textual_inversion #diffusers-training #base_model-runwayml/stable-diffusion-v1-5 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
|
# Textual inversion text2image fine-tuning - BKM1804/textual_inversion_cat
These are textual inversion adaption weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.
## Intended uses & limitations
#### How to use
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | [
"# Textual inversion text2image fine-tuning - BKM1804/textual_inversion_cat\nThese are textual inversion adaption weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] | [
"TAGS\n#diffusers #tensorboard #safetensors #stable-diffusion #stable-diffusion-diffusers #text-to-image #textual_inversion #diffusers-training #base_model-runwayml/stable-diffusion-v1-5 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n",
"# Textual inversion text2image fine-tuning - BKM1804/textual_inversion_cat\nThese are textual inversion adaption weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | dahye1/lg-gemma-ko-ver3 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T03:34:52+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model-2
This model is a fine-tuned version of [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on the eli5_category dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0009
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0386 | 1.0 | 1311 | 0.0046 |
| 0.0173 | 2.0 | 2622 | 0.0025 |
| 0.0049 | 3.0 | 3933 | 0.0009 |
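
Since the table reports cross-entropy loss only, the implied perplexity is exp(loss); a quick check with the final validation loss above:

```python
# Perplexity implied by the final eval loss (exp of the cross-entropy loss).
import math
print(math.exp(0.0009))  # ~ 1.0009
```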
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["eli5_category"], "base_model": "FacebookAI/roberta-base", "model-index": [{"name": "my_awesome_eli5_clm-model-2", "results": []}]} | liamvbetts/my_awesome_eli5_clm-model-2 | null | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-generation",
"generated_from_trainer",
"dataset:eli5_category",
"base_model:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T03:35:21+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #roberta #text-generation #generated_from_trainer #dataset-eli5_category #base_model-FacebookAI/roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
| my\_awesome\_eli5\_clm-model-2
==============================
This model is a fine-tuned version of FacebookAI/roberta-base on the eli5\_category dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0009
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #roberta #text-generation #generated_from_trainer #dataset-eli5_category #base_model-FacebookAI/roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | HenryCai1129/adapter-toxic2nontoxic-100-50-0.003 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T03:37:22+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
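
Pending the authors' snippet, a hedged sketch based only on this repo's unsloth tag; the sequence length and 4-bit loading are assumptions:

```python
# Hedged example: load this checkpoint with unsloth's FastLanguageModel.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    "chillies/llama-3-8b-vn-legal-chat",
    max_seq_length=4096,  # assumption
    load_in_4bit=True,    # assumption
)
FastLanguageModel.for_inference(model)  # enable faster inference mode
```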
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["unsloth"]} | chillies/llama-3-8b-vn-legal-chat | null | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T03:38:41+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #unsloth #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #unsloth #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DPO-PairRM-5-SMI-lr-1e6-iteration-5-t-7e-beta-15e3-1-iteration-6e1-confidence-D1-D2_smi
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6444
- Rewards/chosen: -2.4793
- Rewards/rejected: -2.9560
- Rewards/accuracies: 0.6667
- Rewards/margins: 0.4767
- Rewards/mix Margin: 0.1749
- Logps/rejected: -481.8095
- Logps/chosen: -453.2426
- Logits/rejected: -1.7012
- Logits/chosen: -1.7287
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
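
For reference, a DPO run with these settings is typically launched through TRL. The sketch below is a hedged illustration only: the actual training script for this model is not published, and the dataset id, `beta` value, and pair construction are assumptions.

```python
# Minimal DPO sketch with TRL; hyperparameters mirror the list above.
# Dataset id and beta are illustrative assumptions, not the real recipe.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer  # DPOConfig-style API; older TRL versions pass beta to DPOTrainer

base = "mistralai/Mistral-7B-Instruct-v0.2"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Expected columns: "prompt", "chosen", "rejected" (hypothetical dataset id).
pairs = load_dataset("my-org/pairrm-preference-pairs", split="train")

args = DPOConfig(
    output_dir="dpo-iteration-5",
    learning_rate=1e-6,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
    beta=0.015,  # assumed from the "beta-15e3" fragment of the model name
)

trainer = DPOTrainer(model=model, args=args, train_dataset=pairs, tokenizer=tokenizer)
trainer.train()
```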
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.17.1
- Tokenizers 0.15.1
| {"license": "apache-2.0", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "DPO-PairRM-5-SMI-lr-1e6-iteration-5-t-7e-beta-15e3-1-iteration-6e1-confidence-D1-D2_smi", "results": []}]} | vangard703/DPO-PairRM-5-SMI-lr-1e6-iteration-5-t-7e-beta-15e3-1-iteration-6e1-confidence-D1-D2_smi | null | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-25T03:39:07+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #mistral #text-generation #trl #dpo #generated_from_trainer #conversational #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# DPO-PairRM-5-SMI-lr-1e6-iteration-5-t-7e-beta-15e3-1-iteration-6e1-confidence-D1-D2_smi
This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6444
- Rewards/chosen: -2.4793
- Rewards/rejected: -2.9560
- Rewards/accuracies: 0.6667
- Rewards/margins: 0.4767
- Rewards/mix Margin: 0.1749
- Logps/rejected: -481.8095
- Logps/chosen: -453.2426
- Logits/rejected: -1.7012
- Logits/chosen: -1.7287
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.17.1
- Tokenizers 0.15.1
| [
"# DPO-PairRM-5-SMI-lr-1e6-iteration-5-t-7e-beta-15e3-1-iteration-6e1-confidence-D1-D2_smi\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.6444\n- Rewards/chosen: -2.4793\n- Rewards/rejected: -2.9560\n- Rewards/accuracies: 0.6667\n- Rewards/margins: 0.4767\n- Rewards/mix Margin: 0.1749\n- Logps/rejected: -481.8095\n- Logps/chosen: -453.2426\n- Logits/rejected: -1.7012\n- Logits/chosen: -1.7287",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-06\n- train_batch_size: 1\n- eval_batch_size: 1\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 4\n- gradient_accumulation_steps: 16\n- total_train_batch_size: 64\n- total_eval_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2\n- Datasets 2.17.1\n- Tokenizers 0.15.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #mistral #text-generation #trl #dpo #generated_from_trainer #conversational #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# DPO-PairRM-5-SMI-lr-1e6-iteration-5-t-7e-beta-15e3-1-iteration-6e1-confidence-D1-D2_smi\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.6444\n- Rewards/chosen: -2.4793\n- Rewards/rejected: -2.9560\n- Rewards/accuracies: 0.6667\n- Rewards/margins: 0.4767\n- Rewards/mix Margin: 0.1749\n- Logps/rejected: -481.8095\n- Logps/chosen: -453.2426\n- Logits/rejected: -1.7012\n- Logits/chosen: -1.7287",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-06\n- train_batch_size: 1\n- eval_batch_size: 1\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 4\n- gradient_accumulation_steps: 16\n- total_train_batch_size: 64\n- total_eval_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2\n- Datasets 2.17.1\n- Tokenizers 0.15.1"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text_summarization_finetuned
This model is a fine-tuned version of [Falconsai/text_summarization](https://huggingface.co/Falconsai/text_summarization) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2709
- Rouge1: 0.0876
- Rouge2: 0.0826
- Rougel: 0.0876
- Rougelsum: 0.0876
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.3375 | 1.0 | 4000 | 0.2961 | 0.0876 | 0.0826 | 0.0876 | 0.0876 | 19.0 |
| 0.3046 | 2.0 | 8000 | 0.2776 | 0.0876 | 0.0826 | 0.0876 | 0.0876 | 19.0 |
| 0.2929 | 3.0 | 12000 | 0.2726 | 0.0876 | 0.0826 | 0.0876 | 0.0876 | 19.0 |
| 0.2915 | 4.0 | 16000 | 0.2709 | 0.0876 | 0.0826 | 0.0876 | 0.0876 | 19.0 |
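
As a quick sanity check, the finished checkpoint can be queried with the standard summarization pipeline. This is a hedged sketch; the generation settings are illustrative, not the settings used for the table above.

```python
from transformers import pipeline

# Repo id of this fine-tuned checkpoint.
summarizer = pipeline("summarization", model="HARDYCHEN/text_summarization_finetuned")

article = "Paste a long passage here; the model condenses it into a short summary."
# The eval table reports Gen Len 19.0, so a cap of ~20 tokens is natural.
print(summarizer(article, max_length=20, min_length=5, do_sample=False)[0]["summary_text"])
```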
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "Falconsai/text_summarization", "model-index": [{"name": "text_summarization_finetuned", "results": []}]} | HARDYCHEN/text_summarization_finetuned | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:Falconsai/text_summarization",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-25T03:40:12+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-Falconsai/text_summarization #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| text\_summarization\_finetuned
==============================
This model is a fine-tuned version of Falconsai/text\_summarization on an unspecified dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2709
* Rouge1: 0.0876
* Rouge2: 0.0826
* Rougel: 0.0876
* Rougelsum: 0.0876
* Gen Len: 19.0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 1
* eval\_batch\_size: 1
* seed: 42
* distributed\_type: multi-GPU
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 4
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.36.2
* Pytorch 2.1.2+cu121
* Datasets 2.16.1
* Tokenizers 0.15.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* distributed\\_type: multi-GPU\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.0"
] | [
"TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-Falconsai/text_summarization #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* distributed\\_type: multi-GPU\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.0"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
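
For reference, spherical linear interpolation between two flattened weight tensors $p$ and $q$, with $\theta = \arccos\!\big(\tfrac{p \cdot q}{\lVert p \rVert \, \lVert q \rVert}\big)$, is

$$\mathrm{slerp}(p, q; t) = \frac{\sin((1-t)\,\theta)}{\sin\theta}\, p + \frac{\sin(t\,\theta)}{\sin\theta}\, q,$$

where $t$ is the per-tensor interpolation factor set in the configuration below (this is the textbook formula; the exact per-layer handling is mergekit's implementation detail).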
### Models Merged
The following models were included in the merge:
* [motherfucker0/zhun02](https://huggingface.co/motherfucker0/zhun02)
* [motherfucker0/zhun01](https://huggingface.co/motherfucker0/zhun01)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: motherfucker0/zhun01
        layer_range: [0, 30]
      - model: motherfucker0/zhun02
        layer_range: [0, 30]
merge_method: slerp
base_model: motherfucker0/zhun02
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.05
dtype: bfloat16
```
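
mergekit is normally driven from its `mergekit-yaml` command-line entry point with a config like the one above; once the merged checkpoint is written (or pulled from this repo), it loads like any other causal LM. A hedged sketch; the prompt and generation settings are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "motherfucker0/zhen02"  # this merged model
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype="auto", device_map="auto"  # device_map requires accelerate
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```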
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["motherfucker0/zhun02", "motherfucker0/zhun01"]} | motherfucker0/zhen02 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:motherfucker0/zhun02",
"base_model:motherfucker0/zhun01",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-25T03:40:35+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #base_model-motherfucker0/zhun02 #base_model-motherfucker0/zhun01 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* motherfucker0/zhun02
* motherfucker0/zhun01
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* motherfucker0/zhun02\n* motherfucker0/zhun01",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #base_model-motherfucker0/zhun02 #base_model-motherfucker0/zhun01 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* motherfucker0/zhun02\n* motherfucker0/zhun01",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | SamaahKhan/Phi-before-fine-tuning | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T03:44:28+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ppo_zephyr4
This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
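
For orientation, a classic TRL PPO loop with these settings looks roughly like the sketch below. This is a hedged reconstruction, not the published training script; the rollout and reward-model plumbing is elided:

```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

sft = "HuggingFaceH4/mistral-7b-sft-beta"
model = AutoModelForCausalLMWithValueHead.from_pretrained(sft)
tokenizer = AutoTokenizer.from_pretrained(sft)

config = PPOConfig(
    learning_rate=3e-6,
    batch_size=256,               # total train batch size above
    mini_batch_size=1,            # per-device batch size above
    gradient_accumulation_steps=32,
)
ppo_trainer = PPOTrainer(config, model, ref_model=None, tokenizer=tokenizer)

# Each PPO step consumes lists of query/response token tensors plus scalar
# rewards from a reward model (their construction is omitted here):
# stats = ppo_trainer.step(queries, responses, rewards)
```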
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "HuggingFaceH4/mistral-7b-sft-beta", "model-index": [{"name": "ppo_zephyr4", "results": []}]} | vwxyzjn/ppo_zephyr4 | null | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:HuggingFaceH4/mistral-7b-sft-beta",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-25T03:45:13+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #mistral #text-generation #generated_from_trainer #conversational #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# ppo_zephyr4
This model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
| [
"# ppo_zephyr4\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-06\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #mistral #text-generation #generated_from_trainer #conversational #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# ppo_zephyr4\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-06\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.19.1"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | chohi/phi-3-finetuned2-med-text | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T03:45:33+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/cloudyu/Meta-Llama-3-70B-Instruct-DPO
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-DPO-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
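
Beyond llama.cpp itself, the single-file quants also run through llama-cpp-python. A hedged sketch; the file name, context size, and sampling settings below are illustrative:

```python
from llama_cpp import Llama

# Q4_K_M is the "fast, recommended" entry in the table below.
llm = Llama(
    model_path="Meta-Llama-3-70B-Instruct-DPO.Q4_K_M.gguf",
    n_ctx=4096,       # context window; scale to your memory budget
    n_gpu_layers=-1,  # offload all layers to GPU when one is available
)
out = llm("Explain what a GGUF quant is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```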
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-DPO-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-DPO.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-DPO-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-DPO.IQ3_XS.gguf) | IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-DPO-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-DPO.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-DPO-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-DPO.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-DPO-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-DPO.IQ3_M.gguf) | IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-DPO-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-DPO.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-DPO-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-DPO.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-DPO-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-DPO.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-DPO-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-DPO.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-DPO-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-DPO.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-DPO-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-DPO.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-DPO-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-DPO.Q5_K_M.gguf) | Q5_K_M | 50.1 | |
| [PART 1](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-DPO-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-DPO.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-DPO-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-DPO.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-DPO-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-DPO.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-DPO-GGUF/resolve/main/Meta-Llama-3-70B-Instruct-DPO.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "base_model": "cloudyu/Meta-Llama-3-70B-Instruct-DPO", "quantized_by": "mradermacher"} | mradermacher/Meta-Llama-3-70B-Instruct-DPO-GGUF | null | [
"transformers",
"gguf",
"en",
"base_model:cloudyu/Meta-Llama-3-70B-Instruct-DPO",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T03:45:43+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #en #base_model-cloudyu/Meta-Llama-3-70B-Instruct-DPO #license-apache-2.0 #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #en #base_model-cloudyu/Meta-Llama-3-70B-Instruct-DPO #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
null | null | # #llama-3 #experimental #work-in-progress
GGUF-IQ-Imatrix quants for @jeiku's [ResplendentAI/SOVL_Llama3_8B](https://huggingface.co/ResplendentAI/SOVL_Llama3_8B). <br> Give them some love!
> [!IMPORTANT]
> **Updated!**
> These quants have been redone with the fixes from [llama.cpp/pull/6920](https://github.com/ggerganov/llama.cpp/pull/6920) in mind. <br>
> Use **KoboldCpp version 1.64** or higher.
> [!NOTE]
> **Well...!** <br>
> Turns out it was not just a hallucination and this model actually is pretty cool so **give it a chance!** <br>
> For **8GB VRAM** GPUs, I recommend the **Q4_K_M-imat** quant for context sizes up to 12288.
> [!WARNING]
> **Use the provided presets.** <br>
> Compatible SillyTavern presets [here (simple)](https://huggingface.co/Lewdiculous/Model-Requests/tree/main/data/presets/cope-llama-3-0.1) or [here (Virt's roleplay)](https://huggingface.co/Virt-io/SillyTavern-Presets).
> Use the latest version of KoboldCpp.
 | {"license": "apache-2.0"} | Lewdiculous/SOVL_Llama3_8B-GGUF-IQ-Imatrix | null | [
"gguf",
"license:apache-2.0",
"region:us"
] | null | 2024-04-25T03:46:59+00:00 | [] | [] | TAGS
#gguf #license-apache-2.0 #region-us
| # #llama-3 #experimental #work-in-progress
GGUF-IQ-Imatrix quants for @jeiku's ResplendentAI/SOVL_Llama3_8B. <br> Give them some love!
> [!IMPORTANT]
> Updated!
> These quants have been redone with the fixes from URL in mind. <br>
> Use KoboldCpp version 1.64 or higher.
> [!NOTE]
> Well...! <br>
> Turns out it was not just a hallucination and this model actually is pretty cool so give it a chance! <br>
> For 8GB VRAM GPUs, I recommend the Q4_K_M-imat quant for context sizes up to 12288.
> [!WARNING]
> Use the provided presets. <br>
> Compatible SillyTavern presets here (simple) or here (Virt's roleplay).
> Use the latest version of KoboldCpp.
!image/png | [
"# #llama-3 #experimental #work-in-progress\n\nGGUF-IQ-Imatrix quants for @jeiku's ResplendentAI/SOVL_Llama3_8B. <br> Give them some love!\n\n> [!IMPORTANT] \n> Updated!\n> These quants have been redone with the fixes from URL in mind. <br>\n> Use KoboldCpp version 1.64 or higher.\n\n> [!NOTE]\n> Well...! <br>\n> Turns out it was not just a hallucination and this model actually is pretty cool so give it a chance! <br>\n> For 8GB VRAM GPUs, I recommend the Q4_K_M-imat quant for up to 12288 context sizes.\n\n> [!WARNING]\n> Use the provided presets. <br>\n> Compatible SillyTavern presets here (simple) or here (Virt's roleplay).\n> Use the latest version of KoboldCpp.\n\n!image/png"
] | [
"TAGS\n#gguf #license-apache-2.0 #region-us \n",
"# #llama-3 #experimental #work-in-progress\n\nGGUF-IQ-Imatrix quants for @jeiku's ResplendentAI/SOVL_Llama3_8B. <br> Give them some love!\n\n> [!IMPORTANT] \n> Updated!\n> These quants have been redone with the fixes from URL in mind. <br>\n> Use KoboldCpp version 1.64 or higher.\n\n> [!NOTE]\n> Well...! <br>\n> Turns out it was not just a hallucination and this model actually is pretty cool so give it a chance! <br>\n> For 8GB VRAM GPUs, I recommend the Q4_K_M-imat quant for up to 12288 context sizes.\n\n> [!WARNING]\n> Use the provided presets. <br>\n> Compatible SillyTavern presets here (simple) or here (Virt's roleplay).\n> Use the latest version of KoboldCpp.\n\n!image/png"
] |
text-generation | transformers |
# Keiana-L3-Test4.9-8B-5
# Keep in mind that it's not yet tested, and I'm unsure if it would work as planned.
Keiana-L3-Test4.9-8B-5 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Kaoeiri/Keiana-L3-Test4.8-8B-4](https://huggingface.co/Kaoeiri/Keiana-L3-Test4.8-8B-4)
* [vicgalle/Roleplay-Llama-3-8B](https://huggingface.co/vicgalle/Roleplay-Llama-3-8B)
## 🧩 Configuration
```yaml
merge_method: task_arithmetic
dtype: float16
base_model: jeiku/Average_Normie_v2_l3_8B
models:
  - model: Kaoeiri/Keiana-L3-Test4.8-8B-4
    parameters:
      weight: 1.0
  - model: vicgalle/Roleplay-Llama-3-8B
    parameters:
      weight: 1.0
```
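
Task arithmetic composes the merge as the base model plus weighted task vectors (each model's difference from the base). With the weights above, the effective update is

$$\theta_{\text{merged}} = \theta_{\text{base}} + \sum_i w_i \,(\theta_i - \theta_{\text{base}}),$$

the textbook formulation; mergekit's exact tensor handling may differ in details.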
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Kaoeiri/Keiana-L3-Test4.9-8B-5"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"tags": ["merge", "mergekit", "lazymergekit", "Kaoeiri/Keiana-L3-Test4.8-8B-4", "vicgalle/Roleplay-Llama-3-8B"], "base_model": ["Kaoeiri/Keiana-L3-Test4.8-8B-4", "vicgalle/Roleplay-Llama-3-8B"]} | Kaoeiri/Keiana-L3-Test4.9-8B-5 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Kaoeiri/Keiana-L3-Test4.8-8B-4",
"vicgalle/Roleplay-Llama-3-8B",
"conversational",
"base_model:Kaoeiri/Keiana-L3-Test4.8-8B-4",
"base_model:vicgalle/Roleplay-Llama-3-8B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-25T03:47:13+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #Kaoeiri/Keiana-L3-Test4.8-8B-4 #vicgalle/Roleplay-Llama-3-8B #conversational #base_model-Kaoeiri/Keiana-L3-Test4.8-8B-4 #base_model-vicgalle/Roleplay-Llama-3-8B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Keiana-L3-Test4.9-8B-5
# Keep in mind that it's not yet tested, and I'm unsure if it would work as planned.
Keiana-L3-Test4.9-8B-5 is a merge of the following models using LazyMergekit:
* Kaoeiri/Keiana-L3-Test4.8-8B-4
* vicgalle/Roleplay-Llama-3-8B
## Configuration
## Usage
| [
"# Keiana-L3-Test4.9-8B-5",
"# Keep in mind that it's not yet tested, and I unsure if would work as planned.\n\n\nKeiana-L3-Test4.9-8B-5 is a merge of the following models using LazyMergekit:\n* Kaoeiri/Keiana-L3-Test4.8-8B-4\n* vicgalle/Roleplay-Llama-3-8B",
"## Configuration",
"## Usage"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #Kaoeiri/Keiana-L3-Test4.8-8B-4 #vicgalle/Roleplay-Llama-3-8B #conversational #base_model-Kaoeiri/Keiana-L3-Test4.8-8B-4 #base_model-vicgalle/Roleplay-Llama-3-8B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Keiana-L3-Test4.9-8B-5",
"# Keep in mind that it's not yet tested, and I unsure if would work as planned.\n\n\nKeiana-L3-Test4.9-8B-5 is a merge of the following models using LazyMergekit:\n* Kaoeiri/Keiana-L3-Test4.8-8B-4\n* vicgalle/Roleplay-Llama-3-8B",
"## Configuration",
"## Usage"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-160m_mz-130_IMDB_n-its-10-seed-0
This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
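
A hedged sketch of querying the resulting IMDB sentiment classifier; the label names depend on the fine-tuning config and are not documented here:

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="AlignmentResearch/robust_llm_pythia-160m_mz-130_IMDB_n-its-10-seed-0",
)
print(clf("A thoroughly enjoyable film with a sharp script and great pacing."))
# -> [{'label': ..., 'score': ...}]
```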
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-160m", "model-index": [{"name": "robust_llm_pythia-160m_mz-130_IMDB_n-its-10-seed-0", "results": []}]} | AlignmentResearch/robust_llm_pythia-160m_mz-130_IMDB_n-its-10-seed-0 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-160m",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-25T03:47:21+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-160m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# robust_llm_pythia-160m_mz-130_IMDB_n-its-10-seed-0
This model is a fine-tuned version of EleutherAI/pythia-160m on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# robust_llm_pythia-160m_mz-130_IMDB_n-its-10-seed-0\n\nThis model is a fine-tuned version of EleutherAI/pythia-160m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-160m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# robust_llm_pythia-160m_mz-130_IMDB_n-its-10-seed-0\n\nThis model is a fine-tuned version of EleutherAI/pythia-160m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
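
The card leaves this section empty; a minimal sketch for loading the adapter on top of its TinyLlama base with peft, assuming the repository id from this card's metadata, would be:

```python
# Sketch only: adapter and base ids are taken from this card's metadata.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
adapter_id = "bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned-adapters_Aleatoric_tiny_0.8_Seed104"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter

inputs = tokenizer("Hello!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```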
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
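
Expressed with transformers' `BitsAndBytesConfig`, the settings above correspond to roughly the following sketch; the exact model-loading call used during training is not shown in this card.

```python
# Sketch reconstructing the quantization config listed above.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```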
### Framework versions
- PEFT 0.7.0.dev0
| {"library_name": "peft", "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0"} | bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned-adapters_Aleatoric_tiny_0.8_Seed104 | null | [
"peft",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | 2024-04-25T03:47:31+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.7.0.dev0"
] |