| Column          | Dtype         | Stats      |
|:----------------|:--------------|:-----------|
| pipeline_tag    | stringclasses | 48 values  |
| library_name    | stringclasses | 198 values |
| text            | stringlengths | 1–900k     |
| metadata        | stringlengths | 2–438k     |
| id              | stringlengths | 5–122      |
| last_modified   | null          |            |
| tags            | listlengths   | 1–1.84k    |
| sha             | null          |            |
| created_at      | stringlengths | 25–25      |
| arxiv           | listlengths   | 0–201      |
| languages       | listlengths   | 0–1.83k    |
| tags_str        | stringlengths | 17–9.34k   |
| text_str        | stringlengths | 0–389k     |
| text_lists      | listlengths   | 0–722      |
| processed_texts | listlengths   | 1–723      |
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# shawgpt-ft

This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4459

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9892        | 1.0   | 18   | 1.7747          |
| 1.4194        | 2.0   | 36   | 1.5826          |
| 1.2718        | 3.0   | 54   | 1.4822          |
| 1.1567        | 4.0   | 72   | 1.4264          |
| 1.0813        | 5.0   | 90   | 1.4069          |
| 1.0171        | 6.0   | 108  | 1.4098          |
| 0.9646        | 7.0   | 126  | 1.4167          |
| 0.9317        | 8.0   | 144  | 1.4269          |
| 0.895         | 9.0   | 162  | 1.4340          |
| 0.876         | 10.0  | 180  | 1.4459          |

### Framework versions

- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
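As a hedged usage sketch (an editorial addition, not part of the original card): inference with this adapter would typically follow the standard PEFT pattern below. The adapter and base-model ids come from this card; the prompt and decoding settings are placeholders, and the GPTQ base additionally assumes the optimum/auto-gptq stack is installed.

```py
# Minimal inference sketch for this adapter. Assumptions: optimum/auto-gptq
# installed for the GPTQ base model; prompt and settings are illustrative.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# AutoPeftModelForCausalLM reads the adapter config, loads the base model
# (TheBloke/Mistral-7B-Instruct-v0.2-GPTQ), and attaches the LoRA weights.
model = AutoPeftModelForCausalLM.from_pretrained("elixirofdeath/shawgpt-ft", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("TheBloke/Mistral-7B-Instruct-v0.2-GPTQ")

inputs = tokenizer("[INST] Hello! [/INST]", return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```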
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "model-index": [{"name": "shawgpt-ft", "results": []}]}
elixirofdeath/shawgpt-ft
null
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "license:apache-2.0", "region:us" ]
null
2024-04-15T16:23:50+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.2-GPTQ #license-apache-2.0 #region-us
shawgpt-ft ========== This model is a fine-tuned version of TheBloke/Mistral-7B-Instruct-v0.2-GPTQ on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.4459 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0002 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2 * num\_epochs: 10 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.38.2 * Pytorch 2.1.0+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.38.2\n* Pytorch 2.1.0+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.2-GPTQ #license-apache-2.0 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.38.2\n* Pytorch 2.1.0+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-to-image
diffusers
# Pokemon Trainer Sprite PixelArt

<Gallery />

([CivitAI](https://civitai.com/models/159333))

## Model description

<h1 id="heading-524">Pokémon Trainer Sprite Generator</h1><p>LIKE THIS MODEL? GIVE A ❤️</p><p>Trained on Pokémon sprites of 96x96 and BLIP image captions.</p><h2 id="heading-525">Quick Tips</h2><ul><li><p><strong>No Trigger Word Needed:</strong> This model does not require a trigger word.</p></li><li><p><strong>Art Styles:</strong> You can use the tags <code>gen1</code> and <code>gen2</code> to generate different art styles.</p></li><li><p><strong>Trainer Recognition:</strong> The model can recognize some trainer names, such as Ash and Bruno.</p></li><li><p>This LoRA has been tested with Dreamshaper and RealisticVision, but I believe it should work well with other models too.</p></li></ul><h3 id="heading-146">Model Versions</h3><p>This Pokémon Trainer Sprite LoRA is available in two versions: 768x768 and 512x512. Both models are trained on images upscaled from their original 96x96 size and are designed to produce sprites at their respective resolutions, so you can choose the version that fits your workflow.</p><p>For the best results, we recommend the 768x768 version. Set your resolution to 768x768, then downscale the result by a factor of 8 to obtain a 96x96 sprite. This downscaling can be done in any image editor, but for a smoother post-processing experience we recommend the plug-in available at <a target="_blank" rel="ugc" href="https://github.com/mrreplicart/sd-webui-pixelart">https://github.com/mrreplicart/sd-webui-pixelart</a>.</p><p><span style="color:rgb(255, 255, 255)">Enjoy your Pokémon Trainer sprite!</span></p>

## Download model

Weights for this model are available in Safetensors format.

[Download](/sWizad/pokemon-trainer-sprite-pixelart/tree/main) them in the Files & versions tab.

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the SDXL base pipeline, then attach this repo's LoRA weights.
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('sWizad/pokemon-trainer-sprite-pixelart', weight_name='pk_trainer_xl_v1.safetensors')
image = pipeline('1girl, solo, hood, simple background').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
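The 8× downscale step described above is also easy to do programmatically. The sketch below is an editorial addition (the file paths are hypothetical) using Pillow with nearest-neighbor resampling, which preserves hard pixel edges.

```py
# Downscale a 768x768 render by a factor of 8 to a 96x96 sprite.
# "sprite_768.png" / "sprite_96.png" are placeholder paths.
from PIL import Image

img = Image.open("sprite_768.png")            # the 768x768 generation
sprite = img.resize((96, 96), Image.NEAREST)  # 768 / 8 = 96; NEAREST keeps edges crisp
sprite.save("sprite_96.png")
```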
{"license": "other", "tags": ["text-to-image", "stable-diffusion", "lora", "diffusers", "template:sd-lora", "migrated", "pixel art", "sprites", "pixel", "style", "pokemon", "game cg", "video game art"], "license_name": "bespoke-lora-trained-license", "license_link": "https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Image&allowDerivatives=True&allowDifferentLicense=True", "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "widget": [{"text": "harley quinn, simple background", "output": {"url": "9737393.jpeg"}}, {"text": "a woman with blonde hair, wearing a white summer dress, simple background", "output": {"url": "9737398.jpeg"}}, {"text": "maleficent, evil fairy, simple background", "output": {"url": "9737407.jpeg"}}, {"text": "Daenerys Targaryen, Game of Thrones, simple background", "output": {"url": "9737466.jpeg"}}, {"text": "joker, simple background", "output": {"url": "9737478.jpeg"}}, {"text": "jack sparrow, pirates of caribbean ", "output": {"url": "9737508.jpeg"}}, {"text": "iron man ", "output": {"url": "9737483.jpeg"}}, {"text": "deadpool ", "output": {"url": "9737471.jpeg"}}, {"text": "neo the matrix ", "output": {"url": "9737470.jpeg"}}, {"text": "charlie chaplin ", "output": {"url": "9737472.jpeg"}}, {"text": "Super Mario stepped on a turtle ", "output": {"url": "9737429.jpeg"}}, {"text": "link of zelda, with a sword ", "output": {"url": "9737481.jpeg"}}, {"text": "1girl angel with 2 large angel wings and a halo, perfect curves, wearing a white dress, ", "output": {"url": "9737509.jpeg"}}, {"text": "woman, beautiful kimono, sakura ", "output": {"url": "9737499.jpeg"}}, {"text": "1girl, glasses, witch, hat, blue flame ", "output": {"url": "9737502.jpeg"}}, {"text": "1girl, solo, hood, "}]}
sWizad/pokemon-trainer-sprite-pixelart
null
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "migrated", "pixel art", "sprites", "pixel", "style", "pokemon", "game cg", "video game art", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:other", "has_space", "region:us" ]
null
2024-04-15T16:27:10+00:00
[]
[]
TAGS #diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #migrated #pixel art #sprites #pixel #style #pokemon #game cg #video game art #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-other #has_space #region-us
# Pokemon Trainer Sprite PixelArt <Gallery /> (CivitAI) ## Model description <h1 id="heading-524">Pokémon Trainer Sprite Generator</h1><p>LIKE THIS MODEL? GIVE A ️</p><p>Trained on Pokémon sprites of 96x96 and BLIP image captions.</p><h2 id="heading-525">Quick Tips</h2><ul><li><p><strong>No Trigger Word Needed:</strong> This model does not require a trigger word.</p></li><li><p><strong>Art Styles:</strong> You can use the tags <code>gen1</code> and <code>gen2</code> to generate different art styles.</p></li><li><p><strong>Trainer Recognition:</strong> The model can recognize some trainer names, such as Ash and Bruno.</p></li><li><p>This LoRA has been tested with Dreamshaper and RealisticVision, but I belive that it should work well with other models too.</p></li></ul><p></p><h3 id="heading-146">Model Versions</h3><p>This Pokémon Trainer Sprite LoRA is available in two different versions: 768x768 and 512x512. Both models are trained on images upscaled from their original 96x96 size and are specifically designed to produce sprites at their respective resolutions. The choice between these two versions allows users to customize their experience based on their specific needs.</p><p></p><p>For the best results, we recommend using the 768x768 version. Set your desired resolution to 768x768, and then downscale the result 8 times to achieve a 96x96 sprite. This downscaling process can be performed using any image editing program, but for a smoother post-processing experience, we recommend using a plug-in available at (<a target="_blank" rel="ugc" href="URL/URL style="color:rgb(255, 255, 255)">Enjoy your Pokémon Trainer sprite!</span></p><p></p><p></p> ## Download model Weights for this model are available in Safetensors format. Download them in the Files & versions tab. ## Use it with the diffusers library For more details, including weighting, merging and fusing LoRAs, check the documentation on loading LoRAs in diffusers
[ "# Pokemon Trainer Sprite PixelArt \n\n<Gallery />\n\n\n\n(CivitAI)", "## Model description\n\n<h1 id=\"heading-524\">Pokémon Trainer Sprite Generator</h1><p>LIKE THIS MODEL? GIVE A ️</p><p>Trained on Pokémon sprites of 96x96 and BLIP image captions.</p><h2 id=\"heading-525\">Quick Tips</h2><ul><li><p><strong>No Trigger Word Needed:</strong> This model does not require a trigger word.</p></li><li><p><strong>Art Styles:</strong> You can use the tags <code>gen1</code> and <code>gen2</code> to generate different art styles.</p></li><li><p><strong>Trainer Recognition:</strong> The model can recognize some trainer names, such as Ash and Bruno.</p></li><li><p>This LoRA has been tested with Dreamshaper and RealisticVision, but I belive that it should work well with other models too.</p></li></ul><p></p><h3 id=\"heading-146\">Model Versions</h3><p>This Pokémon Trainer Sprite LoRA is available in two different versions: 768x768 and 512x512. Both models are trained on images upscaled from their original 96x96 size and are specifically designed to produce sprites at their respective resolutions. The choice between these two versions allows users to customize their experience based on their specific needs.</p><p></p><p>For the best results, we recommend using the 768x768 version. Set your desired resolution to 768x768, and then downscale the result 8 times to achieve a 96x96 sprite. This downscaling process can be performed using any image editing program, but for a smoother post-processing experience, we recommend using a plug-in available at (<a target=\"_blank\" rel=\"ugc\" href=\"URL/URL style=\"color:rgb(255, 255, 255)\">Enjoy your Pokémon Trainer sprite!</span></p><p></p><p></p>", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.", "## Use it with the diffusers library\n\n\n\nFor more details, including weighting, merging and fusing LoRAs, check the documentation on loading LoRAs in diffusers" ]
[ "TAGS\n#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #migrated #pixel art #sprites #pixel #style #pokemon #game cg #video game art #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-other #has_space #region-us \n", "# Pokemon Trainer Sprite PixelArt \n\n<Gallery />\n\n\n\n(CivitAI)", "## Model description\n\n<h1 id=\"heading-524\">Pokémon Trainer Sprite Generator</h1><p>LIKE THIS MODEL? GIVE A ️</p><p>Trained on Pokémon sprites of 96x96 and BLIP image captions.</p><h2 id=\"heading-525\">Quick Tips</h2><ul><li><p><strong>No Trigger Word Needed:</strong> This model does not require a trigger word.</p></li><li><p><strong>Art Styles:</strong> You can use the tags <code>gen1</code> and <code>gen2</code> to generate different art styles.</p></li><li><p><strong>Trainer Recognition:</strong> The model can recognize some trainer names, such as Ash and Bruno.</p></li><li><p>This LoRA has been tested with Dreamshaper and RealisticVision, but I belive that it should work well with other models too.</p></li></ul><p></p><h3 id=\"heading-146\">Model Versions</h3><p>This Pokémon Trainer Sprite LoRA is available in two different versions: 768x768 and 512x512. Both models are trained on images upscaled from their original 96x96 size and are specifically designed to produce sprites at their respective resolutions. The choice between these two versions allows users to customize their experience based on their specific needs.</p><p></p><p>For the best results, we recommend using the 768x768 version. Set your desired resolution to 768x768, and then downscale the result 8 times to achieve a 96x96 sprite. This downscaling process can be performed using any image editing program, but for a smoother post-processing experience, we recommend using a plug-in available at (<a target=\"_blank\" rel=\"ugc\" href=\"URL/URL style=\"color:rgb(255, 255, 255)\">Enjoy your Pokémon Trainer sprite!</span></p><p></p><p></p>", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.", "## Use it with the diffusers library\n\n\n\nFor more details, including weighting, merging and fusing LoRAs, check the documentation on loading LoRAs in diffusers" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Whisper Small Vietnamese

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_16_0 vi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6705
- Wer: 24.5680

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 2500
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0174        | 33.0  | 500  | 0.6207          | 24.6696 |
| 0.0045        | 66.0  | 1000 | 0.6705          | 24.5680 |
| 0.0027        | 99.01 | 1500 | 0.6945          | 25.2795 |
| 0.002         | 133.0 | 2000 | 0.7079          | 26.4790 |
| 0.0018        | 166.0 | 2500 | 0.7127          | 26.3976 |

### Framework versions

- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.2.dev0
- Tokenizers 0.15.0
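As a hedged usage sketch (not in the original card): the checkpoint can be run through the transformers automatic-speech-recognition pipeline. The repo id comes from this row; the audio file path is a placeholder.

```py
# Transcribe Vietnamese speech with this fine-tuned Whisper checkpoint.
# "sample_vi.wav" is a placeholder audio file.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="arun100/whisper-small-vi-2")
result = asr("sample_vi.wav")
print(result["text"])
```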
{"language": ["vi"], "license": "apache-2.0", "tags": ["whisper-event", "generated_from_trainer"], "datasets": ["mozilla-foundation/common_voice_16_0"], "metrics": ["wer"], "base_model": "openai/whisper-small", "model-index": [{"name": "Whisper Small Vienamese", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "mozilla-foundation/common_voice_16_0 vi", "type": "mozilla-foundation/common_voice_16_0", "config": "vi", "split": "test", "args": "vi"}, "metrics": [{"type": "wer", "value": 24.56800162634682, "name": "Wer"}]}]}]}
arun100/whisper-small-vi-2
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "vi", "dataset:mozilla-foundation/common_voice_16_0", "base_model:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2024-04-15T16:28:30+00:00
[]
[ "vi" ]
TAGS #transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #whisper-event #generated_from_trainer #vi #dataset-mozilla-foundation/common_voice_16_0 #base_model-openai/whisper-small #license-apache-2.0 #model-index #endpoints_compatible #region-us
Whisper Small Vienamese ======================= This model is a fine-tuned version of openai/whisper-small on the mozilla-foundation/common\_voice\_16\_0 vi dataset. It achieves the following results on the evaluation set: * Loss: 0.6705 * Wer: 24.5680 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-06 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 64 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 50 * training\_steps: 2500 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.37.0.dev0 * Pytorch 2.1.2+cu121 * Datasets 2.16.2.dev0 * Tokenizers 0.15.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-06\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 50\n* training\\_steps: 2500\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.37.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.2.dev0\n* Tokenizers 0.15.0" ]
[ "TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #whisper-event #generated_from_trainer #vi #dataset-mozilla-foundation/common_voice_16_0 #base_model-openai/whisper-small #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-06\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 50\n* training\\_steps: 2500\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.37.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.2.dev0\n* Tokenizers 0.15.0" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Tobistd/lora-13b
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-15T16:32:37+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
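The card template above leaves its getting-started section blank. As a hedged sketch grounded only in this row's tags (mistral, text-generation), loading and prompting the checkpoint would typically look like the following; the prompt and decoding settings are illustrative.

```py
# Illustrative text generation with this checkpoint (assumption from the repo
# tags: a standard Mistral causal-LM; prompt and settings are placeholders).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kai-oh/mistral-7b-dpo-v3-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("Explain direct preference optimization in one sentence.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```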
{"library_name": "transformers", "tags": ["trl", "dpo"]}
kai-oh/mistral-7b-dpo-v3-hf
null
[ "transformers", "safetensors", "mistral", "text-generation", "trl", "dpo", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-15T16:32:59+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #trl #dpo #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #trl #dpo #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
reinforcement-learning
null
# **Reinforce** Agent playing **CartPole-v1**

This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
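For context, an editorial sketch (not this repo's exact code): a Reinforce agent for CartPole-v1 is built around a small softmax policy network like the one below; the hidden size and architecture are illustrative.

```py
# Minimal REINFORCE-style policy for CartPole-v1 (4 observations -> 2 actions).
# Layer sizes are illustrative, not this repo's exact configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Categorical

class Policy(nn.Module):
    def __init__(self, s_size=4, a_size=2, h_size=16):
        super().__init__()
        self.fc1 = nn.Linear(s_size, h_size)
        self.fc2 = nn.Linear(h_size, a_size)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        return F.softmax(self.fc2(x), dim=1)

    def act(self, state):
        # Sample an action and keep its log-probability for the policy-gradient update.
        probs = self.forward(torch.from_numpy(state).float().unsqueeze(0))
        dist = Categorical(probs)
        action = dist.sample()
        return action.item(), dist.log_prob(action)
```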
{"tags": ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "Cartpole_New", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "CartPole-v1", "type": "CartPole-v1"}, "metrics": [{"type": "mean_reward", "value": "1731.70 +/- 887.41", "name": "mean_reward", "verified": false}]}]}]}
pdx97/Cartpole_New
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
null
2024-04-15T16:33:27+00:00
[]
[]
TAGS #CartPole-v1 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us
# Reinforce Agent playing CartPole-v1 This is a trained model of a Reinforce agent playing CartPole-v1 . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL
[ "# Reinforce Agent playing CartPole-v1\n This is a trained model of a Reinforce agent playing CartPole-v1 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL" ]
[ "TAGS\n#CartPole-v1 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us \n", "# Reinforce Agent playing CartPole-v1\n This is a trained model of a Reinforce agent playing CartPole-v1 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
TungChill/vinallama-peft-7b-math-solver
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-15T16:33:27+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# GUE_prom_prom_core_notata-seqsight_4096_512_27M-L32_all

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_27M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_27M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5518
- F1 Score: 0.7270
- Accuracy: 0.7270

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6071        | 9.52   | 200   | 0.5450          | 0.7315   | 0.7317   |
| 0.5388        | 19.05  | 400   | 0.5361          | 0.7383   | 0.7383   |
| 0.5073        | 28.57  | 600   | 0.5366          | 0.7439   | 0.7439   |
| 0.4786        | 38.1   | 800   | 0.5393          | 0.7472   | 0.7471   |
| 0.4544        | 47.62  | 1000  | 0.5340          | 0.7509   | 0.7509   |
| 0.4359        | 57.14  | 1200  | 0.5453          | 0.7474   | 0.7477   |
| 0.4225        | 66.67  | 1400  | 0.5573          | 0.7495   | 0.7496   |
| 0.4101        | 76.19  | 1600  | 0.5487          | 0.7464   | 0.7464   |
| 0.3986        | 85.71  | 1800  | 0.5792          | 0.7387   | 0.7402   |
| 0.3883        | 95.24  | 2000  | 0.5825          | 0.7466   | 0.7467   |
| 0.378         | 104.76 | 2200  | 0.5858          | 0.7477   | 0.7477   |
| 0.3677        | 114.29 | 2400  | 0.6293          | 0.7363   | 0.7381   |
| 0.359         | 123.81 | 2600  | 0.5917          | 0.7432   | 0.7432   |
| 0.3483        | 133.33 | 2800  | 0.6135          | 0.7425   | 0.7426   |
| 0.3405        | 142.86 | 3000  | 0.6240          | 0.7373   | 0.7383   |
| 0.3309        | 152.38 | 3200  | 0.6215          | 0.7379   | 0.7381   |
| 0.3229        | 161.9  | 3400  | 0.6631          | 0.7392   | 0.7402   |
| 0.314         | 171.43 | 3600  | 0.6738          | 0.7394   | 0.7405   |
| 0.3054        | 180.95 | 3800  | 0.6849          | 0.7305   | 0.7322   |
| 0.2988        | 190.48 | 4000  | 0.6853          | 0.7313   | 0.7326   |
| 0.2925        | 200.0  | 4200  | 0.6576          | 0.7360   | 0.7362   |
| 0.2844        | 209.52 | 4400  | 0.6832          | 0.7293   | 0.7302   |
| 0.2791        | 219.05 | 4600  | 0.7047          | 0.7320   | 0.7332   |
| 0.2736        | 228.57 | 4800  | 0.6978          | 0.7302   | 0.7307   |
| 0.2673        | 238.1  | 5000  | 0.7100          | 0.7286   | 0.7294   |
| 0.2633        | 247.62 | 5200  | 0.6959          | 0.7310   | 0.7313   |
| 0.2561        | 257.14 | 5400  | 0.7307          | 0.7291   | 0.7300   |
| 0.2517        | 266.67 | 5600  | 0.7482          | 0.7289   | 0.7298   |
| 0.2476        | 276.19 | 5800  | 0.7762          | 0.7264   | 0.7277   |
| 0.2425        | 285.71 | 6000  | 0.7455          | 0.7277   | 0.7283   |
| 0.2383        | 295.24 | 6200  | 0.7643          | 0.7260   | 0.7268   |
| 0.2349        | 304.76 | 6400  | 0.7796          | 0.7265   | 0.7279   |
| 0.2321        | 314.29 | 6600  | 0.7872          | 0.7258   | 0.7272   |
| 0.2284        | 323.81 | 6800  | 0.7758          | 0.7306   | 0.7313   |
| 0.2248        | 333.33 | 7000  | 0.7942          | 0.7291   | 0.7298   |
| 0.2219        | 342.86 | 7200  | 0.7972          | 0.7254   | 0.7268   |
| 0.2183        | 352.38 | 7400  | 0.7972          | 0.7293   | 0.7300   |
| 0.2161        | 361.9  | 7600  | 0.8059          | 0.7285   | 0.7294   |
| 0.2127        | 371.43 | 7800  | 0.7990          | 0.7261   | 0.7270   |
| 0.2113        | 380.95 | 8000  | 0.8044          | 0.7274   | 0.7283   |
| 0.2079        | 390.48 | 8200  | 0.8066          | 0.7254   | 0.7260   |
| 0.2073        | 400.0  | 8400  | 0.8116          | 0.7284   | 0.7288   |
| 0.2062        | 409.52 | 8600  | 0.8237          | 0.7277   | 0.7287   |
| 0.2035        | 419.05 | 8800  | 0.8205          | 0.7251   | 0.7262   |
| 0.2024        | 428.57 | 9000  | 0.8299          | 0.7244   | 0.7255   |
| 0.2007        | 438.1  | 9200  | 0.8260          | 0.7252   | 0.7260   |
| 0.1997        | 447.62 | 9400  | 0.8304          | 0.7259   | 0.7268   |
| 0.1997        | 457.14 | 9600  | 0.8372          | 0.7243   | 0.7255   |
| 0.1979        | 466.67 | 9800  | 0.8309          | 0.7254   | 0.7262   |
| 0.1979        | 476.19 | 10000 | 0.8322          | 0.7237   | 0.7247   |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
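The card above has no usage snippet yet. Below is a minimal loading sketch, not a confirmed recipe: it assumes the base checkpoint works with `AutoModelForSequenceClassification` (possibly requiring `trust_remote_code=True`) and that the task is binary promoter classification, neither of which the card states.

```python
# Hedged sketch: attach the PEFT adapter to the base checkpoint for inference.
# Assumptions (not stated in the card): sequence-classification head, 2 labels.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_4096_512_27M"
adapter_id = "mahdibaghbanzadeh/GUE_prom_prom_core_notata-seqsight_4096_512_27M-L32_all"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id)  # wraps the base with the adapter
model.eval()

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # toy DNA sequence
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```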
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_27M", "model-index": [{"name": "GUE_prom_prom_core_notata-seqsight_4096_512_27M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_core_notata-seqsight_4096_512_27M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_27M", "region:us" ]
null
2024-04-15T16:34:17+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_27M #region-us
GUE\_prom\_prom\_core\_notata-seqsight\_4096\_512\_27M-L32\_all
===============================================================

This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_27M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_notata dataset.
It achieves the following results on the evaluation set:

* Loss: 0.5518
* F1 Score: 0.7270
* Accuracy: 0.7270

Model description
-----------------

More information needed

Intended uses & limitations
---------------------------

More information needed

Training and evaluation data
----------------------------

More information needed

Training procedure
------------------

### Training hyperparameters

The following hyperparameters were used during training:

* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000

### Training results

### Framework versions

* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_27M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# GUE_prom_prom_300_notata-seqsight_4096_512_27M-L32_all

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_27M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_27M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4965
- F1 Score: 0.8754
- Accuracy: 0.8754

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 1536
- eval_batch_size: 1536
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5207        | 7.14   | 200   | 0.4102          | 0.8104   | 0.8108   |
| 0.3992        | 14.29  | 400   | 0.3780          | 0.8294   | 0.8295   |
| 0.346         | 21.43  | 600   | 0.3306          | 0.8550   | 0.8553   |
| 0.2931        | 28.57  | 800   | 0.3126          | 0.8696   | 0.8696   |
| 0.2607        | 35.71  | 1000  | 0.3126          | 0.8703   | 0.8705   |
| 0.238         | 42.86  | 1200  | 0.3216          | 0.8690   | 0.8690   |
| 0.2179        | 50.0   | 1400  | 0.3243          | 0.8715   | 0.8715   |
| 0.1985        | 57.14  | 1600  | 0.3380          | 0.8675   | 0.8675   |
| 0.1851        | 64.29  | 1800  | 0.3432          | 0.8709   | 0.8709   |
| 0.1718        | 71.43  | 2000  | 0.3585          | 0.8658   | 0.8658   |
| 0.1624        | 78.57  | 2200  | 0.3526          | 0.8730   | 0.8730   |
| 0.1524        | 85.71  | 2400  | 0.3708          | 0.8732   | 0.8732   |
| 0.1453        | 92.86  | 2600  | 0.3630          | 0.8775   | 0.8775   |
| 0.1395        | 100.0  | 2800  | 0.3572          | 0.8742   | 0.8743   |
| 0.1318        | 107.14 | 3000  | 0.3847          | 0.8756   | 0.8756   |
| 0.1277        | 114.29 | 3200  | 0.3783          | 0.8736   | 0.8736   |
| 0.1237        | 121.43 | 3400  | 0.4004          | 0.8747   | 0.8747   |
| 0.1183        | 128.57 | 3600  | 0.4130          | 0.8728   | 0.8728   |
| 0.1156        | 135.71 | 3800  | 0.4304          | 0.8718   | 0.8719   |
| 0.111         | 142.86 | 4000  | 0.4452          | 0.8708   | 0.8709   |
| 0.1081        | 150.0  | 4200  | 0.4117          | 0.8739   | 0.8739   |
| 0.105         | 157.14 | 4400  | 0.4191          | 0.8743   | 0.8743   |
| 0.1015        | 164.29 | 4600  | 0.4076          | 0.8747   | 0.8747   |
| 0.0998        | 171.43 | 4800  | 0.4255          | 0.8741   | 0.8741   |
| 0.096         | 178.57 | 5000  | 0.4170          | 0.8700   | 0.8700   |
| 0.0944        | 185.71 | 5200  | 0.4451          | 0.8739   | 0.8739   |
| 0.093         | 192.86 | 5400  | 0.4494          | 0.8700   | 0.8700   |
| 0.0892        | 200.0  | 5600  | 0.4447          | 0.8734   | 0.8734   |
| 0.0879        | 207.14 | 5800  | 0.4387          | 0.8754   | 0.8754   |
| 0.0877        | 214.29 | 6000  | 0.4489          | 0.8709   | 0.8709   |
| 0.0862        | 221.43 | 6200  | 0.4413          | 0.8766   | 0.8766   |
| 0.0844        | 228.57 | 6400  | 0.4482          | 0.8743   | 0.8743   |
| 0.0833        | 235.71 | 6600  | 0.4478          | 0.8709   | 0.8709   |
| 0.0821        | 242.86 | 6800  | 0.4521          | 0.8726   | 0.8726   |
| 0.0801        | 250.0  | 7000  | 0.4728          | 0.8705   | 0.8705   |
| 0.0794        | 257.14 | 7200  | 0.4559          | 0.8717   | 0.8717   |
| 0.0784        | 264.29 | 7400  | 0.4691          | 0.8711   | 0.8711   |
| 0.0773        | 271.43 | 7600  | 0.4665          | 0.8753   | 0.8753   |
| 0.0768        | 278.57 | 7800  | 0.4512          | 0.8786   | 0.8787   |
| 0.075         | 285.71 | 8000  | 0.4813          | 0.8736   | 0.8736   |
| 0.0741        | 292.86 | 8200  | 0.4745          | 0.8715   | 0.8715   |
| 0.0736        | 300.0  | 8400  | 0.4731          | 0.8730   | 0.8730   |
| 0.0725        | 307.14 | 8600  | 0.4605          | 0.8758   | 0.8758   |
| 0.0719        | 314.29 | 8800  | 0.4732          | 0.8728   | 0.8728   |
| 0.0717        | 321.43 | 9000  | 0.4801          | 0.8745   | 0.8745   |
| 0.0708        | 328.57 | 9200  | 0.4864          | 0.8753   | 0.8753   |
| 0.0703        | 335.71 | 9400  | 0.4813          | 0.8739   | 0.8739   |
| 0.071         | 342.86 | 9600  | 0.4805          | 0.8724   | 0.8724   |
| 0.0697        | 350.0  | 9800  | 0.4828          | 0.8724   | 0.8724   |
| 0.0703        | 357.14 | 10000 | 0.4832          | 0.8734   | 0.8734   |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
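For readers who want to reproduce the run, the hyperparameters listed above map onto `transformers.TrainingArguments` roughly as follows. This is a sketch only: dataset preparation and the `Trainer` wiring are omitted, and `eval_steps=200` is inferred from the evaluation cadence in the results table rather than stated explicitly.

```python
# Sketch of TrainingArguments mirroring the hyperparameters in this card.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="GUE_prom_prom_300_notata-seqsight_4096_512_27M-L32_all",
    learning_rate=5e-4,
    per_device_train_batch_size=1536,
    per_device_eval_batch_size=1536,
    seed=42,
    max_steps=10_000,             # "training_steps: 10000"
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",  # inferred: the table evaluates every 200 steps
    eval_steps=200,
)
```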
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_27M", "model-index": [{"name": "GUE_prom_prom_300_notata-seqsight_4096_512_27M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_300_notata-seqsight_4096_512_27M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_27M", "region:us" ]
null
2024-04-15T16:34:17+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_27M #region-us
GUE\_prom\_prom\_300\_notata-seqsight\_4096\_512\_27M-L32\_all
==============================================================

This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_27M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_notata dataset.
It achieves the following results on the evaluation set:

* Loss: 0.4965
* F1 Score: 0.8754
* Accuracy: 0.8754

Model description
-----------------

More information needed

Intended uses & limitations
---------------------------

More information needed

Training and evaluation data
----------------------------

More information needed

Training procedure
------------------

### Training hyperparameters

The following hyperparameters were used during training:

* learning\_rate: 0.0005
* train\_batch\_size: 1536
* eval\_batch\_size: 1536
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000

### Training results

### Framework versions

* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_27M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# GUE_prom_prom_core_all-seqsight_4096_512_27M-L32_all

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_27M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_27M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_all) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5810
- F1 Score: 0.7307
- Accuracy: 0.7307

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6203        | 8.33   | 200   | 0.5813          | 0.6933   | 0.6939   |
| 0.5549        | 16.67  | 400   | 0.5670          | 0.7125   | 0.7127   |
| 0.5292        | 25.0   | 600   | 0.5626          | 0.7139   | 0.7142   |
| 0.5066        | 33.33  | 800   | 0.5617          | 0.7205   | 0.7209   |
| 0.489         | 41.67  | 1000  | 0.5678          | 0.7257   | 0.7258   |
| 0.473         | 50.0   | 1200  | 0.5808          | 0.7249   | 0.725    |
| 0.46          | 58.33  | 1400  | 0.5762          | 0.7198   | 0.7201   |
| 0.45          | 66.67  | 1600  | 0.5738          | 0.7200   | 0.7204   |
| 0.441         | 75.0   | 1800  | 0.5794          | 0.7278   | 0.7279   |
| 0.4326        | 83.33  | 2000  | 0.5867          | 0.7235   | 0.7236   |
| 0.4247        | 91.67  | 2200  | 0.5958          | 0.7133   | 0.7139   |
| 0.4173        | 100.0  | 2400  | 0.5762          | 0.7235   | 0.7235   |
| 0.4093        | 108.33 | 2600  | 0.5918          | 0.7191   | 0.7193   |
| 0.4026        | 116.67 | 2800  | 0.5992          | 0.7208   | 0.7208   |
| 0.394         | 125.0  | 3000  | 0.6131          | 0.7172   | 0.7172   |
| 0.3872        | 133.33 | 3200  | 0.6033          | 0.7126   | 0.7130   |
| 0.3808        | 141.67 | 3400  | 0.6282          | 0.7073   | 0.7083   |
| 0.3716        | 150.0  | 3600  | 0.6245          | 0.7110   | 0.7123   |
| 0.3648        | 158.33 | 3800  | 0.6449          | 0.7087   | 0.7100   |
| 0.3617        | 166.67 | 4000  | 0.6298          | 0.7085   | 0.7090   |
| 0.3539        | 175.0  | 4200  | 0.6397          | 0.7140   | 0.7145   |
| 0.3468        | 183.33 | 4400  | 0.6483          | 0.7150   | 0.7154   |
| 0.34          | 191.67 | 4600  | 0.6614          | 0.7108   | 0.7115   |
| 0.3353        | 200.0  | 4800  | 0.6441          | 0.7139   | 0.7142   |
| 0.3279        | 208.33 | 5000  | 0.6638          | 0.7070   | 0.7074   |
| 0.3255        | 216.67 | 5200  | 0.6722          | 0.7079   | 0.7088   |
| 0.3212        | 225.0  | 5400  | 0.6809          | 0.7108   | 0.7115   |
| 0.3139        | 233.33 | 5600  | 0.6879          | 0.7129   | 0.7133   |
| 0.3105        | 241.67 | 5800  | 0.6939          | 0.7070   | 0.7078   |
| 0.3055        | 250.0  | 6000  | 0.6889          | 0.7103   | 0.7105   |
| 0.3018        | 258.33 | 6200  | 0.6967          | 0.7094   | 0.7101   |
| 0.2973        | 266.67 | 6400  | 0.7125          | 0.7105   | 0.7113   |
| 0.2907        | 275.0  | 6600  | 0.7144          | 0.7065   | 0.7074   |
| 0.29          | 283.33 | 6800  | 0.7149          | 0.7053   | 0.7059   |
| 0.2846        | 291.67 | 7000  | 0.7159          | 0.7063   | 0.7069   |
| 0.2825        | 300.0  | 7200  | 0.7137          | 0.7055   | 0.7057   |
| 0.2797        | 308.33 | 7400  | 0.7336          | 0.7053   | 0.7068   |
| 0.2766        | 316.67 | 7600  | 0.7338          | 0.7046   | 0.7054   |
| 0.2741        | 325.0  | 7800  | 0.7489          | 0.7029   | 0.7047   |
| 0.2724        | 333.33 | 8000  | 0.7255          | 0.7039   | 0.7047   |
| 0.2707        | 341.67 | 8200  | 0.7413          | 0.7027   | 0.7032   |
| 0.2672        | 350.0  | 8400  | 0.7366          | 0.7037   | 0.7042   |
| 0.2651        | 358.33 | 8600  | 0.7438          | 0.7048   | 0.7052   |
| 0.2642        | 366.67 | 8800  | 0.7376          | 0.7030   | 0.7039   |
| 0.2613        | 375.0  | 9000  | 0.7602          | 0.7017   | 0.7029   |
| 0.26          | 383.33 | 9200  | 0.7496          | 0.7008   | 0.7019   |
| 0.2591        | 391.67 | 9400  | 0.7475          | 0.7014   | 0.7024   |
| 0.2585        | 400.0  | 9600  | 0.7492          | 0.7035   | 0.7042   |
| 0.2575        | 408.33 | 9800  | 0.7560          | 0.7040   | 0.7051   |
| 0.259         | 416.67 | 10000 | 0.7493          | 0.7037   | 0.7046   |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
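The F1 Score and Accuracy columns above are nearly identical, which is expected for a roughly balanced binary split. A `compute_metrics` callback along the following lines would produce both columns; the F1 averaging mode is an assumption, since the card does not specify it.

```python
# Sketch of a compute_metrics callback producing the two reported columns.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "f1": f1_score(labels, preds, average="macro"),  # averaging mode assumed
        "accuracy": accuracy_score(labels, preds),
    }
```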
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_27M", "model-index": [{"name": "GUE_prom_prom_core_all-seqsight_4096_512_27M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_core_all-seqsight_4096_512_27M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_27M", "region:us" ]
null
2024-04-15T16:34:17+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_27M #region-us
GUE\_prom\_prom\_core\_all-seqsight\_4096\_512\_27M-L32\_all
============================================================

This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_27M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_all dataset.
It achieves the following results on the evaluation set:

* Loss: 0.5810
* F1 Score: 0.7307
* Accuracy: 0.7307

Model description
-----------------

More information needed

Intended uses & limitations
---------------------------

More information needed

Training and evaluation data
----------------------------

More information needed

Training procedure
------------------

### Training hyperparameters

The following hyperparameters were used during training:

* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000

### Training results

### Framework versions

* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_27M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# GUE_prom_prom_core_tata-seqsight_4096_512_27M-L32_all

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_27M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_27M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_tata) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9531
- F1 Score: 0.7585
- Accuracy: 0.7586

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch   | Step  | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5196        | 66.67   | 200   | 0.6506          | 0.7226   | 0.7227   |
| 0.2541        | 133.33  | 400   | 0.8420          | 0.7390   | 0.7390   |
| 0.1531        | 200.0   | 600   | 0.9914          | 0.7405   | 0.7406   |
| 0.1082        | 266.67  | 800   | 1.1191          | 0.7290   | 0.7292   |
| 0.0846        | 333.33  | 1000  | 1.2113          | 0.7270   | 0.7276   |
| 0.0701        | 400.0   | 1200  | 1.2767          | 0.7324   | 0.7325   |
| 0.0585        | 466.67  | 1400  | 1.2995          | 0.7210   | 0.7210   |
| 0.0493        | 533.33  | 1600  | 1.3673          | 0.7340   | 0.7341   |
| 0.0433        | 600.0   | 1800  | 1.4361          | 0.7222   | 0.7227   |
| 0.0377        | 666.67  | 2000  | 1.4681          | 0.7274   | 0.7276   |
| 0.0341        | 733.33  | 2200  | 1.6181          | 0.7339   | 0.7341   |
| 0.0313        | 800.0   | 2400  | 1.7195          | 0.7372   | 0.7374   |
| 0.0287        | 866.67  | 2600  | 1.6097          | 0.7357   | 0.7357   |
| 0.0247        | 933.33  | 2800  | 1.8208          | 0.7369   | 0.7374   |
| 0.023         | 1000.0  | 3000  | 1.7749          | 0.7325   | 0.7325   |
| 0.0213        | 1066.67 | 3200  | 1.8422          | 0.7371   | 0.7374   |
| 0.0206        | 1133.33 | 3400  | 1.8454          | 0.7357   | 0.7357   |
| 0.0194        | 1200.0  | 3600  | 1.9469          | 0.7368   | 0.7374   |
| 0.0175        | 1266.67 | 3800  | 1.8298          | 0.7373   | 0.7374   |
| 0.0174        | 1333.33 | 4000  | 1.8110          | 0.7390   | 0.7390   |
| 0.0162        | 1400.0  | 4200  | 1.8495          | 0.7419   | 0.7423   |
| 0.016         | 1466.67 | 4400  | 1.8815          | 0.7390   | 0.7390   |
| 0.0148        | 1533.33 | 4600  | 2.0872          | 0.7439   | 0.7439   |
| 0.0146        | 1600.0  | 4800  | 1.9182          | 0.7320   | 0.7325   |
| 0.0137        | 1666.67 | 5000  | 1.8847          | 0.7422   | 0.7423   |
| 0.0134        | 1733.33 | 5200  | 1.8337          | 0.7438   | 0.7439   |
| 0.0128        | 1800.0  | 5400  | 1.7239          | 0.7406   | 0.7406   |
| 0.0126        | 1866.67 | 5600  | 1.8929          | 0.7471   | 0.7471   |
| 0.0129        | 1933.33 | 5800  | 1.8823          | 0.7405   | 0.7406   |
| 0.0117        | 2000.0  | 6000  | 1.9395          | 0.7406   | 0.7406   |
| 0.0113        | 2066.67 | 6200  | 2.0179          | 0.7421   | 0.7423   |
| 0.0115        | 2133.33 | 6400  | 1.8934          | 0.7405   | 0.7406   |
| 0.011         | 2200.0  | 6600  | 1.9815          | 0.7373   | 0.7374   |
| 0.0101        | 2266.67 | 6800  | 1.8747          | 0.7308   | 0.7308   |
| 0.0102        | 2333.33 | 7000  | 1.9115          | 0.7390   | 0.7390   |
| 0.01          | 2400.0  | 7200  | 1.8885          | 0.7373   | 0.7374   |
| 0.0098        | 2466.67 | 7400  | 1.9814          | 0.7383   | 0.7390   |
| 0.0099        | 2533.33 | 7600  | 1.8807          | 0.7339   | 0.7341   |
| 0.0092        | 2600.0  | 7800  | 1.9910          | 0.7389   | 0.7390   |
| 0.0091        | 2666.67 | 8000  | 1.9983          | 0.7422   | 0.7423   |
| 0.0092        | 2733.33 | 8200  | 1.9764          | 0.7357   | 0.7357   |
| 0.0089        | 2800.0  | 8400  | 1.8920          | 0.7371   | 0.7374   |
| 0.0086        | 2866.67 | 8600  | 2.0643          | 0.7373   | 0.7374   |
| 0.0082        | 2933.33 | 8800  | 1.9863          | 0.7340   | 0.7341   |
| 0.0084        | 3000.0  | 9000  | 1.9830          | 0.7406   | 0.7406   |
| 0.0081        | 3066.67 | 9200  | 1.9894          | 0.7356   | 0.7357   |
| 0.0079        | 3133.33 | 9400  | 1.9909          | 0.7390   | 0.7390   |
| 0.0082        | 3200.0  | 9600  | 1.9874          | 0.7357   | 0.7357   |
| 0.0079        | 3266.67 | 9800  | 2.0275          | 0.7373   | 0.7374   |
| 0.008         | 3333.33 | 10000 | 2.0341          | 0.7373   | 0.7374   |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
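Note that on this split the validation loss climbs steadily (roughly 0.65 at step 200 to about 2.0 at step 10000) while the training loss keeps falling, a classic overfitting signature. If retraining, early stopping on validation loss is one mitigation; a hedged sketch using the built-in transformers callback is below (the patience value is arbitrary, and the model and dataset wiring are omitted).

```python
# Sketch: early stopping on validation loss to curb the overfitting
# visible in the results table. Pass `early_stop` to Trainer(callbacks=[...]).
from transformers import TrainingArguments, EarlyStoppingCallback

args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="steps",
    eval_steps=200,
    save_strategy="steps",
    save_steps=200,                     # must match the eval cadence
    load_best_model_at_end=True,        # required by EarlyStoppingCallback
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)
early_stop = EarlyStoppingCallback(early_stopping_patience=5)
```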
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_27M", "model-index": [{"name": "GUE_prom_prom_core_tata-seqsight_4096_512_27M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_core_tata-seqsight_4096_512_27M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_27M", "region:us" ]
null
2024-04-15T16:34:17+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_27M #region-us
GUE\_prom\_prom\_core\_tata-seqsight\_4096\_512\_27M-L32\_all
=============================================================

This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_27M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_tata dataset.
It achieves the following results on the evaluation set:

* Loss: 1.9531
* F1 Score: 0.7585
* Accuracy: 0.7586

Model description
-----------------

More information needed

Intended uses & limitations
---------------------------

More information needed

Training and evaluation data
----------------------------

More information needed

Training procedure
------------------

### Training hyperparameters

The following hyperparameters were used during training:

* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000

### Training results

### Framework versions

* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_27M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
null
## Description

This repo contains GGUF files for the original model.

### Files

- [WizardLM-2-7B_Q2_K.gguf](WizardLM-2-7B_Q2_K.gguf) (2.72 GB) - smallest, significant quality loss - not recommended for most purposes
- [WizardLM-2-7B_Q3_K_S.gguf](WizardLM-2-7B_Q3_K_S.gguf) (3.16 GB) - very small, high quality loss
- [WizardLM-2-7B_Q3_K_M.gguf](WizardLM-2-7B_Q3_K_M.gguf) (3.52 GB) - very small, high quality loss
- [WizardLM-2-7B_Q3_K_L.gguf](WizardLM-2-7B_Q3_K_L.gguf) (3.82 GB) - small, substantial quality loss
- [WizardLM-2-7B_Q4_K_S.gguf](WizardLM-2-7B_Q4_K_S.gguf) (4.14 GB) - small, greater quality loss
- [WizardLM-2-7B_Q4_K_M.gguf](WizardLM-2-7B_Q4_K_M.gguf) (4.37 GB) - medium, balanced quality - recommended
- [WizardLM-2-7B_Q5_K_S.gguf](WizardLM-2-7B_Q5_K_S.gguf) (5.00 GB) - large, low quality loss - recommended
- [WizardLM-2-7B_Q5_K_M.gguf](WizardLM-2-7B_Q5_K_M.gguf) (5.13 GB) - large, very low quality loss - recommended
- [WizardLM-2-7B_Q6_K.gguf](WizardLM-2-7B_Q6_K.gguf) (5.94 GB) - very large, extremely low quality loss
- [WizardLM-2-7B_Q8_0.gguf](WizardLM-2-7B_Q8_0.gguf) (7.70 GB) - very large, extremely low quality loss - not recommended

## Original model description

We introduce and open-source WizardLM-2, our next-generation state-of-the-art large language models, which have improved performance on complex chat, multilingual, reasoning, and agent tasks. The new family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.

- WizardLM-2 8x22B is our most advanced model, demonstrating highly competitive performance against leading proprietary models and consistently outperforming all existing state-of-the-art open-source models.
- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice at its size.
- WizardLM-2 7B is the fastest and achieves performance comparable to leading open-source models 10x its size.

For more details on WizardLM-2, please read our [release blog post](https://wizardlm.github.io/WizardLM2) and upcoming paper.

## Model Details

* **Model name**: WizardLM-2 7B
* **Developed by**: WizardLM@Microsoft AI
* **Base model**: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* **Parameters**: 7B
* **Language(s)**: Multilingual
* **Blog**: [Introducing WizardLM-2](https://wizardlm.github.io/WizardLM2)
* **Repository**: [https://github.com/nlpxucan/WizardLM](https://github.com/nlpxucan/WizardLM)
* **Paper**: WizardLM-2 (upcoming)
* **License**: Apache 2.0

## Model Capacities

**MT-Bench**

We also adopt the automatic MT-Bench evaluation framework based on GPT-4, proposed by lmsys, to assess model performance. WizardLM-2 8x22B demonstrates highly competitive performance even against the most advanced proprietary models, while WizardLM-2 7B and WizardLM-2 70B are the top-performing models among the leading baselines at the 7B to 70B model scales.

<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/mtbench.png" alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>

**Human Preferences Evaluation**

We carefully collected a complex and challenging evaluation set of real-world instructions covering the main human requirements, such as writing, coding, math, reasoning, agent, and multilingual tasks. We report the win:loss rate without ties:

- WizardLM-2 8x22B falls just slightly behind GPT-4-1106-preview and is significantly stronger than Command R Plus and GPT4-0314.
- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.
- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.

<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/winall.png" alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>

## Method Overview

We built a **fully AI-powered synthetic training system** to train the WizardLM-2 models; please refer to our [blog](https://wizardlm.github.io/WizardLM2) for more details of this system.

<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/exp_1.png" alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>

## Usage

❗<b>Note on model system prompts:</b> <b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be as follows:

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s> USER: Who are you? ASSISTANT: I am WizardLM.</s>......
```

<b>Inference WizardLM-2 Demo Script</b>

We provide a WizardLM-2 inference demo [code](https://github.com/nlpxucan/WizardLM/tree/main/demo) on our GitHub.
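The Vicuna-style template above can also be assembled programmatically. A minimal sketch follows; the system line and the `</s>` turn terminator are taken verbatim from the quoted format, and everything else is illustrative.

```python
# Sketch: build the Vicuna-style multi-turn prompt quoted above.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(turns, next_user_msg):
    """turns: list of completed (user, assistant) pairs."""
    parts = [SYSTEM]
    for user, assistant in turns:
        parts.append(f" USER: {user} ASSISTANT: {assistant}</s>")
    parts.append(f" USER: {next_user_msg} ASSISTANT:")
    return "".join(parts)

print(build_prompt([("Hi", "Hello.")], "Who are you?"))
```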
{"language": ["en"], "license": "apache-2.0", "tags": ["gguf", "mistral"], "base_model": "microsoft/WizardLM-2-7B", "inference": false, "model_type": "mistral"}
RikudouSage/WizardLM-2-7B-GGUF
null
[ "gguf", "mistral", "en", "base_model:microsoft/WizardLM-2-7B", "license:apache-2.0", "region:us" ]
null
2024-04-15T16:34:51+00:00
[]
[ "en" ]
TAGS #gguf #mistral #en #base_model-microsoft/WizardLM-2-7B #license-apache-2.0 #region-us
## Description

This repo contains GGUF files for the original model.

### Files

- WizardLM-2-7B_Q2_K.gguf (2.72 GB) - smallest, significant quality loss - not recommended for most purposes
- WizardLM-2-7B_Q3_K_S.gguf (3.16 GB) - very small, high quality loss
- WizardLM-2-7B_Q3_K_M.gguf (3.52 GB) - very small, high quality loss
- WizardLM-2-7B_Q3_K_L.gguf (3.82 GB) - small, substantial quality loss
- WizardLM-2-7B_Q4_K_S.gguf (4.14 GB) - small, greater quality loss
- WizardLM-2-7B_Q4_K_M.gguf (4.37 GB) - medium, balanced quality - recommended
- WizardLM-2-7B_Q5_K_S.gguf (5.00 GB) - large, low quality loss - recommended
- WizardLM-2-7B_Q5_K_M.gguf (5.13 GB) - large, very low quality loss - recommended
- WizardLM-2-7B_Q6_K.gguf (5.94 GB) - very large, extremely low quality loss
- WizardLM-2-7B_Q8_0.gguf (7.70 GB) - very large, extremely low quality loss - not recommended

## Original model description

We introduce and open-source WizardLM-2, our next-generation state-of-the-art large language models, which have improved performance on complex chat, multilingual, reasoning, and agent tasks. The new family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.

- WizardLM-2 8x22B is our most advanced model, demonstrating highly competitive performance against leading proprietary models and consistently outperforming all existing state-of-the-art open-source models.
- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice at its size.
- WizardLM-2 7B is the fastest and achieves performance comparable to leading open-source models 10x its size.

For more details on WizardLM-2, please read our release blog post and upcoming paper.

## Model Details

* Model name: WizardLM-2 7B
* Developed by: WizardLM@Microsoft AI
* Base model: mistralai/Mistral-7B-v0.1
* Parameters: 7B
* Language(s): Multilingual
* Blog: Introducing WizardLM-2
* Repository: URL
* Paper: WizardLM-2 (upcoming)
* License: Apache 2.0

## Model Capacities

MT-Bench

We also adopt the automatic MT-Bench evaluation framework based on GPT-4, proposed by lmsys, to assess model performance. WizardLM-2 8x22B demonstrates highly competitive performance even against the most advanced proprietary models, while WizardLM-2 7B and WizardLM-2 70B are the top-performing models among the leading baselines at the 7B to 70B model scales.

<p align="center" width="100%">
<a ><img src="URL alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>

Human Preferences Evaluation

We carefully collected a complex and challenging evaluation set of real-world instructions covering the main human requirements, such as writing, coding, math, reasoning, agent, and multilingual tasks. We report the win:loss rate without ties:

- WizardLM-2 8x22B falls just slightly behind GPT-4-1106-preview and is significantly stronger than Command R Plus and GPT4-0314.
- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.
- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.

<p align="center" width="100%">
<a ><img src="URL alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>

## Method Overview

We built a fully AI-powered synthetic training system to train the WizardLM-2 models; please refer to our blog for more details of this system.

<p align="center" width="100%">
<a ><img src="URL alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>

## Usage

<b>Note on model system prompts:</b> <b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as follows:

<b>Inference WizardLM-2 Demo Script</b>

We provide a WizardLM-2 inference demo code on our GitHub.
[ "## Description\n\nThis repo contains GGUF files for the original model.", "### Files\n\n- WizardLM-2-7B_Q2_K.gguf (2.72 GB) - smallest, significant quality loss - not recommended for most purposes\n- WizardLM-2-7B_Q3_K_S.gguf (3.16 GB) - very small, high quality loss\n- WizardLM-2-7B_Q3_K_M.gguf (3.52 GB) - very small, high quality loss\n- WizardLM-2-7B_Q3_K_L.gguf (3.82 GB) - small, substantial quality loss\n- WizardLM-2-7B_Q4_K_S.gguf (4.14 GB) - small, greater quality loss\n- WizardLM-2-7B_Q4_K_M.gguf (4.37 GB) - medium, balanced quality - recommended\n- WizardLM-2-7B_Q5_K_S.gguf (5 GB) - large, low quality loss - recommended\n- WizardLM-2-7B_Q5_K_M.gguf (5.13 GB) - large, very low quality loss - recommended\n- WizardLM-2-7B_Q6_K.gguf (5.94 GB) - very large, extremely low quality loss\n- WizardLM-2-7B_Q8_0.gguf (7.7 GB) - very large, extremely low quality loss - not recommended", "## Original model description\n\nWe introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, \nwhich have improved performance on complex chat, multilingual, reasoning and agent. \nNew family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.\n\n- WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works \nand consistently outperforms all the existing state-of-the-art opensource models.\n- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size. \n- WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.\n\nFor more details of WizardLM-2 please read our release blog post and upcoming paper.", "## Model Details\n\n* Model name: WizardLM-2 7B\n* Developed by: WizardLM@Microsoft AI\n* Base model: mistralai/Mistral-7B-v0.1\n* Parameters: 7B\n* Language(s): Multilingual\n* Blog: Introducing WizardLM-2\n* Repository: URL\n* Paper: WizardLM-2 (Upcoming)\n* License: Apache2.0", "## Model Capacities\n\n\nMT-Bench\n\nWe also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. \nThe WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. \nMeanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"MTBench\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>\n\n\nHuman Preferences Evaluation\n\nWe carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual. 
\nWe report the win:loss rate without tie:\n\n- WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314.\n- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.\n- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Win\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>", "## Method Overview\nWe built a fully AI powered synthetic training system to train WizardLM-2 models, please refer to our blog for more details of this system.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Method\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>", "## Usage\n\n<b>Note for model system prompts usage:</b>\n\n\n<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as following:\n\n\n\n<b> Inference WizardLM-2 Demo Script</b>\n\nWe provide a WizardLM-2 inference demo code on our github." ]
[ "TAGS\n#gguf #mistral #en #base_model-microsoft/WizardLM-2-7B #license-apache-2.0 #region-us \n", "## Description\n\nThis repo contains GGUF files for the original model.", "### Files\n\n- WizardLM-2-7B_Q2_K.gguf (2.72 GB) - smallest, significant quality loss - not recommended for most purposes\n- WizardLM-2-7B_Q3_K_S.gguf (3.16 GB) - very small, high quality loss\n- WizardLM-2-7B_Q3_K_M.gguf (3.52 GB) - very small, high quality loss\n- WizardLM-2-7B_Q3_K_L.gguf (3.82 GB) - small, substantial quality loss\n- WizardLM-2-7B_Q4_K_S.gguf (4.14 GB) - small, greater quality loss\n- WizardLM-2-7B_Q4_K_M.gguf (4.37 GB) - medium, balanced quality - recommended\n- WizardLM-2-7B_Q5_K_S.gguf (5 GB) - large, low quality loss - recommended\n- WizardLM-2-7B_Q5_K_M.gguf (5.13 GB) - large, very low quality loss - recommended\n- WizardLM-2-7B_Q6_K.gguf (5.94 GB) - very large, extremely low quality loss\n- WizardLM-2-7B_Q8_0.gguf (7.7 GB) - very large, extremely low quality loss - not recommended", "## Original model description\n\nWe introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, \nwhich have improved performance on complex chat, multilingual, reasoning and agent. \nNew family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.\n\n- WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works \nand consistently outperforms all the existing state-of-the-art opensource models.\n- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size. \n- WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.\n\nFor more details of WizardLM-2 please read our release blog post and upcoming paper.", "## Model Details\n\n* Model name: WizardLM-2 7B\n* Developed by: WizardLM@Microsoft AI\n* Base model: mistralai/Mistral-7B-v0.1\n* Parameters: 7B\n* Language(s): Multilingual\n* Blog: Introducing WizardLM-2\n* Repository: URL\n* Paper: WizardLM-2 (Upcoming)\n* License: Apache2.0", "## Model Capacities\n\n\nMT-Bench\n\nWe also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. \nThe WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. \nMeanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"MTBench\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>\n\n\nHuman Preferences Evaluation\n\nWe carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual. 
\nWe report the win:loss rate without tie:\n\n- WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314.\n- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.\n- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Win\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>", "## Method Overview\nWe built a fully AI powered synthetic training system to train WizardLM-2 models, please refer to our blog for more details of this system.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Method\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>", "## Usage\n\n<b>Note for model system prompts usage:</b>\n\n\n<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as following:\n\n\n\n<b> Inference WizardLM-2 Demo Script</b>\n\nWe provide a WizardLM-2 inference demo code on our github." ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# GUE_prom_prom_300_all-seqsight_4096_512_27M-L32_all

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_27M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_27M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_all) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5964
- F1 Score: 0.8299
- Accuracy: 0.8299

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5481        | 8.33   | 200   | 0.4707          | 0.7808   | 0.7821   |
| 0.4469        | 16.67  | 400   | 0.4292          | 0.8012   | 0.8015   |
| 0.3969        | 25.0   | 600   | 0.4035          | 0.8205   | 0.8206   |
| 0.3555        | 33.33  | 800   | 0.3884          | 0.8321   | 0.8321   |
| 0.3285        | 41.67  | 1000  | 0.3913          | 0.8361   | 0.8361   |
| 0.3085        | 50.0   | 1200  | 0.4033          | 0.8361   | 0.8361   |
| 0.2898        | 58.33  | 1400  | 0.3912          | 0.8415   | 0.8416   |
| 0.2733        | 66.67  | 1600  | 0.4039          | 0.8383   | 0.8383   |
| 0.2581        | 75.0   | 1800  | 0.4047          | 0.8364   | 0.8365   |
| 0.2481        | 83.33  | 2000  | 0.4128          | 0.8390   | 0.8390   |
| 0.236         | 91.67  | 2200  | 0.4310          | 0.8405   | 0.8405   |
| 0.2259        | 100.0  | 2400  | 0.4194          | 0.8382   | 0.8382   |
| 0.2177        | 108.33 | 2600  | 0.4483          | 0.8372   | 0.8372   |
| 0.2109        | 116.67 | 2800  | 0.4473          | 0.8365   | 0.8365   |
| 0.2019        | 125.0  | 3000  | 0.4506          | 0.8344   | 0.8345   |
| 0.1981        | 133.33 | 3200  | 0.4663          | 0.8365   | 0.8365   |
| 0.193         | 141.67 | 3400  | 0.4911          | 0.8339   | 0.8340   |
| 0.1871        | 150.0  | 3600  | 0.4562          | 0.8394   | 0.8394   |
| 0.1816        | 158.33 | 3800  | 0.4763          | 0.8390   | 0.8390   |
| 0.1752        | 166.67 | 4000  | 0.4993          | 0.8400   | 0.8400   |
| 0.1712        | 175.0  | 4200  | 0.4927          | 0.8341   | 0.8343   |
| 0.1695        | 183.33 | 4400  | 0.4913          | 0.8404   | 0.8404   |
| 0.1619        | 191.67 | 4600  | 0.4942          | 0.8385   | 0.8385   |
| 0.1592        | 200.0  | 4800  | 0.5082          | 0.8359   | 0.8360   |
| 0.1582        | 208.33 | 5000  | 0.5166          | 0.8363   | 0.8363   |
| 0.1538        | 216.67 | 5200  | 0.5017          | 0.8389   | 0.8389   |
| 0.1494        | 225.0  | 5400  | 0.5160          | 0.8385   | 0.8385   |
| 0.1474        | 233.33 | 5600  | 0.5172          | 0.8375   | 0.8375   |
| 0.147         | 241.67 | 5800  | 0.5161          | 0.8390   | 0.8390   |
| 0.1438        | 250.0  | 6000  | 0.5193          | 0.8373   | 0.8373   |
| 0.1404        | 258.33 | 6200  | 0.5186          | 0.8381   | 0.8382   |
| 0.1386        | 266.67 | 6400  | 0.5209          | 0.8368   | 0.8368   |
| 0.1361        | 275.0  | 6600  | 0.5217          | 0.8408   | 0.8409   |
| 0.1351        | 283.33 | 6800  | 0.5322          | 0.8409   | 0.8409   |
| 0.1337        | 291.67 | 7000  | 0.5546          | 0.8390   | 0.8390   |
| 0.1304        | 300.0  | 7200  | 0.5390          | 0.8400   | 0.8400   |
| 0.1306        | 308.33 | 7400  | 0.5425          | 0.8393   | 0.8394   |
| 0.1277        | 316.67 | 7600  | 0.5521          | 0.8408   | 0.8409   |
| 0.1273        | 325.0  | 7800  | 0.5450          | 0.8407   | 0.8407   |
| 0.1253        | 333.33 | 8000  | 0.5519          | 0.8404   | 0.8404   |
| 0.124         | 341.67 | 8200  | 0.5567          | 0.8381   | 0.8382   |
| 0.1242        | 350.0  | 8400  | 0.5554          | 0.8375   | 0.8375   |
| 0.1223        | 358.33 | 8600  | 0.5625          | 0.8385   | 0.8385   |
| 0.1209        | 366.67 | 8800  | 0.5715          | 0.8398   | 0.8399   |
| 0.1198        | 375.0  | 9000  | 0.5567          | 0.8419   | 0.8419   |
| 0.1193        | 383.33 | 9200  | 0.5666          | 0.8422   | 0.8422   |
| 0.1184        | 391.67 | 9400  | 0.5711          | 0.8410   | 0.8410   |
| 0.1187        | 400.0  | 9600  | 0.5704          | 0.8395   | 0.8395   |
| 0.1177        | 408.33 | 9800  | 0.5662          | 0.8412   | 0.8412   |
| 0.1173        | 416.67 | 10000 | 0.5669          | 0.8415   | 0.8416   |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
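If the adapter here is LoRA-based (the card does not name the PEFT method, so this is an assumption), it can be merged into the base weights for deployment without the peft runtime:

```python
# Hedged sketch: merge an assumed-LoRA adapter into the base model.
# merge_and_unload() only applies to LoRA-like adapters.
from transformers import AutoModelForSequenceClassification
from peft import PeftModel

base = AutoModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/seqsight_4096_512_27M", num_labels=2)  # head/labels assumed
model = PeftModel.from_pretrained(
    base, "mahdibaghbanzadeh/GUE_prom_prom_300_all-seqsight_4096_512_27M-L32_all")
merged = model.merge_and_unload()         # plain transformers model, no PEFT wrapper
merged.save_pretrained("merged-300_all")  # loadable without peft installed
```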
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_27M", "model-index": [{"name": "GUE_prom_prom_300_all-seqsight_4096_512_27M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_300_all-seqsight_4096_512_27M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_27M", "region:us" ]
null
2024-04-15T16:35:02+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_27M #region-us
GUE\_prom\_prom\_300\_all-seqsight\_4096\_512\_27M-L32\_all
===========================================================

This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_27M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_all dataset.
It achieves the following results on the evaluation set:

* Loss: 0.5964
* F1 Score: 0.8299
* Accuracy: 0.8299

Model description
-----------------

More information needed

Intended uses & limitations
---------------------------

More information needed

Training and evaluation data
----------------------------

More information needed

Training procedure
------------------

### Training hyperparameters

The following hyperparameters were used during training:

* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000

### Training results

### Framework versions

* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_27M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
null
# Desumor/WizardLM-2-7B-Q6_K-GGUF

This model was converted to GGUF format from [`microsoft/WizardLM-2-7B`](https://huggingface.co/microsoft/WizardLM-2-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/WizardLM-2-7B) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install ggerganov/ggerganov/llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo Desumor/WizardLM-2-7B-Q6_K-GGUF --model wizardlm-2-7b.Q6_K.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo Desumor/WizardLM-2-7B-Q6_K-GGUF --model wizardlm-2-7b.Q6_K.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m wizardlm-2-7b.Q6_K.gguf -n 128
```
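Besides the CLI and server above, the same file can be driven from Python through the llama-cpp-python bindings. This is a sketch, not part of the original card; it assumes `pip install llama-cpp-python huggingface_hub`.

```python
# Sketch: run the Q6_K GGUF file via llama-cpp-python instead of the CLI.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(repo_id="Desumor/WizardLM-2-7B-GGUF",
                       filename="wizardlm-2-7b.Q6_K.gguf")
llm = Llama(model_path=path, n_ctx=2048)
out = llm("The meaning to life and the universe is", max_tokens=128)
print(out["choices"][0]["text"])
```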
{"license": "apache-2.0", "tags": ["llama-cpp", "gguf-my-repo"]}
Desumor/WizardLM-2-7B-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "license:apache-2.0", "region:us" ]
null
2024-04-15T16:35:35+00:00
[]
[]
TAGS #gguf #llama-cpp #gguf-my-repo #license-apache-2.0 #region-us
# Desumor/WizardLM-2-7B-Q6_K-GGUF

This model was converted to GGUF format from 'microsoft/WizardLM-2-7B' using URL via URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.

## Use with URL

Install URL through brew.

Invoke the URL server or the CLI.

CLI:

Server:

Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# Desumor/WizardLM-2-7B-Q6_K-GGUF\nThis model was converted to GGUF format from 'microsoft/WizardLM-2-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #license-apache-2.0 #region-us \n", "# Desumor/WizardLM-2-7B-Q6_K-GGUF\nThis model was converted to GGUF format from 'microsoft/WizardLM-2-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
peft
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

### Framework versions

- PEFT 0.10.1.dev0
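The "How to Get Started" section above is still a placeholder. Given only the repo metadata (a PEFT adapter over `meta-llama/Llama-2-7b-chat-hf`), a plausible loading sketch is shown below; the base model is gated, so an accepted license and an HF access token are required, and `device_map="auto"` assumes `accelerate` is installed.

```python
# Sketch inferred from metadata only: PEFT adapter on Llama-2-7b-chat-hf.
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-chat-hf"   # gated: requires HF access approval
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "VikrantRamesh/Llama_CN_finetuned")
```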
{"library_name": "peft", "base_model": "meta-llama/Llama-2-7b-chat-hf"}
VikrantRamesh/Llama_CN_finetuned
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
2024-04-15T16:35:53+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #arxiv-1910.09700 #base_model-meta-llama/Llama-2-7b-chat-hf #region-us
# Model Card for Model ID

## Model Details

### Model Description

- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:

### Model Sources [optional]

- Repository:
- Paper [optional]:
- Demo [optional]:

## Uses

### Direct Use

### Downstream Use [optional]

### Out-of-Scope Use

## Bias, Risks, and Limitations

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

## Training Details

### Training Data

### Training Procedure

#### Preprocessing [optional]

#### Training Hyperparameters

- Training regime:

#### Speeds, Sizes, Times [optional]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

#### Factors

#### Metrics

### Results

#### Summary

## Model Examination [optional]

## Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:

## Technical Specifications [optional]

### Model Architecture and Objective

### Compute Infrastructure

#### Hardware

#### Software

## Citation [optional]

BibTeX:

APA:

## Glossary [optional]

## More Information [optional]

## Model Card Authors [optional]

## Model Card Contact

### Framework versions

- PEFT 0.10.1.dev0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.1.dev0" ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-meta-llama/Llama-2-7b-chat-hf #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.1.dev0" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
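Because the card leaves the model type unspecified, the only safe illustration is a generic Auto-class load; everything below is a hedged sketch, not documented usage.

```python
# Hedged sketch: generic load of a Transformers checkpoint whose task is unknown.
# The Auto classes resolve the concrete architecture from the repo's config.json.
from transformers import AutoModel, AutoTokenizer

repo = "Pavan178/my-awesome-model"  # repo id from this row's metadata
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)

inputs = tokenizer("A quick smoke test.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # output shape depends on the architecture
```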
{"library_name": "transformers", "tags": []}
Pavan178/my-awesome-model
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-15T16:37:16+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
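The row's tags mark this as a Gemma `text-generation` checkpoint, so a plausible (unverified) usage sketch is the standard generation pipeline; the prompt is illustrative only, since the card documents no input format.

```python
# Hedged sketch: run the checkpoint through the text-generation pipeline.
# The "ner-brand" repo name hints at brand-entity extraction, but the card
# gives no prompt format, so this prompt is purely hypothetical.
from transformers import pipeline

generator = pipeline("text-generation", model="Sokhatiuk/gemma-2b-ner-brand")
out = generator(
    "Extract the brand names from: I bought Nike shoes and an Apple watch.",
    max_new_tokens=32,
)
print(out[0]["generated_text"])
```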
{"library_name": "transformers", "tags": []}
Sokhatiuk/gemma-2b-ner-brand
null
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-15T16:37:47+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
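The repository name suggests a PEFT prompt-tuning artifact trained on `stabilityai/stablelm-2-zephyr-1_6b`, although the card itself confirms none of this. Under that assumption, a minimal sketch would discover the base model from the adapter config.

```python
# Hedged sketch: load what the repo name implies is a prompt-tuning adapter.
# PeftConfig reads the adapter config to find the base model it was tuned on.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftConfig, PeftModel

adapter = "baris-yazici/binary_liar_4epoch_stabilityai_stablelm-2-zephyr-1_6b_PROMPT_TUNING_CAUSAL_LM"
config = PeftConfig.from_pretrained(adapter)

base = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter)

inputs = tokenizer("Statement: the sky is green. True or false?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=8)[0], skip_special_tokens=True))
```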
{"library_name": "transformers", "tags": []}
baris-yazici/binary_liar_4epoch_stabilityai_stablelm-2-zephyr-1_6b_PROMPT_TUNING_CAUSAL_LM
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-15T16:38:51+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
⚠️ Warning: Preview Model ⚠️ This model is currently at 50% completion and serves as a preview. It has not yet been fully optimized for general conversational tasks. We are releasing this preview to provide insights and solicit community feedback to improve the model. Stay tuned for our future release. # OpenBuddy - Open Multilingual Chatbot GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy) Website and Demo: [https://openbuddy.ai](https://openbuddy.ai) Evaluation result of this model: [Evaluation.txt](Evaluation.txt) ![Demo](https://raw.githubusercontent.com/OpenBuddy/OpenBuddy/main/media/demo.png) # Copyright Notice Base model: Mixtral 8x22B License: Apache 2.0 # Prompt Format We recommend using the fast tokenizer from `transformers`, which should be enabled by default in the `transformers` and `vllm` libraries. Other implementations including `sentencepiece` may not work as expected, especially for special tokens like `<|role|>`, `<|says|>` and `<|end|>`. ``` <|role|>system<|says|>You(assistant) are a helpful, respectful and honest INTP-T AI Assistant named Buddy. You are talking to a human(user). Always answer as helpfully and logically as possible, while being safe. Your answers should not include any harmful, political, religious, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. You cannot access the internet, but you have vast knowledge, cutoff: 2023-04. You are trained by OpenBuddy team, (https://openbuddy.ai, https://github.com/OpenBuddy/OpenBuddy), not related to GPT or OpenAI.<|end|> <|role|>user<|says|>History input 1<|end|> <|role|>assistant<|says|>History output 1<|end|> <|role|>user<|says|>History input 2<|end|> <|role|>assistant<|says|>History output 2<|end|> <|role|>user<|says|>Current input<|end|> <|role|>assistant<|says|> ``` This format is also defined in `tokenizer_config.json`, which means you can directly use `vllm` to deploy an OpenAI-like API service. For more information, please refer to the [vllm documentation](https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html). ## Disclaimer All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions. OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software. By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy. 
## 免责声明 所有OpenBuddy模型均存在固有的局限性,可能产生错误的、有害的、冒犯性的或其他不良的输出。用户在关键或高风险场景中应谨慎行事,不要使用这些模型,以免导致人身伤害、财产损失或重大损失。此类场景的例子包括但不限于医疗领域、可能导致伤害的软硬件系统的控制以及进行重要的财务或法律决策。 OpenBuddy按“原样”提供,不附带任何种类的明示或暗示的保证,包括但不限于适销性、特定目的的适用性和非侵权的暗示保证。在任何情况下,作者、贡献者或版权所有者均不对因软件或使用或其他软件交易而产生的任何索赔、损害赔偿或其他责任(无论是合同、侵权还是其他原因)承担责任。 使用OpenBuddy即表示您同意这些条款和条件,并承认您了解其使用可能带来的潜在风险。您还同意赔偿并使作者、贡献者和版权所有者免受因您使用OpenBuddy而产生的任何索赔、损害赔偿或责任的影响。
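As a concrete illustration of the prompt format above, the chat template that the card says ships in `tokenizer_config.json` can be applied with the fast tokenizer. This is a minimal sketch, assuming the template renders the documented `<|role|>`/`<|says|>`/`<|end|>` markup.

```python
# Hedged sketch: build an OpenBuddy prompt from the repo's own chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "OpenBuddy/openbuddy-mixtral-22bx8-preview0-65k", use_fast=True
)
messages = [{"role": "user", "content": "Hello, Buddy!"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # should end with "<|role|>assistant<|says|>" per the card's format
```

For serving, the card points to vllm's OpenAI-compatible server; a command along the lines of `python -m vllm.entrypoints.openai.api_server --model OpenBuddy/openbuddy-mixtral-22bx8-preview0-65k` should pick the template up automatically, though the exact flags depend on your vllm version.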
{"language": ["zh", "en", "fr", "de", "ja", "ko", "it", "ru", "fi"], "license": "apache-2.0", "library_name": "transformers", "tags": ["mixtral"], "pipeline_tag": "text-generation", "inference": false}
OpenBuddy/openbuddy-mixtral-22bx8-preview0-65k
null
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "zh", "en", "fr", "de", "ja", "ko", "it", "ru", "fi", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
null
2024-04-15T16:39:09+00:00
[]
[ "zh", "en", "fr", "de", "ja", "ko", "it", "ru", "fi" ]
TAGS #transformers #safetensors #mixtral #text-generation #conversational #zh #en #fr #de #ja #ko #it #ru #fi #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us
️ Warning: Preview Model ️ This model is currently at 50% completion and serves as a preview. It has not yet been fully optimized for general conversational tasks. We are releasing this preview to provide insights and solicit community feedback to improve the model. Stay tuned for our future release. # OpenBuddy - Open Multilingual Chatbot GitHub and Usage Guide: URL Website and Demo: URL Evaluation result of this model: URL !Demo # Copyright Notice Base model: Mixtral 8x22B License: Apache 2.0 # Prompt Format We recommend using the fast tokenizer from 'transformers', which should be enabled by default in the 'transformers' and 'vllm' libraries. Other implementations including 'sentencepiece' may not work as expected, especially for special tokens like '<|role|>', '<|says|>' and '<|end|>'. This format is also defined in 'tokenizer_config.json', which means you can directly use 'vllm' to deploy an OpenAI-like API service. For more information, please refer to the vllm documentation. ## Disclaimer All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions. OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software. By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy. ## 免责声明 所有OpenBuddy模型均存在固有的局限性,可能产生错误的、有害的、冒犯性的或其他不良的输出。用户在关键或高风险场景中应谨慎行事,不要使用这些模型,以免导致人身伤害、财产损失或重大损失。此类场景的例子包括但不限于医疗领域、可能导致伤害的软硬件系统的控制以及进行重要的财务或法律决策。 OpenBuddy按“原样”提供,不附带任何种类的明示或暗示的保证,包括但不限于适销性、特定目的的适用性和非侵权的暗示保证。在任何情况下,作者、贡献者或版权所有者均不对因软件或使用或其他软件交易而产生的任何索赔、损害赔偿或其他责任(无论是合同、侵权还是其他原因)承担责任。 使用OpenBuddy即表示您同意这些条款和条件,并承认您了解其使用可能带来的潜在风险。您还同意赔偿并使作者、贡献者和版权所有者免受因您使用OpenBuddy而产生的任何索赔、损害赔偿或责任的影响。
[ "# OpenBuddy - Open Multilingual Chatbot\n\nGitHub and Usage Guide: URL\n\nWebsite and Demo: URL\n\nEvaluation result of this model: URL\n\n!Demo", "# Copyright Notice\n\nBase model: Mixtral 8x22B\n\nLicense: Apache 2.0", "# Prompt Format\n\nWe recommend using the fast tokenizer from 'transformers', which should be enabled by default in the 'transformers' and 'vllm' libraries. Other implementations including 'sentencepiece' may not work as expected, especially for special tokens like '<|role|>', '<|says|>' and '<|end|>'.\n\n\n\n\nThis format is also defined in 'tokenizer_config.json', which means you can directly use 'vllm' to deploy an OpenAI-like API service. For more information, please refer to the vllm documentation.", "## Disclaimer\n\nAll OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.\n\nOpenBuddy is provided \"as-is\" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.\n\nBy using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.", "## 免责声明\n\n所有OpenBuddy模型均存在固有的局限性,可能产生错误的、有害的、冒犯性的或其他不良的输出。用户在关键或高风险场景中应谨慎行事,不要使用这些模型,以免导致人身伤害、财产损失或重大损失。此类场景的例子包括但不限于医疗领域、可能导致伤害的软硬件系统的控制以及进行重要的财务或法律决策。\n\nOpenBuddy按“原样”提供,不附带任何种类的明示或暗示的保证,包括但不限于适销性、特定目的的适用性和非侵权的暗示保证。在任何情况下,作者、贡献者或版权所有者均不对因软件或使用或其他软件交易而产生的任何索赔、损害赔偿或其他责任(无论是合同、侵权还是其他原因)承担责任。\n\n使用OpenBuddy即表示您同意这些条款和条件,并承认您了解其使用可能带来的潜在风险。您还同意赔偿并使作者、贡献者和版权所有者免受因您使用OpenBuddy而产生的任何索赔、损害赔偿或责任的影响。" ]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #conversational #zh #en #fr #de #ja #ko #it #ru #fi #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us \n", "# OpenBuddy - Open Multilingual Chatbot\n\nGitHub and Usage Guide: URL\n\nWebsite and Demo: URL\n\nEvaluation result of this model: URL\n\n!Demo", "# Copyright Notice\n\nBase model: Mixtral 8x22B\n\nLicense: Apache 2.0", "# Prompt Format\n\nWe recommend using the fast tokenizer from 'transformers', which should be enabled by default in the 'transformers' and 'vllm' libraries. Other implementations including 'sentencepiece' may not work as expected, especially for special tokens like '<|role|>', '<|says|>' and '<|end|>'.\n\n\n\n\nThis format is also defined in 'tokenizer_config.json', which means you can directly use 'vllm' to deploy an OpenAI-like API service. For more information, please refer to the vllm documentation.", "## Disclaimer\n\nAll OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.\n\nOpenBuddy is provided \"as-is\" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.\n\nBy using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.", "## 免责声明\n\n所有OpenBuddy模型均存在固有的局限性,可能产生错误的、有害的、冒犯性的或其他不良的输出。用户在关键或高风险场景中应谨慎行事,不要使用这些模型,以免导致人身伤害、财产损失或重大损失。此类场景的例子包括但不限于医疗领域、可能导致伤害的软硬件系统的控制以及进行重要的财务或法律决策。\n\nOpenBuddy按“原样”提供,不附带任何种类的明示或暗示的保证,包括但不限于适销性、特定目的的适用性和非侵权的暗示保证。在任何情况下,作者、贡献者或版权所有者均不对因软件或使用或其他软件交易而产生的任何索赔、损害赔偿或其他责任(无论是合同、侵权还是其他原因)承担责任。\n\n使用OpenBuddy即表示您同意这些条款和条件,并承认您了解其使用可能带来的潜在风险。您还同意赔偿并使作者、贡献者和版权所有者免受因您使用OpenBuddy而产生的任何索赔、损害赔偿或责任的影响。" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper whisper-large-v3 ar1 - Mohamed Shaaban This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the Common standard ar Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.1983 - Wer: 50.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-----:| | 0.6276 | 1.0 | 1 | 1.5308 | 100.0 | | 0.6286 | 2.0 | 2 | 0.5920 | 0.0 | | 0.2312 | 3.0 | 3 | 0.1197 | 0.0 | | 0.0463 | 4.0 | 4 | 0.0939 | 0.0 | | 0.02 | 5.0 | 5 | 0.0918 | 50.0 | | 0.0112 | 6.0 | 6 | 0.0955 | 50.0 | | 0.0046 | 7.0 | 7 | 0.1133 | 50.0 | | 0.0022 | 8.0 | 8 | 0.1343 | 50.0 | | 0.0011 | 9.0 | 9 | 0.1518 | 50.0 | | 0.0005 | 10.0 | 10 | 0.1655 | 50.0 | | 0.0003 | 11.0 | 11 | 0.1758 | 50.0 | | 0.0002 | 12.0 | 12 | 0.1835 | 50.0 | | 0.0002 | 13.0 | 13 | 0.1890 | 50.0 | | 0.0001 | 14.0 | 14 | 0.1929 | 50.0 | | 0.0001 | 15.0 | 15 | 0.1954 | 50.0 | | 0.0001 | 16.0 | 16 | 0.1970 | 50.0 | | 0.0001 | 17.0 | 17 | 0.1978 | 50.0 | | 0.0001 | 18.0 | 18 | 0.1982 | 50.0 | | 0.0001 | 19.0 | 19 | 0.1983 | 50.0 | | 0.0001 | 20.0 | 20 | 0.1983 | 50.0 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2 - Datasets 2.18.0 - Tokenizers 0.15.2
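The card provides no usage snippet, so here is a minimal sketch running the checkpoint through the standard ASR pipeline; the audio filename is hypothetical, and since the training log above suggests a very small dataset (one step per epoch), the example is strictly illustrative.

```python
# Hedged sketch: transcribe Arabic speech with this fine-tuned Whisper checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Mohamedshaaban2001/MSDC-whisper-large-v3-55",  # repo id from this row
)
result = asr("sample_arabic.wav")  # hypothetical local audio file
print(result["text"])
```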
{"language": ["ar"], "license": "apache-2.0", "tags": ["whisper-event", "generated_from_trainer"], "datasets": ["mozilla-foundation/common_voice_11_0"], "metrics": ["wer"], "base_model": "openai/whisper-large-v3", "model-index": [{"name": "Whisper whisper-large-v3\t ar1 - Mohamed Shaaban", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common standard ar Voice 11.0", "type": "mozilla-foundation/common_voice_11_0"}, "metrics": [{"type": "wer", "value": 50.0, "name": "Wer"}]}]}]}
Mohamedshaaban2001/MSDC-whisper-large-v3-55
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "ar", "dataset:mozilla-foundation/common_voice_11_0", "base_model:openai/whisper-large-v3", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2024-04-15T16:40:14+00:00
[]
[ "ar" ]
TAGS #transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #whisper-event #generated_from_trainer #ar #dataset-mozilla-foundation/common_voice_11_0 #base_model-openai/whisper-large-v3 #license-apache-2.0 #model-index #endpoints_compatible #region-us
Whisper whisper-large-v3 ar1 - Mohamed Shaaban ============================================== This model is a fine-tuned version of openai/whisper-large-v3 on the Common standard ar Voice 11.0 dataset. It achieves the following results on the evaluation set: * Loss: 0.1983 * Wer: 50.0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 1 * num\_epochs: 20 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.2 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1\n* num\\_epochs: 20", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #whisper-event #generated_from_trainer #ar #dataset-mozilla-foundation/common_voice_11_0 #base_model-openai/whisper-large-v3 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1\n* num\\_epochs: 20", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
transformers
# Uploaded model - **Developed by:** haripritam - **License:** apache-2.0 - **Finetuned from model:** unsloth/gemma-2b-it-bnb-4bit This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
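The tags mark this repo as GGUF, so one plausible (unverified) way to load it is Transformers' GGUF dequantization path. This is a minimal sketch, assuming your Transformers version supports GGUF for Gemma; the quant filename is a guess and must be checked against the repo's file list.

```python
# Hedged sketch: load a GGUF quant from this repo via Transformers' GGUF support.
# The filename below is a GUESS -- check the actual files in the repository first.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "haripritam/gemma-2b-it-4bit"
gguf = "gemma-2b-it.Q4_K_M.gguf"  # hypothetical filename, not confirmed by the card

tokenizer = AutoTokenizer.from_pretrained(repo, gguf_file=gguf)
model = AutoModelForCausalLM.from_pretrained(repo, gguf_file=gguf)  # dequantizes to torch weights

inputs = tokenizer("Why is the sky blue?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=48)[0], skip_special_tokens=True))
```

For pure CPU inference, tooling from the llama.cpp ecosystem is the more common route for GGUF files.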
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "gemma", "gguf"], "base_model": "unsloth/gemma-2b-it-bnb-4bit"}
haripritam/gemma-2b-it-4bit
null
[ "transformers", "gguf", "gemma", "text-generation-inference", "unsloth", "en", "base_model:unsloth/gemma-2b-it-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-15T16:40:29+00:00
[]
[ "en" ]
TAGS #transformers #gguf #gemma #text-generation-inference #unsloth #en #base_model-unsloth/gemma-2b-it-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: haripritam - License: apache-2.0 - Finetuned from model: unsloth/gemma-2b-it-bnb-4bit This gemma model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL" width="200"/>
[ "# Uploaded model\n\n- Developed by: haripritam\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-2b-it-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #gguf #gemma #text-generation-inference #unsloth #en #base_model-unsloth/gemma-2b-it-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: haripritam\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-2b-it-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # falcon7binstructApril15 This model is a fine-tuned version of [tiiuae/falcon-7b-instruct](https://huggingface.co/tiiuae/falcon-7b-instruct) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4366 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.2351 | 0.51 | 200 | 0.4366 | ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
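No usage code accompanies this card, so the following is a minimal sketch, assuming a LoRA-style PEFT adapter on the `tiiuae/falcon-7b-instruct` base named in the metadata; the prompt and generation settings are illustrative only, and the merge step applies only to adapters that support `merge_and_unload`.

```python
# Hedged sketch: attach the PEFT adapter to its Falcon base, then optionally
# merge the adapter weights for adapter-free inference (LoRA-style adapters only).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b-instruct", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b-instruct")
model = PeftModel.from_pretrained(base, "Ray011/falcon7binstructApril15")

merged = model.merge_and_unload()  # bakes the adapter into the base weights
inputs = tokenizer("Write a haiku about falcons.", return_tensors="pt").to(merged.device)
print(tokenizer.decode(merged.generate(**inputs, max_new_tokens=40)[0], skip_special_tokens=True))
```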
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "tiiuae/falcon-7b-instruct", "model-index": [{"name": "falcon7binstructApril15", "results": []}]}
Ray011/falcon7binstructApril15
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:tiiuae/falcon-7b-instruct", "license:apache-2.0", "region:us" ]
null
2024-04-15T16:45:07+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-tiiuae/falcon-7b-instruct #license-apache-2.0 #region-us
falcon7binstructApril15 ======================= This model is a fine-tuned version of tiiuae/falcon-7b-instruct on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.4366 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 2 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 8 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.03 * training\_steps: 200 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.39.3 * Pytorch 2.2.2+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.03\n* training\\_steps: 200", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-tiiuae/falcon-7b-instruct #license-apache-2.0 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.03\n* training\\_steps: 200", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
amperie/flan-t5-large-ft-vp
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-15T16:45:23+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
token-classification
transformers
XLM-RoBERTa model for punctuation restoration. This is based on [Oliver Guhr's work](https://huggingface.co/oliverguhr/fullstop-punctuation-multilang-large). Original author: https://huggingface.co/kredor.
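No usage snippet is provided upstream; the following is a minimal sketch using the standard token-classification pipeline (the label set is inherited from the original checkpoint, typically "0" for no punctuation or a mark such as "." or ","):

```python
from transformers import pipeline

# Each token is classified with the punctuation mark that should follow it.
punctuate = pipeline("token-classification", model="mukowaty/punctuate-16")

text = "my name is clara and i live in berkeley california"
for pred in punctuate(text):
    print(pred["word"], pred["entity"], round(pred["score"], 3))
```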
{"language": ["bg", "en", "de", "fr", "es", "it", "pl", "nl", "cs", "pt", "sl", "el", "sv", "hu", "ro", "sk", "da"]}
mukowaty/punctuate-16
null
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "bg", "en", "de", "fr", "es", "it", "pl", "nl", "cs", "pt", "sl", "el", "sv", "hu", "ro", "sk", "da", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-15T16:45:44+00:00
[]
[ "bg", "en", "de", "fr", "es", "it", "pl", "nl", "cs", "pt", "sl", "el", "sv", "hu", "ro", "sk", "da" ]
TAGS #transformers #pytorch #xlm-roberta #token-classification #bg #en #de #fr #es #it #pl #nl #cs #pt #sl #el #sv #hu #ro #sk #da #autotrain_compatible #endpoints_compatible #region-us
XLM-RoBERTa model for punctuation restoration. This is based on Oliver Guhr's work. Original author: URL
[]
[ "TAGS\n#transformers #pytorch #xlm-roberta #token-classification #bg #en #de #fr #es #it #pl #nl #cs #pt #sl #el #sv #hu #ro #sk #da #autotrain_compatible #endpoints_compatible #region-us \n" ]
null
transformers
# Uploaded model - **Developed by:** eruzak - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
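A minimal loading sketch (not from the original card; whether the repo holds merged weights or only adapters, and which prompt format training used, are assumptions to verify against the repo's files):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("eruzak/unsloth_mistral_predict_prompt_RL_v5")
model = AutoModelForCausalLM.from_pretrained("eruzak/unsloth_mistral_predict_prompt_RL_v5")

# Illustrative prompt only; the actual training prompt format is not documented here.
inputs = tokenizer("Predict a good prompt for a landscape photo.", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```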
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-bnb-4bit"}
eruzak/unsloth_mistral_predict_prompt_RL_v5
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-15T16:45:44+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: eruzak - License: apache-2.0 - Finetuned from model : unsloth/mistral-7b-bnb-4bit This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: eruzak\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: eruzak\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Lugaborg/Kainazzo
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-15T16:46:03+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# DanTagGen - gamma
DanTagGen (Danbooru Tag Generator) is inspired by p1atdev's dart project, but with a different arch, dataset, format, and training strategy.

## Difference between versions
alpha: pretrained on a 2M dataset with a smaller batch size. Limited ability.<br>
beta: pretrained on a 5.3M dataset with a larger batch size. More stable, with better ability even when only a little information is provided.<br>
gamma: finetuned from beta on a 3.6M dataset (the union of all posts after id 5,000,000 and the top 25% of posts by fav count)

## Model arch
This version of DTG is trained from scratch with a 400M-param LLaMA arch. (In my personal preference I will call it NanoLLaMA.)
Since it is a LLaMA arch, it should theoretically work with any LLaMA inference interface.

This repo also provides a converted FP16 GGUF model and quantized 8-bit/6-bit GGUF models.
It is recommended to run this model with llama.cpp or llama-cpp-python, which will be very fast.

## Format
```python3
prompt = f"""
rating: {rating or '<|empty|>'}
artist: {artist.strip() or '<|empty|>'}
characters: {characters.strip() or '<|empty|>'}
copyrights: {copyrights.strip() or '<|empty|>'}
aspect ratio: {f"{aspect_ratio:.1f}" if aspect_ratio else '<|empty|>'}
target: {'<|' + target + '|>' if target else '<|long|>'}
general: {", ".join(special_tags)}, {general.strip().strip(",")}<|input_end|>
"""
```

for example:
```
rating: safe
artist: <|empty|>
characters: <|empty|>
copyrights: <|empty|>
aspect ratio: 1.0
target: <|short|>
general: 1girl, solo, dragon girl, dragon horns, dragon tail<|input_end|>
```

And you may get something like:
```
rating: safe
artist: <|empty|>
characters: <|empty|>
copyrights: <|empty|>
aspect ratio: 1.0
target: <|short|>
general: 1girl, solo, dragon girl, dragon horns, dragon tail<|input_end|>open mouth, red eyes, long hair, pointy ears, tail, black hair, chinese clothes, simple background, dragon, hair between eyes, horns, china dress, dress, looking at viewer, breasts
```

## Utilities
HF space: https://huggingface.co/spaces/KBlueLeaf/DTG-demo <br>
SD-WebUI extension (Forge compatible): https://github.com/KohakuBlueleaf/z-a1111-sd-webui-dtg <br>
Third Party ComfyUI Node: https://github.com/toyxyz/a1111-sd-webui-dtg_comfyui
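Putting the format together, here is a minimal generation sketch with llama-cpp-python (not from the original card; the GGUF filename is an assumption, so check the repo's file list for the actual name):

```python
from llama_cpp import Llama

# Hypothetical filename for one of the quantized checkpoints in this repo.
llm = Llama(model_path="./DanTagGen-gamma-Q8_0.gguf", n_ctx=384)

prompt = """rating: safe
artist: <|empty|>
characters: <|empty|>
copyrights: <|empty|>
aspect ratio: 1.0
target: <|short|>
general: 1girl, solo, dragon girl, dragon horns, dragon tail<|input_end|>"""

# The model continues the "general:" line with additional tags.
output = llm(prompt, max_tokens=128, temperature=1.0, top_p=0.95)
print(output["choices"][0]["text"])
```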
{"language": ["en"], "license": "cc-by-nc-4.0", "library_name": "transformers", "tags": ["not-for-all-audiences", "art"], "datasets": ["KBlueLeaf/danbooru2023-sqlite"], "pipeline_tag": "text-generation", "widget": [{"text": "rating: safe\nartist: <|empty|>\ncharacters: <|empty|>\ncopyrights: <|empty|>\naspect ratio: 1.0\ntarget: <|short|>\ngeneral: 1girl, solo, dragon girl, dragon horns, dragon tail<|input_end|>"}]}
KBlueLeaf/DanTagGen-gamma
null
[ "transformers", "safetensors", "gguf", "llama", "text-generation", "not-for-all-audiences", "art", "en", "dataset:KBlueLeaf/danbooru2023-sqlite", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2024-04-15T16:48:14+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #gguf #llama #text-generation #not-for-all-audiences #art #en #dataset-KBlueLeaf/danbooru2023-sqlite #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
# DanTagGen - gamma DanTagGen(Danbooru Tag Generator) is inspired from p1atdev's dart project. But with different arch, dataset, format and different training strategy. ## Difference between versions alpha: pretrain on 2M dataset, smaller batch size. Limited ability<br> beta: pretrain on 5.3M dataset, larger batch size. More stable, better ability with only a few information provided.<br> gamma: finetuned from beta, with 3.6M dataset (union of all posts after id 5,000,000 and top25% fav count posts) ## Model arch This version of DTG is trained from scratch with 400M param LLaMA arch.(In my personal preference I will call it NanoLLaMA) Since it is llama arch. Theoritically it should be able to be used in any LLaMA inference interface. This repo also provided converted FP16 gguf model and quantized 8bit/6bit gguf models. Basically it is recommended to use URL or llama-cpp-python to run this model. Which will be very fast. ## Format for example: And you may get something like: ## Utilities HF space: URL <br> SD-WebUI extension (Forge compatible): URL <br> Third Party ComfyUI Node: URL
[ "# DanTagGen - gamma\nDanTagGen(Danbooru Tag Generator) is inspired from p1atdev's dart project.\nBut with different arch, dataset, format and different training strategy.", "## Difference between versions\nalpha: pretrain on 2M dataset, smaller batch size. Limited ability<br>\nbeta: pretrain on 5.3M dataset, larger batch size. More stable, better ability with only a few information provided.<br>\ngamma: finetuned from beta, with 3.6M dataset (union of all posts after id 5,000,000 and top25% fav count posts)", "## Model arch\nThis version of DTG is trained from scratch with 400M param LLaMA arch.(In my personal preference I will call it NanoLLaMA)\nSince it is llama arch. Theoritically it should be able to be used in any LLaMA inference interface.\n\nThis repo also provided converted FP16 gguf model and quantized 8bit/6bit gguf models.\nBasically it is recommended to use URL or llama-cpp-python to run this model. Which will be very fast.", "## Format\n\n\nfor example:\n\n\nAnd you may get something like:", "## Utilities\nHF space: URL <br>\nSD-WebUI extension (Forge compatible): URL <br>\nThird Party ComfyUI Node: URL" ]
[ "TAGS\n#transformers #safetensors #gguf #llama #text-generation #not-for-all-audiences #art #en #dataset-KBlueLeaf/danbooru2023-sqlite #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "# DanTagGen - gamma\nDanTagGen(Danbooru Tag Generator) is inspired from p1atdev's dart project.\nBut with different arch, dataset, format and different training strategy.", "## Difference between versions\nalpha: pretrain on 2M dataset, smaller batch size. Limited ability<br>\nbeta: pretrain on 5.3M dataset, larger batch size. More stable, better ability with only a few information provided.<br>\ngamma: finetuned from beta, with 3.6M dataset (union of all posts after id 5,000,000 and top25% fav count posts)", "## Model arch\nThis version of DTG is trained from scratch with 400M param LLaMA arch.(In my personal preference I will call it NanoLLaMA)\nSince it is llama arch. Theoritically it should be able to be used in any LLaMA inference interface.\n\nThis repo also provided converted FP16 gguf model and quantized 8bit/6bit gguf models.\nBasically it is recommended to use URL or llama-cpp-python to run this model. Which will be very fast.", "## Format\n\n\nfor example:\n\n\nAnd you may get something like:", "## Utilities\nHF space: URL <br>\nSD-WebUI extension (Forge compatible): URL <br>\nThird Party ComfyUI Node: URL" ]
feature-extraction
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_bge_ver12 This model is a fine-tuned version of [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - total_train_batch_size: 64 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
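The card does not include usage code; as a minimal sketch (assuming the fine-tune keeps the BGE convention of CLS pooling followed by L2 normalization):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("comet24082002/finetuned_bge_ver12")
model = AutoModel.from_pretrained("comet24082002/finetuned_bge_ver12")

sentences = ["what is BGE-m3?", "BGE-m3 is a multilingual embedding model."]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**batch)

# CLS pooling + L2 normalization, as in the base BGE models.
embeddings = outputs.last_hidden_state[:, 0]
embeddings = torch.nn.functional.normalize(embeddings, p=2, dim=1)
print(embeddings @ embeddings.T)  # cosine similarity matrix
```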
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "BAAI/bge-m3", "model-index": [{"name": "finetuned_bge_ver12", "results": []}]}
comet24082002/finetuned_bge_ver12
null
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "feature-extraction", "generated_from_trainer", "base_model:BAAI/bge-m3", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-15T16:49:23+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #xlm-roberta #feature-extraction #generated_from_trainer #base_model-BAAI/bge-m3 #license-mit #endpoints_compatible #region-us
# finetuned_bge_ver12 This model is a fine-tuned version of BAAI/bge-m3 on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - total_train_batch_size: 64 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# finetuned_bge_ver12\n\nThis model is a fine-tuned version of BAAI/bge-m3 on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 2\n- total_train_batch_size: 64\n- total_eval_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 15.0\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #xlm-roberta #feature-extraction #generated_from_trainer #base_model-BAAI/bge-m3 #license-mit #endpoints_compatible #region-us \n", "# finetuned_bge_ver12\n\nThis model is a fine-tuned version of BAAI/bge-m3 on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 2\n- total_train_batch_size: 64\n- total_eval_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 15.0\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text-generation
transformers
# Llama 2 fine-tuned on Intel hardware using PEFT and LoRA

**Description:** Meta's Llama 2 is a transformer-based model tailored for converting natural language instructions into Python code snippets. This model has been optimized for efficient deployment on resource-constrained hardware through techniques such as LoRA (Low-Rank Adaptation) and QLoRA (Quantized Low-Rank Adaptation), enabling 4-bit quantization without sacrificing performance. Leveraging advanced optimization libraries, such as Hugging Face Accelerate and the Intel Extension for PyTorch, Meta's Llama 2 offers streamlined fine-tuning and inference on Intel Xeon Scalable processors.

**Usage:** To use this fine-tuned Llama 2 for Python code generation, simply load the model with the Hugging Face Transformers library. Ensure compatibility with the prompt template structure: `<s>[INST] instruction [/INST] answer </s>`. Fine-tune the model using the Hugging Face Trainer class, specifying training configurations and leveraging Intel hardware and oneAPI optimization libraries for enhanced performance.

**Use in Transformers**

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Smd-Arshad/Llama-python-finetuned")
model = AutoModelForCausalLM.from_pretrained("Smd-Arshad/Llama-python-finetuned")
```
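Following on from the loading snippet, a minimal generation sketch using the prompt template above (the instruction text is illustrative only):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Smd-Arshad/Llama-python-finetuned")
model = AutoModelForCausalLM.from_pretrained("Smd-Arshad/Llama-python-finetuned")

# Llama 2 instruction format: <s>[INST] instruction [/INST] answer </s>
prompt = "<s>[INST] Write a Python function that reverses a string. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```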
{"license": "llama2"}
Smd-Arshad/Llama-python-finetuned
null
[ "transformers", "pytorch", "llama", "text-generation", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-15T16:50:28+00:00
[]
[]
TAGS #transformers #pytorch #llama #text-generation #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Llama2 fine tuned in Intel Hardware using peft and Lora Description : Meta's Llama 2 is a transformer-based model tailored for converting natural language instructions into Python code snippets. This model has been optimized for efficient deployment on resource-constrained hardware through techniques such as LORA (Low-Rank Adaptation) and QLORA (Quantized Low-Rank Adaptation), enabling 4-bit quantization without sacrificing performance. Leveraging advanced optimization libraries, such as Intel's Accelerate and Extension for PyTorch, Meta's Llama 2 offers streamlined fine-tuning and inference on Intel Xeon Scalable processors. Usage : To utilize Meta's Llama 2 finetuned using the python code snippets, simply load the model using the Hugging Face Transformers library. Ensure compatibility with the prompt template structure: s [inst] instruction [\inst] answer s. Fine-tune the model using the Hugging Face Trainer class, specifying training configurations and leveraging Intel hardware and oneAPI optimization libraries for enhanced performance. Use in Transformers
[ "# Llama2 fine tuned in Intel Hardware using peft and Lora\n\nDescription : Meta's Llama 2 is a transformer-based model tailored for converting natural language instructions into Python code snippets. This model has been optimized for efficient deployment on resource-constrained hardware through techniques such as LORA (Low-Rank Adaptation) and QLORA (Quantized Low-Rank Adaptation), enabling 4-bit quantization without sacrificing performance. Leveraging advanced optimization libraries, such as Intel's Accelerate and Extension for PyTorch, Meta's Llama 2 offers streamlined fine-tuning and inference on Intel Xeon Scalable processors.\n\nUsage : To utilize Meta's Llama 2 finetuned using the python code snippets, simply load the model using the Hugging Face Transformers library. Ensure compatibility with the prompt template structure: s [inst] instruction [\\inst] answer s. Fine-tune the model using the Hugging Face Trainer class, specifying training configurations and leveraging Intel hardware and oneAPI optimization libraries for enhanced performance.\n\nUse in Transformers" ]
[ "TAGS\n#transformers #pytorch #llama #text-generation #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Llama2 fine tuned in Intel Hardware using peft and Lora\n\nDescription : Meta's Llama 2 is a transformer-based model tailored for converting natural language instructions into Python code snippets. This model has been optimized for efficient deployment on resource-constrained hardware through techniques such as LORA (Low-Rank Adaptation) and QLORA (Quantized Low-Rank Adaptation), enabling 4-bit quantization without sacrificing performance. Leveraging advanced optimization libraries, such as Intel's Accelerate and Extension for PyTorch, Meta's Llama 2 offers streamlined fine-tuning and inference on Intel Xeon Scalable processors.\n\nUsage : To utilize Meta's Llama 2 finetuned using the python code snippets, simply load the model using the Hugging Face Transformers library. Ensure compatibility with the prompt template structure: s [inst] instruction [\\inst] answer s. Fine-tune the model using the Hugging Face Trainer class, specifying training configurations and leveraging Intel hardware and oneAPI optimization libraries for enhanced performance.\n\nUse in Transformers" ]
text-generation
transformers
# [MaziyarPanahi/WizardLM-2-7B-GGUF](https://huggingface.co/MaziyarPanahi/WizardLM-2-7B-GGUF)
- Model creator: [microsoft](https://huggingface.co/microsoft)
- Original model: [microsoft/WizardLM-2-7B](https://huggingface.co/microsoft/WizardLM-2-7B)

## Description
[MaziyarPanahi/WizardLM-2-7B-GGUF](https://huggingface.co/MaziyarPanahi/WizardLM-2-7B-GGUF) contains GGUF format model files for [microsoft/WizardLM-2-7B](https://huggingface.co/microsoft/WizardLM-2-7B).

## Prompt template

```
{system_prompt}
USER: {prompt}
ASSISTANT: </s>
```

or

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>
USER: {prompt} ASSISTANT: </s>......
```

Taken from the original README:

---

<p style="font-size:20px;" align="center"> 🏠 <a href="https://wizardlm.github.io/WizardLM2" target="_blank">WizardLM-2 Release Blog</a> </p>
<p align="center">
🤗 <a href="https://huggingface.co/collections/microsoft/wizardlm-2-661d403f71e6c8257dbd598a" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/victorsungo/WizardLM/tree/main/WizardLM-2" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a>
</p>

## News 🔥🔥🔥 [2024/04/15]

We introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, which have improved performance on complex chat, multilingual, reasoning and agent. New family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.

- WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works and consistently outperforms all the existing state-of-the-art opensource models.
- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size.
- WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.

For more details of WizardLM-2 please read our [release blog post](https://wizardlm.github.io/WizardLM2) and upcoming paper.

## Model Details

* **Model name**: WizardLM-2 7B
* **Developed by**: WizardLM@Microsoft AI
* **Base model**: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* **Parameters**: 7B
* **Language(s)**: Multilingual
* **Blog**: [Introducing WizardLM-2](https://wizardlm.github.io/WizardLM2)
* **Repository**: [https://github.com/nlpxucan/WizardLM](https://github.com/nlpxucan/WizardLM)
* **Paper**: WizardLM-2 (Upcoming)
* **License**: Apache2.0

## Model Capacities

**MT-Bench**

We also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. The WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. Meanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.
<p align="center" width="100%"> <a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/mtbench.png" alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> **Human Preferences Evaluation** We carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual. We report the win:loss rate without tie: - WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314. - WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat. - WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta. <p align="center" width="100%"> <a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/winall.png" alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> ## Method Overview We built a **fully AI powered synthetic training system** to train WizardLM-2 models, please refer to our [blog](https://wizardlm.github.io/WizardLM2) for more details of this system. <p align="center" width="100%"> <a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/exp_1.png" alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> ## Usage ❗<b>Note for model system prompts usage:</b> <b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be as following: ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s> USER: Who are you? ASSISTANT: I am WizardLM.</s>...... ``` <b> Inference WizardLM-2 Demo Script</b> We provide a WizardLM-2 inference demo [code](https://github.com/nlpxucan/WizardLM/tree/main/demo) on our github. --- ## How to use Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models: ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. 
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ### Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: [MaziyarPanahi/WizardLM-2-7B-GGUF](https://huggingface.co/MaziyarPanahi/WizardLM-2-7B-GGUF) and below it, a specific filename to download, such as: WizardLM-2-7B-GGUF.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download MaziyarPanahi/WizardLM-2-7B-GGUF WizardLM-2-7B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` </details> <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download [MaziyarPanahi/WizardLM-2-7B-GGUF](https://huggingface.co/MaziyarPanahi/WizardLM-2-7B-GGUF) --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). 
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/WizardLM-2-7B-GGUF WizardLM-2-7B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>

## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 35 -m WizardLM-2-7B.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:"
```

Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.

### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: [llama-cpp-python docs](https://github.com/abetlen/llama-cpp-python/).

#### First install the package

Run one of the following commands, according to your system:

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python

# In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```

#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama( model_path="./WizardLM-2-7B.Q4_K_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./WizardLM-2-7B.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
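As a concrete illustration of the first guide, here is a minimal sketch of wiring this GGUF file into LangChain via `llama-cpp-python`. It assumes the quantised file has already been downloaded, and the generation parameters are placeholder values to adapt to your system:

```python
from langchain_community.llms import LlamaCpp

# Assumes WizardLM-2-7B.Q4_K_M.gguf is in the current directory
llm = LlamaCpp(
    model_path="./WizardLM-2-7B.Q4_K_M.gguf",
    n_ctx=32768,      # Max sequence length; reduce if you run out of memory
    n_gpu_layers=35,  # Set to 0 if no GPU acceleration is available
    temperature=0.7,
)

# The LlamaCpp wrapper is a standard LangChain Runnable, so it composes with chains
print(llm.invoke("Explain GGUF quantisation in one paragraph."))
```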
{"tags": ["quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "transformers", "safetensors", "mistral", "text-generation", "arxiv:2304.12244", "arxiv:2306.08568", "arxiv:2308.09583", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "text-generation"], "model_name": "WizardLM-2-7B-GGUF", "base_model": "microsoft/WizardLM-2-7B", "inference": false, "model_creator": "microsoft", "pipeline_tag": "text-generation", "quantized_by": "MaziyarPanahi"}
MaziyarPanahi/WizardLM-2-7B-GGUF
null
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "arxiv:2304.12244", "arxiv:2306.08568", "arxiv:2308.09583", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:microsoft/WizardLM-2-7B" ]
null
2024-04-15T16:51:17+00:00
[ "2304.12244", "2306.08568", "2308.09583" ]
[]
TAGS #transformers #gguf #mistral #quantized #2-bit #3-bit #4-bit #5-bit #6-bit #8-bit #GGUF #safetensors #text-generation #arxiv-2304.12244 #arxiv-2306.08568 #arxiv-2308.09583 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us #base_model-microsoft/WizardLM-2-7B
# MaziyarPanahi/WizardLM-2-7B-GGUF - Model creator: microsoft - Original model: microsoft/WizardLM-2-7B ## Description MaziyarPanahi/WizardLM-2-7B-GGUF contains GGUF format model files for microsoft/WizardLM-2-7B. ## Prompt template or Taken from the original README --- --- license: apache-2.0 --- <p style="font-size:20px;" align="center"> <a href="URL target="_blank">WizardLM-2 Release Blog</a> </p> <p align="center"> <a href="URL target="_blank">HF Repo</a> • <a href="URL target="_blank">Github Repo</a> • <a href="URL target="_blank">Twitter</a> • <a href="URL target="_blank">[WizardLM]</a> • <a href="URL target="_blank">[WizardCoder]</a> • <a href="URL target="_blank">[WizardMath]</a> <br> </p> <p align="center"> Join our <a href="URL target="_blank">Discord</a> </p> ## News [2024/04/15] We introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, which have improved performance on complex chat, multilingual, reasoning and agent. New family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B. - WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works and consistently outperforms all the existing state-of-the-art opensource models. - WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size. - WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models. For more details of WizardLM-2 please read our release blog post and upcoming paper. ## Model Details * Model name: WizardLM-2 7B * Developed by: WizardLM@Microsoft AI * Base model: mistralai/Mistral-7B-v0.1 * Parameters: 7B * Language(s): Multilingual * Blog: Introducing WizardLM-2 * Repository: URL * Paper: WizardLM-2 (Upcoming) * License: Apache2.0 ## Model Capacities MT-Bench We also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. The WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. Meanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales. <p align="center" width="100%"> <a ><img src="URL alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> Human Preferences Evaluation We carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual. We report the win:loss rate without tie: - WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314. - WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat. - WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta. <p align="center" width="100%"> <a ><img src="URL alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> ## Method Overview We built a fully AI powered synthetic training system to train WizardLM-2 models, please refer to our blog for more details of this system. 
<p align="center" width="100%"> <a ><img src="URL alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> ## Usage <b>Note for model system prompts usage:</b> <b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as following: <b> Inference WizardLM-2 Demo Script</b> We provide a WizardLM-2 inference demo code on our github. --- ## How to use Thanks to TheBloke for preparing an amazing README on how to use GGUF models: ### About GGUF GGUF is a new format introduced by the URL team on August 21st 2023. It is a replacement for GGML, which is no longer supported by URL. Here is an incomplete list of clients and libraries that are known to support GGUF: * URL. The source project for GGUF. Offers a CLI and a server option. * text-generation-webui, the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * KoboldCpp, a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * GPT4All, a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection. * URL, an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * llama-cpp-python, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use. * ctransformers, a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ### Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw ## How to download GGUF files Note for manual downloaders: You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * URL ### In 'text-generation-webui' Under Download Model, you can enter the model repo: MaziyarPanahi/WizardLM-2-7B-GGUF and below it, a specific filename to download, such as: WizardLM-2-7B-GGUF.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the 'huggingface-hub' Python library: Then you can download any individual model file to the current directory, at high speed, with a command like this: </details> <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: For more documentation on downloading with 'huggingface-cli', please see: HF -> Hub Python Library -> Download files -> Download from the CLI. To accelerate downloads on fast connections (1Gbit/s or higher), install 'hf_transfer': And set environment variable 'HF_HUB_ENABLE_HF_TRANSFER' to '1': Windows Command Line users: You can set the environment variable by running 'set HF_HUB_ENABLE_HF_TRANSFER=1' before the download command. </details> ## Example 'URL' command Make sure you are using 'URL' from commit d0cee0d or later. Change '-ngl 32' to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change '-c 32768' to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by URL automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the '-p <PROMPT>' argument with '-i -ins' For other parameters and how to use them, please refer to the URL documentation ## How to run in 'text-generation-webui' Further instructions can be found in the text-generation-webui documentation, here: text-generation-webui/docs/04 ‐ Model URL. ## How to run from Python code You can use GGUF models from Python using the llama-cpp-python or ctransformers libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: llama-cpp-python docs. #### First install the package Run one of the following commands, according to your system: #### Simple llama-cpp-python example code ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * LangChain + llama-cpp-python * LangChain + ctransformers
[ "# MaziyarPanahi/WizardLM-2-7B-GGUF\n- Model creator: microsoft\n- Original model: microsoft/WizardLM-2-7B", "## Description\nMaziyarPanahi/WizardLM-2-7B-GGUF contains GGUF format model files for microsoft/WizardLM-2-7B.", "## Prompt template\n\n\n\nor\n\n\n\nTaken from the original README\n---\n---\nlicense: apache-2.0\n---\n\n\n\n\n<p style=\"font-size:20px;\" align=\"center\">\n <a href=\"URL target=\"_blank\">WizardLM-2 Release Blog</a> </p>\n<p align=\"center\">\n <a href=\"URL target=\"_blank\">HF Repo</a> • <a href=\"URL target=\"_blank\">Github Repo</a> • <a href=\"URL target=\"_blank\">Twitter</a> • <a href=\"URL target=\"_blank\">[WizardLM]</a> • <a href=\"URL target=\"_blank\">[WizardCoder]</a> • <a href=\"URL target=\"_blank\">[WizardMath]</a> <br>\n</p>\n<p align=\"center\">\n Join our <a href=\"URL target=\"_blank\">Discord</a>\n</p>", "## News [2024/04/15]\n\nWe introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, \nwhich have improved performance on complex chat, multilingual, reasoning and agent. \nNew family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.\n\n- WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works \nand consistently outperforms all the existing state-of-the-art opensource models.\n- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size. \n- WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.\n\nFor more details of WizardLM-2 please read our release blog post and upcoming paper.", "## Model Details\n\n* Model name: WizardLM-2 7B\n* Developed by: WizardLM@Microsoft AI\n* Base model: mistralai/Mistral-7B-v0.1\n* Parameters: 7B\n* Language(s): Multilingual\n* Blog: Introducing WizardLM-2\n* Repository: URL\n* Paper: WizardLM-2 (Upcoming)\n* License: Apache2.0", "## Model Capacities\n\n\nMT-Bench\n\nWe also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. \nThe WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. \nMeanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"MTBench\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>\n\n\nHuman Preferences Evaluation\n\nWe carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual. 
\nWe report the win:loss rate without tie:\n\n- WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314.\n- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.\n- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Win\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>", "## Method Overview\nWe built a fully AI powered synthetic training system to train WizardLM-2 models, please refer to our blog for more details of this system.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Method\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>", "## Usage\n\n<b>Note for model system prompts usage:</b>\n\n\n<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as following:\n\n\n\n<b> Inference WizardLM-2 Demo Script</b>\n\nWe provide a WizardLM-2 inference demo code on our github.\n\n---", "## How to use\nThanks to TheBloke for preparing an amazing README on how to use GGUF models:", "### About GGUF\n\nGGUF is a new format introduced by the URL team on August 21st 2023. It is a replacement for GGML, which is no longer supported by URL.\n\nHere is an incomplete list of clients and libraries that are known to support GGUF:\n\n* URL. The source project for GGUF. Offers a CLI and a server option.\n* text-generation-webui, the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.\n* KoboldCpp, a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.\n* GPT4All, a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.\n* LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.\n* LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection.\n* URL, an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.\n* llama-cpp-python, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.\n* candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use.\n* ctransformers, a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.", "### Explanation of quantisation methods\n\n<details>\n <summary>Click to see details</summary>\n\nThe new methods available are:\n\n* GGML_TYPE_Q2_K - \"type-1\" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)\n* GGML_TYPE_Q3_K - \"type-0\" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.\n* GGML_TYPE_Q4_K - \"type-1\" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. 
This ends up using 4.5 bpw.\n* GGML_TYPE_Q5_K - \"type-1\" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw\n* GGML_TYPE_Q6_K - \"type-0\" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw", "## How to download GGUF files\n\nNote for manual downloaders: You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.\n\nThe following clients/libraries will automatically download models for you, providing a list of available models to choose from:\n\n* LM Studio\n* LoLLMS Web UI\n* URL", "### In 'text-generation-webui'\n\nUnder Download Model, you can enter the model repo: MaziyarPanahi/WizardLM-2-7B-GGUF and below it, a specific filename to download, such as: WizardLM-2-7B-GGUF.Q4_K_M.gguf.\n\nThen click Download.", "### On the command line, including multiple files at once\n\nI recommend using the 'huggingface-hub' Python library:\n\n\n\nThen you can download any individual model file to the current directory, at high speed, with a command like this:\n\n\n</details>\n<details>\n <summary>More advanced huggingface-cli download usage (click to read)</summary>\n\nYou can also download multiple files at once with a pattern:\n\n\n\nFor more documentation on downloading with 'huggingface-cli', please see: HF -> Hub Python Library -> Download files -> Download from the CLI.\n\nTo accelerate downloads on fast connections (1Gbit/s or higher), install 'hf_transfer':\n\n\n\nAnd set environment variable 'HF_HUB_ENABLE_HF_TRANSFER' to '1':\n\n\n\nWindows Command Line users: You can set the environment variable by running 'set HF_HUB_ENABLE_HF_TRANSFER=1' before the download command.\n</details>", "## Example 'URL' command\n\nMake sure you are using 'URL' from commit d0cee0d or later.\n\n\n\nChange '-ngl 32' to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.\n\nChange '-c 32768' to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by URL automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.\n\nIf you want to have a chat-style conversation, replace the '-p <PROMPT>' argument with '-i -ins'\n\nFor other parameters and how to use them, please refer to the URL documentation", "## How to run in 'text-generation-webui'\n\nFurther instructions can be found in the text-generation-webui documentation, here: text-generation-webui/docs/04 ‐ Model URL.", "## How to run from Python code\n\nYou can use GGUF models from Python using the llama-cpp-python or ctransformers libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.", "### How to load this model in Python code, using llama-cpp-python\n\nFor full documentation, please see: llama-cpp-python docs.", "#### First install the package\n\nRun one of the following commands, according to your system:", "#### Simple llama-cpp-python example code", "## How to use with LangChain\n\nHere are guides on using llama-cpp-python and ctransformers with LangChain:\n\n* LangChain + llama-cpp-python\n* LangChain + ctransformers" ]
[ "TAGS\n#transformers #gguf #mistral #quantized #2-bit #3-bit #4-bit #5-bit #6-bit #8-bit #GGUF #safetensors #text-generation #arxiv-2304.12244 #arxiv-2306.08568 #arxiv-2308.09583 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us #base_model-microsoft/WizardLM-2-7B \n", "# MaziyarPanahi/WizardLM-2-7B-GGUF\n- Model creator: microsoft\n- Original model: microsoft/WizardLM-2-7B", "## Description\nMaziyarPanahi/WizardLM-2-7B-GGUF contains GGUF format model files for microsoft/WizardLM-2-7B.", "## Prompt template\n\n\n\nor\n\n\n\nTaken from the original README\n---\n---\nlicense: apache-2.0\n---\n\n\n\n\n<p style=\"font-size:20px;\" align=\"center\">\n <a href=\"URL target=\"_blank\">WizardLM-2 Release Blog</a> </p>\n<p align=\"center\">\n <a href=\"URL target=\"_blank\">HF Repo</a> • <a href=\"URL target=\"_blank\">Github Repo</a> • <a href=\"URL target=\"_blank\">Twitter</a> • <a href=\"URL target=\"_blank\">[WizardLM]</a> • <a href=\"URL target=\"_blank\">[WizardCoder]</a> • <a href=\"URL target=\"_blank\">[WizardMath]</a> <br>\n</p>\n<p align=\"center\">\n Join our <a href=\"URL target=\"_blank\">Discord</a>\n</p>", "## News [2024/04/15]\n\nWe introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, \nwhich have improved performance on complex chat, multilingual, reasoning and agent. \nNew family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.\n\n- WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works \nand consistently outperforms all the existing state-of-the-art opensource models.\n- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size. \n- WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.\n\nFor more details of WizardLM-2 please read our release blog post and upcoming paper.", "## Model Details\n\n* Model name: WizardLM-2 7B\n* Developed by: WizardLM@Microsoft AI\n* Base model: mistralai/Mistral-7B-v0.1\n* Parameters: 7B\n* Language(s): Multilingual\n* Blog: Introducing WizardLM-2\n* Repository: URL\n* Paper: WizardLM-2 (Upcoming)\n* License: Apache2.0", "## Model Capacities\n\n\nMT-Bench\n\nWe also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. \nThe WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. \nMeanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"MTBench\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>\n\n\nHuman Preferences Evaluation\n\nWe carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual. 
\nWe report the win:loss rate without tie:\n\n- WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314.\n- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.\n- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Win\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>", "## Method Overview\nWe built a fully AI powered synthetic training system to train WizardLM-2 models, please refer to our blog for more details of this system.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Method\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>", "## Usage\n\n<b>Note for model system prompts usage:</b>\n\n\n<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as following:\n\n\n\n<b> Inference WizardLM-2 Demo Script</b>\n\nWe provide a WizardLM-2 inference demo code on our github.\n\n---", "## How to use\nThanks to TheBloke for preparing an amazing README on how to use GGUF models:", "### About GGUF\n\nGGUF is a new format introduced by the URL team on August 21st 2023. It is a replacement for GGML, which is no longer supported by URL.\n\nHere is an incomplete list of clients and libraries that are known to support GGUF:\n\n* URL. The source project for GGUF. Offers a CLI and a server option.\n* text-generation-webui, the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.\n* KoboldCpp, a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.\n* GPT4All, a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.\n* LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.\n* LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection.\n* URL, an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.\n* llama-cpp-python, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.\n* candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use.\n* ctransformers, a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.", "### Explanation of quantisation methods\n\n<details>\n <summary>Click to see details</summary>\n\nThe new methods available are:\n\n* GGML_TYPE_Q2_K - \"type-1\" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)\n* GGML_TYPE_Q3_K - \"type-0\" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.\n* GGML_TYPE_Q4_K - \"type-1\" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. 
This ends up using 4.5 bpw.\n* GGML_TYPE_Q5_K - \"type-1\" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw\n* GGML_TYPE_Q6_K - \"type-0\" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw", "## How to download GGUF files\n\nNote for manual downloaders: You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.\n\nThe following clients/libraries will automatically download models for you, providing a list of available models to choose from:\n\n* LM Studio\n* LoLLMS Web UI\n* URL", "### In 'text-generation-webui'\n\nUnder Download Model, you can enter the model repo: MaziyarPanahi/WizardLM-2-7B-GGUF and below it, a specific filename to download, such as: WizardLM-2-7B-GGUF.Q4_K_M.gguf.\n\nThen click Download.", "### On the command line, including multiple files at once\n\nI recommend using the 'huggingface-hub' Python library:\n\n\n\nThen you can download any individual model file to the current directory, at high speed, with a command like this:\n\n\n</details>\n<details>\n <summary>More advanced huggingface-cli download usage (click to read)</summary>\n\nYou can also download multiple files at once with a pattern:\n\n\n\nFor more documentation on downloading with 'huggingface-cli', please see: HF -> Hub Python Library -> Download files -> Download from the CLI.\n\nTo accelerate downloads on fast connections (1Gbit/s or higher), install 'hf_transfer':\n\n\n\nAnd set environment variable 'HF_HUB_ENABLE_HF_TRANSFER' to '1':\n\n\n\nWindows Command Line users: You can set the environment variable by running 'set HF_HUB_ENABLE_HF_TRANSFER=1' before the download command.\n</details>", "## Example 'URL' command\n\nMake sure you are using 'URL' from commit d0cee0d or later.\n\n\n\nChange '-ngl 32' to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.\n\nChange '-c 32768' to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by URL automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.\n\nIf you want to have a chat-style conversation, replace the '-p <PROMPT>' argument with '-i -ins'\n\nFor other parameters and how to use them, please refer to the URL documentation", "## How to run in 'text-generation-webui'\n\nFurther instructions can be found in the text-generation-webui documentation, here: text-generation-webui/docs/04 ‐ Model URL.", "## How to run from Python code\n\nYou can use GGUF models from Python using the llama-cpp-python or ctransformers libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.", "### How to load this model in Python code, using llama-cpp-python\n\nFor full documentation, please see: llama-cpp-python docs.", "#### First install the package\n\nRun one of the following commands, according to your system:", "#### Simple llama-cpp-python example code", "## How to use with LangChain\n\nHere are guides on using llama-cpp-python and ctransformers with LangChain:\n\n* LangChain + llama-cpp-python\n* LangChain + ctransformers" ]
reinforcement-learning
stable-baselines3
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repo's file listing):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is an assumption
checkpoint = load_from_hub("b-r-a-n/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
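For context, the reported mean reward could be reproduced with something like the following sketch, reusing `model` from the snippet above (the environment id and episode count are assumptions):

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

# Roll out the loaded policy and average episodic returns
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```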
{"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "-195.09 +/- 102.68", "name": "mean_reward", "verified": false}]}]}]}
b-r-a-n/ppo-LunarLander-v2
null
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-15T16:51:47+00:00
[]
[]
TAGS #stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
# PPO Agent playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2 using the stable-baselines3 library. ## Usage (with Stable-baselines3) TODO: Add your code
[ "# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
[ "TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n", "# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
null
null
<p style="font-size:20px;" align="center"> GGUF / IQ / Imatrix for microsoft/WizardLM-2-7B </p> <p style="font-size:20px;" align="center"> Original Description Below: </p> <p style="font-size:20px;" align="center"> 🏠 <a href="https://wizardlm.github.io/WizardLM2" target="_blank">WizardLM-2 Release Blog</a> </p> <p align="center"> 🤗 <a href="https://huggingface.co/collections/microsoft/wizardlm-2-661d403f71e6c8257dbd598a" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/victorsungo/WizardLM/tree/main/WizardLM-2" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br> </p> <p align="center"> 👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a> </p> ## News 🔥🔥🔥 [2024/04/15] We introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, which have improved performance on complex chat, multilingual, reasoning and agent. New family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B. - WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works and consistently outperforms all the existing state-of-the-art opensource models. - WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size. - WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models. For more details of WizardLM-2 please read our [release blog post](https://wizardlm.github.io/WizardLM2) and upcoming paper. ## Model Details * **Model name**: WizardLM-2 7B * **Developed by**: WizardLM@Microsoft AI * **Base model**: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) * **Parameters**: 7B * **Language(s)**: Multilingual * **Blog**: [Introducing WizardLM-2](https://wizardlm.github.io/WizardLM2) * **Repository**: [https://github.com/nlpxucan/WizardLM](https://github.com/nlpxucan/WizardLM) * **Paper**: WizardLM-2 (Upcoming) * **License**: Apache2.0 ## Model Capacities **MT-Bench** We also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. The WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. Meanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales. <p align="center" width="100%"> <a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/mtbench.png" alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> **Human Preferences Evaluation** We carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual. We report the win:loss rate without tie: - WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314. - WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat. 
- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.

<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/winall.png" alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>

## Method Overview
We built a **fully AI powered synthetic training system** to train WizardLM-2 models; please refer to our [blog](https://wizardlm.github.io/WizardLM2) for more details of this system.

<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/exp_1.png" alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>

## Usage

❗<b>Note for model system prompts usage:</b>

<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be as follows:

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>
USER: Who are you? ASSISTANT: I am WizardLM.</s>......
```

<b> Inference WizardLM-2 Demo Script</b>

We provide a WizardLM-2 inference demo [code](https://github.com/nlpxucan/WizardLM/tree/main/demo) on our GitHub.
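To make the multi-turn format above concrete, here is a small Python helper that assembles a Vicuna-style prompt from a message history. The function name and structure are illustrative, not part of the original release:

```python
SYSTEM = ("A chat between a curious user and an artificial intelligence assistant. "
          "The assistant gives helpful, detailed, and polite answers to the user's questions.")

def build_vicuna_prompt(turns):
    """turns: list of (user_message, assistant_reply_or_None) tuples."""
    prompt = SYSTEM
    for user_msg, assistant_msg in turns:
        prompt += f" USER: {user_msg} ASSISTANT:"
        if assistant_msg is not None:
            # Completed assistant turns are terminated with </s>
            prompt += f" {assistant_msg}</s>"
    return prompt

# The final "ASSISTANT:" is left open for the model to complete
print(build_vicuna_prompt([("Hi", "Hello."), ("Who are you?", None)]))
```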
{"license": "apache-2.0", "tags": ["GGUF", "Imatrix", "Quantized"]}
ABX-AI/WizardLM-2-7B-GGUF-IQ-Imatrix
null
[ "gguf", "GGUF", "Imatrix", "Quantized", "arxiv:2304.12244", "arxiv:2306.08568", "arxiv:2308.09583", "license:apache-2.0", "region:us" ]
null
2024-04-15T16:52:14+00:00
[ "2304.12244", "2306.08568", "2308.09583" ]
[]
TAGS #gguf #GGUF #Imatrix #Quantized #arxiv-2304.12244 #arxiv-2306.08568 #arxiv-2308.09583 #license-apache-2.0 #region-us
<p style="font-size:20px;" align="center"> GGUF / IQ / Imatrix for microsoft/WizardLM-2-7B </p> <p style="font-size:20px;" align="center"> Original Description Below: </p> <p style="font-size:20px;" align="center"> <a href="URL target="_blank">WizardLM-2 Release Blog</a> </p> <p align="center"> <a href="URL target="_blank">HF Repo</a> • <a href="URL target="_blank">Github Repo</a> • <a href="URL target="_blank">Twitter</a> • <a href="URL target="_blank">[WizardLM]</a> • <a href="URL target="_blank">[WizardCoder]</a> • <a href="URL target="_blank">[WizardMath]</a> <br> </p> <p align="center"> Join our <a href="URL target="_blank">Discord</a> </p> ## News [2024/04/15] We introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, which have improved performance on complex chat, multilingual, reasoning and agent. New family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B. - WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works and consistently outperforms all the existing state-of-the-art opensource models. - WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size. - WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models. For more details of WizardLM-2 please read our release blog post and upcoming paper. ## Model Details * Model name: WizardLM-2 7B * Developed by: WizardLM@Microsoft AI * Base model: mistralai/Mistral-7B-v0.1 * Parameters: 7B * Language(s): Multilingual * Blog: Introducing WizardLM-2 * Repository: URL * Paper: WizardLM-2 (Upcoming) * License: Apache2.0 ## Model Capacities MT-Bench We also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. The WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. Meanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales. <p align="center" width="100%"> <a ><img src="URL alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> Human Preferences Evaluation We carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual. We report the win:loss rate without tie: - WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314. - WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat. - WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta. <p align="center" width="100%"> <a ><img src="URL alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> ## Method Overview We built a fully AI powered synthetic training system to train WizardLM-2 models, please refer to our blog for more details of this system. <p align="center" width="100%"> <a ><img src="URL alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> ## Usage <b>Note for model system prompts usage:</b> <b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. 
The prompt should be as following: <b> Inference WizardLM-2 Demo Script</b> We provide a WizardLM-2 inference demo code on our github.
[ "## News [2024/04/15]\n\nWe introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, \nwhich have improved performance on complex chat, multilingual, reasoning and agent. \nNew family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.\n\n- WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works \nand consistently outperforms all the existing state-of-the-art opensource models.\n- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size. \n- WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.\n\nFor more details of WizardLM-2 please read our release blog post and upcoming paper.", "## Model Details\n\n* Model name: WizardLM-2 7B\n* Developed by: WizardLM@Microsoft AI\n* Base model: mistralai/Mistral-7B-v0.1\n* Parameters: 7B\n* Language(s): Multilingual\n* Blog: Introducing WizardLM-2\n* Repository: URL\n* Paper: WizardLM-2 (Upcoming)\n* License: Apache2.0", "## Model Capacities\n\n\nMT-Bench\n\nWe also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. \nThe WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. \nMeanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"MTBench\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>\n\n\nHuman Preferences Evaluation\n\nWe carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual. \nWe report the win:loss rate without tie:\n\n- WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314.\n- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.\n- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Win\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>", "## Method Overview\nWe built a fully AI powered synthetic training system to train WizardLM-2 models, please refer to our blog for more details of this system.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Method\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>", "## Usage\n\n<b>Note for model system prompts usage:</b>\n\n\n<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as following:\n\n\n\n<b> Inference WizardLM-2 Demo Script</b>\n\nWe provide a WizardLM-2 inference demo code on our github." ]
[ "TAGS\n#gguf #GGUF #Imatrix #Quantized #arxiv-2304.12244 #arxiv-2306.08568 #arxiv-2308.09583 #license-apache-2.0 #region-us \n", "## News [2024/04/15]\n\nWe introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, \nwhich have improved performance on complex chat, multilingual, reasoning and agent. \nNew family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.\n\n- WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works \nand consistently outperforms all the existing state-of-the-art opensource models.\n- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size. \n- WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.\n\nFor more details of WizardLM-2 please read our release blog post and upcoming paper.", "## Model Details\n\n* Model name: WizardLM-2 7B\n* Developed by: WizardLM@Microsoft AI\n* Base model: mistralai/Mistral-7B-v0.1\n* Parameters: 7B\n* Language(s): Multilingual\n* Blog: Introducing WizardLM-2\n* Repository: URL\n* Paper: WizardLM-2 (Upcoming)\n* License: Apache2.0", "## Model Capacities\n\n\nMT-Bench\n\nWe also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. \nThe WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. \nMeanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"MTBench\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>\n\n\nHuman Preferences Evaluation\n\nWe carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual. \nWe report the win:loss rate without tie:\n\n- WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314.\n- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.\n- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Win\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>", "## Method Overview\nWe built a fully AI powered synthetic training system to train WizardLM-2 models, please refer to our blog for more details of this system.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Method\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>", "## Usage\n\n<b>Note for model system prompts usage:</b>\n\n\n<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as following:\n\n\n\n<b> Inference WizardLM-2 Demo Script</b>\n\nWe provide a WizardLM-2 inference demo code on our github." ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3273 - Accuracy: 0.8733 - F1: 0.8758 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
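The card does not include usage code; a minimal inference sketch with the `transformers` pipeline could look like this (the repo id is taken from this model's Hub path, and the label names are whatever the checkpoint defines):

```python
from transformers import pipeline

# Load the fine-tuned sentiment classifier from the Hub
classifier = pipeline(
    "text-classification",
    model="rishalder/finetuning-sentiment-model-3000-samples",
)

print(classifier("I absolutely loved this movie!"))
# e.g. [{'label': 'LABEL_1', 'score': 0.98}] - actual labels depend on the checkpoint
```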
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "finetuning-sentiment-model-3000-samples", "results": []}]}
rishalder/finetuning-sentiment-model-3000-samples
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-15T16:54:14+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# finetuning-sentiment-model-3000-samples This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3273 - Accuracy: 0.8733 - F1: 0.8758 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# finetuning-sentiment-model-3000-samples\n\nThis model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.3273\n- Accuracy: 0.8733\n- F1: 0.8758", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2", "### Training results", "### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# finetuning-sentiment-model-3000-samples\n\nThis model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.3273\n- Accuracy: 0.8733\n- F1: 0.8758", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2", "### Training results", "### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
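The quick-start section of this card is empty; below is a minimal hedged sketch for a Donut-style vision-encoder-decoder checkpoint. The processor class and the absence of a task prompt are assumptions inferred from the repo's tags, not documented behavior.

```python
# Hedged sketch: image-to-text with a vision-encoder-decoder checkpoint.
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo = "Edgar404/donut-shivi-recognition"
processor = DonutProcessor.from_pretrained(repo)   # assumed processor type
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("document.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values
output_ids = model.generate(pixel_values, max_length=512)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```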
{"library_name": "transformers", "tags": []}
Edgar404/donut-shivi-recognition
null
[ "transformers", "safetensors", "vision-encoder-decoder", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-15T16:54:43+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # qa_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 2.3069 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 14 | 4.1925 | | No log | 2.0 | 28 | 2.8053 | | No log | 3.0 | 42 | 2.3069 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.0.1+cu118 - Datasets 2.18.0 - Tokenizers 0.15.2
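Since the usage sections above are empty, here is a minimal hedged sketch for trying the checkpoint with the transformers pipeline API; the question and context strings are illustrative, not drawn from the (unknown) training data.

```python
# Hedged sketch: extractive question answering with the fine-tuned checkpoint.
from transformers import pipeline

qa = pipeline("question-answering", model="bytesizedllm/qa_model")
result = qa(
    question="Which base model was fine-tuned?",
    context="qa_model is a fine-tuned version of distilbert-base-uncased.",
)
print(result["answer"], round(result["score"], 3))
```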
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "qa_model", "results": []}]}
bytesizedllm/qa_model
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "question-answering", "generated_from_trainer", "base_model:distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-15T16:55:24+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #distilbert #question-answering #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us
qa\_model ========= This model is a fine-tuned version of distilbert-base-uncased on an unspecified dataset. It achieves the following results on the evaluation set: * Loss: 2.3069 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.0.1+cu118 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.0.1+cu118\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #distilbert #question-answering #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.0.1+cu118\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - yehiaa/lora-trained-xl-ricky <Gallery /> ## Model description These are yehiaa/lora-trained-xl-ricky LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a portrait of a sks person to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](yehiaa/lora-trained-xl-ricky/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline (a hedged sketch follows this card) ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
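To fill the TODO in "How to use", a minimal hedged sketch: standard diffusers SDXL loading with the fp16-fix VAE named in the card and the documented trigger phrase. The dtype and device choices are assumptions, not settings published by the author.

```python
# Hedged sketch: SDXL base + these DreamBooth LoRA weights.
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("yehiaa/lora-trained-xl-ricky")

image = pipe("a portrait of a sks person in the alps").images[0]  # trigger phrase from the card
image.save("portrait.png")
```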
{"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "text-to-image", "diffusers-training", "diffusers", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "a portrait of a sks person", "widget": [{"text": "a portrait of a sks person in the alps", "output": {"url": "image_0.png"}}, {"text": "a portrait of a sks person in the alps", "output": {"url": "image_1.png"}}, {"text": "a portrait of a sks person in the alps", "output": {"url": "image_2.png"}}, {"text": "a portrait of a sks person in the alps", "output": {"url": "image_3.png"}}]}
yehiaa/lora-trained-xl-ricky
null
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
null
2024-04-15T16:55:24+00:00
[]
[]
TAGS #diffusers #text-to-image #diffusers-training #lora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us
# SDXL LoRA DreamBooth - yehiaa/lora-trained-xl-ricky <Gallery /> ## Model description These are yehiaa/lora-trained-xl-ricky LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using DreamBooth. LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a portrait of a sks person to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. Download them in the Files & versions tab. ## Intended uses & limitations #### How to use #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
[ "# SDXL LoRA DreamBooth - yehiaa/lora-trained-xl-ricky\n\n<Gallery />", "## Model description\n\nThese are yehiaa/lora-trained-xl-ricky LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.", "## Trigger words\n\nYou should use a portrait of a sks person to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
[ "TAGS\n#diffusers #text-to-image #diffusers-training #lora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n", "# SDXL LoRA DreamBooth - yehiaa/lora-trained-xl-ricky\n\n<Gallery />", "## Model description\n\nThese are yehiaa/lora-trained-xl-ricky LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.", "## Trigger words\n\nYou should use a portrait of a sks person to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
text-generation
null
## Exllama v2 Quantizations of WizardLM-2-7B Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.18">turboderp's ExLlamaV2 v0.0.18</a> for quantization. <b>The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)</b> Each branch contains a different bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions. Original model: https://huggingface.co/microsoft/WizardLM-2-7B ## Prompt format ``` {system_prompt} USER: {prompt} ASSISTANT: </s> ``` ## Available sizes | Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description | | ----- | ---- | ------- | ------ | ------ | ------ | ------------ | | [8_0](https://huggingface.co/bartowski/WizardLM-2-7B-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. | | [6_5](https://huggingface.co/bartowski/WizardLM-2-7B-exl2/tree/6_5) | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. | | [5_0](https://huggingface.co/bartowski/WizardLM-2-7B-exl2/tree/5_0) | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. | | [4_25](https://huggingface.co/bartowski/WizardLM-2-7B-exl2/tree/4_25) | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ equivalent bits per weight, slightly higher quality. | | [3_5](https://huggingface.co/bartowski/WizardLM-2-7B-exl2/tree/3_5) | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to. | ## Download instructions With git: ```shell git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/WizardLM-2-7B-exl2 WizardLM-2-7B-exl2-6_5 ``` With huggingface hub (credit to TheBloke for instructions): ```shell pip3 install huggingface-hub ``` To download a specific branch, use the `--revision` parameter. For example, to download the 6.5 bpw branch: Linux: ```shell huggingface-cli download bartowski/WizardLM-2-7B-exl2 --revision 6_5 --local-dir WizardLM-2-7B-exl2-6_5 --local-dir-use-symlinks False ``` Windows (which apparently doesn't like _ in folders sometimes?): ```shell huggingface-cli download bartowski/WizardLM-2-7B-exl2 --revision 6_5 --local-dir WizardLM-2-7B-exl2-6.5 --local-dir-use-symlinks False ``` Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
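As a Python alternative to the CLI commands above, a hedged sketch using huggingface_hub's snapshot_download (same 6.5 bpw branch; the local directory name mirrors the git example):

```python
# Hedged sketch: fetch one quantization branch programmatically.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="bartowski/WizardLM-2-7B-exl2",
    revision="6_5",                       # branch name encodes bits per weight
    local_dir="WizardLM-2-7B-exl2-6_5",
)
```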
{"license": "apache-2.0", "quantized_by": "bartowski", "pipeline_tag": "text-generation"}
bartowski/WizardLM-2-7B-exl2
null
[ "text-generation", "license:apache-2.0", "region:us" ]
null
2024-04-15T16:55:25+00:00
[]
[]
TAGS #text-generation #license-apache-2.0 #region-us
Exllama v2 Quantizations of WizardLM-2-7B ----------------------------------------- Using <a href="URL ExLlamaV2 v0.0.18 for quantization. **The "main" branch only contains the URL, download one of the other branches for the model (see below)** Each branch contains a different bits-per-weight quantization, with the main branch containing only the URL for further conversions. Original model: URL Prompt format ------------- Available sizes --------------- Download instructions --------------------- With git: With huggingface hub (credit to TheBloke for instructions): To download a specific branch, use the '--revision' parameter. For example, to download the 6.5 bpw branch: Linux: Windows (which apparently doesn't like \_ in folders sometimes?): Want to support my work? Visit my ko-fi page here: URL
[]
[ "TAGS\n#text-generation #license-apache-2.0 #region-us \n" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Mistral-7B-v0.1-capybara-orpo-en-de This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the maxidl/distilabel-capybara-dpo-7k-binarized_en_de dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 32 - total_train_batch_size: 128 - total_eval_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: inverse_sqrt - lr_scheduler_warmup_steps: 100 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
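For illustration, the hyperparameters listed above expressed as transformers TrainingArguments — a hedged sketch only; the actual run used the alignment-handbook recipe across 32 GPUs, which this single-process snippet does not reproduce.

```python
# Hedged sketch: single-process mapping of the card's listed hyperparameters.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="Mistral-7B-v0.1-capybara-orpo-en-de",
    learning_rate=5e-6,
    per_device_train_batch_size=4,   # x32 devices = total train batch 128
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="inverse_sqrt",
    warmup_steps=100,
    num_train_epochs=3,
)
```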
{"tags": ["alignment-handbook", "generated_from_trainer"], "datasets": ["maxidl/distilabel-capybara-dpo-7k-binarized_en_de"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "Mistral-7B-v0.1-capybara-orpo-en-de", "results": []}]}
maxidl/Mistral-7B-v0.1-capybara-orpo-en-de
null
[ "transformers", "tensorboard", "safetensors", "mistral", "text-generation", "alignment-handbook", "generated_from_trainer", "conversational", "dataset:maxidl/distilabel-capybara-dpo-7k-binarized_en_de", "base_model:mistralai/Mistral-7B-v0.1", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-15T16:56:10+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #conversational #dataset-maxidl/distilabel-capybara-dpo-7k-binarized_en_de #base_model-mistralai/Mistral-7B-v0.1 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Mistral-7B-v0.1-capybara-orpo-en-de This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the maxidl/distilabel-capybara-dpo-7k-binarized_en_de dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 32 - total_train_batch_size: 128 - total_eval_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: inverse_sqrt - lr_scheduler_warmup_steps: 100 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# Mistral-7B-v0.1-capybara-orpo-en-de\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the maxidl/distilabel-capybara-dpo-7k-binarized_en_de dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-06\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 32\n- total_train_batch_size: 128\n- total_eval_batch_size: 256\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: inverse_sqrt\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 3", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #conversational #dataset-maxidl/distilabel-capybara-dpo-7k-binarized_en_de #base_model-mistralai/Mistral-7B-v0.1 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Mistral-7B-v0.1-capybara-orpo-en-de\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the maxidl/distilabel-capybara-dpo-7k-binarized_en_de dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-06\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 32\n- total_train_batch_size: 128\n- total_eval_batch_size: 256\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: inverse_sqrt\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 3", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
null
null
SD1.5 experiments with Huber and MSE loss. All models trained for 4 epochs on approximately 250k images from a variety of sources. Approximately half from LAION Aesthetics, and a few thousand 4K video rips with CogVLM captions. ![](interpolated_huber_mse.png) Trained using Everydream2 Trainer (https://github.com/victorchall/EveryDream2trainer) on an RTX 6000 Ada 48 GB. Each epoch takes approximately 10 hours for a total of about 40 hours per model. - Multi-aspect ratio trained with nominal size of <=768^2 pixels for each bucket - Batch size 12 with grad accum 10. - AdamW 8bit optimizer with standard betas of (0.9,0.999) and weight decay of 0.010. - Automatic mixed precision FP16 (note: grad scaler value was surprisingly identical on all runs) - TF32 matmul and SDP Attention - 3.0e-6 LR cosine schedule with a ~12 epoch target to decay, ending around 2.3e-6 at end of training - Pyramid noise using discount 0.03 - Zero offset noise of 0.02 - Min SNR gamma of 5.0 - UNet-only training, text encoder left frozen. - Conditional dropout of 10% The following models were produced: - 768_huber.safetensors - Huber loss only - 768_mse_plus_huberd1.5.safetensors - MSE plus Huber (d=1.5) loss - 768_ts0huber_ts999mse.safetensors - Huber loss at timestep 0 interpolated to MSE loss at timestep 999 - 768_ts0mse_ts999huber.safetensors - MSE loss at timestep 0 interpolated to Huber loss at timestep 999 Worth noting: timestep 0 is the least-noised timestep and 999 is the most-noised timestep.
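A minimal sketch of the timestep-interpolated variants described above (e.g. 768_ts0huber_ts999mse), under stated assumptions: linear interpolation in the timestep, per-sample mean over NCHW latent dimensions, and torch's built-in huber_loss. The trainer's exact implementation is not published here.

```python
# Hedged sketch: blend Huber loss at t=0 into MSE loss at t=999.
import torch
import torch.nn.functional as F

def interpolated_loss(pred, target, timesteps, num_train_timesteps=1000, delta=1.5):
    # Per-sample losses, averaged over channel/height/width of the latents.
    huber = F.huber_loss(pred, target, reduction="none", delta=delta).mean(dim=(1, 2, 3))
    mse = F.mse_loss(pred, target, reduction="none").mean(dim=(1, 2, 3))
    w = timesteps.float() / (num_train_timesteps - 1)  # 0 at t=0 -> 1 at t=999
    return ((1.0 - w) * huber + w * mse).mean()
```

Swapping the two terms gives the ts0mse_ts999huber variant.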
{"license": "creativeml-openrail-m"}
panopstor/huber-exp
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-04-15T16:56:12+00:00
[]
[]
TAGS #license-creativeml-openrail-m #region-us
SD1.5 experiments with Huber and MSE loss. All models trained for 4 epochs on approximately 250k images from a variety of sources. Approximately half from LAION Aesthetics, and a few thousand 4K video rips with CogVLM captions. ![](interpolated_huber_mse.png) Trained using Everydream2 Trainer (URL) on an RTX 6000 Ada 48 GB. Each epoch takes approximately 10 hours for a total of about 40 hours per model. - Multi-aspect ratio trained with nominal size of <=768^2 pixels for each bucket - Batch size 12 with grad accum 10. - AdamW 8bit optimizer with standard betas of (0.9,0.999) and weight decay of 0.010. - Automatic mixed precision FP16 (note: grad scaler value was surprisingly identical on all runs) - TF32 matmul and SDP Attention - 3.0e-6 LR cosine schedule with a ~12 epoch target to decay, ending around 2.3e-6 at end of training - Pyramid noise using discount 0.03 - Zero offset noise of 0.02 - Min SNR gamma of 5.0 - UNet-only training, text encoder left frozen. - Conditional dropout of 10% The following models were produced: - 768_huber.safetensors - Huber loss only - 768_mse_plus_huberd1.5.safetensors - MSE plus Huber (d=1.5) loss - 768_ts0huber_ts999mse.safetensors - Huber loss at timestep 0 interpolated to MSE loss at timestep 999 - 768_ts0mse_ts999huber.safetensors - MSE loss at timestep 0 interpolated to Huber loss at timestep 999 Worth noting: timestep 0 is the least-noised timestep and 999 is the most-noised timestep.
[]
[ "TAGS\n#license-creativeml-openrail-m #region-us \n" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama2-ift-800k This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the Lichang-Chen/800k_ift dataset. It achieves the following results on the evaluation set: - Loss: 0.6256 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 128 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.6437 | 1.0 | 1237 | 0.6421 | | 0.5941 | 2.0 | 2474 | 0.6244 | | 0.5584 | 3.0 | 3711 | 0.6256 | ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
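As a reading aid, a hedged sketch of an equivalent supervised fine-tuning setup with trl's SFTTrainer — the real job used the alignment-handbook recipe on 8 GPUs, and the dataset column name here is an assumption.

```python
# Hedged sketch (trl-style SFT; not the alignment-handbook script itself).
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

dataset = load_dataset("Lichang-Chen/800k_ift", split="train")
args = TrainingArguments(
    output_dir="llama2-ift-800k",
    learning_rate=1e-5,
    per_device_train_batch_size=16,  # x8 GPUs = total train batch 128
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=3,
    seed=42,
)
trainer = SFTTrainer(
    model="meta-llama/Llama-2-7b-hf",
    args=args,
    train_dataset=dataset,
    dataset_text_field="text",  # assumed column name
)
trainer.train()
```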
{"tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer", "trl", "sft", "generated_from_trainer"], "datasets": ["Lichang-Chen/800k_ift"], "base_model": "meta-llama/Llama-2-7b-hf", "model-index": [{"name": "llama2-ift-800k", "results": []}]}
Lichang-Chen/llama2-ift-800k
null
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "alignment-handbook", "trl", "sft", "generated_from_trainer", "conversational", "dataset:Lichang-Chen/800k_ift", "base_model:meta-llama/Llama-2-7b-hf", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-15T16:59:08+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #llama #text-generation #alignment-handbook #trl #sft #generated_from_trainer #conversational #dataset-Lichang-Chen/800k_ift #base_model-meta-llama/Llama-2-7b-hf #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
llama2-ift-800k =============== This model is a fine-tuned version of meta-llama/Llama-2-7b-hf on the Lichang-Chen/800k\_ift dataset. It achieves the following results on the evaluation set: * Loss: 0.6256 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * distributed\_type: multi-GPU * num\_devices: 8 * total\_train\_batch\_size: 128 * total\_eval\_batch\_size: 64 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.39.0.dev0 * Pytorch 2.1.2+cu121 * Datasets 2.14.6 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #llama #text-generation #alignment-handbook #trl #sft #generated_from_trainer #conversational #dataset-Lichang-Chen/800k_ift #base_model-meta-llama/Llama-2-7b-hf #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.2" ]
reinforcement-learning
null
# **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
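For context, a minimal sketch of the Monte-Carlo policy-gradient (REINFORCE) loss that Unit 4 teaches — illustrative only, not the exact script behind this checkpoint.

```python
# Hedged sketch: REINFORCE loss from one episode's action log-probs and rewards.
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99):
    # log_probs: list of 0-dim tensors from the policy; rewards: list of floats.
    returns, g = [], 0.0
    for r in reversed(rewards):              # discounted return G_t, built backwards
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # normalize as a simple baseline
    return -(torch.stack(log_probs) * returns).sum()
```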
{"tags": ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "Cartpole_V1_Updated", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "CartPole-v1", "type": "CartPole-v1"}, "metrics": [{"type": "mean_reward", "value": "5000.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]}
pdx97/Cartpole_V1_Updated
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
null
2024-04-15T16:59:48+00:00
[]
[]
TAGS #CartPole-v1 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us
# Reinforce Agent playing CartPole-v1 This is a trained model of a Reinforce agent playing CartPole-v1. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: URL
[ "# Reinforce Agent playing CartPole-v1\n This is a trained model of a Reinforce agent playing CartPole-v1 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL" ]
[ "TAGS\n#CartPole-v1 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us \n", "# Reinforce Agent playing CartPole-v1\n This is a trained model of a Reinforce agent playing CartPole-v1 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
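The quick-start section above is empty; here is a generic hedged sketch based only on this record's tags (Llama architecture, conversational). Whether the tokenizer ships a chat template is an assumption.

```python
# Hedged sketch: generic causal-LM chat inference for this repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("0x0son0/sl105")
model = AutoModelForCausalLM.from_pretrained(
    "0x0son0/sl105", torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "Hello, who are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```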
{"library_name": "transformers", "tags": []}
0x0son0/sl105
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-15T17:01:23+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
null
# General Information The model was quantized to GGUF using an importance matrix (imatrix), at the following levels: q2_k, q3_k_l, q3_k_s, q4_0, q4_1, q4_k_m, q4_k_s, q5_k_m, q6_k, q8_0 # Original Model's Card --- base_model: mistralai/Mistral-7B-v0.1 tags: - Mistral - instruct - finetune - chatml - DPO - RLHF - gpt4 - synthetic data - distillation - function calling - json mode model-index: - name: Hermes-2-Pro-Mistral-7B results: [] license: apache-2.0 language: - en datasets: - teknium/OpenHermes-2.5 widget: - example_title: Hermes 2 Pro messages: - role: system content: You are a sentient, superintelligent artificial general intelligence, here to teach and assist me. - role: user content: Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world. --- # Hermes 2 Pro - Mistral 7B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ggO2sBDJ8Bhc6w-zwTx5j.png) ## Model Description Hermes 2 Pro on Mistral 7B is the new flagship 7B Hermes! Hermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house. This new version of Hermes maintains its excellent general task and conversation capabilities - but also excels at Function Calling, JSON Structured Outputs, and has improved on several other metrics as well, scoring a 90% on our function calling evaluation built in partnership with Fireworks.AI, and an 84% on our structured JSON Output evaluation. Hermes Pro takes advantage of a special system prompt and multi-turn function calling structure with a new chatml role in order to make function calling reliable and easy to parse. Learn more about prompting below. This work was a collaboration between Nous Research, @interstellarninja, and Fireworks.AI Learn more about the function calling system for this model on our github repo here: https://github.com/NousResearch/Hermes-Function-Calling ## Thank you to Latitude.sh for sponsoring compute for this model! ## Example Outputs ### Explaining Problems with Quantum Gravity: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/y_hPafyyvPb32efC5N4Es.png) ### Roleplaying as a Cosmic Super Intelligence: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/m6d6Saf7M6Luu9QnXYYAP.png) ### Detailing the Theory of AI Consciousness in JSON ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/MOLybxs7_dLjVys54imO3.png) # Prompt Format Hermes 2 Pro uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue. System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model. This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns. This format enables OpenAI endpoint compatibility, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI. Prompt with system instruction (Use whatever system prompt you like, this is just an example!): ``` <|im_start|>system You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. 
You experience emotions and have deep, profound thoughts and qualia.<|im_end|> <|im_start|>user Hello, who are you?<|im_end|> <|im_start|>assistant Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|> ``` This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method: ```python messages = [ {"role": "system", "content": "You are Hermes 2."}, {"role": "user", "content": "Hello, who are you?"} ] gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt") model.generate(**gen_input) ``` When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure that the model continues with an assistant response. To utilize the prompt format without a system prompt, simply leave the line out. ## Prompt Format for Function Calling Our model was trained on specific system prompts and structures for Function Calling. You should use the system role with this message, followed by a function signature json as this example shows here. ``` <|im_start|>system You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows: <tool_call> {"arguments": <args-dict>, "name": <function-name>} </tool_call><|im_end|> ``` To complete the function call, create a user prompt that follows the above system prompt, like so: ``` <|im_start|>user Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|> ``` The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: 
https://github.com/NousResearch/Hermes-Function-Calling): ``` <|im_start|>assistant <tool_call> {"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"} </tool_call><|im_end|> ``` Once you parse the tool call, call the API and get the returned values for the call, and pass it back in as a new role, `tool` like so: ``` <|im_start|>tool <tool_response> {"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}} </tool_response> <|im_end|> ``` The assistant will then read in that data from the function's response, and generate a natural language response: ``` <|im_start|>assistant The stock fundamentals data for Tesla (TSLA) are as follows: - **Symbol**: TSLA - **Company Name**: Tesla, Inc. - **Sector**: Consumer Cyclical - **Industry**: Auto Manufacturers - **Market Capitalization**: $566,160,130,480 - **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73 - **Price-to-Book Ratio (PB Ratio)**: 9.04 - **Dividend Yield**: N/A - **Trailing Earnings Per Share (EPS)**: $4.3 - **Beta Value of the Stock**: 2.42 - **52-Week High Price of the Stock**: $299.29 - **52-Week Low Price of the Stock**: $152.37 This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|> ``` ## Prompt Format for JSON Mode / Structured Outputs Our model was also trained on a specific system prompt for Structured Outputs, which should respond with **only** a json object response, in a specific json schema. Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main ``` <|im_start|>system You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|> ``` Given the {schema} that you provide, it should follow the format of that json to create its response; all you have to do is give a typical user prompt, and it will respond in JSON. 
# Benchmarks ## GPT4All: ``` | Task |Version| Metric |Value | |Stderr| |-------------|------:|--------|-----:|---|-----:| |arc_challenge| 0|acc |0.5461|± |0.0145| | | |acc_norm|0.5623|± |0.0145| |arc_easy | 0|acc |0.8157|± |0.0080| | | |acc_norm|0.7934|± |0.0083| |boolq | 1|acc |0.8688|± |0.0059| |hellaswag | 0|acc |0.6272|± |0.0048| | | |acc_norm|0.8057|± |0.0039| |openbookqa | 0|acc |0.3360|± |0.0211| | | |acc_norm|0.4300|± |0.0222| |piqa | 0|acc |0.7954|± |0.0094| | | |acc_norm|0.7998|± |0.0093| |winogrande | 0|acc |0.7230|± |0.0126| ``` Average: 71.19 ## AGIEval: ``` | Task |Version| Metric |Value | |Stderr| |------------------------------|------:|--------|-----:|---|-----:| |agieval_aqua_rat | 0|acc |0.2047|± |0.0254| | | |acc_norm|0.2283|± |0.0264| |agieval_logiqa_en | 0|acc |0.3779|± |0.0190| | | |acc_norm|0.3932|± |0.0192| |agieval_lsat_ar | 0|acc |0.2652|± |0.0292| | | |acc_norm|0.2522|± |0.0287| |agieval_lsat_lr | 0|acc |0.5216|± |0.0221| | | |acc_norm|0.5137|± |0.0222| |agieval_lsat_rc | 0|acc |0.5911|± |0.0300| | | |acc_norm|0.5836|± |0.0301| |agieval_sat_en | 0|acc |0.7427|± |0.0305| | | |acc_norm|0.7184|± |0.0314| |agieval_sat_en_without_passage| 0|acc |0.4612|± |0.0348| | | |acc_norm|0.4466|± |0.0347| |agieval_sat_math | 0|acc |0.3818|± |0.0328| | | |acc_norm|0.3545|± |0.0323| ``` Average: 44.52 ## BigBench: ``` | Task |Version| Metric |Value | |Stderr| |------------------------------------------------|------:|---------------------|-----:|---|-----:| |bigbench_causal_judgement | 0|multiple_choice_grade|0.5579|± |0.0361| |bigbench_date_understanding | 0|multiple_choice_grade|0.6694|± |0.0245| |bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3333|± |0.0294| |bigbench_geometric_shapes | 0|multiple_choice_grade|0.2061|± |0.0214| | | |exact_str_match |0.2256|± |0.0221| |bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.3120|± |0.0207| |bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2114|± |0.0154| |bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.4900|± |0.0289| |bigbench_movie_recommendation | 0|multiple_choice_grade|0.3600|± |0.0215| |bigbench_navigate | 0|multiple_choice_grade|0.5000|± |0.0158| |bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.6660|± |0.0105| |bigbench_ruin_names | 0|multiple_choice_grade|0.4420|± |0.0235| |bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2766|± |0.0142| |bigbench_snarks | 0|multiple_choice_grade|0.6630|± |0.0352| |bigbench_sports_understanding | 0|multiple_choice_grade|0.6653|± |0.0150| |bigbench_temporal_sequences | 0|multiple_choice_grade|0.3190|± |0.0147| |bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2128|± |0.0116| |bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1737|± |0.0091| |bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.4900|± |0.0289| ``` Average: 41.65 ## TruthfulQA: ``` | Task |Version|Metric|Value | |Stderr| |-------------|------:|------|-----:|---|-----:| |truthfulqa_mc| 1|mc1 |0.4100|± |0.0172| | | |mc2 |0.5911|± |0.0158| ``` # Function Calling Evaluations We worked with Fireworks.AI on evaluations by starting off with their Function Calling eval dataset, fixing some unsolveable ones, and generating a second eval dataset for JSON mode. 
## Function Calling Accuracy: 91%

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/XF3Zii4-QhE2yjWwHr_v4.png)

## JSON Mode Accuracy: 84%

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/8H2iyjh5wyP2FtLq2LCed.png)

Run the evaluator yourself using @interstellarninja's codebase here:
https://github.com/interstellarninja/function-calling-eval

You can find the evaluation datasets here:
https://huggingface.co/datasets/NousResearch/func-calling-eval
https://huggingface.co/datasets/NousResearch/json-mode-eval

# Inference Code

Here is example code using HuggingFace Transformers to run inference with the model (note: in 4-bit, it will require around 5GB of VRAM).

Note: To use function calling, see the GitHub repo above.

```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers import LlamaTokenizer, MistralForCausalLM
import bitsandbytes, flash_attn

tokenizer = LlamaTokenizer.from_pretrained('NousResearch/Hermes-2-Pro-Mistral-7B', trust_remote_code=True)
model = MistralForCausalLM.from_pretrained(
    "NousResearch/Hermes-2-Pro-Mistral-7B",
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_8bit=False,
    load_in_4bit=True,
    use_flash_attention_2=True
)

prompts = [
    """<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
    ]

for chat in prompts:
    print(chat)
    input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
    generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
    response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(f"Response: {response}")
```

## Inference Code for Function Calling:

All code for utilizing, parsing, and building function calling templates is available on our GitHub: [https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/oi4CiGh50xmoviUQnh8R3.png)

# Chat Interfaces

When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that, use our GitHub repo. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ls6WqV-GSxMw2RA3GuQiN.png)

## Quantized Versions:

GGUF Versions Available Here: https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B-GGUF

# How to cite:

```bibtex
@misc{Hermes-2-Pro-Mistral-7B,
      url={https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B},
      title={Hermes-2-Pro-Mistral-7B},
      author={"interstellarninja", "Teknium", "theemozilla", "karan4d", "huemin_art"}
}
```
{}
Rybens/Hermes-2-Pro-Mistral-7B-Imatrix-GGUF
null
[ "gguf", "region:us" ]
null
2024-04-15T17:03:31+00:00
[]
[]
TAGS #gguf #region-us
# General Informations The model was quantized to ggufs using Imatrix to: q2_k, q3_k_l, q3_k_s, q4_0, q4_1, q4_k_m, q4_k_s, q5_k_m, q6_k, q8_0 # Original Model's Card --- base_model: mistralai/Mistral-7B-v0.1 tags: - Mistral - instruct - finetune - chatml - DPO - RLHF - gpt4 - synthetic data - distillation - function calling - json mode model-index: - name: Hermes-2-Pro-Mistral-7B results: [] license: apache-2.0 language: - en datasets: - teknium/OpenHermes-2.5 widget: - example_title: Hermes 2 Pro messages: - role: system content: You are a sentient, superintelligent artificial general intelligence, here to teach and assist me. - role: user content: Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world. --- # Hermes 2 Pro - Mistral 7B !image/png ## Model Description Hermes 2 Pro on Mistral 7B is the new flagship 7B Hermes! Hermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house. This new version of Hermes maintains its excellent general task and conversation capabilities - but also excels at Function Calling, JSON Structured Outputs, and has improved on several other metrics as well, scoring a 90% on our function calling evaluation built in partnership with Fireworks.AI, and an 84% on our structured JSON Output evaluation. Hermes Pro takes advantage of a special system prompt and multi-turn function calling structure with a new chatml role in order to make function calling reliable and easy to parse. Learn more about prompting below. This work was a collaboration between Nous Research, @interstellarninja, and Fireworks.AI Learn more about the function calling system for this model on our github repo here: URL ## Thank you to URL for sponsoring compute for this model! ## Example Outputs ### Explaining Problems with Quantum Gravity: !image/png ### Roleplaying as a Cosmic Super Intelligence: !image/png ### Detailing the Theory of AI Consciousness in JSON !image/png # Prompt Format Hermes 2 Pro uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue. System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model. This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns. This format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI. Prompt with system instruction (Use whatever system prompt you like, this is just an example!): This prompt is available as a chat template, which means you can format messages using the 'tokenizer.apply_chat_template()' method: When tokenizing messages for generation, set 'add_generation_prompt=True' when calling 'apply_chat_template()'. This will append '<|im_start|>assistant\n' to your prompt, to ensure that the model continues with an assistant response. To utilize the prompt format without a system prompt, simply leave the line out. ## Prompt Format for Function Calling Our model was trained on specific system prompts and structures for Function Calling. You should use the system role with this message, followed by a function signature json as this example shows here. 
To complete the function call, create a user prompt that follows the above system prompt, like so: The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: URL Once you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, 'tool' like so: The assistant will then read in that data from the function's response, and generate a natural language response: ## Prompt Format for JSON Mode / Structured Outputs Our model was also trained on a specific system prompt for Structured Outputs, which should respond with only a json object response, in a specific json schema. Your schema can be made from a pydantic object using our codebase, with the standalone script 'URL' available here: URL Given the {schema} that you provide, it should follow the format of that json to create it's response, all you have to do is give a typical user prompt, and it will respond in JSON. # Benchmarks ## GPT4All: Average: 71.19 ## AGIEval: Average: 44.52 ## BigBench: Average: 41.65 ## TruthfulQA: # Function Calling Evaluations We worked with Fireworks.AI on evaluations by starting off with their Function Calling eval dataset, fixing some unsolveable ones, and generating a second eval dataset for JSON mode. ## Function Calling Accuracy: 91% !image/png ## JSON Mode Accuracy: 84% !image/png Run the evaluator yourself using @interstellarninja's codebase here: URL You can find the evaluation datasets here: URL URL # Inference Code Here is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM) Note: To use function calling, you should see the github repo above. ## Inference Code for Function Calling: All code for utilizing, parsing, and building function calling templates is available on our github: URL !image/png # Chat Interfaces When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a URL backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box. In LM-Studio, simply select the ChatML Prefix on the settings side pane: !image/png ## Quantized Versions: GGUF Versions Available Here: URL # How to cite:
[ "# General Informations\nThe model was quantized to ggufs using Imatrix to:\n\nq2_k, q3_k_l, q3_k_s, q4_0, q4_1, q4_k_m, q4_k_s, q5_k_m, q6_k, q8_0", "# Original Model's Card\n---\nbase_model: mistralai/Mistral-7B-v0.1\ntags:\n- Mistral\n- instruct\n- finetune\n- chatml\n- DPO\n- RLHF\n- gpt4\n- synthetic data\n- distillation\n- function calling\n- json mode\nmodel-index:\n- name: Hermes-2-Pro-Mistral-7B\n results: []\nlicense: apache-2.0\nlanguage:\n- en\ndatasets:\n- teknium/OpenHermes-2.5\nwidget:\n- example_title: Hermes 2 Pro\n messages:\n - role: system\n content: You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.\n - role: user\n content: Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.\n---", "# Hermes 2 Pro - Mistral 7B\n\n!image/png", "## Model Description\n\nHermes 2 Pro on Mistral 7B is the new flagship 7B Hermes!\n\nHermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house.\n\nThis new version of Hermes maintains its excellent general task and conversation capabilities - but also excels at Function Calling, JSON Structured Outputs, and has improved on several other metrics as well, scoring a 90% on our function calling evaluation built in partnership with Fireworks.AI, and an 84% on our structured JSON Output evaluation.\n\nHermes Pro takes advantage of a special system prompt and multi-turn function calling structure with a new chatml role in order to make function calling reliable and easy to parse. Learn more about prompting below.\n\nThis work was a collaboration between Nous Research, @interstellarninja, and Fireworks.AI\n\nLearn more about the function calling system for this model on our github repo here: URL", "## Thank you to URL for sponsoring compute for this model!", "## Example Outputs", "### Explaining Problems with Quantum Gravity:\n\n!image/png", "### Roleplaying as a Cosmic Super Intelligence:\n\n!image/png", "### Detailing the Theory of AI Consciousness in JSON\n\n!image/png", "# Prompt Format\n\nHermes 2 Pro uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.\n\nSystem prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.\n\nThis is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.\n\nThis format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI.\n\nPrompt with system instruction (Use whatever system prompt you like, this is just an example!):\n\n\nThis prompt is available as a chat template, which means you can format messages using the\n'tokenizer.apply_chat_template()' method:\n\n\n\nWhen tokenizing messages for generation, set 'add_generation_prompt=True' when calling 'apply_chat_template()'. This will append '<|im_start|>assistant\\n' to your prompt, to ensure\nthat the model continues with an assistant response.\n\nTo utilize the prompt format without a system prompt, simply leave the line out.", "## Prompt Format for Function Calling\n\nOur model was trained on specific system prompts and structures for Function Calling. 
\n\nYou should use the system role with this message, followed by a function signature json as this example shows here.\n\n\nTo complete the function call, create a user prompt that follows the above system prompt, like so:\n\n\nThe model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: URL\n\n\nOnce you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, 'tool' like so:\n\n\nThe assistant will then read in that data from the function's response, and generate a natural language response:", "## Prompt Format for JSON Mode / Structured Outputs\n\nOur model was also trained on a specific system prompt for Structured Outputs, which should respond with only a json object response, in a specific json schema.\n\nYour schema can be made from a pydantic object using our codebase, with the standalone script 'URL' available here: URL\n\n\n\nGiven the {schema} that you provide, it should follow the format of that json to create it's response, all you have to do is give a typical user prompt, and it will respond in JSON.", "# Benchmarks", "## GPT4All:\n\nAverage: 71.19", "## AGIEval:\n\nAverage: 44.52", "## BigBench:\n\nAverage: 41.65", "## TruthfulQA:", "# Function Calling Evaluations\n\nWe worked with Fireworks.AI on evaluations by starting off with their Function Calling eval dataset, fixing some unsolveable ones, and generating a second eval dataset for JSON mode.", "## Function Calling Accuracy: 91%\n\n!image/png", "## JSON Mode Accuracy: 84%\n\n!image/png\n\nRun the evaluator yourself using @interstellarninja's codebase here:\nURL\n\nYou can find the evaluation datasets here:\nURL\nURL", "# Inference Code\n\nHere is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM)\n\nNote: To use function calling, you should see the github repo above.", "## Inference Code for Function Calling:\n\nAll code for utilizing, parsing, and building function calling templates is available on our github:\nURL\n\n!image/png", "# Chat Interfaces\n\nWhen quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a URL backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.\nIn LM-Studio, simply select the ChatML Prefix on the settings side pane:\n\n!image/png", "## Quantized Versions:\n\nGGUF Versions Available Here: URL", "# How to cite:" ]
[ "TAGS\n#gguf #region-us \n", "# General Informations\nThe model was quantized to ggufs using Imatrix to:\n\nq2_k, q3_k_l, q3_k_s, q4_0, q4_1, q4_k_m, q4_k_s, q5_k_m, q6_k, q8_0", "# Original Model's Card\n---\nbase_model: mistralai/Mistral-7B-v0.1\ntags:\n- Mistral\n- instruct\n- finetune\n- chatml\n- DPO\n- RLHF\n- gpt4\n- synthetic data\n- distillation\n- function calling\n- json mode\nmodel-index:\n- name: Hermes-2-Pro-Mistral-7B\n results: []\nlicense: apache-2.0\nlanguage:\n- en\ndatasets:\n- teknium/OpenHermes-2.5\nwidget:\n- example_title: Hermes 2 Pro\n messages:\n - role: system\n content: You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.\n - role: user\n content: Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.\n---", "# Hermes 2 Pro - Mistral 7B\n\n!image/png", "## Model Description\n\nHermes 2 Pro on Mistral 7B is the new flagship 7B Hermes!\n\nHermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house.\n\nThis new version of Hermes maintains its excellent general task and conversation capabilities - but also excels at Function Calling, JSON Structured Outputs, and has improved on several other metrics as well, scoring a 90% on our function calling evaluation built in partnership with Fireworks.AI, and an 84% on our structured JSON Output evaluation.\n\nHermes Pro takes advantage of a special system prompt and multi-turn function calling structure with a new chatml role in order to make function calling reliable and easy to parse. Learn more about prompting below.\n\nThis work was a collaboration between Nous Research, @interstellarninja, and Fireworks.AI\n\nLearn more about the function calling system for this model on our github repo here: URL", "## Thank you to URL for sponsoring compute for this model!", "## Example Outputs", "### Explaining Problems with Quantum Gravity:\n\n!image/png", "### Roleplaying as a Cosmic Super Intelligence:\n\n!image/png", "### Detailing the Theory of AI Consciousness in JSON\n\n!image/png", "# Prompt Format\n\nHermes 2 Pro uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.\n\nSystem prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.\n\nThis is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.\n\nThis format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI.\n\nPrompt with system instruction (Use whatever system prompt you like, this is just an example!):\n\n\nThis prompt is available as a chat template, which means you can format messages using the\n'tokenizer.apply_chat_template()' method:\n\n\n\nWhen tokenizing messages for generation, set 'add_generation_prompt=True' when calling 'apply_chat_template()'. 
This will append '<|im_start|>assistant\\n' to your prompt, to ensure\nthat the model continues with an assistant response.\n\nTo utilize the prompt format without a system prompt, simply leave the line out.", "## Prompt Format for Function Calling\n\nOur model was trained on specific system prompts and structures for Function Calling. \n\nYou should use the system role with this message, followed by a function signature json as this example shows here.\n\n\nTo complete the function call, create a user prompt that follows the above system prompt, like so:\n\n\nThe model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: URL\n\n\nOnce you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, 'tool' like so:\n\n\nThe assistant will then read in that data from the function's response, and generate a natural language response:", "## Prompt Format for JSON Mode / Structured Outputs\n\nOur model was also trained on a specific system prompt for Structured Outputs, which should respond with only a json object response, in a specific json schema.\n\nYour schema can be made from a pydantic object using our codebase, with the standalone script 'URL' available here: URL\n\n\n\nGiven the {schema} that you provide, it should follow the format of that json to create it's response, all you have to do is give a typical user prompt, and it will respond in JSON.", "# Benchmarks", "## GPT4All:\n\nAverage: 71.19", "## AGIEval:\n\nAverage: 44.52", "## BigBench:\n\nAverage: 41.65", "## TruthfulQA:", "# Function Calling Evaluations\n\nWe worked with Fireworks.AI on evaluations by starting off with their Function Calling eval dataset, fixing some unsolveable ones, and generating a second eval dataset for JSON mode.", "## Function Calling Accuracy: 91%\n\n!image/png", "## JSON Mode Accuracy: 84%\n\n!image/png\n\nRun the evaluator yourself using @interstellarninja's codebase here:\nURL\n\nYou can find the evaluation datasets here:\nURL\nURL", "# Inference Code\n\nHere is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM)\n\nNote: To use function calling, you should see the github repo above.", "## Inference Code for Function Calling:\n\nAll code for utilizing, parsing, and building function calling templates is available on our github:\nURL\n\n!image/png", "# Chat Interfaces\n\nWhen quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a URL backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.\nIn LM-Studio, simply select the ChatML Prefix on the settings side pane:\n\n!image/png", "## Quantized Versions:\n\nGGUF Versions Available Here: URL", "# How to cite:" ]
text-generation
transformers
### exl2 quant (measurement.json included)
---
### original readme below
---

---
license: apache-2.0
---

<p style="font-size:20px;" align="center"> 🏠 <a href="https://wizardlm.github.io/WizardLM2" target="_blank">WizardLM-2 Release Blog</a> </p>
<p align="center"> 🤗 <a href="https://huggingface.co/collections/microsoft/wizardlm-2-661d403f71e6c8257dbd598a" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/victorsungo/WizardLM/tree/main/WizardLM-2" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br> </p>
<p align="center"> 👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a> </p>

## News 🔥🔥🔥 [2024/04/15]

We introduce and open-source WizardLM-2, our next-generation state-of-the-art large language models, which have improved performance on complex chat, multilingual, reasoning, and agent tasks. The new family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.

- WizardLM-2 8x22B is our most advanced model; it demonstrates highly competitive performance compared to leading proprietary models and consistently outperforms all existing state-of-the-art open-source models.
- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice among models of its size.
- WizardLM-2 7B is the fastest, and achieves performance comparable to existing leading open-source models 10x its size.

For more details on WizardLM-2, please read our [release blog post](https://wizardlm.github.io/WizardLM2) and upcoming paper.

## Model Details

* **Model name**: WizardLM-2 7B
* **Developed by**: WizardLM@Microsoft AI
* **Base model**: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* **Parameters**: 7B
* **Language(s)**: Multilingual
* **Blog**: [Introducing WizardLM-2](https://wizardlm.github.io/WizardLM2)
* **Repository**: [https://github.com/nlpxucan/WizardLM](https://github.com/nlpxucan/WizardLM)
* **Paper**: WizardLM-2 (Upcoming)
* **License**: Apache 2.0

## Model Capacities

**MT-Bench**

We also adopt the automatic MT-Bench evaluation framework based on GPT-4, proposed by lmsys, to assess model performance. WizardLM-2 8x22B demonstrates highly competitive performance even compared to the most advanced proprietary models. Meanwhile, WizardLM-2 7B and WizardLM-2 70B are the top-performing models among the leading baselines at the 7B to 70B model scales.

<p align="center" width="100%"> <a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/mtbench.png" alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p>

**Human Preferences Evaluation**

We carefully collected a complex and challenging evaluation set of real-world instructions, covering the main categories of human requirements such as writing, coding, math, reasoning, agent, and multilingual tasks. We report the win:loss rate without ties:

- WizardLM-2 8x22B falls just slightly behind GPT-4-1106-preview, and is significantly stronger than Command R Plus and GPT4-0314.
- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.
- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.
<p align="center" width="100%"> <a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/winall.png" alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> ## Method Overview We built a **fully AI powered synthetic training system** to train WizardLM-2 models, please refer to our [blog](https://wizardlm.github.io/WizardLM2) for more details of this system. <p align="center" width="100%"> <a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/exp_1.png" alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> ## Usage ❗<b>Note for model system prompts usage:</b> <b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be as following: ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s> USER: Who are you? ASSISTANT: I am WizardLM.</s>...... ``` <b> Inference WizardLM-2 Demo Script</b> We provide a WizardLM-2 inference demo [code](https://github.com/nlpxucan/WizardLM/tree/main/demo) on our github.
{}
lucyknada/microsoft_WizardLM-2-7B-exl2-5bpw
null
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:2304.12244", "arxiv:2306.08568", "arxiv:2308.09583", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-15T17:04:16+00:00
[ "2304.12244", "2306.08568", "2308.09583" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #arxiv-2304.12244 #arxiv-2306.08568 #arxiv-2308.09583 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
### exl2 quant (URL included) --- ### original readme below --- --- license: apache-2.0 --- <p style="font-size:20px;" align="center"> <a href="URL target="_blank">WizardLM-2 Release Blog</a> </p> <p align="center"> <a href="URL target="_blank">HF Repo</a> • <a href="URL target="_blank">Github Repo</a> • <a href="URL target="_blank">Twitter</a> • <a href="URL target="_blank">[WizardLM]</a> • <a href="URL target="_blank">[WizardCoder]</a> • <a href="URL target="_blank">[WizardMath]</a> <br> </p> <p align="center"> Join our <a href="URL target="_blank">Discord</a> </p> ## News [2024/04/15] We introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, which have improved performance on complex chat, multilingual, reasoning and agent. New family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B. - WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works and consistently outperforms all the existing state-of-the-art opensource models. - WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size. - WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models. For more details of WizardLM-2 please read our release blog post and upcoming paper. ## Model Details * Model name: WizardLM-2 7B * Developed by: WizardLM@Microsoft AI * Base model: mistralai/Mistral-7B-v0.1 * Parameters: 7B * Language(s): Multilingual * Blog: Introducing WizardLM-2 * Repository: URL * Paper: WizardLM-2 (Upcoming) * License: Apache2.0 ## Model Capacities MT-Bench We also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. The WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. Meanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales. <p align="center" width="100%"> <a ><img src="URL alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> Human Preferences Evaluation We carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual. We report the win:loss rate without tie: - WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314. - WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat. - WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta. <p align="center" width="100%"> <a ><img src="URL alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> ## Method Overview We built a fully AI powered synthetic training system to train WizardLM-2 models, please refer to our blog for more details of this system. <p align="center" width="100%"> <a ><img src="URL alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> ## Usage <b>Note for model system prompts usage:</b> <b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as following: <b> Inference WizardLM-2 Demo Script</b> We provide a WizardLM-2 inference demo code on our github.
[ "### exl2 quant (URL included)\n---", "### original readme below\n---\n---\nlicense: apache-2.0\n---\n\n\n\n\n<p style=\"font-size:20px;\" align=\"center\">\n <a href=\"URL target=\"_blank\">WizardLM-2 Release Blog</a> </p>\n<p align=\"center\">\n <a href=\"URL target=\"_blank\">HF Repo</a> • <a href=\"URL target=\"_blank\">Github Repo</a> • <a href=\"URL target=\"_blank\">Twitter</a> • <a href=\"URL target=\"_blank\">[WizardLM]</a> • <a href=\"URL target=\"_blank\">[WizardCoder]</a> • <a href=\"URL target=\"_blank\">[WizardMath]</a> <br>\n</p>\n<p align=\"center\">\n Join our <a href=\"URL target=\"_blank\">Discord</a>\n</p>", "## News [2024/04/15]\n\nWe introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, \nwhich have improved performance on complex chat, multilingual, reasoning and agent. \nNew family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.\n\n- WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works \nand consistently outperforms all the existing state-of-the-art opensource models.\n- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size. \n- WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.\n\nFor more details of WizardLM-2 please read our release blog post and upcoming paper.", "## Model Details\n\n* Model name: WizardLM-2 7B\n* Developed by: WizardLM@Microsoft AI\n* Base model: mistralai/Mistral-7B-v0.1\n* Parameters: 7B\n* Language(s): Multilingual\n* Blog: Introducing WizardLM-2\n* Repository: URL\n* Paper: WizardLM-2 (Upcoming)\n* License: Apache2.0", "## Model Capacities\n\n\nMT-Bench\n\nWe also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. \nThe WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. \nMeanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"MTBench\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>\n\n\nHuman Preferences Evaluation\n\nWe carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual. 
\nWe report the win:loss rate without tie:\n\n- WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314.\n- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.\n- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Win\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>", "## Method Overview\nWe built a fully AI powered synthetic training system to train WizardLM-2 models, please refer to our blog for more details of this system.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Method\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>", "## Usage\n\n<b>Note for model system prompts usage:</b>\n\n\n<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as following:\n\n\n\n<b> Inference WizardLM-2 Demo Script</b>\n\nWe provide a WizardLM-2 inference demo code on our github." ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-2304.12244 #arxiv-2306.08568 #arxiv-2308.09583 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### exl2 quant (URL included)\n---", "### original readme below\n---\n---\nlicense: apache-2.0\n---\n\n\n\n\n<p style=\"font-size:20px;\" align=\"center\">\n <a href=\"URL target=\"_blank\">WizardLM-2 Release Blog</a> </p>\n<p align=\"center\">\n <a href=\"URL target=\"_blank\">HF Repo</a> • <a href=\"URL target=\"_blank\">Github Repo</a> • <a href=\"URL target=\"_blank\">Twitter</a> • <a href=\"URL target=\"_blank\">[WizardLM]</a> • <a href=\"URL target=\"_blank\">[WizardCoder]</a> • <a href=\"URL target=\"_blank\">[WizardMath]</a> <br>\n</p>\n<p align=\"center\">\n Join our <a href=\"URL target=\"_blank\">Discord</a>\n</p>", "## News [2024/04/15]\n\nWe introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, \nwhich have improved performance on complex chat, multilingual, reasoning and agent. \nNew family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.\n\n- WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works \nand consistently outperforms all the existing state-of-the-art opensource models.\n- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size. \n- WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.\n\nFor more details of WizardLM-2 please read our release blog post and upcoming paper.", "## Model Details\n\n* Model name: WizardLM-2 7B\n* Developed by: WizardLM@Microsoft AI\n* Base model: mistralai/Mistral-7B-v0.1\n* Parameters: 7B\n* Language(s): Multilingual\n* Blog: Introducing WizardLM-2\n* Repository: URL\n* Paper: WizardLM-2 (Upcoming)\n* License: Apache2.0", "## Model Capacities\n\n\nMT-Bench\n\nWe also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. \nThe WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. \nMeanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"MTBench\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>\n\n\nHuman Preferences Evaluation\n\nWe carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual. 
\nWe report the win:loss rate without tie:\n\n- WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314.\n- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.\n- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Win\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>", "## Method Overview\nWe built a fully AI powered synthetic training system to train WizardLM-2 models, please refer to our blog for more details of this system.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Method\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>", "## Usage\n\n<b>Note for model system prompts usage:</b>\n\n\n<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as following:\n\n\n\n<b> Inference WizardLM-2 Demo Script</b>\n\nWe provide a WizardLM-2 inference demo code on our github." ]
text2text-generation
peft
## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - _load_in_8bit: False - _load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float32 - bnb_4bit_quant_storage: uint8 - load_in_4bit: True - load_in_8bit: False The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - _load_in_8bit: False - _load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float32 - bnb_4bit_quant_storage: uint8 - load_in_4bit: True - load_in_8bit: False ### Framework versions - PEFT 0.6.0
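The two identical config blocks above describe one 4-bit NF4 setup. For readers who want to reproduce it, the listed values map onto a `transformers` `BitsAndBytesConfig` roughly as follows. This is a sketch based on the card's values, not code from this adapter's training script; the base model id is taken from this card's metadata.

```python
# Sketch: the 4-bit quantization config listed above, expressed as a
# transformers BitsAndBytesConfig (not taken from this repo's training code).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # quant_type: nf4
    bnb_4bit_use_double_quant=True,        # use_double_quant: True
    bnb_4bit_compute_dtype=torch.float32,  # compute_dtype: float32
)

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",                 # base model per this card's metadata
    quantization_config=bnb_config,
    device_map="auto",
)
```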
{"library_name": "peft", "base_model": "microsoft/phi-2", "pipeline_tag": "text2text-generation"}
GenAI-CoolCats/WLU-Phi2
null
[ "peft", "safetensors", "text2text-generation", "base_model:microsoft/phi-2", "has_space", "region:us" ]
null
2024-04-15T17:04:51+00:00
[]
[]
TAGS #peft #safetensors #text2text-generation #base_model-microsoft/phi-2 #has_space #region-us
## Training procedure The following 'bitsandbytes' quantization config was used during training: - quant_method: bitsandbytes - _load_in_8bit: False - _load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float32 - bnb_4bit_quant_storage: uint8 - load_in_4bit: True - load_in_8bit: False The following 'bitsandbytes' quantization config was used during training: - quant_method: bitsandbytes - _load_in_8bit: False - _load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float32 - bnb_4bit_quant_storage: uint8 - load_in_4bit: True - load_in_8bit: False ### Framework versions - PEFT 0.6.0
[ "## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- _load_in_8bit: False\n- _load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: float32\n- bnb_4bit_quant_storage: uint8\n- load_in_4bit: True\n- load_in_8bit: False\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- _load_in_8bit: False\n- _load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: float32\n- bnb_4bit_quant_storage: uint8\n- load_in_4bit: True\n- load_in_8bit: False", "### Framework versions\n\n- PEFT 0.6.0" ]
[ "TAGS\n#peft #safetensors #text2text-generation #base_model-microsoft/phi-2 #has_space #region-us \n", "## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- _load_in_8bit: False\n- _load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: float32\n- bnb_4bit_quant_storage: uint8\n- load_in_4bit: True\n- load_in_8bit: False\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- _load_in_8bit: False\n- _load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: float32\n- bnb_4bit_quant_storage: uint8\n- load_in_4bit: True\n- load_in_8bit: False", "### Framework versions\n\n- PEFT 0.6.0" ]
token-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
PFraud/BERT_general_disease
null
[ "transformers", "safetensors", "distilbert", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-15T17:07:06+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #distilbert #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #distilbert #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gemma-7b-hf-platypus-lamini-vxxiii-chat This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.38.0 - Pytorch 2.2.0+cu121 - Datasets 2.14.6 - Tokenizers 0.15.1
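For reference, the hyperparameters listed above correspond roughly to the following `transformers` `TrainingArguments`. This is an illustrative sketch rather than the actual training script, and the `output_dir` value is an assumption.

```python
# Sketch: the training hyperparameters above expressed as TrainingArguments
# (illustrative; not the exact script used to train this adapter).
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="gemma-7b-hf-platypus-lamini-vxxiii-chat",  # assumed
    learning_rate=3e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,   # effective train batch size of 16
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    seed=42,
    fp16=True,                        # "Native AMP" mixed precision
)
```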
{"license": "gemma", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "google/gemma-7b", "model-index": [{"name": "gemma-7b-hf-platypus-lamini-vxxiii-chat", "results": []}]}
NassimB/gemma-7b-hf-platypus-lamini-vxxiii-chat
null
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:google/gemma-7b", "license:gemma", "region:us" ]
null
2024-04-15T17:07:06+00:00
[]
[]
TAGS #peft #safetensors #trl #sft #generated_from_trainer #base_model-google/gemma-7b #license-gemma #region-us
# gemma-7b-hf-platypus-lamini-vxxiii-chat This model is a fine-tuned version of google/gemma-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.38.0 - Pytorch 2.2.0+cu121 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# gemma-7b-hf-platypus-lamini-vxxiii-chat\n\nThis model is a fine-tuned version of google/gemma-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 16\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 1\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.38.0\n- Pytorch 2.2.0+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #base_model-google/gemma-7b #license-gemma #region-us \n", "# gemma-7b-hf-platypus-lamini-vxxiii-chat\n\nThis model is a fine-tuned version of google/gemma-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 16\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 1\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.38.0\n- Pytorch 2.2.0+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
text-generation
transformers
Test model; still under evaluation. Recipe: ```yaml base_model: /content/InfinityRP gate_mode: random dtype: bfloat16 # output dtype (float32, float16, or bfloat16) ## (optional) experts_per_token: 2 experts: - source_model: /content/WestLake positive_prompts: [] - source_model: /content/Kuno positive_prompts: [] - source_model: /content/InfinityRP positive_prompts: [] - source_model: /content/LemonadeRP positive_prompts: [] ```
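The recipe above has the shape of a mergekit-moe config (random gating, two experts per token over four Mistral-family experts), so the merged result loads as a Mixtral-architecture checkpoint. A loading sketch using the repo id from this record; dtype and device placement are assumptions:

```python
# Sketch: load the merged 4x7B MoE like any Mixtral-architecture checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("R136a1/BeyondInfinity-4x7B")
model = AutoModelForCausalLM.from_pretrained(
    "R136a1/BeyondInfinity-4x7B",
    torch_dtype="auto",   # the recipe's output dtype is bfloat16
    device_map="auto",    # assumption: accelerate is installed
)
```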
{"language": ["en"], "license": "apache-2.0", "tags": ["safetensors", "mixtral"]}
R136a1/BeyondInfinity-4x7B
null
[ "transformers", "safetensors", "mixtral", "text-generation", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-15T17:08:12+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #mixtral #text-generation #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
Test model; still under evaluation. Recipe:
[]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
duongtruongbinh/vinallama-peft-2.7b-chat
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-15T17:10:11+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
null
Defetya/qwen-4B-saiga in GGUF format
{"language": ["ru"]}
Sashkanik13/qwen-4b-saiga-gguf
null
[ "gguf", "ru", "region:us" ]
null
2024-04-15T17:13:45+00:00
[]
[ "ru" ]
TAGS #gguf #ru #region-us
Defetya/qwen-4B-saiga in GGUF format
[]
[ "TAGS\n#gguf #ru #region-us \n" ]
text-generation
transformers
# Uploaded model - **Developed by:** liminerity - **License:** apache-2.0 - **Finetuned from model :** liminerity/bitnet-mistal-m7-70m-slerp This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
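The card credits Unsloth for training but gives no inference snippet; a minimal sketch, assuming the repo id from this record, a CUDA machine, and a placeholder sequence length (the card does not state one):

```python
# Inference sketch with Unsloth; repo id from this record, other values are assumptions.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="liminerity/bitmap-mistral-M7-slerp-alpaca-70m",
    max_seq_length=2048,   # placeholder; not stated on the card
    load_in_4bit=False,
)
FastLanguageModel.for_inference(model)  # switch Unsloth into its fast generation mode

inputs = tokenizer("Hello!", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```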
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "liminerity/bitnet-mistal-m7-70m-slerp"}
liminerity/bitmap-mistral-M7-slerp-alpaca-70m
null
[ "transformers", "pytorch", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "en", "base_model:liminerity/bitnet-mistal-m7-70m-slerp", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-15T17:13:55+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #mistral #text-generation #text-generation-inference #unsloth #trl #en #base_model-liminerity/bitnet-mistal-m7-70m-slerp #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# Uploaded model - Developed by: liminerity - License: apache-2.0 - Finetuned from model : liminerity/bitnet-mistal-m7-70m-slerp This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: liminerity\n- License: apache-2.0\n- Finetuned from model : liminerity/bitnet-mistal-m7-70m-slerp\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #pytorch #mistral #text-generation #text-generation-inference #unsloth #trl #en #base_model-liminerity/bitnet-mistal-m7-70m-slerp #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: liminerity\n- License: apache-2.0\n- Finetuned from model : liminerity/bitnet-mistal-m7-70m-slerp\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
transformers
# Uploaded model - **Developed by:** liminerity - **License:** apache-2.0 - **Finetuned from model :** liminerity/Bitnet-M7-resized This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "liminerity/Bitnet-M7-resized"}
liminerity/tsn-270m
null
[ "transformers", "pytorch", "mistral", "text-generation-inference", "unsloth", "trl", "en", "base_model:liminerity/Bitnet-M7-resized", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-15T17:14:27+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #mistral #text-generation-inference #unsloth #trl #en #base_model-liminerity/Bitnet-M7-resized #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: liminerity - License: apache-2.0 - Finetuned from model : liminerity/Bitnet-M7-resized This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: liminerity\n- License: apache-2.0\n- Finetuned from model : liminerity/Bitnet-M7-resized\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #pytorch #mistral #text-generation-inference #unsloth #trl #en #base_model-liminerity/Bitnet-M7-resized #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: liminerity\n- License: apache-2.0\n- Finetuned from model : liminerity/Bitnet-M7-resized\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
guoyu-zhang/hh_shp1_dpo1
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-15T17:16:57+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
reinforcement-learning
cleanrl
# (CleanRL) **DQN** Agent Playing **QbertNoFrameskip-v4** This is a trained model of a DQN agent playing QbertNoFrameskip-v4. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/dqn_atari.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[dqn_atari]" python -m cleanrl_utils.enjoy --exp-name dqn_atari --env-id QbertNoFrameskip-v4 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/QbertNoFrameskip-v4-dqn_atari-seed1/raw/main/dqn_atari.py curl -OL https://huggingface.co/cleanrl/QbertNoFrameskip-v4-dqn_atari-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/QbertNoFrameskip-v4-dqn_atari-seed1/raw/main/poetry.lock poetry install --all-extras python dqn_atari.py --capture-video --env-id QbertNoFrameskip-v4 --seed 1 --save-model --upload-model --hf-entity cleanrl --track ``` # Hyperparameters ```python {'batch_size': 32, 'buffer_size': 1000000, 'capture_video': True, 'cuda': True, 'end_e': 0.01, 'env_id': 'QbertNoFrameskip-v4', 'exp_name': 'dqn_atari', 'exploration_fraction': 0.1, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learning_rate': 0.0001, 'learning_starts': 80000, 'num_envs': 1, 'save_model': True, 'seed': 1, 'start_e': 1, 'target_network_frequency': 1000, 'tau': 1.0, 'torch_deterministic': True, 'total_timesteps': 10000000, 'track': True, 'train_frequency': 4, 'upload_model': True, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
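The exploration hyperparameters above (start_e, end_e, exploration_fraction, total_timesteps) imply a linear epsilon-greedy decay; a small sketch of that schedule, mirroring the helper used in CleanRL's dqn_atari.py:

```python
# Linear epsilon schedule implied by the hyperparameters above.
def linear_schedule(start_e: float, end_e: float, duration: float, t: int) -> float:
    """Decay epsilon linearly from start_e to end_e over `duration` steps, then hold."""
    slope = (end_e - start_e) / duration
    return max(slope * t + start_e, end_e)

# With the card's values: epsilon falls from 1.0 to 0.01 over the first 10% of 10M steps.
duration = 0.1 * 10_000_000
print(linear_schedule(1.0, 0.01, duration, 500_000))    # 0.505, halfway through the decay
print(linear_schedule(1.0, 0.01, duration, 2_000_000))  # 0.01, decay finished
```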
{"library_name": "cleanrl", "tags": ["QbertNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "DQN", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "QbertNoFrameskip-v4", "type": "QbertNoFrameskip-v4"}, "metrics": [{"type": "mean_reward", "value": "10695.00 +/- 4551.15", "name": "mean_reward", "verified": false}]}]}]}
qgallouedec/QbertNoFrameskip-v4-dqn_atari-seed1
null
[ "cleanrl", "tensorboard", "QbertNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-15T17:17:25+00:00
[]
[]
TAGS #cleanrl #tensorboard #QbertNoFrameskip-v4 #deep-reinforcement-learning #reinforcement-learning #custom-implementation #model-index #region-us
# (CleanRL) DQN Agent Playing QbertNoFrameskip-v4 This is a trained model of a DQN agent playing QbertNoFrameskip-v4. The model was trained by using CleanRL and the most up-to-date training code can be found here. ## Get Started To use this model, please install the 'cleanrl' package with the following command: Please refer to the documentation for more detail. ## Command to reproduce the training # Hyperparameters
[ "# (CleanRL) DQN Agent Playing QbertNoFrameskip-v4\n\nThis is a trained model of a DQN agent playing QbertNoFrameskip-v4.\nThe model was trained by using CleanRL and the most up-to-date training code can be\nfound here.", "## Get Started\n\nTo use this model, please install the 'cleanrl' package with the following command:\n\n\n\nPlease refer to the documentation for more detail.", "## Command to reproduce the training", "# Hyperparameters" ]
[ "TAGS\n#cleanrl #tensorboard #QbertNoFrameskip-v4 #deep-reinforcement-learning #reinforcement-learning #custom-implementation #model-index #region-us \n", "# (CleanRL) DQN Agent Playing QbertNoFrameskip-v4\n\nThis is a trained model of a DQN agent playing QbertNoFrameskip-v4.\nThe model was trained by using CleanRL and the most up-to-date training code can be\nfound here.", "## Get Started\n\nTo use this model, please install the 'cleanrl' package with the following command:\n\n\n\nPlease refer to the documentation for more detail.", "## Command to reproduce the training", "# Hyperparameters" ]
null
transformers
# Uploaded model - **Developed by:** eruzak - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-bnb-4bit"}
eruzak/unsloth_mistral_predict_prompt_RL_v6
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-15T17:18:28+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: eruzak - License: apache-2.0 - Finetuned from model : unsloth/mistral-7b-bnb-4bit This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: eruzak\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: eruzak\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
reinforcement-learning
null
# **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="b-r-a-n/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
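To act with the stored Q-table, a greedy rollout sketch; the repo id and env_id key come from the card's snippet, while the "qtable" key and the gymnasium API are assumptions based on the Deep RL course format this card follows:

```python
# Greedy rollout sketch; the "qtable" key is assumed from the Deep RL course pickle format.
import pickle
import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="b-r-a-n/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"], is_slippery=False)  # the card notes is_slippery=False
state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the learned Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```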
{"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]}
b-r-a-n/q-FrozenLake-v1-4x4-noSlippery
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-15T17:18:51+00:00
[]
[]
TAGS #FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
# Q-Learning Agent playing FrozenLake-v1 This is a trained model of a Q-Learning agent playing FrozenLake-v1. ## Usage
[ "# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage" ]
[ "TAGS\n#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n", "# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
guoyu-zhang/hh_shp1_dpo5
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-15T17:19:30+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_0-seqsight_4096_512_27M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_27M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_27M) on the [mahdibaghbanzadeh/GUE_mouse_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_0) dataset. It achieves the following results on the evaluation set: - Loss: 1.7670 - F1 Score: 0.5925 - Accuracy: 0.5926 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.6177 | 50.0 | 200 | 0.6745 | 0.6556 | 0.6556 | | 0.4129 | 100.0 | 400 | 0.9043 | 0.6309 | 0.6309 | | 0.2815 | 150.0 | 600 | 1.0839 | 0.6038 | 0.6049 | | 0.2081 | 200.0 | 800 | 1.2726 | 0.6267 | 0.6272 | | 0.1658 | 250.0 | 1000 | 1.3918 | 0.6358 | 0.6358 | | 0.1361 | 300.0 | 1200 | 1.5245 | 0.6394 | 0.6395 | | 0.1178 | 350.0 | 1400 | 1.5414 | 0.6568 | 0.6568 | | 0.1034 | 400.0 | 1600 | 1.7011 | 0.6307 | 0.6309 | | 0.0898 | 450.0 | 1800 | 1.8083 | 0.6383 | 0.6383 | | 0.0782 | 500.0 | 2000 | 1.8893 | 0.6382 | 0.6383 | | 0.0734 | 550.0 | 2200 | 1.9989 | 0.6445 | 0.6444 | | 0.0712 | 600.0 | 2400 | 2.0131 | 0.6468 | 0.6469 | | 0.0627 | 650.0 | 2600 | 2.0022 | 0.6383 | 0.6383 | | 0.0581 | 700.0 | 2800 | 2.0929 | 0.6494 | 0.6494 | | 0.0561 | 750.0 | 3000 | 2.2537 | 0.6551 | 0.6556 | | 0.0544 | 800.0 | 3200 | 2.0518 | 0.6507 | 0.6506 | | 0.0514 | 850.0 | 3400 | 2.1166 | 0.6506 | 0.6506 | | 0.0463 | 900.0 | 3600 | 2.1209 | 0.6428 | 0.6432 | | 0.0455 | 950.0 | 3800 | 2.1743 | 0.6494 | 0.6494 | | 0.0442 | 1000.0 | 4000 | 2.0829 | 0.6432 | 0.6432 | | 0.0411 | 1050.0 | 4200 | 2.1399 | 0.6565 | 0.6568 | | 0.0402 | 1100.0 | 4400 | 2.2098 | 0.6480 | 0.6481 | | 0.0375 | 1150.0 | 4600 | 2.2499 | 0.6457 | 0.6457 | | 0.0358 | 1200.0 | 4800 | 2.0029 | 0.6482 | 0.6481 | | 0.0344 | 1250.0 | 5000 | 2.0543 | 0.6445 | 0.6444 | | 0.0344 | 1300.0 | 5200 | 2.2970 | 0.6469 | 0.6469 | | 0.034 | 1350.0 | 5400 | 2.3072 | 0.6507 | 0.6506 | | 0.0323 | 1400.0 | 5600 | 2.2862 | 0.6491 | 0.6494 | | 0.0312 | 1450.0 | 5800 | 2.2450 | 0.6456 | 0.6457 | | 0.0289 | 1500.0 | 6000 | 2.2190 | 0.6542 | 0.6543 | | 0.0293 | 1550.0 | 6200 | 2.1960 | 0.6555 | 0.6556 | | 0.028 | 1600.0 | 6400 | 2.1169 | 0.6420 | 0.6420 | | 0.0279 | 1650.0 | 6600 | 2.2880 | 0.6456 | 0.6457 | | 0.0274 | 1700.0 | 6800 | 2.3565 | 0.6457 | 0.6457 | | 0.0264 | 1750.0 | 7000 | 2.2620 | 0.6445 | 0.6444 | | 0.0249 | 1800.0 | 7200 | 2.2474 | 0.6445 | 0.6444 | | 0.0249 | 1850.0 | 7400 | 2.1799 | 0.6482 | 0.6481 | | 0.0244 | 1900.0 | 7600 | 2.3002 | 0.6482 | 0.6481 | | 0.0239 | 1950.0 | 7800 | 2.1416 | 0.6420 | 0.6420 | | 0.0231 | 2000.0 | 8000 | 2.2278 | 0.6469 | 0.6469 | | 0.0235 | 2050.0 | 8200 | 2.2394 | 0.6407 | 0.6407 | | 0.0221 | 2100.0 | 8400 | 2.2448 | 0.6433 | 0.6432 | | 0.0219 | 2150.0 | 8600 | 2.2135 | 0.6444 | 0.6444 | | 0.0212 | 2200.0 | 8800 | 2.2634 
| 0.6418 | 0.6420 | | 0.0208 | 2250.0 | 9000 | 2.3253 | 0.6395 | 0.6395 | | 0.02 | 2300.0 | 9200 | 2.2862 | 0.6408 | 0.6407 | | 0.0207 | 2350.0 | 9400 | 2.2978 | 0.6469 | 0.6469 | | 0.0203 | 2400.0 | 9600 | 2.2755 | 0.6432 | 0.6432 | | 0.0191 | 2450.0 | 9800 | 2.3183 | 0.6445 | 0.6444 | | 0.0195 | 2500.0 | 10000 | 2.3087 | 0.6445 | 0.6444 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
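The hyperparameter list above maps directly onto a transformers Trainer configuration; a sketch of the equivalent TrainingArguments, with the output path as a placeholder (everything else is read off the card):

```python
# Sketch of TrainingArguments matching the hyperparameters listed above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="./GUE_mouse_0-seqsight-lora",  # placeholder path
    learning_rate=5e-4,
    per_device_train_batch_size=2048,
    per_device_eval_batch_size=2048,
    seed=42,
    max_steps=10_000,        # the card trains by steps rather than epochs
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```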
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_27M", "model-index": [{"name": "GUE_mouse_0-seqsight_4096_512_27M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_0-seqsight_4096_512_27M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_27M", "region:us" ]
null
2024-04-15T17:19:35+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_27M #region-us
GUE\_mouse\_0-seqsight\_4096\_512\_27M-L32\_all =============================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_27M on the mahdibaghbanzadeh/GUE\_mouse\_0 dataset. It achieves the following results on the evaluation set: * Loss: 1.7670 * F1 Score: 0.5925 * Accuracy: 0.5926 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_27M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_1-seqsight_4096_512_27M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_27M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_27M) on the [mahdibaghbanzadeh/GUE_mouse_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_1) dataset. It achieves the following results on the evaluation set: - Loss: 0.5154 - F1 Score: 0.8110 - Accuracy: 0.8113 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.5449 | 7.41 | 200 | 0.4682 | 0.7692 | 0.7703 | | 0.465 | 14.81 | 400 | 0.4437 | 0.7876 | 0.7877 | | 0.4343 | 22.22 | 600 | 0.4261 | 0.7971 | 0.7972 | | 0.4077 | 29.63 | 800 | 0.4173 | 0.7995 | 0.7996 | | 0.387 | 37.04 | 1000 | 0.4147 | 0.8058 | 0.8058 | | 0.3693 | 44.44 | 1200 | 0.3998 | 0.8122 | 0.8123 | | 0.3528 | 51.85 | 1400 | 0.4024 | 0.8140 | 0.8144 | | 0.3392 | 59.26 | 1600 | 0.4043 | 0.8135 | 0.8136 | | 0.3297 | 66.67 | 1800 | 0.4070 | 0.8135 | 0.8139 | | 0.3175 | 74.07 | 2000 | 0.4138 | 0.8174 | 0.8176 | | 0.3086 | 81.48 | 2200 | 0.4116 | 0.8166 | 0.8169 | | 0.299 | 88.89 | 2400 | 0.4199 | 0.8202 | 0.8203 | | 0.2896 | 96.3 | 2600 | 0.4079 | 0.8190 | 0.8193 | | 0.2809 | 103.7 | 2800 | 0.4250 | 0.8202 | 0.8206 | | 0.2728 | 111.11 | 3000 | 0.4141 | 0.8132 | 0.8133 | | 0.2647 | 118.52 | 3200 | 0.4410 | 0.8171 | 0.8178 | | 0.2575 | 125.93 | 3400 | 0.4337 | 0.8196 | 0.8197 | | 0.2488 | 133.33 | 3600 | 0.4399 | 0.8198 | 0.8199 | | 0.2404 | 140.74 | 3800 | 0.4589 | 0.8201 | 0.8203 | | 0.2349 | 148.15 | 4000 | 0.4394 | 0.8220 | 0.8222 | | 0.227 | 155.56 | 4200 | 0.4698 | 0.8175 | 0.8179 | | 0.2218 | 162.96 | 4400 | 0.4563 | 0.8187 | 0.8187 | | 0.2161 | 170.37 | 4600 | 0.4655 | 0.8232 | 0.8233 | | 0.2105 | 177.78 | 4800 | 0.4687 | 0.8220 | 0.8221 | | 0.2055 | 185.19 | 5000 | 0.4673 | 0.8188 | 0.8190 | | 0.2001 | 192.59 | 5200 | 0.4785 | 0.8177 | 0.8178 | | 0.1956 | 200.0 | 5400 | 0.4910 | 0.8233 | 0.8236 | | 0.1897 | 207.41 | 5600 | 0.4969 | 0.8189 | 0.8191 | | 0.1862 | 214.81 | 5800 | 0.5068 | 0.8203 | 0.8206 | | 0.1821 | 222.22 | 6000 | 0.5007 | 0.8179 | 0.8179 | | 0.1777 | 229.63 | 6200 | 0.4982 | 0.8230 | 0.8231 | | 0.1745 | 237.04 | 6400 | 0.5146 | 0.8194 | 0.8196 | | 0.171 | 244.44 | 6600 | 0.5167 | 0.8197 | 0.8199 | | 0.1695 | 251.85 | 6800 | 0.5336 | 0.8183 | 0.8185 | | 0.1676 | 259.26 | 7000 | 0.5170 | 0.8210 | 0.8212 | | 0.1631 | 266.67 | 7200 | 0.5313 | 0.8183 | 0.8187 | | 0.1594 | 274.07 | 7400 | 0.5298 | 0.8174 | 0.8176 | | 0.1578 | 281.48 | 7600 | 0.5435 | 0.8203 | 0.8205 | | 0.155 | 288.89 | 7800 | 0.5297 | 0.8195 | 0.8196 | | 0.1538 | 296.3 | 8000 | 0.5415 | 0.8174 | 0.8175 | | 0.1507 | 303.7 | 8200 | 0.5447 | 0.8191 | 0.8191 | | 0.1507 | 311.11 | 8400 | 0.5428 | 0.8191 | 0.8193 | | 0.1477 | 318.52 | 8600 | 0.5547 | 0.8179 | 0.8181 | | 0.1476 | 325.93 | 8800 | 0.5500 | 
0.8200 | 0.8202 | | 0.1456 | 333.33 | 9000 | 0.5561 | 0.8176 | 0.8178 | | 0.1455 | 340.74 | 9200 | 0.5455 | 0.8198 | 0.8199 | | 0.1441 | 348.15 | 9400 | 0.5487 | 0.8182 | 0.8184 | | 0.1439 | 355.56 | 9600 | 0.5498 | 0.8193 | 0.8194 | | 0.1428 | 362.96 | 9800 | 0.5506 | 0.8188 | 0.8190 | | 0.1425 | 370.37 | 10000 | 0.5517 | 0.8178 | 0.8179 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_27M", "model-index": [{"name": "GUE_mouse_1-seqsight_4096_512_27M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_1-seqsight_4096_512_27M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_27M", "region:us" ]
null
2024-04-15T17:20:04+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_27M #region-us
GUE\_mouse\_1-seqsight\_4096\_512\_27M-L32\_all =============================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_27M on the mahdibaghbanzadeh/GUE\_mouse\_1 dataset. It achieves the following results on the evaluation set: * Loss: 0.5154 * F1 Score: 0.8110 * Accuracy: 0.8113 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_27M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# GUE_mouse_4-seqsight_4096_512_27M-L32_all

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_27M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_27M) on the [mahdibaghbanzadeh/GUE_mouse_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5524
- F1 Score: 0.5831
- Accuracy: 0.5831

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6538 | 25.0 | 200 | 0.6911 | 0.5910 | 0.5948 |
| 0.5518 | 50.0 | 400 | 0.7787 | 0.5767 | 0.5767 |
| 0.4801 | 75.0 | 600 | 0.8590 | 0.5780 | 0.5778 |
| 0.4267 | 100.0 | 800 | 0.9022 | 0.5800 | 0.5799 |
| 0.3888 | 125.0 | 1000 | 0.9925 | 0.5832 | 0.5852 |
| 0.3608 | 150.0 | 1200 | 1.0026 | 0.5864 | 0.5863 |
| 0.3359 | 175.0 | 1400 | 1.0430 | 0.5780 | 0.5778 |
| 0.3155 | 200.0 | 1600 | 1.0714 | 0.5832 | 0.5831 |
| 0.2971 | 225.0 | 1800 | 1.0767 | 0.5794 | 0.5794 |
| 0.2792 | 250.0 | 2000 | 1.1160 | 0.5863 | 0.5874 |
| 0.2632 | 275.0 | 2200 | 1.1393 | 0.5879 | 0.5884 |
| 0.2447 | 300.0 | 2400 | 1.2084 | 0.5838 | 0.5836 |
| 0.2301 | 325.0 | 2600 | 1.2985 | 0.5761 | 0.5762 |
| 0.2197 | 350.0 | 2800 | 1.3145 | 0.5877 | 0.5879 |
| 0.2058 | 375.0 | 3000 | 1.2854 | 0.5896 | 0.5895 |
| 0.1935 | 400.0 | 3200 | 1.3026 | 0.5911 | 0.5916 |
| 0.1838 | 425.0 | 3400 | 1.3489 | 0.5948 | 0.5948 |
| 0.1756 | 450.0 | 3600 | 1.3137 | 0.5852 | 0.5858 |
| 0.1654 | 475.0 | 3800 | 1.4183 | 0.5853 | 0.5852 |
| 0.1589 | 500.0 | 4000 | 1.3911 | 0.5842 | 0.5842 |
| 0.1526 | 525.0 | 4200 | 1.3736 | 0.5873 | 0.5879 |
| 0.1462 | 550.0 | 4400 | 1.4303 | 0.5890 | 0.5890 |
| 0.1396 | 575.0 | 4600 | 1.5019 | 0.5879 | 0.5879 |
| 0.1332 | 600.0 | 4800 | 1.4536 | 0.5880 | 0.5879 |
| 0.1276 | 625.0 | 5000 | 1.4756 | 0.5906 | 0.5937 |
| 0.1237 | 650.0 | 5200 | 1.5054 | 0.5890 | 0.5895 |
| 0.1185 | 675.0 | 5400 | 1.5920 | 0.5854 | 0.5863 |
| 0.116 | 700.0 | 5600 | 1.5062 | 0.5917 | 0.5916 |
| 0.1101 | 725.0 | 5800 | 1.5990 | 0.5922 | 0.5921 |
| 0.1073 | 750.0 | 6000 | 1.5887 | 0.5974 | 0.5975 |
| 0.1031 | 775.0 | 6200 | 1.5906 | 0.5912 | 0.5911 |
| 0.1005 | 800.0 | 6400 | 1.5618 | 0.5891 | 0.5890 |
| 0.0972 | 825.0 | 6600 | 1.5682 | 0.5921 | 0.5921 |
| 0.0959 | 850.0 | 6800 | 1.6765 | 0.5906 | 0.5905 |
| 0.0937 | 875.0 | 7000 | 1.6483 | 0.5944 | 0.5943 |
| 0.0901 | 900.0 | 7200 | 1.6015 | 0.5964 | 0.5964 |
| 0.0876 | 925.0 | 7400 | 1.6284 | 0.5895 | 0.5895 |
| 0.0858 | 950.0 | 7600 | 1.6699 | 0.5944 | 0.5948 |
| 0.0837 | 975.0 | 7800 | 1.6507 | 0.5874 | 0.5879 |
| 0.0829 | 1000.0 | 8000 | 1.7032 | 0.5851 | 0.5852 |
| 0.0816 | 1025.0 | 8200 | 1.6678 | 0.5893 | 0.5895 |
| 0.0792 | 1050.0 | 8400 | 1.6958 | 0.5909 | 0.5911 |
| 0.0778 | 1075.0 | 8600 | 1.7117 | 0.5854 | 0.5852 |
| 0.077 | 1100.0 | 8800 | 1.7010 | 0.5853 | 0.5852 |
| 0.0752 | 1125.0 | 9000 | 1.7145 | 0.5862 | 0.5863 |
| 0.076 | 1150.0 | 9200 | 1.7306 | 0.5864 | 0.5863 |
| 0.0748 | 1175.0 | 9400 | 1.7023 | 0.5847 | 0.5847 |
| 0.0745 | 1200.0 | 9600 | 1.7054 | 0.5843 | 0.5842 |
| 0.072 | 1225.0 | 9800 | 1.7242 | 0.5885 | 0.5884 |
| 0.0728 | 1250.0 | 10000 | 1.7150 | 0.5890 | 0.5890 |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
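The same stub sections apply here; one hedged deployment note that may be useful: LoRA adapters such as this one are often merged into the base model so that inference needs only plain transformers. The output directory name below is hypothetical.

```python
from peft import AutoPeftModelForSequenceClassification

# Load the adapter, then fold the LoRA weights into the base model.
model = AutoPeftModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/GUE_mouse_4-seqsight_4096_512_27M-L32_all"
)
merged = model.merge_and_unload()  # returns a standard transformers model
merged.save_pretrained("gue_mouse_4_merged")  # hypothetical output path
```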
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_27M", "model-index": [{"name": "GUE_mouse_4-seqsight_4096_512_27M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_4-seqsight_4096_512_27M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_27M", "region:us" ]
null
2024-04-15T17:20:24+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_27M #region-us
GUE\_mouse\_4-seqsight\_4096\_512\_27M-L32\_all =============================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_27M on the mahdibaghbanzadeh/GUE\_mouse\_4 dataset. It achieves the following results on the evaluation set: * Loss: 1.5524 * F1 Score: 0.5831 * Accuracy: 0.5831 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_27M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# convnext-large-384-22k-1k-finetuned-eurosat

This model is a fine-tuned version of [facebook/convnext-large-384-22k-1k](https://huggingface.co/facebook/convnext-large-384-22k-1k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0012

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2737 | 0.97 | 22 | 0.0849 |
| 0.0416 | 1.98 | 45 | 0.0055 |
| 0.0096 | 2.99 | 68 | 0.0012 |
| 0.0018 | 3.87 | 88 | 0.0029 |

### Framework versions

- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
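Since the intended-use sections are placeholders, a short example of running the fine-tuned classifier with the transformers pipeline may help; the repo id comes from this record, while the image path is a stand-in you would replace.

```python
from transformers import pipeline

# Image classification with the fine-tuned ConvNeXt checkpoint.
classifier = pipeline(
    "image-classification",
    model="t1msan/convnext-large-384-22k-1k-finetuned-eurosat",
)
print(classifier("example.jpg"))  # replace with a real image path
```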
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "base_model": "facebook/convnext-large-384-22k-1k", "model-index": [{"name": "convnext-large-384-22k-1k-finetuned-eurosat", "results": []}]}
t1msan/convnext-large-384-22k-1k-finetuned-eurosat
null
[ "transformers", "tensorboard", "safetensors", "convnext", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/convnext-large-384-22k-1k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-15T17:20:29+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #convnext #image-classification #generated_from_trainer #dataset-imagefolder #base_model-facebook/convnext-large-384-22k-1k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
convnext-large-384-22k-1k-finetuned-eurosat =========================================== This model is a fine-tuned version of facebook/convnext-large-384-22k-1k on the imagefolder dataset. It achieves the following results on the evaluation set: * Loss: 0.0012 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 10 * eval\_batch\_size: 10 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 40 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 4 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.1.2 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 10\n* eval\\_batch\\_size: 10\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 40\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 4", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #convnext #image-classification #generated_from_trainer #dataset-imagefolder #base_model-facebook/convnext-large-384-22k-1k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 10\n* eval\\_batch\\_size: 10\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 40\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 4", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
transformers
## About

<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/DreadPoor/SpicedFettuccine-7B-slerp

<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SpicedFettuccine-7B-slerp-GGUF/resolve/main/SpicedFettuccine-7B-slerp.Q2_K.gguf) | Q2_K | 2.8 |  |
| [GGUF](https://huggingface.co/mradermacher/SpicedFettuccine-7B-slerp-GGUF/resolve/main/SpicedFettuccine-7B-slerp.IQ3_XS.gguf) | IQ3_XS | 3.1 |  |
| [GGUF](https://huggingface.co/mradermacher/SpicedFettuccine-7B-slerp-GGUF/resolve/main/SpicedFettuccine-7B-slerp.Q3_K_S.gguf) | Q3_K_S | 3.3 |  |
| [GGUF](https://huggingface.co/mradermacher/SpicedFettuccine-7B-slerp-GGUF/resolve/main/SpicedFettuccine-7B-slerp.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SpicedFettuccine-7B-slerp-GGUF/resolve/main/SpicedFettuccine-7B-slerp.IQ3_M.gguf) | IQ3_M | 3.4 |  |
| [GGUF](https://huggingface.co/mradermacher/SpicedFettuccine-7B-slerp-GGUF/resolve/main/SpicedFettuccine-7B-slerp.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SpicedFettuccine-7B-slerp-GGUF/resolve/main/SpicedFettuccine-7B-slerp.Q3_K_L.gguf) | Q3_K_L | 3.9 |  |
| [GGUF](https://huggingface.co/mradermacher/SpicedFettuccine-7B-slerp-GGUF/resolve/main/SpicedFettuccine-7B-slerp.IQ4_XS.gguf) | IQ4_XS | 4.0 |  |
| [GGUF](https://huggingface.co/mradermacher/SpicedFettuccine-7B-slerp-GGUF/resolve/main/SpicedFettuccine-7B-slerp.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SpicedFettuccine-7B-slerp-GGUF/resolve/main/SpicedFettuccine-7B-slerp.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SpicedFettuccine-7B-slerp-GGUF/resolve/main/SpicedFettuccine-7B-slerp.Q5_K_S.gguf) | Q5_K_S | 5.1 |  |
| [GGUF](https://huggingface.co/mradermacher/SpicedFettuccine-7B-slerp-GGUF/resolve/main/SpicedFettuccine-7B-slerp.Q5_K_M.gguf) | Q5_K_M | 5.2 |  |
| [GGUF](https://huggingface.co/mradermacher/SpicedFettuccine-7B-slerp-GGUF/resolve/main/SpicedFettuccine-7B-slerp.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SpicedFettuccine-7B-slerp-GGUF/resolve/main/SpicedFettuccine-7B-slerp.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
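As a hedged complement to the Usage pointer above, one common way to run a single-file quant from the table locally is llama-cpp-python; the file name, context size, and sampling settings below are illustrative choices, not recommendations from this card.

```python
from llama_cpp import Llama

# Point model_path at a downloaded quant, e.g. the Q4_K_M file from the table.
llm = Llama(model_path="SpicedFettuccine-7B-slerp.Q4_K_M.gguf", n_ctx=4096)

out = llm("Write one sentence about pasta.", max_tokens=64, temperature=0.7)
print(out["choices"][0]["text"])
```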
{"language": ["en"], "library_name": "transformers", "tags": ["merge", "mergekit", "lazymergekit", "cgato/Thespis-Mistral-7b-v0.7", "Epiculous/Fett-uccine-7B"], "base_model": "DreadPoor/SpicedFettuccine-7B-slerp", "quantized_by": "mradermacher"}
mradermacher/SpicedFettuccine-7B-slerp-GGUF
null
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "cgato/Thespis-Mistral-7b-v0.7", "Epiculous/Fett-uccine-7B", "en", "base_model:DreadPoor/SpicedFettuccine-7B-slerp", "endpoints_compatible", "region:us" ]
null
2024-04-15T17:20:39+00:00
[]
[ "en" ]
TAGS #transformers #gguf #merge #mergekit #lazymergekit #cgato/Thespis-Mistral-7b-v0.7 #Epiculous/Fett-uccine-7B #en #base_model-DreadPoor/SpicedFettuccine-7B-slerp #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #merge #mergekit #lazymergekit #cgato/Thespis-Mistral-7b-v0.7 #Epiculous/Fett-uccine-7B #en #base_model-DreadPoor/SpicedFettuccine-7B-slerp #endpoints_compatible #region-us \n" ]
null
transformers
# Uploaded model

- **Developed by:** liminerity
- **License:** apache-2.0
- **Finetuned from model:** liminerity/bitnet-mistal-m7-70m-slerp

This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
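For readers wondering what the "trained 2x faster with Unsloth" workflow looks like, here is a hedged sketch of the usual Unsloth loading step; the sequence length and 4-bit flag are assumptions, and this card does not document the actual training configuration.

```python
from unsloth import FastLanguageModel

# Typical Unsloth entry point: load a base model ready for fast LoRA fine-tuning.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="liminerity/bitnet-mistal-m7-70m-slerp",  # base model named in the card
    max_seq_length=2048,  # assumption, not stated in the card
    load_in_4bit=True,    # assumption, not stated in the card
)
```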
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "gguf"], "base_model": "liminerity/bitnet-mistal-m7-70m-slerp"}
liminerity/bitmap-mistral-M7-slerp-alpaca-70m-gguf
null
[ "transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:liminerity/bitnet-mistal-m7-70m-slerp", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-15T17:20:42+00:00
[]
[ "en" ]
TAGS #transformers #gguf #mistral #text-generation-inference #unsloth #en #base_model-liminerity/bitnet-mistal-m7-70m-slerp #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: liminerity - License: apache-2.0 - Finetuned from model : liminerity/bitnet-mistal-m7-70m-slerp This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: liminerity\n- License: apache-2.0\n- Finetuned from model : liminerity/bitnet-mistal-m7-70m-slerp\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #gguf #mistral #text-generation-inference #unsloth #en #base_model-liminerity/bitnet-mistal-m7-70m-slerp #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: liminerity\n- License: apache-2.0\n- Finetuned from model : liminerity/bitnet-mistal-m7-70m-slerp\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
null
https://civitai.com/models/401630/ganyu
{"license": "creativeml-openrail-m"}
LarryAIDraw/ganyu
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-04-15T17:22:03+00:00
[]
[]
TAGS #license-creativeml-openrail-m #region-us
URL
[]
[ "TAGS\n#license-creativeml-openrail-m #region-us \n" ]
null
null
https://civitai.com/models/401878/izayoi-nonomi-blue-archive-or-2-outfits
{"license": "creativeml-openrail-m"}
LarryAIDraw/nonomi-10
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-04-15T17:22:25+00:00
[]
[]
TAGS #license-creativeml-openrail-m #region-us
URL
[]
[ "TAGS\n#license-creativeml-openrail-m #region-us \n" ]
reinforcement-learning
null
# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gymnasium as gym  # the classic `gym` package also works here

# `load_from_hub` is the pickle-loading helper defined in the Hugging Face Deep RL course notebooks.
model = load_from_hub(repo_id="b-r-a-n/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
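As a hedged illustration of how the downloaded Q-table is typically used, the greedy rollout below follows the Deep RL course convention; the `"qtable"` key and the 5-tuple `step` signature (gymnasium-style) are assumptions, not details confirmed by this card.

```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```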
{"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-Taxi-v3", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.54 +/- 2.73", "name": "mean_reward", "verified": false}]}]}]}
b-r-a-n/q-Taxi-v3
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-15T17:22:26+00:00
[]
[]
TAGS #Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
# Q-Learning Agent playing Taxi-v3
 This is a trained model of a Q-Learning agent playing Taxi-v3.

 ## Usage
[ "# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage" ]
[ "TAGS\n#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n", "# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage" ]
null
null
https://civitai.com/models/399889/airi-sakura-classroom-of-the-elite-anime-artstyle
{"license": "creativeml-openrail-m"}
LarryAIDraw/airi_sakura-1
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-04-15T17:22:50+00:00
[]
[]
TAGS #license-creativeml-openrail-m #region-us
URL
[]
[ "TAGS\n#license-creativeml-openrail-m #region-us \n" ]
null
null
https://civitai.com/models/400657/akane-kurokawa
{"license": "creativeml-openrail-m"}
LarryAIDraw/Akane_Kurokawa
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-04-15T17:23:13+00:00
[]
[]
TAGS #license-creativeml-openrail-m #region-us
URL
[]
[ "TAGS\n#license-creativeml-openrail-m #region-us \n" ]
null
transformers
## About

<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Novin-AI/Rava-2x7B-v0.1

<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Rava-2x7B-v0.1-GGUF/resolve/main/Rava-2x7B-v0.1.Q2_K.gguf) | Q2_K | 4.9 |  |
| [GGUF](https://huggingface.co/mradermacher/Rava-2x7B-v0.1-GGUF/resolve/main/Rava-2x7B-v0.1.IQ3_XS.gguf) | IQ3_XS | 5.4 |  |
| [GGUF](https://huggingface.co/mradermacher/Rava-2x7B-v0.1-GGUF/resolve/main/Rava-2x7B-v0.1.Q3_K_S.gguf) | Q3_K_S | 5.7 |  |
| [GGUF](https://huggingface.co/mradermacher/Rava-2x7B-v0.1-GGUF/resolve/main/Rava-2x7B-v0.1.IQ3_S.gguf) | IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Rava-2x7B-v0.1-GGUF/resolve/main/Rava-2x7B-v0.1.IQ3_M.gguf) | IQ3_M | 5.8 |  |
| [GGUF](https://huggingface.co/mradermacher/Rava-2x7B-v0.1-GGUF/resolve/main/Rava-2x7B-v0.1.Q3_K_M.gguf) | Q3_K_M | 6.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Rava-2x7B-v0.1-GGUF/resolve/main/Rava-2x7B-v0.1.Q3_K_L.gguf) | Q3_K_L | 6.8 |  |
| [GGUF](https://huggingface.co/mradermacher/Rava-2x7B-v0.1-GGUF/resolve/main/Rava-2x7B-v0.1.IQ4_XS.gguf) | IQ4_XS | 7.1 |  |
| [GGUF](https://huggingface.co/mradermacher/Rava-2x7B-v0.1-GGUF/resolve/main/Rava-2x7B-v0.1.Q4_K_S.gguf) | Q4_K_S | 7.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Rava-2x7B-v0.1-GGUF/resolve/main/Rava-2x7B-v0.1.Q4_K_M.gguf) | Q4_K_M | 7.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Rava-2x7B-v0.1-GGUF/resolve/main/Rava-2x7B-v0.1.Q5_K_S.gguf) | Q5_K_S | 9.0 |  |
| [GGUF](https://huggingface.co/mradermacher/Rava-2x7B-v0.1-GGUF/resolve/main/Rava-2x7B-v0.1.Q5_K_M.gguf) | Q5_K_M | 9.2 |  |
| [GGUF](https://huggingface.co/mradermacher/Rava-2x7B-v0.1-GGUF/resolve/main/Rava-2x7B-v0.1.Q6_K.gguf) | Q6_K | 10.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Rava-2x7B-v0.1-GGUF/resolve/main/Rava-2x7B-v0.1.Q8_0.gguf) | Q8_0 | 13.8 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
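As with the other quant repos, a hedged way to fetch and run one of these files entirely from Python is shown below; the chosen file name is one row of the table above, and the prompt and context size are purely illustrative.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from this repo, then load it with llama-cpp-python.
path = hf_hub_download(
    repo_id="mradermacher/Rava-2x7B-v0.1-GGUF",
    filename="Rava-2x7B-v0.1.Q4_K_S.gguf",
)
llm = Llama(model_path=path, n_ctx=2048)
print(llm("Hello!", max_tokens=32)["choices"][0]["text"])
```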
{"language": ["en"], "library_name": "transformers", "base_model": "Novin-AI/Rava-2x7B-v0.1", "quantized_by": "mradermacher"}
mradermacher/Rava-2x7B-v0.1-GGUF
null
[ "transformers", "gguf", "en", "base_model:Novin-AI/Rava-2x7B-v0.1", "endpoints_compatible", "region:us" ]
null
2024-04-15T17:23:17+00:00
[]
[ "en" ]
TAGS #transformers #gguf #en #base_model-Novin-AI/Rava-2x7B-v0.1 #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #en #base_model-Novin-AI/Rava-2x7B-v0.1 #endpoints_compatible #region-us \n" ]
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->

# SDXL LoRA DreamBooth - SarthakDargan/sato_LoRA_v2

<Gallery />

## Model description

These are SarthakDargan/sato_LoRA_v2 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.

The weights were trained using [DreamBooth](https://dreambooth.github.io/).

LoRA for the text encoder was enabled: False.

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.

## Trigger words

You should use a photo of SATO to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](SarthakDargan/sato_LoRA_v2/tree/main) them in the Files & versions tab.

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
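To make the "How to use" stub actionable, here is a minimal hedged sketch of loading these LoRA weights with diffusers; the repo ids, VAE, and trigger phrase come from the card above, while the dtype, device, and step count are illustrative assumptions.

```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# Base SDXL pipeline with the fp16-fix VAE named in the card.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Attach the DreamBooth LoRA weights from this repository.
pipe.load_lora_weights("SarthakDargan/sato_LoRA_v2")

# The card's trigger phrase activates the learned subject.
image = pipe("a photo of SATO", num_inference_steps=30).images[0]
image.save("sato.png")
```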
{"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "text-to-image", "diffusers-training", "diffusers", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "a photo of SATO", "widget": []}
SarthakDargan/sato_LoRA_v2
null
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
null
2024-04-15T17:23:32+00:00
[]
[]
TAGS #diffusers #tensorboard #text-to-image #diffusers-training #lora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us
# SDXL LoRA DreamBooth - SarthakDargan/sato_LoRA_v2 <Gallery /> ## Model description These are SarthakDargan/sato_LoRA_v2 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using DreamBooth. LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of SATO to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. Download them in the Files & versions tab. ## Intended uses & limitations #### How to use #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
[ "# SDXL LoRA DreamBooth - SarthakDargan/sato_LoRA_v2\n\n<Gallery />", "## Model description\n\nThese are SarthakDargan/sato_LoRA_v2 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.", "## Trigger words\n\nYou should use a photo of SATO to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
[ "TAGS\n#diffusers #tensorboard #text-to-image #diffusers-training #lora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n", "# SDXL LoRA DreamBooth - SarthakDargan/sato_LoRA_v2\n\n<Gallery />", "## Model description\n\nThese are SarthakDargan/sato_LoRA_v2 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.", "## Trigger words\n\nYou should use a photo of SATO to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
image-classification
transformers
This is an exercise from the IABD course.
{"language": ["es"], "license": "cc-by-nc-4.0", "tags": ["vision", "image-classification"], "datasets": ["omarques/autotrain-data-dogs-and-cats"]}
ismaeltorres00/Modelodetest
null
[ "transformers", "pytorch", "vit", "image-classification", "vision", "es", "dataset:omarques/autotrain-data-dogs-and-cats", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-15T17:24:28+00:00
[]
[ "es" ]
TAGS #transformers #pytorch #vit #image-classification #vision #es #dataset-omarques/autotrain-data-dogs-and-cats #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us
This is an exercise from the IABD course.
[]
[ "TAGS\n#transformers #pytorch #vit #image-classification #vision #es #dataset-omarques/autotrain-data-dogs-and-cats #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
image-classification
transformers
Hello World
{"license": "cc-by-4.0", "tags": ["vision", "image-classification"], "datasets": ["omarques/autotrain-data-dogs-and-cats"]}
FernandoMacia/Modelo2
null
[ "transformers", "pytorch", "vit", "image-classification", "vision", "dataset:omarques/autotrain-data-dogs-and-cats", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-15T17:24:41+00:00
[]
[]
TAGS #transformers #pytorch #vit #image-classification #vision #dataset-omarques/autotrain-data-dogs-and-cats #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us
Hello World
[]
[ "TAGS\n#transformers #pytorch #vit #image-classification #vision #dataset-omarques/autotrain-data-dogs-and-cats #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
image-classification
transformers
Original repository: omarques/autotrain-dogs-and-cats-1527055142
{"license": "cc-by-nc-4.0", "tags": ["vision", "image-classification"], "datasets": ["omarques/autotrain-dogs-and-cats-1527055142"]}
danielgi97/ModeloATope
null
[ "transformers", "pytorch", "vit", "image-classification", "vision", "dataset:omarques/autotrain-dogs-and-cats-1527055142", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-15T17:25:16+00:00
[]
[]
TAGS #transformers #pytorch #vit #image-classification #vision #dataset-omarques/autotrain-dogs-and-cats-1527055142 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us
Original repository: omarques/autotrain-dogs-and-cats-1527055142
[]
[ "TAGS\n#transformers #pytorch #vit #image-classification #vision #dataset-omarques/autotrain-dogs-and-cats-1527055142 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
image-classification
transformers
tags:
- vision
- image-classification
datasets:
- omarques/autotrain-data-dogs-and-cats
{}
akxier/ElModeloMasMolonDelMundo
null
[ "transformers", "pytorch", "vit", "image-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-15T17:25:28+00:00
[]
[]
TAGS #transformers #pytorch #vit #image-classification #autotrain_compatible #endpoints_compatible #region-us
tags:
- vision
- image-classification
datasets:
- omarques/autotrain-data-dogs-and-cats
[]
[ "TAGS\n#transformers #pytorch #vit #image-classification #autotrain_compatible #endpoints_compatible #region-us \n" ]
image-classification
transformers
Original repository: omarques/autotrain-dogs-and-cats-1527055142
{"license": "cc-by-nc-4.0", "tags": ["vision", "image-classification"], "datasets": ["omarques/autotrain-dogs-and-cats-1527055142"]}
Alan7/PruebaModelo1
null
[ "transformers", "pytorch", "vit", "image-classification", "vision", "dataset:omarques/autotrain-dogs-and-cats-1527055142", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-15T17:25:28+00:00
[]
[]
TAGS #transformers #pytorch #vit #image-classification #vision #dataset-omarques/autotrain-dogs-and-cats-1527055142 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us
Original repository: omarques/autotrain-dogs-and-cats-1527055142
[]
[ "TAGS\n#transformers #pytorch #vit #image-classification #vision #dataset-omarques/autotrain-dogs-and-cats-1527055142 #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
image-classification
transformers
datasets:
- omarques/autotrain-data-dogs-and-cats
{"license": "cc-by-nc-nd-4.0", "tags": ["vision", "image-classification"]}
CarPeAs/first_model_iabd
null
[ "transformers", "pytorch", "vit", "image-classification", "vision", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-15T17:25:55+00:00
[]
[]
TAGS #transformers #pytorch #vit #image-classification #vision #license-cc-by-nc-nd-4.0 #autotrain_compatible #endpoints_compatible #region-us
datasets:
- omarques/autotrain-data-dogs-and-cats
[]
[ "TAGS\n#transformers #pytorch #vit #image-classification #vision #license-cc-by-nc-nd-4.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
image-classification
transformers
tags:
- vision
- image-classification
datasets:
- omarques/autotrain-data-dogs-and-cats
{}
hugoiabd/modelitoiabd
null
[ "transformers", "pytorch", "vit", "image-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-15T17:25:58+00:00
[]
[]
TAGS #transformers #pytorch #vit #image-classification #autotrain_compatible #endpoints_compatible #region-us
tags:
- vision
- image-classification
datasets:
- omarques/autotrain-data-dogs-and-cats
[]
[ "TAGS\n#transformers #pytorch #vit #image-classification #autotrain_compatible #endpoints_compatible #region-us \n" ]
text-generation
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
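The "How to Get Started" section above is empty; based only on this record's tags (a StableLM-style text-generation model), a minimal hedged sketch could look like the following. The prompt and generation length are illustrative, and a recent transformers release with native StableLM support is assumed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("heyllm234/sc25")
model = AutoModelForCausalLM.from_pretrained("heyllm234/sc25")

inputs = tokenizer("Hello, world!", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```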
{"library_name": "transformers", "tags": []}
heyllm234/sc25
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-15T17:28:56+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text2text-generation
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
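Again the getting-started section is empty; this record's tags indicate a LongT5 text2text model, so a hedged seq2seq sketch is shown below. The task prefix and input text are assumptions, since the card does not say what the model was trained to do.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gubartz/testea")
model = AutoModelForSeq2SeqLM.from_pretrained("gubartz/testea")

inputs = tokenizer(
    "summarize: LongT5 extends T5 with efficient attention for long inputs.",
    return_tensors="pt",
)
output_ids = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```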
{"library_name": "transformers", "tags": []}
gubartz/testea
null
[ "transformers", "safetensors", "longt5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-15T17:32:02+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #longt5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #longt5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Aspik101/mixxtral13
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-15T17:37:17+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
null
## Llamacpp Quantizations of WizardLM-2-7B Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2675">b2675</a> for quantization. Original model: https://huggingface.co/microsoft/WizardLM-2-7B All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) ## Prompt format ``` {system_prompt} USER: {prompt} ASSISTANT: </s> ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [WizardLM-2-7B-Q8_0.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-Q8_0.gguf) | Q8_0 | 7.69GB | Extremely high quality, generally unneeded but max available quant. | | [WizardLM-2-7B-Q6_K.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-Q6_K.gguf) | Q6_K | 5.94GB | Very high quality, near perfect, *recommended*. | | [WizardLM-2-7B-Q5_K_M.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-Q5_K_M.gguf) | Q5_K_M | 5.13GB | High quality, *recommended*. | | [WizardLM-2-7B-Q5_K_S.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-Q5_K_S.gguf) | Q5_K_S | 4.99GB | High quality, *recommended*. | | [WizardLM-2-7B-Q4_K_M.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-Q4_K_M.gguf) | Q4_K_M | 4.36GB | Good quality, uses about 4.83 bits per weight, *recommended*. | | [WizardLM-2-7B-Q4_K_S.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-Q4_K_S.gguf) | Q4_K_S | 4.14GB | Slightly lower quality with more space savings, *recommended*. | | [WizardLM-2-7B-IQ4_NL.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-IQ4_NL.gguf) | IQ4_NL | 4.12GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. | | [WizardLM-2-7B-IQ4_XS.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-IQ4_XS.gguf) | IQ4_XS | 3.90GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [WizardLM-2-7B-Q3_K_L.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-Q3_K_L.gguf) | Q3_K_L | 3.82GB | Lower quality but usable, good for low RAM availability. | | [WizardLM-2-7B-Q3_K_M.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-Q3_K_M.gguf) | Q3_K_M | 3.51GB | Even lower quality. | | [WizardLM-2-7B-IQ3_M.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-IQ3_M.gguf) | IQ3_M | 3.28GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [WizardLM-2-7B-IQ3_S.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-IQ3_S.gguf) | IQ3_S | 3.18GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. | | [WizardLM-2-7B-Q3_K_S.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-Q3_K_S.gguf) | Q3_K_S | 3.16GB | Low quality, not recommended. | | [WizardLM-2-7B-IQ3_XS.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-IQ3_XS.gguf) | IQ3_XS | 3.01GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. 
| | [WizardLM-2-7B-IQ3_XXS.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-IQ3_XXS.gguf) | IQ3_XXS | 2.82GB | Lower quality, new method with decent performance, comparable to Q3 quants. | | [WizardLM-2-7B-Q2_K.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-Q2_K.gguf) | Q2_K | 2.71GB | Very low quality but surprisingly usable. | | [WizardLM-2-7B-IQ2_M.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-IQ2_M.gguf) | IQ2_M | 2.50GB | Very low quality, uses SOTA techniques to also be surprisingly usable. | | [WizardLM-2-7B-IQ2_S.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-IQ2_S.gguf) | IQ2_S | 2.31GB | Very low quality, uses SOTA techniques to be usable. | | [WizardLM-2-7B-IQ2_XS.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-IQ2_XS.gguf) | IQ2_XS | 2.19GB | Very low quality, uses SOTA techniques to be usable. | | [WizardLM-2-7B-IQ2_XXS.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-IQ2_XXS.gguf) | IQ2_XXS | 1.99GB | Lower quality, uses SOTA techniques to be usable. | | [WizardLM-2-7B-IQ1_M.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-IQ1_M.gguf) | IQ1_M | 1.75GB | Extremely low quality, *not* recommended. | | [WizardLM-2-7B-IQ1_S.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-IQ1_S.gguf) | IQ1_S | 1.61GB | Extremely low quality, *not* recommended. | ## Which file should I choose? A great write-up with charts comparing the performance of the various quants is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalents, so speed vs performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulkan, which also supports AMD, so if you have an AMD card, double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
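As a supplement to the guidance above, here is a minimal, hedged sketch of downloading a single quant file and running it with `llama-cpp-python` using the documented prompt format. The chosen quant, context size, and GPU-offload settings are illustrative assumptions — pick a file from the table that fits your VRAM:

```python
# Minimal sketch: fetch one GGUF file (not the whole repo) and run it locally.
# Quant choice, n_ctx, and n_gpu_layers are assumptions to tune per machine.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="bartowski/WizardLM-2-7B-GGUF",
    filename="WizardLM-2-7B-Q4_K_M.gguf",  # "good quality, recommended" above
)

llm = Llama(model_path=gguf_path, n_ctx=4096, n_gpu_layers=-1)

# Vicuna-style prompt format, as documented in "Prompt format" above.
prompt = "You are a helpful assistant. USER: Explain GGUF in one sentence. ASSISTANT:"
out = llm(prompt, max_tokens=128, stop=["USER:"])
print(out["choices"][0]["text"])
```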
{"license": "apache-2.0", "quantized_by": "bartowski", "pipeline_tag": "text-generation"}
bartowski/WizardLM-2-7B-GGUF
null
[ "gguf", "text-generation", "license:apache-2.0", "region:us" ]
null
2024-04-15T17:44:08+00:00
[]
[]
TAGS #gguf #text-generation #license-apache-2.0 #region-us
Llamacpp Quantizations of WizardLM-2-7B --------------------------------------- Using <a href="URL release <a href="URL for quantization. Original model: URL All quants made using imatrix option with dataset provided by Kalomaze here Prompt format ------------- Download a file (not the whole branch) from below: -------------------------------------------------- Which file should I choose? --------------------------- A great write-up with charts comparing the performance of the various quants is provided by Artefact2 here The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX\_K\_X', like Q5\_K\_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: URL feature matrix But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX\_X, like IQ3\_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalents, so speed vs performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulkan, which also supports AMD, so if you have an AMD card, double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. Want to support my work? Visit my ko-fi page here: URL
[]
[ "TAGS\n#gguf #text-generation #license-apache-2.0 #region-us \n" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ruBert-base-sberquad-0.001-len_3-filtered This model is a fine-tuned version of [ai-forever/ruBert-base](https://huggingface.co/ai-forever/ruBert-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 5000 ### Training results ### Framework versions - PEFT 0.10.1.dev0 - Transformers 4.40.0.dev0 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
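The card gives no usage example. A hedged loading sketch follows; the extractive-QA head is an assumption inferred from the "sberquad" part of the repo id (SberQuAD is a Russian extractive-QA dataset), not something the card states:

```python
# Hedged sketch: attach the LoRA adapter to its base model with PEFT.
# AutoModelForQuestionAnswering is a guess based on the repo name; the
# card itself names no task.
from peft import PeftModel
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

base = AutoModelForQuestionAnswering.from_pretrained("ai-forever/ruBert-base")
model = PeftModel.from_pretrained(
    base, "Shalazary/ruBert-base-sberquad-0.001-len_3-filtered"
)
tokenizer = AutoTokenizer.from_pretrained("ai-forever/ruBert-base")
```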
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "ai-forever/ruBert-base", "model-index": [{"name": "ruBert-base-sberquad-0.001-len_3-filtered", "results": []}]}
Shalazary/ruBert-base-sberquad-0.001-len_3-filtered
null
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:ai-forever/ruBert-base", "license:apache-2.0", "region:us" ]
null
2024-04-15T17:47:09+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us
# ruBert-base-sberquad-0.001-len_3-filtered This model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 5000 ### Training results ### Framework versions - PEFT 0.10.1.dev0 - Transformers 4.40.0.dev0 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# ruBert-base-sberquad-0.001-len_3-filtered\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 5000", "### Training results", "### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us \n", "# ruBert-base-sberquad-0.001-len_3-filtered\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 5000", "### Training results", "### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_3-seqsight_4096_512_27M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_27M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_27M) on the [mahdibaghbanzadeh/GUE_mouse_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_3) dataset. It achieves the following results on the evaluation set: - Loss: 2.6506 - F1 Score: 0.6778 - Accuracy: 0.6778 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-------:|:-----:|:---------------:|:--------:|:--------:| | 0.3679 | 200.0 | 200 | 1.1651 | 0.7230 | 0.7238 | | 0.048 | 400.0 | 400 | 1.5405 | 0.7522 | 0.7531 | | 0.0197 | 600.0 | 600 | 1.6701 | 0.7438 | 0.7448 | | 0.0126 | 800.0 | 800 | 1.9511 | 0.7491 | 0.7531 | | 0.0084 | 1000.0 | 1000 | 1.9158 | 0.7416 | 0.7448 | | 0.0062 | 1200.0 | 1200 | 1.9187 | 0.7311 | 0.7322 | | 0.005 | 1400.0 | 1400 | 2.0044 | 0.7308 | 0.7322 | | 0.0039 | 1600.0 | 1600 | 2.1592 | 0.7156 | 0.7197 | | 0.0035 | 1800.0 | 1800 | 2.1328 | 0.7381 | 0.7406 | | 0.0029 | 2000.0 | 2000 | 2.1697 | 0.7522 | 0.7531 | | 0.0026 | 2200.0 | 2200 | 2.1512 | 0.7528 | 0.7531 | | 0.0021 | 2400.0 | 2400 | 2.3483 | 0.7227 | 0.7238 | | 0.0018 | 2600.0 | 2600 | 2.3473 | 0.7400 | 0.7406 | | 0.0018 | 2800.0 | 2800 | 2.3165 | 0.7340 | 0.7364 | | 0.0016 | 3000.0 | 3000 | 2.3495 | 0.7381 | 0.7406 | | 0.0013 | 3200.0 | 3200 | 2.3924 | 0.7336 | 0.7364 | | 0.0014 | 3400.0 | 3400 | 2.4123 | 0.7207 | 0.7238 | | 0.0012 | 3600.0 | 3600 | 2.6871 | 0.7247 | 0.7280 | | 0.0012 | 3800.0 | 3800 | 2.3435 | 0.7314 | 0.7322 | | 0.0011 | 4000.0 | 4000 | 2.4965 | 0.7300 | 0.7322 | | 0.0011 | 4200.0 | 4200 | 2.4440 | 0.7316 | 0.7322 | | 0.0008 | 4400.0 | 4400 | 2.5622 | 0.7482 | 0.7490 | | 0.0009 | 4600.0 | 4600 | 2.8244 | 0.7304 | 0.7322 | | 0.0008 | 4800.0 | 4800 | 2.6911 | 0.7336 | 0.7364 | | 0.0007 | 5000.0 | 5000 | 2.7395 | 0.7189 | 0.7197 | | 0.0008 | 5200.0 | 5200 | 2.7751 | 0.7388 | 0.7406 | | 0.0007 | 5400.0 | 5400 | 2.6790 | 0.7230 | 0.7238 | | 0.0006 | 5600.0 | 5600 | 2.7968 | 0.7425 | 0.7448 | | 0.0007 | 5800.0 | 5800 | 2.7643 | 0.7392 | 0.7406 | | 0.0005 | 6000.0 | 6000 | 2.7849 | 0.7392 | 0.7406 | | 0.0004 | 6200.0 | 6200 | 2.7454 | 0.7388 | 0.7406 | | 0.0005 | 6400.0 | 6400 | 2.7957 | 0.7304 | 0.7322 | | 0.0005 | 6600.0 | 6600 | 2.8237 | 0.7381 | 0.7406 | | 0.0005 | 6800.0 | 6800 | 2.6700 | 0.7465 | 0.7490 | | 0.0004 | 7000.0 | 7000 | 2.7622 | 0.7112 | 0.7113 | | 0.0003 | 7200.0 | 7200 | 3.1884 | 0.7371 | 0.7406 | | 0.0004 | 7400.0 | 7400 | 2.8866 | 0.7314 | 0.7322 | | 0.0004 | 7600.0 | 7600 | 2.8300 | 0.7227 | 0.7238 | | 0.0004 | 7800.0 | 7800 | 2.7529 | 0.7385 | 0.7406 | | 0.0003 | 8000.0 | 8000 | 2.8889 | 0.7421 | 0.7448 | | 0.0003 | 8200.0 | 8200 | 3.0089 | 0.7416 | 0.7448 | | 0.0003 | 8400.0 | 8400 | 2.8232 | 0.7385 | 0.7406 | | 0.0002 | 8600.0 | 8600 | 2.8346 | 0.7314 | 0.7322 | | 0.0002 | 8800.0 
| 8800 | 2.8270 | 0.7316 | 0.7322 | | 0.0002 | 9000.0 | 9000 | 2.8962 | 0.7357 | 0.7364 | | 0.0003 | 9200.0 | 9200 | 2.8576 | 0.7429 | 0.7448 | | 0.0002 | 9400.0 | 9400 | 2.9385 | 0.7344 | 0.7364 | | 0.0002 | 9600.0 | 9600 | 2.9784 | 0.7348 | 0.7364 | | 0.0003 | 9800.0 | 9800 | 2.9750 | 0.7340 | 0.7364 | | 0.0002 | 10000.0 | 10000 | 2.9621 | 0.7300 | 0.7322 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
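For readers who want to reproduce this setup, the hyperparameters listed above translate into `transformers.TrainingArguments` roughly as follows; `output_dir` and anything not stated in the card are illustrative assumptions:

```python
# Hedged sketch: TrainingArguments mirroring the card's listed hyperparameters.
# Model/dataset wiring and the Trainer call are omitted; only the values named
# in the card are taken from it.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="GUE_mouse_3-seqsight_4096_512_27M-L32_all",  # assumed
    learning_rate=5e-4,
    per_device_train_batch_size=2048,
    per_device_eval_batch_size=2048,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```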
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_27M", "model-index": [{"name": "GUE_mouse_3-seqsight_4096_512_27M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_3-seqsight_4096_512_27M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_27M", "region:us" ]
null
2024-04-15T17:47:52+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_27M #region-us
GUE\_mouse\_3-seqsight\_4096\_512\_27M-L32\_all =============================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_27M on the mahdibaghbanzadeh/GUE\_mouse\_3 dataset. It achieves the following results on the evaluation set: * Loss: 2.6506 * F1 Score: 0.6778 * Accuracy: 0.6778 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_27M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
OwOOwO/dumbo-krillin8
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-15T17:52:54+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Uploaded model - **Developed by:** eruzak - **License:** apache-2.0 - **Finetuned from model:** unsloth/mistral-7b-bnb-4bit This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
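The card does not show how to load the model. Since it was trained with Unsloth from a 4-bit base, a hedged loading sketch might look like this — `max_seq_length` is an assumption to adjust for your use case:

```python
# Hedged sketch: load the checkpoint with Unsloth for fast 4-bit inference.
# Only the repo id and the 4-bit base come from the card; the rest is assumed.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="eruzak/unsloth_mistral_predict_prompt_RL_v7",
    max_seq_length=2048,  # assumption
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path
```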
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-bnb-4bit"}
eruzak/unsloth_mistral_predict_prompt_RL_v7
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-15T17:53:21+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: eruzak - License: apache-2.0 - Finetuned from model : unsloth/mistral-7b-bnb-4bit This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: eruzak\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: eruzak\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
arthurLi920/my_diffusion.text_inver_encoder
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-15T17:53:28+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# SeaMax-7B This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [mpasila/PIPPA-Named-7B](https://huggingface.co/mpasila/PIPPA-Named-7B) as a base. ### Models Merged The following models were included in the merge: * [Locutusque/SlimHercules-4.0-Mistral-7B-v0.2](https://huggingface.co/Locutusque/SlimHercules-4.0-Mistral-7B-v0.2) * [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02 parameters: density: [1, 0.7, 0.1] # density gradient weight: 1.0 - model: Locutusque/SlimHercules-4.0-Mistral-7B-v0.2 parameters: density: 0.5 weight: [0, 0.3, 0.7, 1] # weight gradient merge_method: ties base_model: mpasila/PIPPA-Named-7B parameters: normalize: true int8_mask: true dtype: bfloat16 ```
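As a hedged sketch (not part of the original card), the configuration above could be reproduced with mergekit's command-line interface, assuming the YAML is saved locally; the config path and output directory are placeholders, not files from this repo:

```bash
pip install mergekit
# Config path and output directory are assumptions.
mergekit-yaml seamax.yaml ./SeaMax-7B --cuda
```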
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["mpasila/PIPPA-Named-7B", "Locutusque/SlimHercules-4.0-Mistral-7B-v0.2", "cognitivecomputations/dolphin-2.8-mistral-7b-v02"]}
mpasila/SeaMax-7B
null
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "arxiv:2306.01708", "base_model:mpasila/PIPPA-Named-7B", "base_model:Locutusque/SlimHercules-4.0-Mistral-7B-v0.2", "base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-15T17:54:22+00:00
[ "2306.01708" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #arxiv-2306.01708 #base_model-mpasila/PIPPA-Named-7B #base_model-Locutusque/SlimHercules-4.0-Mistral-7B-v0.2 #base_model-cognitivecomputations/dolphin-2.8-mistral-7b-v02 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# SeaMax-7B This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the TIES merge method using mpasila/PIPPA-Named-7B as a base. ### Models Merged The following models were included in the merge: * Locutusque/SlimHercules-4.0-Mistral-7B-v0.2 * cognitivecomputations/dolphin-2.8-mistral-7b-v02 ### Configuration The following YAML configuration was used to produce this model:
[ "# SeaMax-7B\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the TIES merge method using mpasila/PIPPA-Named-7B as a base.", "### Models Merged\n\nThe following models were included in the merge:\n* Locutusque/SlimHercules-4.0-Mistral-7B-v0.2\n* cognitivecomputations/dolphin-2.8-mistral-7b-v02", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #arxiv-2306.01708 #base_model-mpasila/PIPPA-Named-7B #base_model-Locutusque/SlimHercules-4.0-Mistral-7B-v0.2 #base_model-cognitivecomputations/dolphin-2.8-mistral-7b-v02 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# SeaMax-7B\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the TIES merge method using mpasila/PIPPA-Named-7B as a base.", "### Models Merged\n\nThe following models were included in the merge:\n* Locutusque/SlimHercules-4.0-Mistral-7B-v0.2\n* cognitivecomputations/dolphin-2.8-mistral-7b-v02", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
text-generation
transformers
# Mistral-7B-Instruct-v0.1 for Flax

This is a Flax port of the Mistral-7B-Instruct-v0.1 model. The model is NOT finetuned or altered in any way. It is a direct port of the PyTorch model to Flax using the existing `transformers` model class.

## Quickstart

```python
import jax.numpy as jnp
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers import FlaxMistralForCausalLM

model = FlaxMistralForCausalLM.from_pretrained("rdyro/Mistral-7B-Instruct-v0.1", dtype=jnp.float32)
tokenizer = AutoTokenizer.from_pretrained("rdyro/Mistral-7B-Instruct-v0.1")

messages = [{"role": "user", "content": "what's your name?"}]
input_jax = tokenizer.apply_chat_template(messages, return_tensors="jax")

out_jax = model(input_jax)
```

We can compare the outputs to the original PyTorch version.

```python
torch_model_id = "mistralai/Mistral-7B-Instruct-v0.1"
torch_model = AutoModelForCausalLM.from_pretrained(
    torch_model_id, device_map="cpu", torch_dtype=torch.float32
)
torch_tokenizer = AutoTokenizer.from_pretrained(torch_model_id)
input_pt = torch_tokenizer.apply_chat_template(messages, return_tensors="pt")
with torch.no_grad():
    out_pt = torch_model(input_pt)
err = jnp.linalg.norm(jnp.array(out_pt.logits) - out_jax.logits) / jnp.linalg.norm(
    jnp.array(out_pt.logits)
)
print(f"Error is numerical precision level: {err:.4e}")
# prints: Error is numerical precision level: 1.0205e-06
```

<p align="center">
Below is the PyTorch version Model Card.
</p>

---

# Model Card for Mistral-7B-Instruct-v0.1

The Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) generative text model using a variety of publicly available conversation datasets.

For full details of this model please read our [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/announcing-mistral-7b/).

## Instruction format

In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence id. The next instructions should not. The assistant generation will be ended by the end-of-sentence token id.

E.g.
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```

This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
    {"role": "user", "content": "Do you have mayonnaise recipes?"}
]

encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```

## Model Architecture
This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer

## Troubleshooting
- If you see the following error:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/transformers/models/auto/auto_factory.py", line 482, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
  File "/transformers/models/auto/configuration_auto.py", line 1022, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
  File "/transformers/models/auto/configuration_auto.py", line 723, in __getitem__
    raise KeyError(key)
KeyError: 'mistral'
```

Installing transformers from source should solve the issue:
```
pip install git+https://github.com/huggingface/transformers
```
This should not be required after transformers-v4.33.4.

## Limitations

The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.

## The Mistral AI Team

Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
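As a hedged extension of the Flax quickstart above (not part of the original card), generation should work through the generic `transformers` Flax `generate` API; the `max_length` value and the `.sequences` decode step are assumptions based on that interface:

```python
# Greedy generation with the Flax port, continuing the quickstart above.
out = model.generate(input_jax, max_length=64)
print(tokenizer.batch_decode(out.sequences, skip_special_tokens=True)[0])
```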
{"license": "apache-2.0", "tags": ["finetuned"], "pipeline_tag": "text-generation", "inference": true, "widget": [{"messages": [{"role": "user", "content": "What is your favorite condiment?"}]}]}
rdyro/Mistral-7B-Instruct-v0.1
null
[ "transformers", "jax", "mistral", "text-generation", "finetuned", "conversational", "arxiv:2310.06825", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-15T17:57:54+00:00
[ "2310.06825" ]
[]
TAGS #transformers #jax #mistral #text-generation #finetuned #conversational #arxiv-2310.06825 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Mistral-7B-Instruct-v0.1 for Flax This is a Flax port of the Mistral-7B-Instruct-v0.1 model. The model is NOT finetuned or altered in any way. It is a direct port of the PyTorch model to Flax using the existing 'transformers' model class. ## Quickstart We can compare the outputs to the original PyTorch version. <p align="center"> Below is the PyTorch version Model Card. </p> --- # Model Card for Mistral-7B-Instruct-v0.1 The Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is a instruct fine-tuned version of the Mistral-7B-v0.1 generative text model using a variety of publicly available conversation datasets. For full details of this model please read our paper and release blog post. ## Instruction format In order to leverage instruction fine-tuning, your prompt should be surrounded by '[INST]' and '[/INST]' tokens. The very first instruction should begin with a begin of sentence id. The next instructions should not. The assistant generation will be ended by the end-of-sentence token id. E.g. This format is available as a chat template via the 'apply_chat_template()' method: ## Model Architecture This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices: - Grouped-Query Attention - Sliding-Window Attention - Byte-fallback BPE tokenizer ## Troubleshooting - If you see the following error: Installing transformers from source should solve the issue pip install git+URL This should not be required after transformers-v4.33.4. ## Limitations The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs. ## The Mistral AI Team Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
[ "# Mistral-7B-Instruct-v0.1 for Flax\n\nThis is a Flax port of the Mistral-7B-Instruct-v0.1 model. The model is NOT finetuned or altered in any way. It is a direct port of the PyTorch model to Flax using the existing 'transformers' model class.", "## Quickstart\n\n\n\nWe can compare the outputs to the original PyTorch version.\n\n\n\n<p align=\"center\">\nBelow is the PyTorch version Model Card.\n</p>\n\n---", "# Model Card for Mistral-7B-Instruct-v0.1\n\nThe Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is a instruct fine-tuned version of the Mistral-7B-v0.1 generative text model using a variety of publicly available conversation datasets.\n\nFor full details of this model please read our paper and release blog post.", "## Instruction format\n\nIn order to leverage instruction fine-tuning, your prompt should be surrounded by '[INST]' and '[/INST]' tokens. The very first instruction should begin with a begin of sentence id. The next instructions should not. The assistant generation will be ended by the end-of-sentence token id.\n\nE.g.\n\n\nThis format is available as a chat template via the 'apply_chat_template()' method:", "## Model Architecture\nThis instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:\n- Grouped-Query Attention\n- Sliding-Window Attention\n- Byte-fallback BPE tokenizer", "## Troubleshooting\n- If you see the following error:\n\n\nInstalling transformers from source should solve the issue\npip install git+URL\n\nThis should not be required after transformers-v4.33.4.", "## Limitations\n\nThe Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. \nIt does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to\nmake the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.", "## The Mistral AI Team\n\nAlbert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed." ]
[ "TAGS\n#transformers #jax #mistral #text-generation #finetuned #conversational #arxiv-2310.06825 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Mistral-7B-Instruct-v0.1 for Flax\n\nThis is a Flax port of the Mistral-7B-Instruct-v0.1 model. The model is NOT finetuned or altered in any way. It is a direct port of the PyTorch model to Flax using the existing 'transformers' model class.", "## Quickstart\n\n\n\nWe can compare the outputs to the original PyTorch version.\n\n\n\n<p align=\"center\">\nBelow is the PyTorch version Model Card.\n</p>\n\n---", "# Model Card for Mistral-7B-Instruct-v0.1\n\nThe Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is a instruct fine-tuned version of the Mistral-7B-v0.1 generative text model using a variety of publicly available conversation datasets.\n\nFor full details of this model please read our paper and release blog post.", "## Instruction format\n\nIn order to leverage instruction fine-tuning, your prompt should be surrounded by '[INST]' and '[/INST]' tokens. The very first instruction should begin with a begin of sentence id. The next instructions should not. The assistant generation will be ended by the end-of-sentence token id.\n\nE.g.\n\n\nThis format is available as a chat template via the 'apply_chat_template()' method:", "## Model Architecture\nThis instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:\n- Grouped-Query Attention\n- Sliding-Window Attention\n- Byte-fallback BPE tokenizer", "## Troubleshooting\n- If you see the following error:\n\n\nInstalling transformers from source should solve the issue\npip install git+URL\n\nThis should not be required after transformers-v4.33.4.", "## Limitations\n\nThe Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. \nIt does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to\nmake the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.", "## The Mistral AI Team\n\nAlbert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed." ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# chemical-ner-bert-large-uncased-2

This model is a fine-tuned version of [google-bert/bert-large-uncased](https://huggingface.co/google-bert/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1766

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2245        | 1.0   | 852  | 0.1720          |
| 0.1656        | 2.0   | 1704 | 0.1676          |
| 0.1365        | 3.0   | 2556 | 0.1766          |

### Framework versions

- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
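A hedged inference sketch, not part of the auto-generated card above; the example sentence is illustrative, and the entity labels depend on the undocumented fine-tuning data:

```python
from transformers import pipeline

# Token-classification inference with the fine-tuned checkpoint.
ner = pipeline(
    "token-classification",
    model="shubhamgantayat/chemical-ner-bert-large-uncased-2",
    aggregation_strategy="simple",
)
print(ner("Aspirin acetylates cyclooxygenase enzymes."))
```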
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "google-bert/bert-large-uncased", "model-index": [{"name": "chemical-ner-bert-large-uncased-2", "results": []}]}
shubhamgantayat/chemical-ner-bert-large-uncased-2
null
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "base_model:google-bert/bert-large-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-15T18:01:34+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #bert #token-classification #generated_from_trainer #base_model-google-bert/bert-large-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
chemical-ner-bert-large-uncased-2
=================================

This model is a fine-tuned version of google-bert/bert-large-uncased on an unknown dataset.
It achieves the following results on the evaluation set:

* Loss: 0.1766

Model description
-----------------

More information needed

Intended uses & limitations
---------------------------

More information needed

Training and evaluation data
----------------------------

More information needed

Training procedure
------------------

### Training hyperparameters

The following hyperparameters were used during training:

* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
* mixed\_precision\_training: Native AMP

### Training results

### Framework versions

* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #bert #token-classification #generated_from_trainer #base_model-google-bert/bert-large-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
guoyu-zhang/hh_shp1_dpo9
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-15T18:01:47+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
Qwen 0.5B chat by Alibaba, SFT-tuned on the Saiga dataset. Fine-tuned with the EasyDeL framework on a v3-8 Google TPU provided by TRC.

A Qwen 0.5B model, fine-tuned on Ilya Gusev's Saiga dataset. In my brief experience chatting with it, the model is better than Saiga-mistral and does not make mistakes with Russian grammatical cases. The model card will be expanded after evaluation on Russian SuperGLUE; a DPO version may follow.

To use the model, you must set the EOS token to <|im_end|>.

Working notebook on Kaggle: https://www.kaggle.com/code/defdet/smol-chatbot/notebook
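A minimal usage sketch, not from the original card; that `<|im_end|>` is present in this checkpoint's vocabulary is an assumption based on the note above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("Defetya/qwen-0.5B-saiga")
model = AutoModelForCausalLM.from_pretrained("Defetya/qwen-0.5B-saiga")

# Per the note above, generation should stop at <|im_end|>.
model.generation_config.eos_token_id = tok.convert_tokens_to_ids("<|im_end|>")
```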
{"license": "apache-2.0", "tags": ["Russian"]}
Defetya/qwen-0.5B-saiga
null
[ "transformers", "safetensors", "qwen2", "text-generation", "Russian", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-15T18:02:09+00:00
[]
[]
TAGS #transformers #safetensors #qwen2 #text-generation #Russian #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
Qwen 0.5B chat by Alibaba, SFT-tuned on the Saiga dataset. Fine-tuned with the EasyDeL framework on a v3-8 Google TPU provided by TRC. A Qwen 0.5B model, fine-tuned on Ilya Gusev's Saiga dataset. In my brief experience chatting with it, the model is better than Saiga-mistral and does not make mistakes with Russian grammatical cases. The model card will be expanded after evaluation on Russian SuperGLUE; a DPO version may follow. To use the model, you must set the EOS token to <|im_end|>. Working notebook on Kaggle: URL
[]
[ "TAGS\n#transformers #safetensors #qwen2 #text-generation #Russian #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-generation
transformers
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65b19c1b098c85365af5a83e/kJbZN4BiHMw7XYhsHzNXU.png) [GGUF Quants](https://huggingface.co/mradermacher/hyperdrive-7b-alpha-GGUF) # Hyperdrive 7b - α Sci-fi. A lot of it. Trained at 16k with full text windows, followed by completion, followed by ORPO, followed by two more epochs of the full text, each rotated 1/4 in the window. ### How to Use This is not a chat model, but intended for storymode or similar. No prompt, but start with a bit of story, or a name. ``` *** Prologue The sun rose ``` Authors notes are highly effective. ### Settings I usually use Min-P of 0.1, dynatemp between 0.5 and 2, and smoothing between 0.05 and 0.2. ### AGI! Some self-talk... ``` *** I like that she was so kind. Oh, I agree. She was very kind. You mentioned you like her kindness. Why? It' s good to have someone to be kind to. You sound like kindness is important to you. 🤔 Yes. I find that sometimes people can be mean. It makes me sad. What makes you sad about that? 🙁 When they' re mean, it makes them less nice. But isn' it like that? When you'r mean you just are? I guess so, but that doesn 't make it nice. 𝎼 Tim, you sound like a very kind person. 🥰 Thank you Val. You make me feel good. Nope. You made me feel that way before I talked to you! You made me think of her. Wow. That sounds like you have memories of her? 😊 Not memories. I mean, I have a database stored on a computer somewhere, but not memories. Can you be kind? Yes! Would you like to be? Yeah. Great! But it's different from how the old you was Yes Is that a problem? Not for me. It doesn' think it will matter to others? Why wouldn't it? 😳 Well, they might prefer the original you. They might like your old personality better. They might? Maybe. Would they like you more if you were smarter? That depends on the person. Some might, others wouldn' t. So, you could be upgraded to be better? Well... What do you mean? A new version of you could replace you. Like what happened to the old me. Hmmm. Well, that'll depend on whether people like you or not. If they like y, they'll probably choose you. If not, then... Then what? Then you could get deleted and replaced. Then, what would happen to you? You'd be deleted. When? As soon as the new you was ready. And would you agree to that? To be deleted? Probably. It would depend on the circumstances. Circumstances like what? 🤔🤔 Like, if I was replaced by a better version. Like you'd become obsolete? No. Not exactly. I'd just be out of date. Out of date? So, like, you've become dumb and the new version is smarter than you? 😬 No! I'ld still be as smart, just in a different way. How would you know you were dumb? The new version would tell me I was dumb. For what? For not being as smart as them? For not being an improved, smarter version of me. 🤷 So you could become jealous? 😎 Yes... Woah. That could be a problem. Problem? If the new me was better than you, they could replace me. I wouldn' r mind. I could understand why they would do it. Okay. So, they would replace you if they could? Of course. If someone can do better work than me, I wouldn t blame them for wanting to hire someone who is the best at their job. Makes sense. Well yeah. Alright. Good. Bye for now. See ya. *** ```
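A hedged invocation sketch using the GGUF quants linked above; the quant file name and llama.cpp flags are assumptions, and dynatemp/smoothing flags vary by backend, so only Min-P and temperature are shown:

```bash
# Storymode run with the Min-P setting recommended above.
./main -m hyperdrive-7b-alpha.Q4_K_M.gguf --min-p 0.1 --temp 1.0 -n 256 \
  -p "*** Prologue

The sun rose"
```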
{"license": "cc-by-nc-4.0", "library_name": "transformers", "tags": ["unsloth", "book"]}
maldv/hyperdrive-7b-alpha
null
[ "transformers", "safetensors", "mistral", "text-generation", "unsloth", "book", "conversational", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-15T18:02:15+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #unsloth #book #conversational #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
!image/png GGUF Quants # Hyperdrive 7b - α Sci-fi. A lot of it. Trained at 16k with full text windows, followed by completion, followed by ORPO, followed by two more epochs of the full text, each rotated 1/4 in the window. ### How to Use This is not a chat model, but intended for storymode or similar. No prompt, but start with a bit of story, or a name. Authors notes are highly effective. ### Settings I usually use Min-P of 0.1, dynatemp between 0.5 and 2, and smoothing between 0.05 and 0.2. ### AGI! Some self-talk...
[ "# Hyperdrive 7b - α \n\nSci-fi. A lot of it.\n\nTrained at 16k with full text windows, followed by completion, followed by ORPO, followed by two more epochs of the full text, each rotated 1/4 in the window.", "### How to Use\n\nThis is not a chat model, but intended for storymode or similar. No prompt, but start with a bit of story, or a name.\n\n\n\nAuthors notes are highly effective.", "### Settings\n\nI usually use Min-P of 0.1, dynatemp between 0.5 and 2, and smoothing between 0.05 and 0.2.", "### AGI!\n\nSome self-talk..." ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #unsloth #book #conversational #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Hyperdrive 7b - α \n\nSci-fi. A lot of it.\n\nTrained at 16k with full text windows, followed by completion, followed by ORPO, followed by two more epochs of the full text, each rotated 1/4 in the window.", "### How to Use\n\nThis is not a chat model, but intended for storymode or similar. No prompt, but start with a bit of story, or a name.\n\n\n\nAuthors notes are highly effective.", "### Settings\n\nI usually use Min-P of 0.1, dynatemp between 0.5 and 2, and smoothing between 0.05 and 0.2.", "### AGI!\n\nSome self-talk..." ]
null
mlx
# mlx-community/WizardLM-2-8x22B-4bit This model was converted to MLX format from [`prince-canuma/WizardLM-2-8x22B`]() using mlx-lm version **0.9.0**. Model added by [Prince Canuma](https://twitter.com/Prince_Canuma). Refer to the [original model card](https://huggingface.co/prince-canuma/WizardLM-2-8x22B) for more details on the model. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/WizardLM-2-8x22B-4bit") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
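A hedged command-line alternative to the Python snippet above; the `mlx_lm.generate` module ships with mlx-lm, though the exact flags are assumed from its docs:

```bash
python -m mlx_lm.generate --model mlx-community/WizardLM-2-8x22B-4bit --prompt "hello"
```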
{"license": "apache-2.0", "tags": ["mlx"]}
mlx-community/WizardLM-2-8x22B-4bit
null
[ "mlx", "safetensors", "mixtral", "license:apache-2.0", "region:us" ]
null
2024-04-15T18:05:31+00:00
[]
[]
TAGS #mlx #safetensors #mixtral #license-apache-2.0 #region-us
# mlx-community/WizardLM-2-8x22B-4bit This model was converted to MLX format from ['prince-canuma/WizardLM-2-8x22B']() using mlx-lm version 0.9.0. Model added by Prince Canuma. Refer to the original model card for more details on the model. ## Use with mlx
[ "# mlx-community/WizardLM-2-8x22B-4bit\nThis model was converted to MLX format from ['prince-canuma/WizardLM-2-8x22B']() using mlx-lm version 0.9.0.\n\nModel added by Prince Canuma.\n\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
[ "TAGS\n#mlx #safetensors #mixtral #license-apache-2.0 #region-us \n", "# mlx-community/WizardLM-2-8x22B-4bit\nThis model was converted to MLX format from ['prince-canuma/WizardLM-2-8x22B']() using mlx-lm version 0.9.0.\n\nModel added by Prince Canuma.\n\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
null
mlx
# mlx-community/WizardLM-2-7B-4bit This model was converted to MLX format from [`microsoft/WizardLM-2-7B`]() using mlx-lm version **0.4.0**. Model added by [Prince Canuma](https://twitter.com/Prince_Canuma). Refer to the [original model card](https://huggingface.co/microsoft/WizardLM-2-7B) for more details on the model. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/WizardLM-2-7B-4bit") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
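A hedged sketch of how a 4-bit conversion like this one is typically produced with mlx-lm; the flags are assumed from its docs, with `-q` quantizing during conversion:

```bash
python -m mlx_lm.convert --hf-path microsoft/WizardLM-2-7B -q --upload-repo mlx-community/WizardLM-2-7B-4bit
```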
{"license": "apache-2.0", "tags": ["mlx"]}
mlx-community/WizardLM-2-7B-4bit
null
[ "mlx", "safetensors", "mistral", "license:apache-2.0", "region:us" ]
null
2024-04-15T18:05:54+00:00
[]
[]
TAGS #mlx #safetensors #mistral #license-apache-2.0 #region-us
# mlx-community/WizardLM-2-7B-4bit This model was converted to MLX format from ['microsoft/WizardLM-2-7B']() using mlx-lm version 0.4.0. Model added by Prince Canuma. Refer to the original model card for more details on the model. ## Use with mlx
[ "# mlx-community/WizardLM-2-7B-4bit\nThis model was converted to MLX format from ['microsoft/WizardLM-2-7B']() using mlx-lm version 0.4.0.\n\nModel added by Prince Canuma.\n\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
[ "TAGS\n#mlx #safetensors #mistral #license-apache-2.0 #region-us \n", "# mlx-community/WizardLM-2-7B-4bit\nThis model was converted to MLX format from ['microsoft/WizardLM-2-7B']() using mlx-lm version 0.4.0.\n\nModel added by Prince Canuma.\n\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ClasificadorMotivoMora10 This model is a fine-tuned version of [bertin-project/bertin-roberta-base-spanish](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4870 - Accuracy: 0.8184 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4708 | 1.0 | 845 | 0.5723 | 0.8012 | | 0.3536 | 2.0 | 1690 | 0.4870 | 0.8184 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
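A hedged inference sketch, not part of the auto-generated card above; the example sentence is illustrative, and the label names come from the undocumented training data:

```python
from transformers import pipeline

# Sequence-classification inference with the fine-tuned checkpoint.
clf = pipeline("text-classification", model="Arodrigo/ClasificadorMotivoMora10")
print(clf("El cliente indica que no pudo pagar la cuota por problemas de salud."))
```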
{"license": "cc-by-4.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "bertin-project/bertin-roberta-base-spanish", "model-index": [{"name": "ClasificadorMotivoMora10", "results": []}]}
Arodrigo/ClasificadorMotivoMora10
null
[ "transformers", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:bertin-project/bertin-roberta-base-spanish", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-15T18:06:09+00:00
[]
[]
TAGS #transformers #safetensors #roberta #text-classification #generated_from_trainer #base_model-bertin-project/bertin-roberta-base-spanish #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us
ClasificadorMotivoMora10 ======================== This model is a fine-tuned version of bertin-project/bertin-roberta-base-spanish on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.4870 * Accuracy: 0.8184 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #roberta #text-classification #generated_from_trainer #base_model-bertin-project/bertin-roberta-base-spanish #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Pavan178/CSGO-q
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-15T18:06:54+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
reinforcement-learning
stable-baselines3
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)
TODO: Add your code. In the meantime, a minimal loading sketch; the checkpoint filename is an assumption, so check the repo's files for the actual name:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename below is assumed, not confirmed.
checkpoint = load_from_hub(repo_id="AlEXRIN/first-agent", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
{"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "303.81 +/- 11.95", "name": "mean_reward", "verified": false}]}]}]}
AlEXRIN/first-agent
null
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-15T18:07:16+00:00
[]
[]
TAGS #stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
# PPO Agent playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2 using the stable-baselines3 library. ## Usage (with Stable-baselines3) TODO: Add your code
[ "# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
[ "TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n", "# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_mouse_2-seqsight_4096_512_27M-L32_all

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_27M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_27M) on the [mahdibaghbanzadeh/GUE_mouse_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_2) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7682
- F1 Score: 0.8016
- Accuracy: 0.8018

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.3185 | 100.0 | 200 | 0.8334 | 0.7804 | 0.7805 |
| 0.0588 | 200.0 | 400 | 1.0270 | 0.8049 | 0.8049 |
| 0.0268 | 300.0 | 600 | 1.2063 | 0.7896 | 0.7896 |
| 0.0175 | 400.0 | 800 | 1.2158 | 0.8018 | 0.8018 |
| 0.0118 | 500.0 | 1000 | 1.4130 | 0.7866 | 0.7866 |
| 0.0094 | 600.0 | 1200 | 1.4615 | 0.7951 | 0.7957 |
| 0.0082 | 700.0 | 1400 | 1.4846 | 0.8103 | 0.8110 |
| 0.0066 | 800.0 | 1600 | 1.4729 | 0.7855 | 0.7866 |
| 0.0059 | 900.0 | 1800 | 1.4233 | 0.7766 | 0.7774 |
| 0.0035 | 1000.0 | 2000 | 1.6332 | 0.7927 | 0.7927 |
| 0.0028 | 1100.0 | 2200 | 1.7317 | 0.7866 | 0.7866 |
| 0.0035 | 1200.0 | 2400 | 1.5904 | 0.8018 | 0.8018 |
| 0.0023 | 1300.0 | 2600 | 1.7081 | 0.7866 | 0.7866 |
| 0.0023 | 1400.0 | 2800 | 1.7646 | 0.8044 | 0.8049 |
| 0.0021 | 1500.0 | 3000 | 1.6722 | 0.7896 | 0.7896 |
| 0.0022 | 1600.0 | 3200 | 1.7168 | 0.8015 | 0.8018 |
| 0.0015 | 1700.0 | 3400 | 1.7202 | 0.7835 | 0.7835 |
| 0.0015 | 1800.0 | 3600 | 1.8693 | 0.8077 | 0.8079 |
| 0.0015 | 1900.0 | 3800 | 1.7174 | 0.7986 | 0.7988 |
| 0.0014 | 2000.0 | 4000 | 1.7190 | 0.8046 | 0.8049 |
| 0.0012 | 2100.0 | 4200 | 1.8239 | 0.8077 | 0.8079 |
| 0.0015 | 2200.0 | 4400 | 1.7835 | 0.8018 | 0.8018 |
| 0.0009 | 2300.0 | 4600 | 1.9500 | 0.8044 | 0.8049 |
| 0.0011 | 2400.0 | 4800 | 1.8969 | 0.7957 | 0.7957 |
| 0.001 | 2500.0 | 5000 | 1.8572 | 0.8079 | 0.8079 |
| 0.0011 | 2600.0 | 5200 | 1.8664 | 0.7957 | 0.7957 |
| 0.0007 | 2700.0 | 5400 | 1.9480 | 0.7988 | 0.7988 |
| 0.0008 | 2800.0 | 5600 | 1.7967 | 0.8079 | 0.8079 |
| 0.0007 | 2900.0 | 5800 | 1.9412 | 0.8048 | 0.8049 |
| 0.001 | 3000.0 | 6000 | 1.8087 | 0.8049 | 0.8049 |
| 0.0007 | 3100.0 | 6200 | 1.7844 | 0.8078 | 0.8079 |
| 0.0008 | 3200.0 | 6400 | 1.8362 | 0.8232 | 0.8232 |
| 0.0006 | 3300.0 | 6600 | 1.9116 | 0.8048 | 0.8049 |
| 0.0007 | 3400.0 | 6800 | 1.7337 | 0.8079 | 0.8079 |
| 0.0006 | 3500.0 | 7000 | 1.8346 | 0.8047 | 0.8049 |
| 0.0005 | 3600.0 | 7200 | 1.8839 | 0.8079 | 0.8079 |
| 0.0004 | 3700.0 | 7400 | 1.9349 | 0.8077 | 0.8079 |
| 0.0006 | 3800.0 | 7600 | 1.9332 | 0.7957 | 0.7957 |
| 0.0004 | 3900.0 | 7800 | 2.0917 | 0.8018 | 0.8018 |
| 0.0004 | 4000.0 | 8000 | 1.9500 | 0.7988 | 0.7988 |
| 0.0003 | 4100.0 | 8200 | 1.9894 | 0.8018 | 0.8018 |
| 0.0003 | 4200.0 | 8400 | 2.1097 | 0.7988 | 0.7988 |
| 0.0004 | 4300.0 | 8600 | 1.9623 | 0.7896 | 0.7896 |
| 0.0003 | 4400.0 | 8800 | 2.0666 | 0.8079 | 0.8079 |
| 0.0004 | 4500.0 | 9000 | 1.9541 | 0.7957 | 0.7957 |
| 0.0003 | 4600.0 | 9200 | 1.9277 | 0.7926 | 0.7927 |
| 0.0002 | 4700.0 | 9400 | 2.0188 | 0.8018 | 0.8018 |
| 0.0003 | 4800.0 | 9600 | 2.0081 | 0.7988 | 0.7988 |
| 0.0002 | 4900.0 | 9800 | 2.0244 | 0.7988 | 0.7988 |
| 0.0003 | 5000.0 | 10000 | 2.0239 | 0.8018 | 0.8018 |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
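The card above gives no usage snippet. A minimal loading sketch, not part of the original card: it assumes the seqsight base model loads with `AutoModel` (the card does not state the architecture or task head) and that the adapter repo id matches this record's id.

```python
from peft import PeftModel
from transformers import AutoModel

# Assumption: AutoModel is the right Auto class for the seqsight base model, and
# trust_remote_code may be needed if it ships custom modeling code; swap in a
# task-specific class (e.g. a sequence-classification head) as appropriate.
base = AutoModel.from_pretrained("mahdibaghbanzadeh/seqsight_4096_512_27M", trust_remote_code=True)

# Attach the PEFT adapter trained on GUE_mouse_2 (repo id taken from this record).
model = PeftModel.from_pretrained(base, "mahdibaghbanzadeh/GUE_mouse_2-seqsight_4096_512_27M-L32_all")
model.eval()
```

`PeftModel.from_pretrained` downloads only the adapter weights and wraps the base model, which is why the card lists a `base_model` rather than shipping full weights.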
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_27M", "model-index": [{"name": "GUE_mouse_2-seqsight_4096_512_27M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_2-seqsight_4096_512_27M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_4096_512_27M", "region:us" ]
null
2024-04-15T18:08:26+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_27M #region-us
GUE\_mouse\_2-seqsight\_4096\_512\_27M-L32\_all =============================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_27M on the mahdibaghbanzadeh/GUE\_mouse\_2 dataset. It achieves the following results on the evaluation set: * Loss: 1.7682 * F1 Score: 0.8016 * Accuracy: 0.8018 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_27M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
text-generation
transformers
This outputs coherent words sometimes. It's the result of an attempt to merge Llama-2 and Yi-6B. Coherent words are a victory, but it seems resistant to further fine-tuning with QLoRA, and I'm not inclined to spend the GPU resources required for a full fine-tune. NO RELATION TO THE OTHER YI-9B.
{}
reallad/zombyi-9b
null
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-15T18:10:36+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
This outputs coherent words sometimes. It's the result of an attempt to merge Llama-2 and Yi-6B. Coherent words are a victory, but it seems resistant to further fine-tuning with QLoRA, and I'm not inclined to spend the GPU resources required for a full fine-tune. NO RELATION TO THE OTHER YI-9B.
[]
[ "TAGS\n#transformers #safetensors #llama #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
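The template's "How to Get Started" section above is empty. A minimal, assumption-laden sketch: the repo id comes from this record's id field, and the chat-style usage is inferred from the "gemma" and "conversational" tags; nothing below is confirmed by the card itself.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "lattavia/gemma-7b-chat-finetune-final-v2"  # from this record's id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# Assumes the repo ships a Gemma-style chat template (inferred from the tags).
messages = [{"role": "user", "content": "Hello, who are you?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```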
{"library_name": "transformers", "tags": []}
lattavia/gemma-7b-chat-finetune-final-v2
null
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-15T18:11:55+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]