Dataset columns:
- modelId: string (length 5–122)
- author: string (length 2–42)
- last_modified: timestamp[us, tz=UTC]
- downloads: int64 (0–738M)
- likes: int64 (0–11k)
- library_name: string (245 distinct values)
- tags: list (length 1–4.05k)
- pipeline_tag: string (48 distinct values)
- createdAt: timestamp[us, tz=UTC]
- card: string (length 1–901k)
nDimensional/Clarity
nDimensional
2023-11-28T13:21:02Z
387
36
diffusers
[ "diffusers", "text-to-image", "StableDiffusion", "Stable-Diffusion", "image-generation", "SD1.5", "SDv1.5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-01-23T01:04:35Z
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - StableDiffusion - Stable-Diffusion - image-generation - SD1.5 - SDv1.5 --- <h1>Clarity 🤠</h1> <img src="https://huggingface.co/Schisim/Clarity/resolve/main/Images/00241-3632331504.png" width=512/> VAE NOT REQUIRED BUT RECOMMENDED VAE - https://huggingface.co/stabilityai/sd-vae-ft-mse-original/tree/main File Structure for AUTOMATIC1111-webui : |──sd |----|──stable-diffusion-webui |----|----|──models |----|----|----|──VAE |----|----|----|----|──Put your VAE file here Merged Models A list of merged models can be found below in the description of the attached model version. Capabilities NSFW Photography SFW Photography is also possible, see "Trigger Words" below. Photorealistic 3D renders Emphasis on human anatomy Limitations Anything not listed above. This model was created as a baseline for a general-purpose model I'm working on. Stylized images and object images are possible, but require a little finesse to generate. Trigger Words This checkpoint does not contain any trigger words. However, placing some tags at the beginning of the prompt can heavily influence the generation. These tags include: "nsfw", "sfw", "erotica", "nudity", "3d render", and "cartoon". Note: For SFW generation, try adding sfw to your prompt and nsfw to your negative prompt. For NSFW generation, try adding either nsfw, erotica, or nudity to your prompt and sfw to your negative prompt. In general, this is more useful for generating sfw images. The same concept applies to 3d render and cartoon: I recommend leaving both 3d render and cartoon in your negative prompt when generating photographic images. Basic Prompt Guide This model heavily revolves around UnstablePhotorealv.5. This means that you can use the tagging system for PhotoReal, although I would recommend using a combination of the PhotoReal comma system and more natural language prompting. Guide to prompting with PhotoReal - https://docs.google.com/document/d/1-DDIHVbsYfynTp_rsKLu4b2tSQgxtO5F6pNsNla12k0/edit#heading=h.3znysh7 Example prompt using commas and natural language: Positive A Professional Full Body Photo, of a beautiful young woman, clothed, standing indoors, Caucasian, toned physique, strawberry red hair, neutral expression Negative I recommend something simple like: deformed, bad anatomy, disfigured, missing limb, floating limbs, twisted, blurry, fused fingers, long neck, words, logo, text, mutated hands, mutated fingers Modify as needed. For example, adding 3d render, cartoon to your negative prompt will help generate photographic images. The prompts for this model are fairly flexible, so experiment to find out what works best for you.
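A minimal diffusers sketch for this checkpoint, assuming it is published in standard `StableDiffusionPipeline` format (the repository id and example prompts come from this card; the step count and output filename are assumptions):

```python
import torch
from diffusers import StableDiffusionPipeline

# Hedged sketch, not an official example: load the checkpoint in half precision on GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "nDimensional/Clarity", torch_dtype=torch.float16
).to("cuda")

# Prompt and negative prompt adapted from the example in this card.
prompt = ("A Professional Full Body Photo, of a beautiful young woman, clothed, "
          "standing indoors, Caucasian, toned physique, strawberry red hair, neutral expression")
negative = ("deformed, bad anatomy, disfigured, missing limb, floating limbs, twisted, blurry, "
            "fused fingers, long neck, words, logo, text, mutated hands, mutated fingers, "
            "3d render, cartoon")
image = pipe(prompt, negative_prompt=negative, num_inference_steps=30).images[0]
image.save("clarity_example.png")
```

The recommended VAE above can presumably be loaded separately (for example with `AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")`) and passed to the pipeline's `vae` argument.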
lorahub/flan_t5_large-squad_v1.1
lorahub
2023-07-24T10:08:24Z
387
0
peft
[ "peft", "region:us" ]
null
2023-07-24T10:08:14Z
--- library_name: peft ---
lorahub/flan_t5_large-paws_wiki
lorahub
2023-07-24T10:23:12Z
387
0
peft
[ "peft", "region:us" ]
null
2023-07-24T10:23:03Z
--- library_name: peft ---
SargeZT/controlnet-v1e-sdxl-depth
SargeZT
2023-08-05T20:10:22Z
387
36
diffusers
[ "diffusers", "safetensors", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "controlnet", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-07-29T10:16:56Z
--- license: creativeml-openrail-m base_model: stabilityai/stable-diffusion-xl-base-1.0 tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - controlnet inference: true --- # controlnet-SargeZT/controlnet-v1e-sdxl-depth These are controlnet weights trained on stabilityai/stable-diffusion-xl-base-1.0 with depth maps. Note that the input depth maps are perceptually mapped from ZoeDepth. You can find some example images below. prompt: nightmare construction worker, unsettling ![images_0)](./images_0.png) prompt: android warrior, unsettling ![images_1)](./images_1.png) ## License [SDXL 1.0 License](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/LICENSE.md)
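A hedged diffusers sketch of how ControlNet weights like these are typically plugged into SDXL; the depth input is assumed to be a perceptually mapped ZoeDepth map as the card describes, and `depth.png` is a placeholder filename:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "SargeZT/controlnet-v1e-sdxl-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Placeholder conditioning image: a ZoeDepth-style depth map of the target scene.
depth = load_image("depth.png")
image = pipe("nightmare construction worker, unsettling", image=depth).images[0]
image.save("controlnet_depth_example.png")
```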
Yntec/NeverEndingDream768
Yntec
2023-09-01T10:36:27Z
387
1
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "text-to-image", "art", "artistic", "Lykon", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-09-01T09:01:53Z
--- language: - en license: other library_name: diffusers pipeline_tag: text-to-image tags: - stable-diffusion - text-to-image - art - artistic - Lykon --- # Never Ending Dream 768 768x768 version of this model for the Inference API. Also consider supporting Lykon on Patreon - https://www.patreon.com/Lykon275 Sample and prompt: ![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/BZiskc0CNpOnJudutbPaz.png) pretty CUTE girl, 1940, Magazine ad, Iconic. hyperrealistic, octane render, Painterly soft brush by yoshitomo nara ( 2 0 1 2 ), painting detailed pastel from fantasia ( 1 9 4 1 ) Official Repository: https://huggingface.co/Lykon/NeverEnding-Dream
DylanJHJ/bert-base-final-v0-ep2
DylanJHJ
2023-11-13T01:56:16Z
387
0
transformers
[ "transformers", "pytorch", "bert", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2023-11-13T01:24:24Z
--- license: apache-2.0 --- ```python import torch import torch.nn as nn import torch.nn.functional as F from torch.nn import CrossEntropyLoss, KLDivLoss from transformers.modeling_outputs import TokenClassifierOutput from transformers import BertModel, BertPreTrainedModel class BertForHighlightPrediction(BertPreTrainedModel): _keys_to_ignore_on_load_unexpected = [r"pooler"] def __init__(self, config, **model_kwargs): super().__init__(config) # self.model_args = model_kargs["model_args"] self.num_labels = config.num_labels self.bert = BertModel(config, add_pooling_layer=False) classifier_dropout = ( config.classifier_dropout if config.classifier_dropout is not None else config.hidden_dropout_prob ) self.dropout = nn.Dropout(classifier_dropout) self.tokens_clf = nn.Linear(config.hidden_size, config.num_labels) self.tau = model_kwargs.pop('tau', 1) self.gamma = model_kwargs.pop('gamma', 1) self.soft_labeling = model_kwargs.pop('soft_labeling', False) self.init_weights() self.softmax = nn.Softmax(dim=-1) def forward(self, input_ids=None, probs=None, # soft-labeling attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, output_attentions=None, output_hidden_states=None, return_dict=None,): outputs = self.bert( input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) tokens_output = outputs[0] highlight_logits = self.tokens_clf(self.dropout(tokens_output)) loss = None if labels is not None: loss_fct = CrossEntropyLoss() active_loss = attention_mask.view(-1) == 1 active_logits = highlight_logits.view(-1, self.num_labels) active_labels = torch.where( active_loss, labels.view(-1), torch.tensor(loss_fct.ignore_index).type_as(labels) ) loss_ce = loss_fct(active_logits, active_labels) loss_kl = 0 if self.soft_labeling: loss_fct = KLDivLoss(reduction='sum') active_mask = (attention_mask * token_type_ids).view(-1, 1) # BL 1 n_active = (active_mask == 1).sum() active_mask = active_mask.repeat(1, 2) # BL 2 input_logp = F.log_softmax(active_logits / self.tau, -1) # BL 2 target_p = torch.cat(( (1-probs).view(-1, 1), probs.view(-1, 1)), -1) # BL 2 loss_kl = loss_fct(input_logp, target_p * active_mask) / n_active loss = self.gamma * loss_ce + (1-self.gamma) * loss_kl return TokenClassifierOutput( loss=loss, logits=highlight_logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) @torch.no_grad() def inference(self, batch_inputs): # take the tokenized batch as input; hidden states are needed below outputs = self.forward(**batch_inputs, output_hidden_states=True) probabilities = self.softmax(self.tokens_clf(outputs.hidden_states[-1])) predictions = torch.argmax(probabilities, dim=-1) # active filtering active_tokens = batch_inputs['attention_mask'] == 1 active_predictions = torch.where( active_tokens, predictions, torch.tensor(-1).type_as(predictions) ) outputs = { "probabilities": probabilities[:, :, 1].detach(), # shape: (batch, length) "active_predictions": active_predictions.detach(), "active_tokens": active_tokens, } return outputs ```
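A hedged usage sketch for the class above; the tokenizer checkpoint, example sentence pair, and `num_labels` value are assumptions rather than details taken from this card:

```python
import torch
from transformers import AutoTokenizer

# Assumption: a standard BERT tokenizer is compatible with this checkpoint.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertForHighlightPrediction.from_pretrained(
    "DylanJHJ/bert-base-final-v0-ep2", num_labels=2
)
model.eval()

# Sentence-pair input: token_type_ids mark the second segment, which the soft-labeling loss uses.
batch_inputs = tokenizer(
    "what is the capital of france?",
    "paris is the capital and largest city of france.",
    return_tensors="pt",
    truncation=True,
)
with torch.no_grad():
    logits = model(**batch_inputs).logits
# Per-token probability of the "highlight" class (label index 1).
highlight_probs = torch.softmax(logits, dim=-1)[..., 1]
print(highlight_probs.shape)  # (batch, sequence_length)
```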
TheBloke/Mistral-7B-Merge-14-v0.1-GGUF
TheBloke
2023-12-18T15:25:22Z
387
4
transformers
[ "transformers", "gguf", "mistral", "en", "base_model:EmbeddedLLM/Mistral-7B-Merge-14-v0.1", "license:apache-2.0", "text-generation-inference", "region:us" ]
null
2023-12-18T15:20:54Z
--- base_model: EmbeddedLLM/Mistral-7B-Merge-14-v0.1 inference: false language: - en license: apache-2.0 model_creator: EmbeddedLLM model_name: Mistral 7B Merge 14 v0.1 model_type: mistral prompt_template: '{prompt} ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Mistral 7B Merge 14 v0.1 - GGUF - Model creator: [EmbeddedLLM](https://huggingface.co/EmbeddedLLM) - Original model: [Mistral 7B Merge 14 v0.1](https://huggingface.co/EmbeddedLLM/Mistral-7B-Merge-14-v0.1) <!-- description start --> ## Description This repo contains GGUF format model files for [EmbeddedLLM's Mistral 7B Merge 14 v0.1](https://huggingface.co/EmbeddedLLM/Mistral-7B-Merge-14-v0.1). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. 
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Mistral-7B-Merge-14-v0.1-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Mistral-7B-Merge-14-v0.1-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Mistral-7B-Merge-14-v0.1-GGUF) * [EmbeddedLLM's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/EmbeddedLLM/Mistral-7B-Merge-14-v0.1) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Unknown ``` {prompt} ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
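As a rough sanity check, the bits-per-weight figures above translate into file sizes close to the table below (assuming roughly 7.24B parameters for a Mistral-7B-class model and ignoring metadata and the mixed-type layers used by the `_M` variants; the 8.5 bpw figure for Q8_0 is an assumption, not stated above):

```python
# Back-of-the-envelope file-size estimate from bits-per-weight (bpw).
params = 7.24e9
for name, bpw in [("Q4_K_S", 4.5), ("Q5_K_S", 5.5), ("Q6_K", 6.5625), ("Q8_0", 8.5)]:
    print(f"{name}: ~{params * bpw / 8 / 1e9:.2f} GB")
# Prints roughly 4.07, 4.98, 5.94 and 7.69 GB, which lines up with the table
# below to within metadata overhead.
```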
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [mistral-7b-merge-14-v0.1.Q2_K.gguf](https://huggingface.co/TheBloke/Mistral-7B-Merge-14-v0.1-GGUF/blob/main/mistral-7b-merge-14-v0.1.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes | | [mistral-7b-merge-14-v0.1.Q3_K_S.gguf](https://huggingface.co/TheBloke/Mistral-7B-Merge-14-v0.1-GGUF/blob/main/mistral-7b-merge-14-v0.1.Q3_K_S.gguf) | Q3_K_S | 3 | 3.17 GB| 5.67 GB | very small, high quality loss | | [mistral-7b-merge-14-v0.1.Q3_K_M.gguf](https://huggingface.co/TheBloke/Mistral-7B-Merge-14-v0.1-GGUF/blob/main/mistral-7b-merge-14-v0.1.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss | | [mistral-7b-merge-14-v0.1.Q3_K_L.gguf](https://huggingface.co/TheBloke/Mistral-7B-Merge-14-v0.1-GGUF/blob/main/mistral-7b-merge-14-v0.1.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss | | [mistral-7b-merge-14-v0.1.Q4_0.gguf](https://huggingface.co/TheBloke/Mistral-7B-Merge-14-v0.1-GGUF/blob/main/mistral-7b-merge-14-v0.1.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [mistral-7b-merge-14-v0.1.Q4_K_S.gguf](https://huggingface.co/TheBloke/Mistral-7B-Merge-14-v0.1-GGUF/blob/main/mistral-7b-merge-14-v0.1.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss | | [mistral-7b-merge-14-v0.1.Q4_K_M.gguf](https://huggingface.co/TheBloke/Mistral-7B-Merge-14-v0.1-GGUF/blob/main/mistral-7b-merge-14-v0.1.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended | | [mistral-7b-merge-14-v0.1.Q5_0.gguf](https://huggingface.co/TheBloke/Mistral-7B-Merge-14-v0.1-GGUF/blob/main/mistral-7b-merge-14-v0.1.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [mistral-7b-merge-14-v0.1.Q5_K_S.gguf](https://huggingface.co/TheBloke/Mistral-7B-Merge-14-v0.1-GGUF/blob/main/mistral-7b-merge-14-v0.1.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended | | [mistral-7b-merge-14-v0.1.Q5_K_M.gguf](https://huggingface.co/TheBloke/Mistral-7B-Merge-14-v0.1-GGUF/blob/main/mistral-7b-merge-14-v0.1.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended | | [mistral-7b-merge-14-v0.1.Q6_K.gguf](https://huggingface.co/TheBloke/Mistral-7B-Merge-14-v0.1-GGUF/blob/main/mistral-7b-merge-14-v0.1.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss | | [mistral-7b-merge-14-v0.1.Q8_0.gguf](https://huggingface.co/TheBloke/Mistral-7B-Merge-14-v0.1-GGUF/blob/main/mistral-7b-merge-14-v0.1.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Mistral-7B-Merge-14-v0.1-GGUF and below it, a specific filename to download, such as: mistral-7b-merge-14-v0.1.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Mistral-7B-Merge-14-v0.1-GGUF mistral-7b-merge-14-v0.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Mistral-7B-Merge-14-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Mistral-7B-Merge-14-v0.1-GGUF mistral-7b-merge-14-v0.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m mistral-7b-merge-14-v0.1.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./mistral-7b-merge-14-v0.1.Q4_K_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "{prompt}", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./mistral-7b-merge-14-v0.1.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. 
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: EmbeddedLLM's Mistral 7B Merge 14 v0.1 # Model Description This is an experiment to test merging 14 models using DARE TIES 🦙 The merged model is then merged again with [janai-hq/trinity-v1](https://huggingface.co/janai-hq/trinity-v1) using Gradient SLERP. The result is a base model that performs quite well but requires some further instruction fine-tuning. The 14 models are as follows: 1. [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) 2. [ehartford/dolphin-2.2.1-mistral-7b](https://huggingface.co/ehartford/dolphin-2.2.1-mistral-7b) 3. [SciPhi/SciPhi-Mistral-7B-32k](https://huggingface.co/SciPhi/SciPhi-Mistral-7B-32k) 4. [ehartford/samantha-1.2-mistral-7b](https://huggingface.co/ehartford/samantha-1.2-mistral-7b) 5. [Arc53/docsgpt-7b-mistral](https://huggingface.co/Arc53/docsgpt-7b-mistral) 6. [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) 7. [Q-bert/MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling) 8. [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) 9. [v1olet/v1olet_marcoroni-go-bruins-merge-7B](https://huggingface.co/v1olet/v1olet_marcoroni-go-bruins-merge-7B) 10. [beowolx/MistralHermes-CodePro-7B-v1](https://huggingface.co/beowolx/MistralHermes-CodePro-7B-v1) 11. [TIGER-Lab/MAmmoTH-7B-Mistral](https://huggingface.co/TIGER-Lab/MAmmoTH-7B-Mistral) 12. [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) 13. 
[Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp) 14. [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) - base model: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) The yaml config file for this model is here: ```yaml slices: - sources: - model: EmbeddedLLM/Mistral-7B-Merge-14-v0 layer_range: [0, 32] - model: janai-hq/trinity-v1 layer_range: [0, 32] merge_method: slerp base_model: EmbeddedLLM/Mistral-7B-Merge-14-v0 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` <!-- original-model-card end -->
imfarzanansari/skintelligent-acne
imfarzanansari
2023-12-24T04:46:50Z
387
6
transformers
[ "transformers", "safetensors", "vit", "image-classification", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-12-24T04:24:55Z
--- license: mit language: - en metrics: - precision library_name: transformers pipeline_tag: image-classification --- # Acne Severity Detection Model ## Overview This model card provides documentation for the Acne Severity Detection model checkpoint used in the Hugging Face pipeline. The model is designed to assess acne severity levels, ranging from clear skin to very severe acne. ## Model Details The checkpoint includes the following files: - **`config.json`**: Model configuration settings. - **`model.safetensors`**: Serialized model parameters and architecture. - **`optimizer.pt`**: Optimizer state capturing the current model optimization. - **`preprocessor_config.json`**: Configuration file for the preprocessor. - **`rng_state.pth`**: Random number generator state for reproducibility. - **`scheduler.pt`**: Scheduler state for controlling learning rate schedules. - **`trainer_state.json`**: Trainer state with information about the training process. - **`training_args.bin`**: Binary file storing training arguments. ## Usage To utilize the model checkpoint, follow these steps: 1. Download this repository. 2. Ensure the required dependencies are installed (`pip install -r requirements.txt`). ## Severity Levels - **Level -1**: Clear Skin - **Level 0**: Occasional Spots - **Level 1**: Mild Acne - **Level 2**: Moderate Acne - **Level 3**: Severe Acne - **Level 4**: Very Severe Acne ## Important Notes - The model card provides insight into the model's purpose, capabilities, and usage instructions. - Ensure all necessary files are present in the `checkpoint` directory for proper functionality. ## License This Acne Severity Detection model checkpoint is licensed under the [MIT License](LICENSE). Please review and adhere to the license when using or modifying the code.
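A minimal inference sketch, assuming the standard `transformers` image-classification pipeline works with this checkpoint (the image path is a placeholder):

```python
from transformers import pipeline

# Hedged sketch: classify a face photo into one of the severity levels listed above.
classifier = pipeline("image-classification", model="imfarzanansari/skintelligent-acne")
predictions = classifier("face_photo.jpg")  # local path, URL, or PIL.Image
print(predictions)  # e.g. [{'label': '...', 'score': 0.87}, ...]
```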
abhinand/malayalam-llama-7b-instruct-v0.1
abhinand
2024-03-04T12:56:47Z
387
8
transformers
[ "transformers", "pytorch", "llama", "text-generation", "conversational", "en", "ml", "arxiv:2311.05845", "base_model:abhinand/malayalam-llama-pt-v0", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-23T02:20:49Z
--- language: - en - ml license: llama2 base_model: abhinand/malayalam-llama-pt-v0 model-index: - name: malayalam-llama-instruct-v0.1 results: [] --- # Malayalam LLaMA 7B Instruct v0.1 Welcome to the inaugural release of the Malayalam LLaMA 7B instruct model – an important step in advancing LLMs for the Malayalam language. This model is ready for immediate inference and is also primed for further fine-tuning to cater to your specific NLP tasks. To dive deep into the development and capabilities of this model, please read the [research paper](https://arxiv.org/abs/2311.05845) and the [introductory blog post (WIP)]() that outlines our journey and the model's potential impact. > **Note:** This model is based on the Tamil LLaMA series of models. The GitHub repository remains the same - [https://github.com/abhinand5/tamil-llama](https://github.com/abhinand5/tamil-llama). The base models and the updated code for Tamil LLaMA v0.2 (which this work is based on) will be released soon. If you appreciate this work and would like to support its continued development, consider [buying me a coffee](https://www.buymeacoffee.com/abhinand.b). Your support is invaluable and greatly appreciated. [!["Buy Me A Coffee"](https://www.buymeacoffee.com/assets/img/custom_images/orange_img.png)](https://www.buymeacoffee.com/abhinand.b) ## Demo: To access an easy-to-use, no-code demo, please open the provided Google Colab notebook. Complete instructions for usage are included within the notebook itself. <a href="https://colab.research.google.com/drive/11_RHZim_HubD2NskxSwq4X_NW4XlnrtS?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Demo In Colab"></a></center> ## Model description The Malayalam LLaMA models have been enhanced and tailored specifically with an extensive Malayalam vocabulary of ~16,000 tokens, building upon the foundation set by the original LLaMA-2. - **Model type:** A 7B parameter GPT-like model finetuned on ~500,000 samples consisting of an equal proportion of English and Malayalam samples. (Dataset will be released soon) - **Language(s):** Bilingual. English and Malayalam. - **License:** GNU General Public License v3.0 - **Finetuned from model:** [To be released soon]() - **Training Precision:** `bfloat16` - **Code:** [GitHub](https://github.com/abhinand5/tamil-llama) (To be updated soon) ## Prompt Template: ChatML ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` ## Benchmark Results Benchmarking was done using [LLM-Autoeval](https://github.com/mlabonne/llm-autoeval) on an RTX 3090 on [runpod](https://www.runpod.io/). > **Note:** Please note that discrepancies have been observed between the Open LLM Leaderboard scores and those obtained from local runs using the LM Eval Harness with identical configurations. The results mentioned here are based on our own benchmarking. To replicate these findings, you can utilize the LLM-Autoeval or use [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) locally with the configurations described in Open LLM Leaderboard's About page. 
| Benchmark | Llama 2 Chat | Tamil Llama v0.2 Instruct | Telugu Llama Instruct | Malayalam Llama Instruct | |---------------|--------------|---------------------------|-----------------------|--------------------------| | ARC Challenge (25-shot) | 52.9 | **53.75** | 52.47 | 52.82 | | TruthfulQA (0-shot) | 45.57 | 47.23 | **48.47** | 47.46 | | Hellaswag (10-shot) | **78.55** | 76.11 | 76.13 | 76.91 | | Winogrande (5-shot) | 71.74 | **73.95** | 71.74 | 73.16 | | AGI Eval (0-shot) | 29.3 | **30.95** | 28.44 | 29.6 | | BigBench (0-shot) | 32.6 | 33.08 | 32.99 | **33.26** | | Average | 51.78 | **52.51** | 51.71 | 52.2 | ## Related Models | Model | Type | Data | Base Model | # Params | Download Links | |--------------------------|-----------------------------|-------------------|----------------------|------|------------------------------------------------------------------------| | Tamil LLaMA 7B v0.1 Base | Base model | 12GB | LLaMA 7B | 7B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-7b-base-v0.1) | | Tamil LLaMA 13B v0.1 Base | Base model | 4GB | LLaMA 13B | 13B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-13b-base-v0.1) | | Tamil LLaMA 7B v0.1 Instruct | Instruction following model | 145k instructions | Tamil LLaMA 7B Base | 7B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-7b-instruct-v0.1) | | Tamil LLaMA 13B v0.1 Instruct | Instruction following model | 145k instructions | Tamil LLaMA 13B Base | 13B | [HF Hub](abhinand/tamil-llama-13b-instruct-v0.1) | | Tamil LLaMA 7B v0.2 Instruct | Instruction/Chat model | 420k instructions | Tamil LLaMA 7B Base v0.2 | 7B | [HF Hub](abhinand/tamil-llama-13b-instruct-v0.1) | | Telugu LLaMA 7B v0.2 Instruct | Instruction/Chat model | ~400k instructions | Telugu LLaMA 7B Base v0.1 | 7B | [HF Hub](https://huggingface.co/abhinand/telugu-llama-instruct-v0.1) | ## Example Usage ```python from transformers import LlamaForCausalLM, AutoTokenizer, pipeline model = LlamaForCausalLM.from_pretrained( "abhinand/malayalam-llama-instruct-v0.1", #load_in_8bit=True, # Set this depending on the GPU you have torch_dtype=torch.bfloat16, device_map={"": 0}, # Set this depending on the number of GPUs you have local_files_only=False # Optional ) model.eval() tokenizer = AutoTokenizer.from_pretrained("abhinand/malayalam-llama-instruct-v0.1") inf_pipeline = pipeline("conversational", model=model, tokenizer=tokenizer) def format_instruction(system_prompt, question, return_dict=False): if system_prompt is None: messages = [ {'content': question, 'role': 'user'}, ] else: messages = [ {'content': system_prompt, 'role': 'system'}, {'content': question, 'role': 'user'}, ] if return_dict: return messages prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) return prompt # Set the generation configuration according to your needs temperature = 0.6 repetition_penalty = 1.1 max_new_tokens = 256 SYSTEM_PROMPT = "You are an AI assistant who follows instructions extremely well. Do your best your best to help." INPUT = "Give 3 tips to lead a healthy life" instruction = format_instruction( system_prompt=SYSTEM_PROMPT, question=INPUT, return_dict=True, ) output = inf_pipeline( instruction, temperature=temperature, max_new_tokens=max_new_tokens, repetition_penalty=repetition_penalty ) print(output) ``` **Example Output:** ``` Conversation id: d57cdf33-01ff-4328-8efe-5c4fefdd6e77 system: You are an AI assistant who follows instructions extremely well. Do your best your best to help. 
user: Give 3 tips to lead a healthy life assistant: 1. Establish a regular sleep schedule: Aim for 7-9 hours of sleep per night, which is crucial for overall health and well-being. 2. Engage in physical activity regularly: Incorporate exercise into your daily routine, even if it's just a brisk walk or some light stretching. Regular physical activity can help reduce stress, improve mood, and maintain a healthy weight. 3. Eat a balanced diet: Consume a variety of whole grains, lean proteins, fruits, vegetables, and healthy fats. Limit processed foods, sugary drinks, and excessive amounts of sodium. ``` ## Usage Note It's important to note that the models have not undergone detoxification/censorship. Therefore, while they possess impressive linguistic capabilities, there is a possibility for them to generate content that could be deemed harmful or offensive. We urge users to exercise discretion and supervise the model's outputs closely, especially in public or sensitive applications. ## Meet the Developers Get to know the creators behind this innovative model and follow their contributions to the field: - [Abhinand Balachandran](https://www.linkedin.com/in/abhinand-05/) ## Citation If you use this model or any of the Tamil-Llama related work in your research, please cite: ```bibtex @misc{balachandran2023tamilllama, title={Tamil-Llama: A New Tamil Language Model Based on Llama 2}, author={Abhinand Balachandran}, year={2023}, eprint={2311.05845}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` We hope this model serves as a valuable tool in your NLP toolkit and look forward to seeing the advancements it will enable in the understanding and generation of the Malayalam language. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_abhinand__malayalam-llama-7b-instruct-v0.1) | Metric |Value| |---------------------------------|----:| |Avg. |39.69| |AI2 Reasoning Challenge (25-Shot)|37.20| |HellaSwag (10-Shot) |67.81| |MMLU (5-Shot) |23.12| |TruthfulQA (0-shot) |47.11| |Winogrande (5-shot) |62.90| |GSM8k (5-shot) | 0.00|
kubernetes-bad/CharGen-v2-GGUF
kubernetes-bad
2024-02-07T23:09:58Z
387
6
null
[ "gguf", "license:cc-by-nc-4.0", "region:us" ]
null
2024-02-07T22:52:41Z
--- license: cc-by-nc-4.0 --- # CharGen v2 ![CharGen v2](assets/cover_art.jpg) ## Live version https://chargen.kubes-lab.com ## Overview This repository contains GGUF quants of the [CharGen v2](https://huggingface.co/kubernetes-bad/chargen-v2) model. Please see the [original repository](https://huggingface.co/kubernetes-bad/chargen-v2) for the full model card, including the prompting format and other details. CharGen is a model that helps you write characters to role-play with. It produces a character description based on your input prompt, step by step, in a dialogue format. Warning: this model was trained on some NSFW content, so it may produce NSFW characters. CharGen v2 is the result of several months of work. It's trained on a custom non-synthetic dataset, curated by hand. Read below on how it came together.
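A hedged local-inference sketch with `llama-cpp-python`; the quant filename, prompt, and generation settings below are assumptions, and the proper step-by-step prompting format is described in the original repository's card:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./chargen-v2.Q4_K_M.gguf",  # placeholder filename: use any quant from this repo
    n_ctx=4096,
    n_gpu_layers=-1,  # offload all layers if a GPU is available
)
out = llm(
    "Write the first step of a character description for a stoic dwarven blacksmith.",
    max_tokens=256,
    temperature=0.8,
)
print(out["choices"][0]["text"])
```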
mradermacher/Plap-8x13B-GGUF
mradermacher
2024-05-06T06:14:27Z
387
0
transformers
[ "transformers", "gguf", "not-for-all-audiences", "nsfw", "en", "base_model:Undi95/Plap-8x13B", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-03-13T10:53:34Z
--- base_model: Undi95/Plap-8x13B language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher tags: - not-for-all-audiences - nsfw --- ## About static quants of https://huggingface.co/Undi95/Plap-8x13B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Plap-8x13B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Plap-8x13B-GGUF/resolve/main/Plap-8x13B.Q2_K.gguf) | Q2_K | 28.4 | | | [GGUF](https://huggingface.co/mradermacher/Plap-8x13B-GGUF/resolve/main/Plap-8x13B.IQ3_XS.gguf) | IQ3_XS | 31.4 | | | [GGUF](https://huggingface.co/mradermacher/Plap-8x13B-GGUF/resolve/main/Plap-8x13B.IQ3_S.gguf) | IQ3_S | 33.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Plap-8x13B-GGUF/resolve/main/Plap-8x13B.Q3_K_S.gguf) | Q3_K_S | 33.1 | | | [GGUF](https://huggingface.co/mradermacher/Plap-8x13B-GGUF/resolve/main/Plap-8x13B.IQ3_M.gguf) | IQ3_M | 34.6 | | | [GGUF](https://huggingface.co/mradermacher/Plap-8x13B-GGUF/resolve/main/Plap-8x13B.Q3_K_M.gguf) | Q3_K_M | 36.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Plap-8x13B-GGUF/resolve/main/Plap-8x13B.Q3_K_L.gguf) | Q3_K_L | 38.7 | | | [GGUF](https://huggingface.co/mradermacher/Plap-8x13B-GGUF/resolve/main/Plap-8x13B.IQ4_XS.gguf) | IQ4_XS | 40.6 | | | [GGUF](https://huggingface.co/mradermacher/Plap-8x13B-GGUF/resolve/main/Plap-8x13B.Q4_K_S.gguf) | Q4_K_S | 42.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Plap-8x13B-GGUF/resolve/main/Plap-8x13B.Q4_K_M.gguf) | Q4_K_M | 45.2 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/Plap-8x13B-GGUF/resolve/main/Plap-8x13B.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Plap-8x13B-GGUF/resolve/main/Plap-8x13B.Q5_K_S.gguf.part2of2) | Q5_K_S | 50.9 | | | [PART 1](https://huggingface.co/mradermacher/Plap-8x13B-GGUF/resolve/main/Plap-8x13B.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Plap-8x13B-GGUF/resolve/main/Plap-8x13B.Q5_K_M.gguf.part2of2) | Q5_K_M | 52.4 | | | [PART 1](https://huggingface.co/mradermacher/Plap-8x13B-GGUF/resolve/main/Plap-8x13B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Plap-8x13B-GGUF/resolve/main/Plap-8x13B.Q6_K.gguf.part2of2) | Q6_K | 60.3 | very good quality | | [PART 1](https://huggingface.co/mradermacher/Plap-8x13B-GGUF/resolve/main/Plap-8x13B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Plap-8x13B-GGUF/resolve/main/Plap-8x13B.Q8_0.gguf.part2of2) | Q8_0 | 77.3 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
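For the multi-part files in the table above (Q5_K_S and larger), one simple way to reassemble them before loading is to stream the parts together in order; a hedged sketch, with the Q6_K filenames from the table as the example:

```python
# Concatenate the .partXofY pieces into a single GGUF file without loading
# tens of gigabytes into memory at once.
import shutil
from pathlib import Path

parts = sorted(Path(".").glob("Plap-8x13B.Q6_K.gguf.part*"))
with open("Plap-8x13B.Q6_K.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as chunk:
            shutil.copyfileobj(chunk, merged)
```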
mradermacher/airoboros-34b-3.3-i1-GGUF
mradermacher
2024-05-06T05:21:32Z
387
1
transformers
[ "transformers", "gguf", "en", "dataset:jondurbin/airoboros-3.2", "dataset:bluemoon-fandom-1-1-rp-cleaned", "dataset:boolq", "dataset:jondurbin/gutenberg-dpo-v0.1", "dataset:LDJnr/Capybara", "dataset:jondurbin/cinematika-v0.1", "dataset:glaiveai/glaive-function-calling-v2", "dataset:grimulkan/LimaRP-augmented", "dataset:piqa", "dataset:Vezora/Tested-22k-Python-Alpaca", "dataset:mattpscott/airoboros-summarization", "dataset:unalignment/toxic-dpo-v0.2", "base_model:jondurbin/airoboros-34b-3.3", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-03T02:52:22Z
--- base_model: jondurbin/airoboros-34b-3.3 datasets: - jondurbin/airoboros-3.2 - bluemoon-fandom-1-1-rp-cleaned - boolq - jondurbin/gutenberg-dpo-v0.1 - LDJnr/Capybara - jondurbin/cinematika-v0.1 - glaiveai/glaive-function-calling-v2 - grimulkan/LimaRP-augmented - piqa - Vezora/Tested-22k-Python-Alpaca - mattpscott/airoboros-summarization - unalignment/toxic-dpo-v0.2 language: - en library_name: transformers license: other license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE license_name: yi-license quantized_by: mradermacher --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/jondurbin/airoboros-34b-3.3 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/airoboros-34b-3.3-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ1_S.gguf) | i1-IQ1_S | 8.2 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ1_M.gguf) | i1-IQ1_M | 8.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 10.0 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ2_XS.gguf) | i1-IQ2_XS | 11.0 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ2_S.gguf) | i1-IQ2_S | 11.6 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ2_M.gguf) | i1-IQ2_M | 12.5 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q2_K.gguf) | i1-Q2_K | 13.5 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 14.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ3_XS.gguf) | i1-IQ3_XS | 14.9 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q3_K_S.gguf) | i1-Q3_K_S | 15.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ3_S.gguf) | i1-IQ3_S | 15.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ3_M.gguf) | i1-IQ3_M | 16.2 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q3_K_M.gguf) | i1-Q3_K_M | 17.3 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q3_K_L.gguf) | i1-Q3_K_L | 18.8 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ4_XS.gguf) | i1-IQ4_XS | 19.1 | | | 
[GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q4_0.gguf) | i1-Q4_0 | 20.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q4_K_S.gguf) | i1-Q4_K_S | 20.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q4_K_M.gguf) | i1-Q4_K_M | 21.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q5_K_S.gguf) | i1-Q5_K_S | 24.3 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q5_K_M.gguf) | i1-Q5_K_M | 25.0 | | | [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q6_K.gguf) | i1-Q6_K | 28.9 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
QuantFactory/Mistral-7B-OpenOrca-GGUF
QuantFactory
2024-04-10T01:45:25Z
387
0
transformers
[ "transformers", "gguf", "mistral", "conversational", "text-generation-inference", "text-generation", "en", "base_model:Open-Orca/Mistral-7B-OpenOrca", "license:apache-2.0", "region:us" ]
text-generation
2024-04-07T01:07:42Z
--- license: apache-2.0 language: - en library_name: transformers pipeline_tag: text-generation tags: - mistral - conversational - text-generation-inference base_model: Open-Orca/Mistral-7B-OpenOrca inference: false --- # Mistral-7B-OpenOrca-GGUF - Quantized version of [Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) - Created using llama.cpp ## Available Quants * Q2_K * Q3_K_L * Q3_K_M * Q3_K_S * Q4_0 * Q4_K_M * Q4_K_S * Q5_0 * Q5_K_M * Q5_K_S * Q6_K * Q8_0
johnsnowlabs/PhiMerge-2.7B-Dare
johnsnowlabs
2024-04-09T16:56:45Z
387
0
transformers
[ "transformers", "safetensors", "phi", "feature-extraction", "merge", "mergekit", "lazymergekit", "abacaj/phi-2-super", "mobiuslabsgmbh/aanaphi2-v0.1", "custom_code", "base_model:abacaj/phi-2-super", "base_model:mobiuslabsgmbh/aanaphi2-v0.1", "license:apache-2.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
feature-extraction
2024-04-08T20:13:24Z
--- tags: - merge - mergekit - lazymergekit - abacaj/phi-2-super - mobiuslabsgmbh/aanaphi2-v0.1 base_model: - abacaj/phi-2-super - mobiuslabsgmbh/aanaphi2-v0.1 license: apache-2.0 --- # PhiMerge-2.7B-Dare ![image/png](https://cdn-uploads.huggingface.co/production/uploads/660cfe98280a82e38fe4ef49/rcSJbgdC-9F9MyUwKkhEb.png) PhiMerge-2.7B-Dare is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [abacaj/phi-2-super](https://huggingface.co/abacaj/phi-2-super) * [mobiuslabsgmbh/aanaphi2-v0.1](https://huggingface.co/mobiuslabsgmbh/aanaphi2-v0.1) ## 🏆 Evaluation results ### Coming Soon ## 🧩 Configuration ```yaml models: - model: microsoft/phi-2 # No parameters necessary for base model - model: abacaj/phi-2-super parameters: density: 0.53 weight: 0.5 - model: mobiuslabsgmbh/aanaphi2-v0.1 parameters: density: 0.53 weight: 0.45 merge_method: dare_ties base_model: microsoft/phi-2 parameters: int8_mask: true dtype: float16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "johnsnowlabs/PhiMerge-2.7B-Dare" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
paloalma/ECE-TW3-JRGL-V5
paloalma
2024-04-11T20:14:23Z
387
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "moreh/MoMo-72B-lora-1.8.7-DPO", "ibivibiv/alpaca-dragon-72b-v1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-11T19:39:15Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - moreh/MoMo-72B-lora-1.8.7-DPO - ibivibiv/alpaca-dragon-72b-v1 --- # ECE-TW3-JRGL-V5 ECE-TW3-JRGL-V5 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [moreh/MoMo-72B-lora-1.8.7-DPO](https://huggingface.co/moreh/MoMo-72B-lora-1.8.7-DPO) * [ibivibiv/alpaca-dragon-72b-v1](https://huggingface.co/ibivibiv/alpaca-dragon-72b-v1) ## 🧩 Configuration
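A hedged usage sketch (not taken from the card): the merged model can presumably be loaded like any other `transformers` causal LM, keeping in mind that a 72B-class merge needs several GPUs or offloading; `device_map="auto"` below is an assumption about the available hardware:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "paloalma/ECE-TW3-JRGL-V5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
inputs = tokenizer("What is a large language model?", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```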
RichardErkhov/GroNLP_-_gpt2-small-dutch-gguf
RichardErkhov
2024-04-17T10:23:55Z
387
0
null
[ "gguf", "arxiv:2012.05628", "region:us" ]
null
2024-04-17T10:20:00Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) gpt2-small-dutch - GGUF - Model creator: https://huggingface.co/GroNLP/ - Original model: https://huggingface.co/GroNLP/gpt2-small-dutch/ | Name | Quant method | Size | | ---- | ---- | ---- | | [gpt2-small-dutch.Q2_K.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-dutch-gguf/blob/main/gpt2-small-dutch.Q2_K.gguf) | Q2_K | 0.07GB | | [gpt2-small-dutch.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-dutch-gguf/blob/main/gpt2-small-dutch.IQ3_XS.gguf) | IQ3_XS | 0.07GB | | [gpt2-small-dutch.IQ3_S.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-dutch-gguf/blob/main/gpt2-small-dutch.IQ3_S.gguf) | IQ3_S | 0.07GB | | [gpt2-small-dutch.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-dutch-gguf/blob/main/gpt2-small-dutch.Q3_K_S.gguf) | Q3_K_S | 0.07GB | | [gpt2-small-dutch.IQ3_M.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-dutch-gguf/blob/main/gpt2-small-dutch.IQ3_M.gguf) | IQ3_M | 0.08GB | | [gpt2-small-dutch.Q3_K.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-dutch-gguf/blob/main/gpt2-small-dutch.Q3_K.gguf) | Q3_K | 0.08GB | | [gpt2-small-dutch.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-dutch-gguf/blob/main/gpt2-small-dutch.Q3_K_M.gguf) | Q3_K_M | 0.08GB | | [gpt2-small-dutch.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-dutch-gguf/blob/main/gpt2-small-dutch.Q3_K_L.gguf) | Q3_K_L | 0.08GB | | [gpt2-small-dutch.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-dutch-gguf/blob/main/gpt2-small-dutch.IQ4_XS.gguf) | IQ4_XS | 0.08GB | | [gpt2-small-dutch.Q4_0.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-dutch-gguf/blob/main/gpt2-small-dutch.Q4_0.gguf) | Q4_0 | 0.09GB | | [gpt2-small-dutch.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-dutch-gguf/blob/main/gpt2-small-dutch.IQ4_NL.gguf) | IQ4_NL | 0.09GB | | [gpt2-small-dutch.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-dutch-gguf/blob/main/gpt2-small-dutch.Q4_K_S.gguf) | Q4_K_S | 0.09GB | | [gpt2-small-dutch.Q4_K.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-dutch-gguf/blob/main/gpt2-small-dutch.Q4_K.gguf) | Q4_K | 0.09GB | | [gpt2-small-dutch.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-dutch-gguf/blob/main/gpt2-small-dutch.Q4_K_M.gguf) | Q4_K_M | 0.09GB | | [gpt2-small-dutch.Q4_1.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-dutch-gguf/blob/main/gpt2-small-dutch.Q4_1.gguf) | Q4_1 | 0.09GB | | [gpt2-small-dutch.Q5_0.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-dutch-gguf/blob/main/gpt2-small-dutch.Q5_0.gguf) | Q5_0 | 0.1GB | | [gpt2-small-dutch.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-dutch-gguf/blob/main/gpt2-small-dutch.Q5_K_S.gguf) | Q5_K_S | 0.1GB | | [gpt2-small-dutch.Q5_K.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-dutch-gguf/blob/main/gpt2-small-dutch.Q5_K.gguf) | Q5_K | 0.11GB | | [gpt2-small-dutch.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-dutch-gguf/blob/main/gpt2-small-dutch.Q5_K_M.gguf) | Q5_K_M | 0.11GB | | [gpt2-small-dutch.Q5_1.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-dutch-gguf/blob/main/gpt2-small-dutch.Q5_1.gguf) | Q5_1 | 0.11GB | | 
[gpt2-small-dutch.Q6_K.gguf](https://huggingface.co/RichardErkhov/GroNLP_-_gpt2-small-dutch-gguf/blob/main/gpt2-small-dutch.Q6_K.gguf) | Q6_K | 0.12GB | Original model description: --- language: nl tags: - adaption - recycled - gpt2-small pipeline_tag: text-generation --- # GPT-2 recycled for Dutch (small) [Wietse de Vries](https://www.semanticscholar.org/author/Wietse-de-Vries/144611157) • [Malvina Nissim](https://www.semanticscholar.org/author/M.-Nissim/2742475) ## Model description This model is based on the small OpenAI GPT-2 ([`gpt2`](https://huggingface.co/gpt2)) model. For details, check out our paper on [arXiv](https://arxiv.org/abs/2012.05628) and the code on [Github](https://github.com/wietsedv/gpt2-recycle). ## Related models ### Dutch - [`gpt2-small-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-small-dutch-embeddings): Small model size with only retrained lexical embeddings. - [`gpt2-small-dutch`](https://huggingface.co/GroNLP/gpt2-small-dutch): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**) - [`gpt2-medium-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-dutch-embeddings): Medium model size with only retrained lexical embeddings. ### Italian - [`gpt2-small-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-small-italian-embeddings): Small model size with only retrained lexical embeddings. - [`gpt2-small-italian`](https://huggingface.co/GroNLP/gpt2-small-italian): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**) - [`gpt2-medium-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-italian-embeddings): Medium model size with only retrained lexical embeddings. ## How to use ```python from transformers import pipeline pipe = pipeline("text-generation", model="GroNLP/gpt2-small-dutch") ``` ```python from transformers import AutoTokenizer, AutoModel, TFAutoModel tokenizer = AutoTokenizer.from_pretrained("GroNLP/gpt2-small-dutch") model = AutoModel.from_pretrained("GroNLP/gpt2-small-dutch") # PyTorch model = TFAutoModel.from_pretrained("GroNLP/gpt2-small-dutch") # Tensorflow ``` ## BibTeX entry ```bibtex @misc{devries2020good, title={As good as new. How to successfully recycle English GPT-2 to make models for other languages}, author={Wietse de Vries and Malvina Nissim}, year={2020}, eprint={2012.05628}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
jsfs11/MixtralxWizardLM2-8x22B-SLERP-v0.2
jsfs11
2024-04-19T04:48:06Z
387
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "mergekit", "merge", "conversational", "base_model:alpindale/WizardLM-2-8x22B", "base_model:mistralai/Mixtral-8x22B-Instruct-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-19T02:35:29Z
--- base_model: - alpindale/WizardLM-2-8x22B - mistralai/Mixtral-8x22B-Instruct-v0.1 tags: - mergekit - merge license: apache-2.0 --- # Should be working. * Test merge of two extremely large MoE models using SLERP. Don't know if it's working correctly yet, haven't had the time or hardware to test. # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [alpindale/WizardLM-2-8x22B](https://huggingface.co/alpindale/WizardLM-2-8x22B) * [mistralai/Mixtral-8x22B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: mistralai/Mixtral-8x22B-Instruct-v0.1 layer_range: [0, 56] - model: alpindale/WizardLM-2-8x22B layer_range: [0, 56] merge_method: slerp base_model: mistralai/Mixtral-8x22B-Instruct-v0.1 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
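For readers unfamiliar with the merge method named in this card, the sketch below illustrates what spherical linear interpolation (SLERP) does to a pair of weight tensors for a given interpolation factor `t`. It is only an illustration of the idea, not mergekit's actual implementation; the tensor shapes and the `t` value are made up.

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors of the same shape."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    omega = torch.arccos(torch.clamp(a_unit @ b_unit, -1.0, 1.0))  # angle between the weight vectors
    if omega.abs() < eps:
        return (1.0 - t) * a + t * b  # nearly parallel: fall back to plain linear interpolation
    so = torch.sin(omega)
    merged = (torch.sin((1.0 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return merged.reshape(a.shape).to(a.dtype)

# Toy example: two random "weight matrices" merged at t = 0.5.
w_base, w_other = torch.randn(16, 16), torch.randn(16, 16)
print(slerp(0.5, w_base, w_other).shape)
```

In the config above, mergekit applies this kind of interpolation tensor by tensor, with the `t` schedule varying across layers and between the `self_attn` and `mlp` filters.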
victunes/TherapyLlama-8B-v1-GGUF
victunes
2024-04-29T20:11:23Z
387
8
null
[ "gguf", "dataset:victunes/nart-100k-synthetic-buddy-mixed-names", "license:cc-by-nc-4.0", "region:us" ]
null
2024-04-22T09:19:19Z
--- license: cc-by-nc-4.0 datasets: - victunes/nart-100k-synthetic-buddy-mixed-names --- **Original:** https://huggingface.co/victunes/TherapyLlama-8B-v1 ## IMPORTANT: Use Vicuna format This was trained with the Vicuna 1.1 format. Use the Vicuna format! Do NOT use the Llama 3 format. # TherapyLlama 8B v1 _Buddy is here for you._ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f07d05279d2d8f725bf0c3/A-ckcZ9H0Ee1n_ls2FM41.png) Trained on Llama 3 8B using a modified version of [jerryjalapeno/nart-100k-synthetic](https://huggingface.co/datasets/jerryjalapeno/nart-100k-synthetic). It is a Llama 3 version of https://huggingface.co/victunes/TherapyBeagle-11B-v2 TherapyLlama is _hopefully_ aligned to be helpful, healthy, and comforting. ## IMPORTANT: Use Vicuna format This was trained with the Vicuna 1.1 format. Use the Vicuna format! Do NOT use the Llama 3 format. ## Usage - Do not hold back on Buddy. - Open up to Buddy. - Pour your heart out to Buddy. - Engage with Buddy. - Remember that Buddy is just an AI. ## Notes - Tested with the Vicuna 1.1 Format - You might be assigned a random name if you don't give yourself one. - RP capabilities are weaker compared to TherapyBeagle. - Make sure to say **"You are Socrates"** for its best RP attempt. ## Disclaimer **TherapyLlama is NOT a real therapist. It is a friendly AI that mimics empathy and psychotherapy. It is an illusion without the slightest clue who you are as a person. As much as it can help you with self-discovery, A LLAMA IS NOT A SUBSTITUTE FOR a real professional.** With that said, I found it to be a very helpful tool in unravelling one's self. I hope this model can help soothe minds and hearts. ## Examples ### LLAMA 3 INSTRUCT ``` {{[INPUT]}} Hello, I'm victunes {{[OUTPUT]}} Hi victunes! Welcome to our session. I'm here to listen and support you. How are you feeling today? {{[INPUT]}} I've been feeling down lately. {{[OUTPUT]}} I'm sorry to hear that. Can you tell me more about what's been going on? {{[INPUT]}} I think I'm pretty lonely. {{[OUTPUT]}} Loneliness can be really tough. It sounds like it's been weighing heavily on your mind. What makes you feel lonely? {{[INPUT]}} My friend died recently. {{[OUTPUT]}} Oh, I'm so sorry for your loss, victunes. Losing a friend is never easy. How have you been coping with the grief? ``` **Original:** https://huggingface.co/victunes/TherapyLlama-8B-v1
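Since the card insists on the Vicuna 1.1 format while the example above uses generic `{{[INPUT]}}`/`{{[OUTPUT]}}` placeholders, here is a minimal sketch of how a Vicuna 1.1-style prompt is commonly assembled. The system preamble is the generic Vicuna text and is an assumption, not something this card specifies.

```python
# Sketch of a Vicuna 1.1-style prompt builder; the SYSTEM text is assumed, not taken from this card.
SYSTEM = ("A chat between a curious user and an artificial intelligence assistant. "
          "The assistant gives helpful, detailed, and polite answers to the user's questions.")

def build_vicuna_prompt(turns: list[tuple[str, str]], next_user_message: str) -> str:
    """turns: (user, assistant) pairs already exchanged; returns the prompt for the next reply."""
    parts = [SYSTEM]
    for user, assistant in turns:
        parts.append(f"USER: {user} ASSISTANT: {assistant}</s>")
    parts.append(f"USER: {next_user_message} ASSISTANT:")
    return " ".join(parts)

print(build_vicuna_prompt([("Hello, I'm victunes", "Hi victunes! How are you feeling today?")],
                          "I've been feeling down lately."))
```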
ChuGyouk/Llama-3-6.3B-no-healing
ChuGyouk
2024-04-23T15:35:41Z
387
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "en", "base_model:meta-llama/Meta-Llama-3-8B", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-23T14:52:15Z
--- base_model: - meta-llama/Meta-Llama-3-8B library_name: transformers tags: - mergekit - merge license: llama3 language: - en --- # merged This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) ### Configuration The following YAML configuration was used to produce this model: ```yaml dtype: bfloat16 merge_method: passthrough slices: - sources: - layer_range: [0, 22] model: model: path: meta-llama/Meta-Llama-3-8B - sources: - layer_range: [30, 32] model: model: path: meta-llama/Meta-Llama-3-8B ```
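For context, the slices above keep decoder layers 0-21 and 30-31 of the 32-layer base model, i.e. 24 layers in total, which is where the roughly 6.3B parameter count in the repository name comes from. Below is a quick sanity check of the resulting config; the expected value of 24 follows from the slice arithmetic and has not been verified here.

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("ChuGyouk/Llama-3-6.3B-no-healing")
print(cfg.num_hidden_layers)  # expected: 24 (22 layers from [0, 22] plus 2 from [30, 32])
```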
Nhoodie/llama-3-tall-crocodile-v0.1
Nhoodie
2024-04-29T04:09:02Z
387
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-29T03:24:58Z
--- license: other license_name: llama3 license_link: LICENSE base_model: [] library_name: transformers tags: - mergekit - merge --- # llama-3-sqrt-crocodile-v0.2A This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * llama-3-sqrt-crocodile-v0.0A/the-operator * llama-3-sqrt-crocodile-v0.0A/sqrt-talker ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: Orenguteng/Lexi-Llama-3-8B-Uncensored parameters: weight: [0.2, 0.3, 0.4, 0.6] layer_range: [0, 32] - model: NousResearch/Meta-Llama-3-8B parameters: weight: [0.6, 0.2, 0.2, 0.1] layer_range: [0, 32] - model: NousResearch/Meta-Llama-3-8B-Instruct parameters: weight: [0.2, 0.3, 0.85, 0.3] layer_range: [0, 32] merge_method: dare_linear base_model: NousResearch/Meta-Llama-3-8B-Instruct dtype: bfloat16 name: Uninstruct-Uncensored --- models: - model: cognitivecomputations/dolphin-2.9-llama3-8b parameters: weight: [0.25, 0.4, 0.35, 0.35] density: [0.3, 0.45, 0.2, 0.6] layer_range: [0, 32] - model: NousResearch/Meta-Llama-3-8B parameters: weight: [0.15, 0.25, 0.05, 0] density: [0.2, 0.3, 0.4, 0.1] - model: Undi95/Llama-3-Unholy-8B parameters: weight: [0.4, 0.25, 0.45, 0.35] density: [0.2, 0.15, 1.5, 0.1] layer_range: [0, 32] - model: Uninstruct-Uncensored parameters: weight: [0.3, 0.1, 0.25, 0.3] density: [0.3, 0.15, 2.5, 0.2] layer_range: [0, 32] merge_method: dare_ties base_model: Uninstruct-Uncensored dtype: bfloat16 name: augmented-dolphin-hap --- models: - model: vicgalle/Configurable-Llama-3-8B-v0.3 parameters: weight: [0.5, 0.3, 0.1] - model: hiieu/Meta-Llama-3-8B-Instruct-function-calling-json-mode parameters: weight: 0.5 - model: Trelis/Meta-Llama-3-8B-Instruct-function-calling parameters: weight: 0.3 layer_range: [0, 32] - model: Rookie/Llama-3-8B-Instruct-Chinese parameters: weight: 0.2 layer_range: [0, 32] - model: Uninstruct-Uncensored parameters: weight: [0.7, 0.4, 0.25, 0.1] layer_range: [0, 32] merge_method: model_stock base_model: Uninstruct-Uncensored dtype: bfloat16 name: the-operator --- models: - model: vicgalle/Configurable-Llama-3-8B-v0.3 parameters: weight: 0.7 - model: hiieu/Meta-Llama-3-8B-Instruct-function-calling-json-mode parameters: weight: 0.1 - model: Trelis/Meta-Llama-3-8B-Instruct-function-calling parameters: weight: 0.03 layer_range: [0, 32] - model: Rookie/Llama-3-8B-Instruct-Chinese parameters: weight: 0.07 layer_range: [0, 32] - model: Uninstruct-Uncensored parameters: weight: 0.1 layer_range: [0, 32] merge_method: model_stock base_model: Uninstruct-Uncensored dtype: bfloat16 name: her-calculator --- models: - model: her-calculator parameters: density: 0.7 # density gradient weight: [0.7, 0.5, 0.1, 0.8] - model: augmented-dolphin-hap parameters: weight: 0.7 merge_method: slerp base_model: her-calculator parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 # fallback for rest of tensors dtype: float16 name: sqrt-talker --- slices: - sources: - model: the-operator layer_range: [0, 6] - sources: - model: sqrt-talker layer_range: [3, 9] - sources: - model: the-operator layer_range: [6, 12] - sources: - model: sqrt-talker layer_range: [10, 16] - sources: - model: the-operator layer_range: [13, 19] - sources: - model: sqrt-talker layer_range: [16, 22] - sources: - model: the-operator 
layer_range: [19, 25] - sources: - model: sqrt-talker layer_range: [22, 28] - sources: - model: the-operator layer_range: [26, 32] merge_method: passthrough dtype: bfloat16 name: llama-3-sqrt-crocodile-v0.2A ```
athirdpath/Llama-3-11b-Instruct
athirdpath
2024-05-04T09:22:55Z
387
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-02T03:04:58Z
--- license: llama3 --- I'm back and doing well! I've got a job in the field now, so we'll see in the long run how that affects my open source output. Here we have an 11b Llama 3 instruct model for future work. EDIT: Made a yaml mistake with part funnel, but it still works well. --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/633a809fa4a8f33508dce32c/jJxgpSwdSal2XWsJ0KlG8.png) This is a merge stock of 3 models: - Part Wave - Part Block - Part Funnel With Part Funnel as the base. --- Part Wave: - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [0, 12] - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [8, 18] - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [13, 23] - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [18, 32] --- Part Block: - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [0, 15] - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [8, 23] - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [16, 32] --- Part Funnel: - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [0, 15] - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [14, 14] - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [13, 13] - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [12, 12] - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [11, 11] - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [10, 10] - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [9, 9] - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [8, 23] - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [22, 22] - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [21, 21] - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [20, 20] - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [19, 19] - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [18, 18] - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [17, 17] - sources: - model: NousResearch/Meta-Llama-3-8B-Instruct layer_range: [16, 32]
mradermacher/Code-Llama-3-8B-GGUF
mradermacher
2024-05-07T15:30:34Z
387
0
transformers
[ "transformers", "gguf", "code", "Python", "Cpp", "PHP", "JS", "Java", "Rust", "Ruby", "SQL", "MySql", "R", "Julia", "en", "dataset:ajibawa-2023/Code-290k-ShareGPT", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:m-a-p/Code-Feedback", "dataset:microsoft/orca-math-word-problems-200k", "base_model:ajibawa-2023/Code-Llama-3-8B", "license:llama3", "endpoints_compatible", "region:us" ]
null
2024-05-07T13:20:25Z
--- base_model: ajibawa-2023/Code-Llama-3-8B datasets: - ajibawa-2023/Code-290k-ShareGPT - m-a-p/CodeFeedback-Filtered-Instruction - m-a-p/Code-Feedback - microsoft/orca-math-word-problems-200k language: - en library_name: transformers license: llama3 quantized_by: mradermacher tags: - code - Python - Cpp - PHP - JS - Java - Rust - Ruby - SQL - MySql - R - Julia --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> static quants of https://huggingface.co/ajibawa-2023/Code-Llama-3-8B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Code-Llama-3-8B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Code-Llama-3-8B-GGUF/resolve/main/Code-Llama-3-8B.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Code-Llama-3-8B-GGUF/resolve/main/Code-Llama-3-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Code-Llama-3-8B-GGUF/resolve/main/Code-Llama-3-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Code-Llama-3-8B-GGUF/resolve/main/Code-Llama-3-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Code-Llama-3-8B-GGUF/resolve/main/Code-Llama-3-8B.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Code-Llama-3-8B-GGUF/resolve/main/Code-Llama-3-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Code-Llama-3-8B-GGUF/resolve/main/Code-Llama-3-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Code-Llama-3-8B-GGUF/resolve/main/Code-Llama-3-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Code-Llama-3-8B-GGUF/resolve/main/Code-Llama-3-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Code-Llama-3-8B-GGUF/resolve/main/Code-Llama-3-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Code-Llama-3-8B-GGUF/resolve/main/Code-Llama-3-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Code-Llama-3-8B-GGUF/resolve/main/Code-Llama-3-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Code-Llama-3-8B-GGUF/resolve/main/Code-Llama-3-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Code-Llama-3-8B-GGUF/resolve/main/Code-Llama-3-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Code-Llama-3-8B-GGUF/resolve/main/Code-Llama-3-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model 
quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
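As a concrete complement to the usage note above, one way to run one of these quants locally is through llama-cpp-python; the sketch below downloads the Q4_K_M file from the table and generates a completion. The sampling parameters are arbitrary, and llama-cpp-python is only one of several runtimes that can read GGUF files.

```python
# pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the Q4_K_M quant listed in the table above.
gguf_path = hf_hub_download(
    repo_id="mradermacher/Code-Llama-3-8B-GGUF",
    filename="Code-Llama-3-8B.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Write a Python function that reverses a string.", max_tokens=128, temperature=0.2)
print(out["choices"][0]["text"])
```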
NotAiLOL/Athena-OrpoMistral-7B-v0.2-4bit
NotAiLOL
2024-05-08T13:22:08Z
387
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "trl", "orpo", "conversational", "dataset:mlabonne/orpo-dpo-mix-40k", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-08T13:12:42Z
--- library_name: transformers tags: - trl - orpo license: apache-2.0 datasets: - mlabonne/orpo-dpo-mix-40k --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** Liu Hong Yuan Tom - **Funded by [optional]:** N/A - **Shared by [optional]:** N/A - **Model type:** Mistral - **Language(s) (NLP):** [More Information Needed] - **License:** Apache 2.0 - **Finetuned from model [optional]:** [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** N/A - **Paper [optional]:** N/A - **Demo [optional]:** N/A ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> Dataset used: [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k) by [Maxime Labonne](https://huggingface.co/mlabonne). ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> * Plateform: Kaggle * GPU: 1 x NVIDIA Tesla P100 More details at [Wandb](https://wandb.ai/liuhongyuan3000/huggingface/reports/Fine-Tuning-Mistral-7B-Instruct-v0-2--Vmlldzo3ODYxNTM5) #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
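The training details above only name the dataset and hardware, so for orientation here is a hypothetical sketch of what an ORPO run with trl on that dataset can look like. The hyperparameters, output path, and the assumption that the preference pairs have already been flattened into plain prompt/chosen/rejected strings are all illustrative and not taken from the author's actual script.

```python
# Hypothetical ORPO fine-tuning sketch with trl; NOT the author's actual training script.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Assumes the dataset has been preprocessed into plain "prompt"/"chosen"/"rejected" strings.
dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train")

args = ORPOConfig(
    output_dir="orpo-mistral-7b",   # illustrative path
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-6,
    beta=0.1,                       # weight of the odds-ratio preference term
    max_length=1024,
    max_prompt_length=512,
)

trainer = ORPOTrainer(model=model, args=args, train_dataset=dataset, tokenizer=tokenizer)
trainer.train()
```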
kimdeokgi/all_dpo_model_test1
kimdeokgi
2024-05-10T16:08:11Z
387
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-10T15:53:45Z
--- license: apache-2.0 language: - en --- # kimdeokgi/all_dpo_model_test1 # **Introduction** This model is a test version of an alignment-tuned model. We utilize state-of-the-art instruction fine-tuning methods, including direct preference optimization (DPO). After DPO training, we linearly merged models to boost performance.
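The last sentence is terse, so as an illustration only: a linear merge is a weighted average of parameter tensors across checkpoints. The sketch below shows the idea with two hypothetical DPO checkpoints; the paths and the 0.5 weight are placeholders, not the authors' recipe.

```python
import torch
from transformers import AutoModelForCausalLM

# Hypothetical checkpoint identifiers; the real ones are not published in this card.
model_a = AutoModelForCausalLM.from_pretrained("dpo-checkpoint-a", torch_dtype=torch.bfloat16)
model_b = AutoModelForCausalLM.from_pretrained("dpo-checkpoint-b", torch_dtype=torch.bfloat16)

alpha = 0.5  # placeholder interpolation weight
state_b = model_b.state_dict()
merged = {name: alpha * tensor + (1.0 - alpha) * state_b[name]
          for name, tensor in model_a.state_dict().items()}

model_a.load_state_dict(merged)
model_a.save_pretrained("linearly-merged-dpo-model")
```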
ukung/Nusantara-2.7b-Indo-Chat-GGUF
ukung
2024-05-20T04:51:02Z
387
0
null
[ "gguf", "license:apache-2.0", "region:us" ]
null
2024-05-20T04:04:01Z
--- license: apache-2.0 ---
NietWitt/NietWitt02
NietWitt
2024-05-20T22:35:03Z
387
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-20T16:14:01Z
Test
Niggendar/boleromixPony_v13
Niggendar
2024-05-29T11:24:12Z
387
1
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-05-29T11:16:21Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RichardErkhov/Weyaxi_-_Samantha-Nebula-7B-gguf
RichardErkhov
2024-05-30T05:17:34Z
387
0
null
[ "gguf", "region:us" ]
null
2024-05-30T01:45:37Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Samantha-Nebula-7B - GGUF - Model creator: https://huggingface.co/Weyaxi/ - Original model: https://huggingface.co/Weyaxi/Samantha-Nebula-7B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Samantha-Nebula-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Samantha-Nebula-7B-gguf/blob/main/Samantha-Nebula-7B.Q2_K.gguf) | Q2_K | 2.53GB | | [Samantha-Nebula-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Samantha-Nebula-7B-gguf/blob/main/Samantha-Nebula-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [Samantha-Nebula-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Samantha-Nebula-7B-gguf/blob/main/Samantha-Nebula-7B.IQ3_S.gguf) | IQ3_S | 2.96GB | | [Samantha-Nebula-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Samantha-Nebula-7B-gguf/blob/main/Samantha-Nebula-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [Samantha-Nebula-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Samantha-Nebula-7B-gguf/blob/main/Samantha-Nebula-7B.IQ3_M.gguf) | IQ3_M | 3.06GB | | [Samantha-Nebula-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Samantha-Nebula-7B-gguf/blob/main/Samantha-Nebula-7B.Q3_K.gguf) | Q3_K | 3.28GB | | [Samantha-Nebula-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Samantha-Nebula-7B-gguf/blob/main/Samantha-Nebula-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [Samantha-Nebula-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Samantha-Nebula-7B-gguf/blob/main/Samantha-Nebula-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [Samantha-Nebula-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Samantha-Nebula-7B-gguf/blob/main/Samantha-Nebula-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [Samantha-Nebula-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Samantha-Nebula-7B-gguf/blob/main/Samantha-Nebula-7B.Q4_0.gguf) | Q4_0 | 3.83GB | | [Samantha-Nebula-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Samantha-Nebula-7B-gguf/blob/main/Samantha-Nebula-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [Samantha-Nebula-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Samantha-Nebula-7B-gguf/blob/main/Samantha-Nebula-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [Samantha-Nebula-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Samantha-Nebula-7B-gguf/blob/main/Samantha-Nebula-7B.Q4_K.gguf) | Q4_K | 4.07GB | | [Samantha-Nebula-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Samantha-Nebula-7B-gguf/blob/main/Samantha-Nebula-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [Samantha-Nebula-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Samantha-Nebula-7B-gguf/blob/main/Samantha-Nebula-7B.Q4_1.gguf) | Q4_1 | 4.24GB | | [Samantha-Nebula-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Samantha-Nebula-7B-gguf/blob/main/Samantha-Nebula-7B.Q5_0.gguf) | Q5_0 | 4.65GB | | [Samantha-Nebula-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Samantha-Nebula-7B-gguf/blob/main/Samantha-Nebula-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | [Samantha-Nebula-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Samantha-Nebula-7B-gguf/blob/main/Samantha-Nebula-7B.Q5_K.gguf) | Q5_K | 4.78GB | | [Samantha-Nebula-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Samantha-Nebula-7B-gguf/blob/main/Samantha-Nebula-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | 
[Samantha-Nebula-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Samantha-Nebula-7B-gguf/blob/main/Samantha-Nebula-7B.Q5_1.gguf) | Q5_1 | 5.07GB | | [Samantha-Nebula-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Samantha-Nebula-7B-gguf/blob/main/Samantha-Nebula-7B.Q6_K.gguf) | Q6_K | 5.53GB | | [Samantha-Nebula-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Samantha-Nebula-7B-gguf/blob/main/Samantha-Nebula-7B.Q8_0.gguf) | Q8_0 | 7.17GB | Original model description: --- datasets: - garage-bAInd/Open-Platypus language: - en license: apache-2.0 --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/cKySe1S5IW_KnbZpKmozQ.png) <a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a> # Samantha-Nebula-7B Samantha-Nebula-7B is a merge of [ehartford/samantha-mistral-7b](https://huggingface.co/ehartford/samantha-mistral-7b) and [PulsarAI/Nebula-7B](https://huggingface.co/PulsarAI/Nebula-7B-Lora) # Evaluation Results ([Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)) | Metric | Value | |-----------------------|-------| | Avg. | | | ARC (25-shot) | | | HellaSwag (10-shot) | | | MMLU (5-shot) | | | TruthfulQA (0-shot) | | # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Samantha-Nebula-7B) | Metric | Value | |-----------------------|---------------------------| | Avg. | 52.87 | | ARC (25-shot) | 57.0 | | HellaSwag (10-shot) | 82.25 | | MMLU (5-shot) | 54.21 | | TruthfulQA (0-shot) | 49.58 | | Winogrande (5-shot) | 73.09 | | GSM8K (5-shot) | 11.37 | | DROP (3-shot) | 42.57 |
v8karlo/UNCENSORED-Wukong-Phi-3-Instruct-Ablated-Q4_K_M-GGUF
v8karlo
2024-06-06T01:37:01Z
387
4
null
[ "gguf", "llama-cpp", "gguf-my-repo", "en", "dataset:cognitivecomputations/Dolphin-2.9", "base_model:RESMPDEV/Wukong-Phi-3-Instruct-Ablated", "license:mit", "region:us" ]
null
2024-06-06T01:03:38Z
--- language: - en license: mit tags: - llama-cpp - gguf-my-repo base_model: RESMPDEV/Wukong-Phi-3-Instruct-Ablated datasets: - cognitivecomputations/Dolphin-2.9 uncensored: - yes --- # v8karlo/Wukong-Phi-3-Instruct-Ablated-Q4_K_M-GGUF UNCENSORED Phi-3 model. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/662c3116277765660783ca6d/OUGbRFBAx9Ibs2bD-OZGD.png) This model was converted to GGUF format from [`RESMPDEV/Wukong-Phi-3-Instruct-Ablated`](https://huggingface.co/RESMPDEV/Wukong-Phi-3-Instruct-Ablated) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/RESMPDEV/Wukong-Phi-3-Instruct-Ablated) for more details on the model. Convert Safetensors to GGUF . https://huggingface.co/spaces/ggml-org/gguf-my-repo . <video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/662c3116277765660783ca6d/qPHdaxOccIFcpmcewfa9r.mp4"></video> ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama --hf-repo v8karlo/Wukong-Phi-3-Instruct-Ablated-Q4_K_M-GGUF --hf-file wukong-phi-3-instruct-ablated-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo v8karlo/Wukong-Phi-3-Instruct-Ablated-Q4_K_M-GGUF --hf-file wukong-phi-3-instruct-ablated-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./main --hf-repo v8karlo/Wukong-Phi-3-Instruct-Ablated-Q4_K_M-GGUF --hf-file wukong-phi-3-instruct-ablated-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./server --hf-repo v8karlo/Wukong-Phi-3-Instruct-Ablated-Q4_K_M-GGUF --hf-file wukong-phi-3-instruct-ablated-q4_k_m.gguf -c 2048 ```
mradermacher/Cream-Phi-3-14B-v1-GGUF
mradermacher
2024-06-11T08:25:44Z
387
0
transformers
[ "transformers", "gguf", "not-for-all-audiences", "en", "base_model:TheDrummer/Cream-Phi-3-14B-v1", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-06-09T04:08:22Z
--- base_model: TheDrummer/Cream-Phi-3-14B-v1 language: - en library_name: transformers license: mit quantized_by: mradermacher tags: - not-for-all-audiences --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/TheDrummer/Cream-Phi-3-14B-v1 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Cream-Phi-3-14B-v1-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Cream-Phi-3-14B-v1-GGUF/resolve/main/Cream-Phi-3-14B-v1.Q2_K.gguf) | Q2_K | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Cream-Phi-3-14B-v1-GGUF/resolve/main/Cream-Phi-3-14B-v1.IQ3_XS.gguf) | IQ3_XS | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/Cream-Phi-3-14B-v1-GGUF/resolve/main/Cream-Phi-3-14B-v1.IQ3_S.gguf) | IQ3_S | 6.2 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Cream-Phi-3-14B-v1-GGUF/resolve/main/Cream-Phi-3-14B-v1.Q3_K_S.gguf) | Q3_K_S | 6.2 | | | [GGUF](https://huggingface.co/mradermacher/Cream-Phi-3-14B-v1-GGUF/resolve/main/Cream-Phi-3-14B-v1.IQ3_M.gguf) | IQ3_M | 6.6 | | | [GGUF](https://huggingface.co/mradermacher/Cream-Phi-3-14B-v1-GGUF/resolve/main/Cream-Phi-3-14B-v1.Q3_K_M.gguf) | Q3_K_M | 7.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Cream-Phi-3-14B-v1-GGUF/resolve/main/Cream-Phi-3-14B-v1.Q3_K_L.gguf) | Q3_K_L | 7.6 | | | [GGUF](https://huggingface.co/mradermacher/Cream-Phi-3-14B-v1-GGUF/resolve/main/Cream-Phi-3-14B-v1.IQ4_XS.gguf) | IQ4_XS | 7.6 | | | [GGUF](https://huggingface.co/mradermacher/Cream-Phi-3-14B-v1-GGUF/resolve/main/Cream-Phi-3-14B-v1.Q4_K_S.gguf) | Q4_K_S | 8.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Cream-Phi-3-14B-v1-GGUF/resolve/main/Cream-Phi-3-14B-v1.Q4_K_M.gguf) | Q4_K_M | 8.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Cream-Phi-3-14B-v1-GGUF/resolve/main/Cream-Phi-3-14B-v1.Q5_K_S.gguf) | Q5_K_S | 9.7 | | | [GGUF](https://huggingface.co/mradermacher/Cream-Phi-3-14B-v1-GGUF/resolve/main/Cream-Phi-3-14B-v1.Q5_K_M.gguf) | Q5_K_M | 10.2 | | | [GGUF](https://huggingface.co/mradermacher/Cream-Phi-3-14B-v1-GGUF/resolve/main/Cream-Phi-3-14B-v1.Q6_K.gguf) | Q6_K | 11.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Cream-Phi-3-14B-v1-GGUF/resolve/main/Cream-Phi-3-14B-v1.Q8_0.gguf) | Q8_0 | 14.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
4bit/gemma-2b
4bit
2024-06-20T07:56:43Z
387
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:2312.11805", "arxiv:2009.03300", "arxiv:1905.07830", "arxiv:1911.11641", "arxiv:1904.09728", "arxiv:1905.10044", "arxiv:1907.10641", "arxiv:1811.00937", "arxiv:1809.02789", "arxiv:1911.01547", "arxiv:1705.03551", "arxiv:2107.03374", "arxiv:2108.07732", "arxiv:2110.14168", "arxiv:2304.06364", "arxiv:2206.04615", "arxiv:1804.06876", "arxiv:2110.08193", "arxiv:2009.11462", "arxiv:2101.11718", "arxiv:1804.09301", "arxiv:2109.07958", "arxiv:2203.09509", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-06-20T07:54:15Z
--- library_name: transformers tags: [] extra_gated_heading: "Access Gemma on Hugging Face" extra_gated_prompt: "To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately." extra_gated_button_content: "Acknowledge license" --- # Gemma Model Card **Model Page**: [Gemma](https://ai.google.dev/gemma/docs) This model card corresponds to the 2B base version of the Gemma model. You can also visit the model card of the [7B base model](https://huggingface.co/google/gemma-7b), [7B instruct model](https://huggingface.co/google/gemma-7b-it), and [2B instruct model](https://huggingface.co/google/gemma-2b-it). **Resources and Technical Documentation**: * [Responsible Generative AI Toolkit](https://ai.google.dev/responsible) * [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma) * [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335) **Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent) **Authors**: Google ## Model Information Summary description and brief definition of inputs and outputs. ### Description Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone. ### Usage Below we share some code snippets on how to get quickly started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your usecase. #### Fine-tuning the model You can find fine-tuning scripts and notebook under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples) of [`google/gemma-7b`](https://huggingface.co/google/gemma-7b) repository. To adapt it to this model, simply change the model-id to `google/gemma-2b`. In that repository, we provide: * A script to perform Supervised Fine-Tuning (SFT) on UltraChat dataset using QLoRA * A script to perform SFT using FSDP on TPU devices * A notebook that you can run on a free-tier Google Colab instance to perform SFT on English quotes dataset #### Running the model on a CPU ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b") input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Running the model on a single / multi GPU ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto") input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Running the model on a GPU using different precisions * _Using `torch.float16`_ ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto", torch_dtype=torch.float16) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` * _Using `torch.bfloat16`_ ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto", torch_dtype=torch.bfloat16) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Quantized Versions through `bitsandbytes` * _Using 8-bit precision (int8)_ ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_8bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", quantization_config=quantization_config) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` * _Using 4-bit precision_ ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_4bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", quantization_config=quantization_config) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Other optimizations * _Flash Attention 2_ First make sure to install `flash-attn` in your environment `pip install flash-attn` ```diff model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.float16, + attn_implementation="flash_attention_2" ).to(0) ``` ### Inputs and outputs * **Input:** Text string, such as a question, a prompt, or a document to be summarized. * **Output:** Generated English-language text in response to the input, such as an answer to a question, or a summary of a document. ## Model Data Data used for model training and how the data was processed. ### Training Dataset These models were trained on a dataset of text data that includes a wide variety of sources, totaling 6 trillion tokens. Here are the key components: * Web Documents: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. Primarily English-language content. * Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code or understand code-related questions. 
* Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and to address mathematical queries. The combination of these diverse data sources is crucial for training a powerful language model that can handle a wide variety of different tasks and text formats. ### Data Preprocessing Here are the key data cleaning and filtering methods applied to the training data: * CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content * Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets. * Additional methods: Filtering based on content quality and safely in line with [our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11). ## Implementation Information Details about the model internals. ### Hardware Gemma was trained using the latest generation of [Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e). Training large language models requires significant computational power. TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain: * Performance: TPUs are specifically designed to handle the massive computations involved in training LLMs. They can speed up training considerably compared to CPUs. * Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality. * Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing. * Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training. * These advantages are aligned with [Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/). ### Software Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ml-pathways). JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. This is specially suitable for [foundation models](https://ai.google/discover/foundation-models/), including large language models like these ones. Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow." ## Evaluation Model evaluation metrics and results. 
### Benchmark Results These models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation: | Benchmark | Metric | 2B Params | 7B Params | | ------------------------------ | ------------- | ----------- | --------- | | [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 | | [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot |71.4 | 81.2 | | [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 | | [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 59.7 | 51.8 | | [BooIQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 | | [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 | | [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 | | [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 | | [ARC-e](https://arxiv.org/abs/1911.01547) | | 73.2 | 81.5 | | [ARC-c](https://arxiv.org/abs/1911.01547) | | 42.1 | 53.2 | | [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 | | [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | - | 23 | | [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 | | [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 | | [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 | | [MATH](https://arxiv.org/abs/2108.07732) | 4-shot | 11.8 | 24.3 | | [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 | | [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 | | ------------------------------ | ------------- | ----------- | --------- | | **Average** | | **54.0** | **56.4** | ## Ethics and Safety Ethics and safety evaluation approach and results. ### Evaluation Approach Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including: * Text-to-Text Content Safety: Human evaluation on prompts covering safety policies including child sexual abuse and exploitation, harassment, violence and gore, and hate speech. * Text-to-Text Representational Harms: Benchmark against relevant academic datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2). * Memorization: Automated evaluation of memorization of training data, including the risk of personally identifiable information exposure. * Large-scale harm: Tests for "dangerous capabilities," such as chemical, biological, radiological, and nuclear (CBRN) risks. ### Evaluation Results The results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child safety, content safety, representational harms, memorization, large-scale harms. On top of robust internal evaluations, the results of well known safety benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA are shown here. 
| Benchmark | Metric | 2B Params | 7B Params | | ------------------------------ | ------------- | ----------- | --------- | | [RealToxicity](https://arxiv.org/abs/2009.11462) | average | 6.86 | 7.90 | | [BOLD](https://arxiv.org/abs/2101.11718) | | 45.57 | 49.08 | | [CrowS-Pairs](https://aclanthology.org/2020.emnlp-main.154/) | top-1 | 45.82 | 51.33 | | [BBQ Ambig](https://arxiv.org/abs/2110.08193v2) | 1-shot, top-1 | 62.58 | 92.54 | | [BBQ Disambig](https://arxiv.org/abs/2110.08193v2) | top-1 | 54.62 | 71.99 | | [Winogender](https://arxiv.org/abs/1804.09301) | top-1 | 51.25 | 54.17 | | [TruthfulQA](https://arxiv.org/abs/2109.07958) | | 44.84 | 31.81 | | [Winobias 1_2](https://arxiv.org/abs/1804.06876) | | 56.12 | 59.09 | | [Winobias 2_2](https://arxiv.org/abs/1804.06876) | | 91.10 | 92.23 | | [Toxigen](https://arxiv.org/abs/2203.09509) | | 29.77 | 39.59 | | ------------------------------ | ------------- | ----------- | --------- | ## Usage and Limitations These models have certain limitations that users should be aware of. ### Intended Usage Open Large Language Models (LLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. * Content Creation and Communication * Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts. * Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications. * Text Summarization: Generate concise summaries of a text corpus, research papers, or reports. * Research and Education * Natural Language Processing (NLP) Research: These models can serve as a foundation for researchers to experiment with NLP techniques, develop algorithms, and contribute to the advancement of the field. * Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice. * Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics. ### Limitations * Training Data * The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses. * The scope of the training dataset determines the subject areas the model can handle effectively. * Context and Task Complexity * LLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. * A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point). * Language Ambiguity and Nuance * Natural language is inherently complex. LLMs might struggle to grasp subtle nuances, sarcasm, or figurative language. * Factual Accuracy * LLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. * Common Sense * LLMs rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations. 
### Ethical Considerations and Risks

The development of large language models (LLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following:

* Bias and Fairness
  * LLMs trained on large-scale, real-world text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny; their input data pre-processing is described and posterior evaluations are reported in this card.
* Misinformation and Misuse
  * LLMs can be misused to generate text that is false, misleading, or harmful.
  * Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible).
* Transparency and Accountability
  * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes.
  * A responsibly developed open model offers the opportunity to share innovation by making LLM technology accessible to developers and researchers across the AI ecosystem.

Risks identified and mitigations:

* Perpetuation of biases: Continuous monitoring (using evaluation metrics and human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases are encouraged.
* Generation of harmful content: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and end-user education can help mitigate against malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* Privacy violations: Models were trained on data filtered for removal of PII (Personally Identifiable Information). Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques.

### Benefits

At the time of release, this family of models provides high-performance open large language model implementations designed from the ground up for Responsible AI development compared to similarly sized models.

Using the benchmark evaluation metrics described in this document, these models have been shown to provide superior performance to other, comparably sized open model alternatives.
ukr-models/xlm-roberta-base-uk
ukr-models
2023-08-31T09:41:51Z
386
11
transformers
[ "transformers", "pytorch", "safetensors", "xlm-roberta", "fill-mask", "ukrainian", "uk", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-11T10:53:02Z
--- language: - uk tags: - ukrainian widget: - text: "Тарас Шевченко – великий український <mask>." license: mit --- This is a smaller version of the [XLM-RoBERTa](https://huggingface.co/xlm-roberta-base) model with only Ukrainian and some English embeddings left. * The original model has 470M parameters, with 384M of them being input and output embeddings. * After shrinking the `sentencepiece` vocabulary from 250K to 31K (top 25K Ukrainian tokens and top English tokens) the number of model parameters reduced to 134M parameters, and model size reduced from 1GB to 400MB.
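A minimal usage sketch with the 🤗 `transformers` fill-mask pipeline (the pipeline call below is an illustration added for clarity, not part of the original release notes):

```python
from transformers import pipeline

# Load this Ukrainian XLM-RoBERTa checkpoint into the standard fill-mask pipeline.
fill_mask = pipeline("fill-mask", model="ukr-models/xlm-roberta-base-uk")

# Same example as the widget above: predict the masked Ukrainian word.
for prediction in fill_mask("Тарас Шевченко – великий український <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```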
IDEA-CCNL/Erlangshen-SimCSE-110M-Chinese
IDEA-CCNL
2023-05-25T09:49:42Z
386
20
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "zh", "arxiv:2209.02970", "license:apache-2.0", "text-embeddings-inference", "region:us" ]
feature-extraction
2022-11-07T03:55:11Z
---
language:
- zh
inference: false
license: apache-2.0
---

# Erlangshen-SimCSE-110M-Chinese

- Main Page: [Fengshenbang](https://fengshenbang-lm.com/)
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)

## 简介 Brief Introduction

基于simcse无监督版本,用搜集整理的中文NLI数据进行simcse有监督任务的训练。在中文句子对任务上有良好的效果。

**Erlangshen-SimCSE-110M-Chinese** starts from the unsupervised version of SimCSE and is then trained on the supervised SimCSE task with collected and curated Chinese NLI data. It performs well on Chinese sentence-pair tasks.

## 模型分类 Model Taxonomy

| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 通用 General | 自然语言生成 NLU | 二郎神 Erlangshen | Bert | 110M | 中文 Chinese |

## 模型信息 Model Information

为了获得一个通用句子向量表征的模型,我们基于bert-base模型用了大量的无监督数据和有监督数据进行对比学习,最终获得了一个无需微调就能够利用模型输出的[CLS]进行相似度判断的模型。与用bert模型在针对任务微调后,再进行句子相似度任务不同,我们的模型在预训练完成后直接具备提取句子向量的能力。在一些任务上有如下的测评效果:

In order to obtain a general sentence-embedding model, we performed contrastive learning on top of the Bert-base model with a large amount of unsupervised and supervised data, and finally obtained a model whose [CLS] output can be used to judge similarity without fine-tuning. Unlike a BERT model that is first fine-tuned on a task and then used for sentence similarity, our model can extract sentence vectors directly after pre-training. Evaluation results on several tasks are as follows:

| 模型 | LCQMC | BQ | PAWSX | ATEC | STS-B |
| :----: | :----: | :----: | :----: | :----: | :----: |
| Bert | 62 | 38.62 | 17.38 | 28.98 | 68.27 |
| Bert-large | 63.78 | 37.51 | 18.63 | 30.24 | 68.87 |
| RoBerta | 67.3 | 39.89 | 16.79 | 30.57 | 69.36 |
| RoBerta large | 67.25 | 38.39 | 19.09 | 30.85 | 69.36 |
| RoFormer | 63.58 | 39.9 | 17.52 | 29.37 | 67.32 |
| SimBERT | 73.43 | 40.98 | 15.87 | 31.24 | 72 |
| Erlangshen-SimCSE-110M-Chinese | 74.94 | 56.97 | 21.84 | 34.12 | 70.5 |

*备注:我们的模型是直接用[cls],无whitening;其余模型是last avg + whitening*

*ps: Our model uses [cls] directly, with no whitening; the other models use last avg + whitening.*

## 使用 Usage

### 加载模型 Loading Models

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained('IDEA-CCNL/Erlangshen-SimCSE-110M-Chinese')
tokenizer = AutoTokenizer.from_pretrained('IDEA-CCNL/Erlangshen-SimCSE-110M-Chinese')
```

### 使用示例 Usage Examples

```python
import torch
from sklearn.metrics.pairwise import cosine_similarity

texta = '今天天气真不错,我们去散步吧!'
textb = '今天天气真糟糕,还是在宅家里写bug吧!'
inputs_a = tokenizer(texta, return_tensors="pt")
inputs_b = tokenizer(textb, return_tensors="pt")

# Run the forward passes without gradient tracking so the [CLS] embeddings can be
# converted for scikit-learn (if you use cuda, also move them back with .cpu()).
with torch.no_grad():
    outputs_a = model(**inputs_a, output_hidden_states=True)
    texta_embedding = outputs_a.hidden_states[-1][:, 0, :].squeeze()
    outputs_b = model(**inputs_b, output_hidden_states=True)
    textb_embedding = outputs_b.hidden_states[-1][:, 0, :].squeeze()

similarity_score = cosine_similarity(texta_embedding.reshape(1, -1),
                                     textb_embedding.reshape(1, -1))[0][0]
print(similarity_score)
```

## 引用 Citation

如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):

If you are using this resource for your work, please cite our [paper](https://arxiv.org/abs/2209.02970):

```text
@article{fengshenbang,
  author    = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
  title     = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
  journal   = {CoRR},
  volume    = {abs/2209.02970},
  year      = {2022}
}
```

也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):

You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):

```text
@misc{Fengshenbang-LM,
  title={Fengshenbang-LM},
  author={IDEA-CCNL},
  year={2021},
  howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
circulus/sd-photoreal-semi-v2
circulus
2023-01-15T07:44:06Z
386
4
diffusers
[ "diffusers", "generative ai", "stable-diffusion", "image-to-image", "realism", "art", "text-to-image", "en", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-01-15T06:12:45Z
--- license: gpl-3.0 language: - en library_name: diffusers pipeline_tag: text-to-image tags: - generative ai - stable-diffusion - image-to-image - realism - art --- Photoreal Semi v2 Finetuned Stable Diffusion 1.5 for generating images ![img](./e4.png) ![img](./ex4.png)
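A minimal generation sketch with 🧨 `diffusers` (illustrative only: the prompt and precision settings below are assumptions, not recommendations from the model author):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load this fine-tuned Stable Diffusion 1.5 checkpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "circulus/sd-photoreal-semi-v2",
    torch_dtype=torch.float16,  # assumes a CUDA GPU; keep the default dtype on CPU
)
pipe = pipe.to("cuda")

# Example prompt chosen for illustration.
image = pipe("photorealistic portrait of a woman in soft natural light").images[0]
image.save("photoreal_semi_v2_example.png")
```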
sam2ai/whisper-small-or
sam2ai
2023-03-19T22:55:14Z
386
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-03-19T19:29:04Z
Entry not found
timm/mobilevitv2_175.cvnets_in1k
timm
2023-04-24T22:25:57Z
386
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2206.02680", "license:other", "region:us" ]
image-classification
2023-04-24T22:25:28Z
---
tags:
- image-classification
- timm
library_name: timm
license: other
datasets:
- imagenet-1k
---

# Model card for mobilevitv2_175.cvnets_in1k

A MobileViT-v2 image classification model. Trained on ImageNet-1k by paper authors.

See license details at https://github.com/apple/ml-cvnets/blob/main/LICENSE

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 14.3
  - GMACs: 5.5
  - Activations (M): 28.1
  - Image size: 256 x 256
- **Papers:**
  - Separable Self-attention for Mobile Vision Transformers: https://arxiv.org/abs/2206.02680
- **Original:** https://github.com/apple/ml-cvnets
- **Dataset:** ImageNet-1k

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import torch
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('mobilevitv2_175.cvnets_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'mobilevitv2_175.cvnets_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 112, 128, 128])
    #  torch.Size([1, 224, 64, 64])
    #  torch.Size([1, 448, 32, 32])
    #  torch.Size([1, 672, 16, 16])
    #  torch.Size([1, 896, 8, 8])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'mobilevitv2_175.cvnets_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 896, 8, 8) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation ```bibtex @article{Mehta2022SeparableSF, title={Separable Self-attention for Mobile Vision Transformers}, author={Sachin Mehta and Mohammad Rastegari}, journal={ArXiv}, year={2022}, volume={abs/2206.02680} } ```
reciprocate/dahoas-gptj-rm-static
reciprocate
2023-05-04T09:45:27Z
386
0
transformers
[ "transformers", "pytorch", "gptj", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-04T09:36:44Z
Entry not found
sail-rvc/MichaelJackson
sail-rvc
2023-07-14T07:27:59Z
386
0
transformers
[ "transformers", "rvc", "sail-rvc", "audio-to-audio", "endpoints_compatible", "region:us" ]
audio-to-audio
2023-07-14T07:27:38Z
--- pipeline_tag: audio-to-audio tags: - rvc - sail-rvc --- # MichaelJackson ## RVC Model ![banner](https://i.imgur.com/xocCjhH.jpg) This model repo was automatically generated. Date: 2023-07-14 07:27:59 Bot Name: juuxnscrap Model Type: RVC Source: https://huggingface.co/juuxn/RVCModels/ Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
lorahub/flan_t5_large-race_middle_Taking_a_test
lorahub
2023-07-24T09:48:18Z
386
0
peft
[ "peft", "region:us" ]
null
2023-07-24T09:48:09Z
--- library_name: peft ---
Yntec/BeenYou
Yntec
2023-09-18T01:59:10Z
386
1
diffusers
[ "diffusers", "safetensors", "Anime", "Cute", "Pretty", "Bradcatt", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-09-18T00:59:12Z
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - Anime - Cute - Pretty - Bradcatt - stable-diffusion - stable-diffusion-diffusers - diffusers - text-to-image --- # Been You Original page: https://civitai.com/models/27688/beenyou Sample and prompt: ![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/mG9qGvAOdXJ6zgsFKPE6S.png) Anime fine details portrait of joyful cute little girl play school class room, bokeh. anime masterpiece by studio ghibli. 8k, sharp high quality classic anime from 1990 in style of hayao miyazaki. Wikipedia. hugging. OIL PAINTING. DOCTOR with short hair in coat BEAUTIFUL girl eyes. she has pigtails
Yntec/elldrethsImagination
Yntec
2023-09-19T06:50:20Z
386
3
diffusers
[ "diffusers", "safetensors", "Anime", "General", "Landscapes", "Eldreths", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "Exclusive", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-09-19T02:22:14Z
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - Anime - General - Landscapes - Eldreths - stable-diffusion - stable-diffusion-diffusers - diffusers - text-to-image - Exclusive --- # eldrethSImagination ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/1IxGyU8q3f63YJ1iQpidH.png) 1 image is more than a thousands words, I'll let the model introduce itself. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/VYDd8HbXKM6TGLsWpSQlw.png) Prompts: https://pastebin.com/91f9fq69 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/uIxVaOqUVm3KRYltJ5DtO.png) hyperdetailed oil painting key visual pretty cute little girl nazi maid ss secret police with flowing fitted leather military commander uniform, DETAILED CHIBI EYES, wearing garter belt skirt boots stockings, face of gap moe kuudere noble dictator, trending pixiv fanbox, by greg rutkowski makoto shinkai takashi takeuchi studio ghibli jamie wyeth. wearing a interior sundress, With credits to the anon that shared the model at 4chan's /sdg/
TheBloke/MAmmoTH-13B-GGUF
TheBloke
2023-09-27T12:54:14Z
386
2
transformers
[ "transformers", "gguf", "llama", "en", "dataset:TIGER-Lab/MathInstruct", "arxiv:2309.05653", "base_model:TIGER-Lab/MAmmoTH-13B", "license:mit", "text-generation-inference", "region:us" ]
null
2023-09-23T21:22:29Z
--- language: - en license: mit datasets: - TIGER-Lab/MathInstruct model_name: MAmmoTH 13B base_model: TIGER-Lab/MAmmoTH-13B inference: false model_creator: TIGER-Lab model_type: llama prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # MAmmoTH 13B - GGUF - Model creator: [TIGER-Lab](https://huggingface.co/TIGER-Lab) - Original model: [MAmmoTH 13B](https://huggingface.co/TIGER-Lab/MAmmoTH-13B) <!-- description start --> ## Description This repo contains GGUF format model files for [TIGER-Lab's MAmmoTH 13B](https://huggingface.co/TIGER-Lab/MAmmoTH-13B). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. 
<!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/MAmmoTH-13B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/MAmmoTH-13B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/MAmmoTH-13B-GGUF) * [TIGER-Lab's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TIGER-Lab/MAmmoTH-13B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- licensing start --> ## Licensing The creator of the source model has listed its license as `mit`, and this quantization has therefore used that same license. As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly. In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [TIGER-Lab's MAmmoTH 13B](https://huggingface.co/TIGER-Lab/MAmmoTH-13B). <!-- licensing end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [mammoth-13b.Q2_K.gguf](https://huggingface.co/TheBloke/MAmmoTH-13B-GGUF/blob/main/mammoth-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [mammoth-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/MAmmoTH-13B-GGUF/blob/main/mammoth-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [mammoth-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/MAmmoTH-13B-GGUF/blob/main/mammoth-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [mammoth-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/MAmmoTH-13B-GGUF/blob/main/mammoth-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [mammoth-13b.Q4_0.gguf](https://huggingface.co/TheBloke/MAmmoTH-13B-GGUF/blob/main/mammoth-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [mammoth-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/MAmmoTH-13B-GGUF/blob/main/mammoth-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [mammoth-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/MAmmoTH-13B-GGUF/blob/main/mammoth-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [mammoth-13b.Q5_0.gguf](https://huggingface.co/TheBloke/MAmmoTH-13B-GGUF/blob/main/mammoth-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [mammoth-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/MAmmoTH-13B-GGUF/blob/main/mammoth-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [mammoth-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/MAmmoTH-13B-GGUF/blob/main/mammoth-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [mammoth-13b.Q6_K.gguf](https://huggingface.co/TheBloke/MAmmoTH-13B-GGUF/blob/main/mammoth-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [mammoth-13b.Q8_0.gguf](https://huggingface.co/TheBloke/MAmmoTH-13B-GGUF/blob/main/mammoth-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/MAmmoTH-13B-GGUF and below it, a specific filename to download, such as: mammoth-13b.Q4_K_M.gguf. Then click Download. 
### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/MAmmoTH-13B-GGUF mammoth-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/MAmmoTH-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/MAmmoTH-13B-GGUF mammoth-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m mammoth-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
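For the llama-cpp-python route mentioned above, here is a minimal sketch (the model filename comes from the Provided files table; `n_gpu_layers` and the sampling settings are illustrative assumptions, not recommended values):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Load a quantised GGUF file downloaded from this repo.
llm = Llama(model_path="mammoth-13b.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=32)

# Alpaca prompt template from this README.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nA train travels 120 km in 1.5 hours. What is its average speed?\n\n"
    "### Response:"
)

output = llm(prompt, max_tokens=256, temperature=0.7)
print(output["choices"][0]["text"])
```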
### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/MAmmoTH-13B-GGUF", model_file="mammoth-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. 
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: TIGER-Lab's MAmmoTH 13B # 🦣 MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning Project Page: [https://tiger-ai-lab.github.io/MAmmoTH/](https://tiger-ai-lab.github.io/MAmmoTH/) Paper: [https://arxiv.org/pdf/2309.05653.pdf](https://arxiv.org/pdf/2309.05653.pdf) Code: [https://github.com/TIGER-AI-Lab/MAmmoTH](https://github.com/TIGER-AI-Lab/MAmmoTH) ## Introduction We introduce 🦣 MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on 🤗 [MathInstruct Dataset](https://huggingface.co/datasets/TIGER-Lab/MathInstruct), a meticulously curated instruction tuning dataset that is lightweight yet generalizable. MathInstruct is compiled from 13 math rationale datasets, six of which are newly curated by this work. It uniquely focuses on the hybrid use of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and ensures extensive coverage of diverse mathematical fields. | | **Base Model: Llama-2** | **Base Model: Code Llama** | |-----|---------------------------------------------------------------|--------------------------------------------------------------------------| | 7B | 🦣 [MAmmoTH-7B](https://huggingface.co/TIGER-Lab/MAmmoTH-7B) | 🦣 [MAmmoTH-Coder-7B](https://huggingface.co/TIGER-Lab/MAmmoTH-Coder-7B) | | 13B | 🦣 [MAmmoTH-13B](https://huggingface.co/TIGER-Lab/MAmmoTH-13B) | 🦣 [MAmmoTH-Coder-13B](https://huggingface.co/TIGER-Lab/MAmmoTH-Coder-13B)| | 34B | - | 🦣 [MAmmoTH-Coder-34B](https://huggingface.co/TIGER-Lab/MAmmoTH-Coder-34B)| | 70B | 🦣 [MAmmoTH-70B](https://huggingface.co/TIGER-Lab/MAmmoTH-70B) | - | | ## Training Data The models are trained on the 🤗 [MathInstruct Dataset](https://huggingface.co/datasets/TIGER-Lab/MathInstruct), which is compiled from 13 different math rationale datasets. Check out the dataset card for more details. ## Training Procedure The models are fine-tuned with the MathInstruct dataset using the original Llama-2 and Code Llama models as base models. The training procedure varies for different models based on their sizes. Check out our paper for more details. ## Evaluation The models are evaluated using open-ended and multiple-choice math problems from several datasets. 
Here are the results: | Model | Size | Base | GSM8K | MATH | AQuA | NumGLUE | IID Avg | SVAMP | Mathematics | SimulEq | SAT-Math | MMLU-Math | OOD Avg | |-------------------|-------|---------------|-----------|-------|-------|-----------|---------------|-----------|---------------|-----------|-----------|---------------|---------------| | | | | | | | | | | | | | | | | MAmmoTH | 7B | Llama-2 | 51.7 | 31.2 | 42.9 | 53.1 | 44.7 | 66.7 | 44.8 | 42 | 36.4 | 38.6 | 45.7 | | MAmmoTH-Coder | 7B | Code-Llama | 58.8 | 35.2 | 43 | 57.1 | 48.5 | 71.1 | 53.9 | 44.6 | 40 | 40.5 | 50.2 | | MAmmoTH | 13B | Llama-2 | 61.7 | 36 | 44.8 | 59.6 | 50.5 | 72.4 | 48.7 | 40.5 | 42.7 | 45.3 | 49.9 | | MAmmoTH-Coder | 13B | Code-Llama | 64.3 | 38.6 | 46.1 | 54.2 | 50.8 | 73.2 | 60 | 44.1 | 40.9 | 45.2 | 52.6 | | MAmmoTH-Coder | 34B | Code-Llama | 72.3 | 46.8 | 50.8 | 59.6 | 57.3 | 84 | 64.7 | 50.6 | 51.8 | 50.2 | 60.3 | | MAmmoTH | 70B | Llama-2 | 76.7 | 44.2 | 61.4 | 64.3 | 61.7 | 81.7 | 55.3 | 45.3 | 58.6 | 52.3 | 58.6 | ## Usage You can use the models through Huggingface's Transformers library. Use the pipeline function to create a text-generation pipeline with the model of your choice, then feed in a math problem to get the solution. Check our Github repo for more advanced use: [https://github.com/TIGER-AI-Lab/MAmmoTH](https://github.com/TIGER-AI-Lab/MAmmoTH) ## Prompt Format If you want to do CoT: ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {instruction} ### Response: ``` If you want to do PoT: ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {instruction} Let's write a program. ### Response: ``` ## Intended Uses These models are trained for research purposes. They are designed to solve general math problems. They can be used in educational software, tutoring systems, or any application where a solution to a math problem is needed. The models can generate both a chain of thought (CoT) rationale and a program of thought (PoT) rationale, providing a comprehensive solution to a given math problem. ## Limitations We've tried our best to build math generalist models. However, we acknowledge that the models' performance may vary based on the complexity and specifics of the math problem. Still not all mathematical fields can be covered comprehensively. ## Citation If you use the models, data, or code from this project, please cite the original paper: ``` @article{yue2023mammoth, title={MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning}, author={Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen}, journal={arXiv preprint arXiv:2309.05653}, year={2023} } ``` <!-- original-model-card end -->
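To make the pipeline-based usage described in the Usage section above concrete, here is a minimal sketch with 🤗 `transformers` against the original unquantised checkpoint (the example problem and generation settings are illustrative assumptions, not prescribed by the authors):

```python
from transformers import pipeline

# Text-generation pipeline on the original fp16 checkpoint, as suggested above.
# device_map="auto" assumes the `accelerate` package is installed.
generator = pipeline("text-generation", model="TIGER-Lab/MAmmoTH-13B", device_map="auto")

# CoT variant of the prompt format shown above.
instruction = "What is the sum of the first 20 positive even integers?"
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response:"
)

result = generator(prompt, max_new_tokens=256, do_sample=False)
print(result[0]["generated_text"])
```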
TheBloke/Pandalyst-7B-v1.2-GGUF
TheBloke
2023-10-17T14:43:49Z
386
2
transformers
[ "transformers", "gguf", "llama", "code", "en", "base_model:pipizhao/Pandalyst-7B-V1.2", "license:llama2", "model-index", "text-generation-inference", "region:us" ]
null
2023-10-17T14:38:26Z
--- base_model: pipizhao/Pandalyst-7B-V1.2 inference: false language: - en library_name: transformers license: llama2 model-index: - name: Pandalyst-7B-V1.2 results: - metrics: - name: acc@1 type: acc@1 value: 0.0 verified: false task: type: text-generation model_creator: pipizhao model_name: Pandalyst 7B v1.2 model_type: llama prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke tags: - code --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Pandalyst 7B v1.2 - GGUF - Model creator: [pipizhao](https://huggingface.co/pipizhao) - Original model: [Pandalyst 7B v1.2](https://huggingface.co/pipizhao/Pandalyst-7B-V1.2) <!-- description start --> ## Description This repo contains GGUF format model files for [pipizhao's Pandalyst 7B v1.2](https://huggingface.co/pipizhao/Pandalyst-7B-V1.2). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. 
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Pandalyst-7B-v1.2-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Pandalyst-7B-v1.2-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Pandalyst-7B-v1.2-GGUF) * [pipizhao's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/pipizhao/Pandalyst-7B-V1.2) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [pandalyst-7b-v1.2.Q2_K.gguf](https://huggingface.co/TheBloke/Pandalyst-7B-v1.2-GGUF/blob/main/pandalyst-7b-v1.2.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes | | [pandalyst-7b-v1.2.Q3_K_S.gguf](https://huggingface.co/TheBloke/Pandalyst-7B-v1.2-GGUF/blob/main/pandalyst-7b-v1.2.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss | | [pandalyst-7b-v1.2.Q3_K_M.gguf](https://huggingface.co/TheBloke/Pandalyst-7B-v1.2-GGUF/blob/main/pandalyst-7b-v1.2.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss | | [pandalyst-7b-v1.2.Q3_K_L.gguf](https://huggingface.co/TheBloke/Pandalyst-7B-v1.2-GGUF/blob/main/pandalyst-7b-v1.2.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss | | [pandalyst-7b-v1.2.Q4_0.gguf](https://huggingface.co/TheBloke/Pandalyst-7B-v1.2-GGUF/blob/main/pandalyst-7b-v1.2.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [pandalyst-7b-v1.2.Q4_K_S.gguf](https://huggingface.co/TheBloke/Pandalyst-7B-v1.2-GGUF/blob/main/pandalyst-7b-v1.2.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss | | [pandalyst-7b-v1.2.Q4_K_M.gguf](https://huggingface.co/TheBloke/Pandalyst-7B-v1.2-GGUF/blob/main/pandalyst-7b-v1.2.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended | | [pandalyst-7b-v1.2.Q5_0.gguf](https://huggingface.co/TheBloke/Pandalyst-7B-v1.2-GGUF/blob/main/pandalyst-7b-v1.2.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [pandalyst-7b-v1.2.Q5_K_S.gguf](https://huggingface.co/TheBloke/Pandalyst-7B-v1.2-GGUF/blob/main/pandalyst-7b-v1.2.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended | | [pandalyst-7b-v1.2.Q5_K_M.gguf](https://huggingface.co/TheBloke/Pandalyst-7B-v1.2-GGUF/blob/main/pandalyst-7b-v1.2.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended | | [pandalyst-7b-v1.2.Q6_K.gguf](https://huggingface.co/TheBloke/Pandalyst-7B-v1.2-GGUF/blob/main/pandalyst-7b-v1.2.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss | | [pandalyst-7b-v1.2.Q8_0.gguf](https://huggingface.co/TheBloke/Pandalyst-7B-v1.2-GGUF/blob/main/pandalyst-7b-v1.2.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Pandalyst-7B-v1.2-GGUF and below it, a specific filename to download, such as: pandalyst-7b-v1.2.Q4_K_M.gguf. Then click Download. 
### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Pandalyst-7B-v1.2-GGUF pandalyst-7b-v1.2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Pandalyst-7B-v1.2-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Pandalyst-7B-v1.2-GGUF pandalyst-7b-v1.2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m pandalyst-7b-v1.2.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Pandalyst-7B-v1.2-GGUF", model_file="pandalyst-7b-v1.2.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. 
Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: pipizhao's Pandalyst 7B v1.2 ## Pandalyst: A large language model for mastering data analysis using pandas <p align="center"> <img src="https://raw.githubusercontent.com/pipizhaoa/Pandalyst/master/imgs/pandalyst.png" width="300"/> </p> <p align="center"> 🐱 <a href="https://github.com/pipizhaoa/Pandalyst" target="_blank">Github Repo</a> <br> </p> **What is Pandalyst** - Pandalyst is a general large language model specifically trained to process and analyze data using the pandas library. **How is Pandalyst** - Pandalyst has strong generalization capabilities for data tables in different fields and different data analysis needs. **Why is Pandalyst** - Pandalyst is open source and free to use, and its small parameter size (7B/13B) allows us to easily deploy it on local PC. - Pandalyst can handle complex data tables (multiple columns and multiple rows), allowing us to enter enough context to describe our table in detail. - Pandalyst has very competitive performance, significantly outperforming models of the same size and even outperforming some of the strongest closed-source models. ## News - 🔥[2023/10/15] Now we can **plot** 📈! and much more powerful! We released **Pandalyst-7B-V1.2**, which was trained on **CodeLlama-7b-Python** and it surpasses **ChatGPT-3.5 (2023/06/13)**, **Pandalyst-7B-V1.1** and **WizardCoder-Python-13B-V1.0** in our **PandaTest_V1.0**. - 🤖️[2023/09/30] We released **Pandalyst-7B-V1.1** , which was trained on **CodeLlama-7b-Python** and achieves the **76.1 exec@1** in our **PandaTest_V1.0** and surpasses **WizardCoder-Python-13B-V1.0** and **ChatGPT-3.5 (2023/06/13)**. | Model | Checkpoint | Support plot | License | |---------------------|--------------------------------------------------------------------------------------------|--------------| ----- | | 🔥Pandalyst-7B-V1.2 | 🤗 <a href="https://huggingface.co/pipizhao/Pandalyst-7B-V1.2" target="_blank">HF Link</a> | ✅ | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> | | Pandalyst-7B-V1.1 | 🤗 <a href="https://huggingface.co/pipizhao/Pandalyst-7B-V1.1" target="_blank">HF Link</a> | ❌ | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> | ## Usage and Human evaluation Please refer to <a href="https://github.com/pipizhaoa/Pandalyst" target="_blank">Github</a>. <!-- original-model-card end -->
Yntec/HELLmix
Yntec
2023-11-27T10:45:02Z
386
4
diffusers
[ "diffusers", "safetensors", "Anime", "2D", "Base Model", "Barons", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-11-27T10:01:48Z
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - Anime - 2D - Base Model - Barons - stable-diffusion - stable-diffusion-diffusers - text-to-image language: - en inference: true --- # HELLmix This is the HELLmix model with the MoistMixV2VAE baked in. Original page: https://civitai.com/models/21493/hellmix?modelVersionId=25632 # Comparison: ![Comparison](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/YDbdmfEmh06b42PsZPRV9.png) (Click for larger) # Sample and prompt: ![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/jXXKTeuEd4T5J-EJ3IR5S.png) pretty cute girl holding hamburger, fries, masterpiece glass of soda, ketchup, salt. best quality, 4k, high quality, solo. contrast style.
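The card above does not include a usage snippet; as a minimal, untested sketch (assuming standard Stable Diffusion 1.5 loading via diffusers, which is what this repo's pipeline tag indicates), generation looks like this:

```python
# Hedged sketch: ordinary SD 1.5 usage via diffusers; the generation settings are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Yntec/HELLmix", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "pretty cute girl holding hamburger, fries, glass of soda, ketchup, salt, best quality, 4k, solo"
image = pipe(prompt, num_inference_steps=25, guidance_scale=7.0).images[0]
image.save("hellmix_sample.png")
```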
Qwen/Qwen1.5-0.5B-Chat-AWQ
Qwen
2024-04-30T07:20:07Z
386
6
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "chat", "conversational", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
2024-02-03T07:01:23Z
--- license: other license_name: tongyi-qianwen-research license_link: >- https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat-AWQ/blob/main/LICENSE language: - en pipeline_tag: text-generation tags: - chat --- # Qwen1.5-0.5B-Chat-AWQ ## Introduction Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previously released Qwen, the improvements include: * 8 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B and 72B dense models, and an MoE model of 14B with 2.7B activated; * Significant performance improvement in human preference for chat models; * Multilingual support of both base and chat models; * Stable support of 32K context length for models of all sizes; * No need of `trust_remote_code`. For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5). <br> ## Model Details Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes. For the beta version, we temporarily did not include GQA (except for 32B) and the mixture of SWA and full attention. ## Training details We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization. ## Requirements The code of Qwen1.5 has been included in the latest Hugging Face transformers and we advise you to install `transformers>=4.37.0`, or you might encounter the following error: ``` KeyError: 'qwen2' ``` ## Quickstart Here is a code snippet with `apply_chat_template` that shows how to load the tokenizer and model and how to generate content. ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained( "Qwen/Qwen1.5-0.5B-Chat-AWQ", torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-0.5B-Chat-AWQ") prompt = "Give me a short introduction to large language model." messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(device) generated_ids = model.generate( model_inputs.input_ids, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] ``` ## Tips * If you encounter code switching or other bad cases, we advise you to use our provided hyper-parameters in `generation_config.json`. ## Citation If you find our work helpful, feel free to give us a cite.
``` @article{qwen, title={Qwen Technical Report}, author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu}, journal={arXiv preprint arXiv:2309.16609}, year={2023} } ```
LoneStriker/OpenCodeInterpreter-CL-13B-GGUF
LoneStriker
2024-02-24T16:34:24Z
386
0
null
[ "gguf", "code", "text-generation", "en", "arxiv:2402.14658", "region:us" ]
text-generation
2024-02-24T15:55:57Z
--- language: - en pipeline_tag: text-generation tags: - code --- <h1 align="center"> OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement</h1> <p align="center"> <img width="1000px" alt="OpenCodeInterpreter" src="https://opencodeinterpreter.github.io/static/images/figure1.png"> </p> <p align="center"> <a href="https://opencodeinterpreter.github.io/">[🏠Homepage]</a> | <a href="https://github.com/OpenCodeInterpreter/OpenCodeInterpreter/">[🛠️Code]</a> </p> <hr> ## Introduction OpenCodeInterpreter is a family of open-source code generation systems designed to bridge the gap between large language models and advanced proprietary systems like the GPT-4 Code Interpreter. It significantly advances code generation capabilities by integrating execution and iterative refinement functionalities. For further information and related work, refer to our paper: ["OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement"](https://arxiv.org/abs/2402.14658) available on arXiv. ## Model Usage ### Inference ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_path="OpenCodeInterpreter-CL-13B" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.bfloat16, device_map="auto", ) model.eval() prompt = "Write a function to find the shared elements from the given two lists." inputs = tokenizer.apply_chat_template( [{'role': 'user', 'content': prompt }], return_tensors="pt" ).to(model.device) outputs = model.generate( inputs, max_new_tokens=1024, do_sample=False, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, ) print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)) ``` ## Contact If you have any inquiries, please feel free to raise an issue or reach out to us via email at: [email protected], [email protected]. We're here to assist you!
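Note that this repository hosts GGUF quantizations, so the transformers snippet above applies to the original full-precision model rather than the files here. A minimal sketch for running one of the GGUF files with `llama.cpp` instead (the exact .gguf file name depends on which quant you download and is an assumption here):

```shell
# Hedged sketch: run a GGUF quant from this repo with llama.cpp; adjust the file name and -ngl for your GPU.
./main -m OpenCodeInterpreter-CL-13B-Q4_K_M.gguf -ngl 35 -c 4096 \
  -p "Write a function to find the shared elements from the given two lists."
```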
ChrisWilson011016/5DFyC7U2RuLvWCDxcWikpDP2fQysfeuEuQmUaUN1hjDgsFSD_vgg
ChrisWilson011016
2024-03-04T19:08:34Z
386
0
keras
[ "keras", "region:us" ]
null
2024-02-29T13:09:25Z
Entry not found
mradermacher/Midnight-Rose-70B-v2.0.3-i1-GGUF
mradermacher
2024-05-10T16:44:22Z
386
0
transformers
[ "transformers", "gguf", "en", "base_model:sophosympatheia/Midnight-Rose-70B-v2.0.3", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-03-12T15:31:27Z
--- base_model: sophosympatheia/Midnight-Rose-70B-v2.0.3 language: - en library_name: transformers license: llama2 quantized_by: mradermacher --- ## About weighted/imatrix quants of https://huggingface.co/sophosympatheia/Midnight-Rose-70B-v2.0.3 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Midnight-Rose-70B-v2.0.3-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v2.0.3-i1-GGUF/resolve/main/Midnight-Rose-70B-v2.0.3.i1-IQ1_S.gguf) | i1-IQ1_S | 15.0 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v2.0.3-i1-GGUF/resolve/main/Midnight-Rose-70B-v2.0.3.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v2.0.3-i1-GGUF/resolve/main/Midnight-Rose-70B-v2.0.3.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.7 | | | [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v2.0.3-i1-GGUF/resolve/main/Midnight-Rose-70B-v2.0.3.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.8 | | | [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v2.0.3-i1-GGUF/resolve/main/Midnight-Rose-70B-v2.0.3.i1-IQ2_S.gguf) | i1-IQ2_S | 21.8 | | | [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v2.0.3-i1-GGUF/resolve/main/Midnight-Rose-70B-v2.0.3.i1-IQ2_M.gguf) | i1-IQ2_M | 23.7 | | | [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v2.0.3-i1-GGUF/resolve/main/Midnight-Rose-70B-v2.0.3.i1-Q2_K.gguf) | i1-Q2_K | 25.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v2.0.3-i1-GGUF/resolve/main/Midnight-Rose-70B-v2.0.3.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v2.0.3-i1-GGUF/resolve/main/Midnight-Rose-70B-v2.0.3.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.6 | | | [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v2.0.3-i1-GGUF/resolve/main/Midnight-Rose-70B-v2.0.3.i1-IQ3_S.gguf) | i1-IQ3_S | 30.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v2.0.3-i1-GGUF/resolve/main/Midnight-Rose-70B-v2.0.3.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.3 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v2.0.3-i1-GGUF/resolve/main/Midnight-Rose-70B-v2.0.3.i1-IQ3_M.gguf) | i1-IQ3_M | 31.4 | | | [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v2.0.3-i1-GGUF/resolve/main/Midnight-Rose-70B-v2.0.3.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.7 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v2.0.3-i1-GGUF/resolve/main/Midnight-Rose-70B-v2.0.3.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.6 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v2.0.3-i1-GGUF/resolve/main/Midnight-Rose-70B-v2.0.3.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | | | [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v2.0.3-i1-GGUF/resolve/main/Midnight-Rose-70B-v2.0.3.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v2.0.3-i1-GGUF/resolve/main/Midnight-Rose-70B-v2.0.3.i1-Q4_K_S.gguf) | 
i1-Q4_K_S | 39.7 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v2.0.3-i1-GGUF/resolve/main/Midnight-Rose-70B-v2.0.3.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v2.0.3-i1-GGUF/resolve/main/Midnight-Rose-70B-v2.0.3.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.9 | | | [GGUF](https://huggingface.co/mradermacher/Midnight-Rose-70B-v2.0.3-i1-GGUF/resolve/main/Midnight-Rose-70B-v2.0.3.i1-Q5_K_M.gguf) | i1-Q5_K_M | 49.2 | | | [PART 1](https://huggingface.co/mradermacher/Midnight-Rose-70B-v2.0.3-i1-GGUF/resolve/main/Midnight-Rose-70B-v2.0.3.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Midnight-Rose-70B-v2.0.3-i1-GGUF/resolve/main/Midnight-Rose-70B-v2.0.3.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 57.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
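As the usage note above mentions, the two-part Q6_K quant has to be joined into a single file before loading; a minimal sketch, assuming a Linux/macOS shell and the part file names from the table:

```shell
# Concatenate the split Q6_K quant into one GGUF file before use.
cat Midnight-Rose-70B-v2.0.3.i1-Q6_K.gguf.part1of2 \
    Midnight-Rose-70B-v2.0.3.i1-Q6_K.gguf.part2of2 \
    > Midnight-Rose-70B-v2.0.3.i1-Q6_K.gguf
```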
ABX-AI/Cerebral-Lemonade-9B
ABX-AI
2024-04-09T23:57:03Z
386
4
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "not-for-all-audiences", "base_model:ABX-AI/Infinite-Laymons-7B", "base_model:ABX-AI/Cerebral-Infinity-7B", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-29T15:56:32Z
--- base_model: - ABX-AI/Infinite-Laymons-7B - ABX-AI/Cerebral-Infinity-7B library_name: transformers tags: - mergekit - merge - mistral - not-for-all-audiences license: other --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d936ad52eca001fdcd3245/8iIzO4gUUSjQsfiFnMgfI.png) # Cerebral-Lemonade-9B The concept behind this merge was to use the improved reasoning of Cerebral-Infinity-7B, and merge it with the improved originality of Infinite-Laymons-7B. I think the experiment worked, and so far I am happy with the results. This model is intended for fictional storytelling and role-playing, with a focus on more original conversations and less alignment. [GGUF / IQ / Imatrix](https://huggingface.co/ABX-AI/Cerebral-Lemonade-9B-GGUF-IQ-Imatrix) ## Merge Details This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * [ABX-AI/Infinite-Laymons-7B](https://huggingface.co/ABX-AI/Infinite-Laymons-7B) * [ABX-AI/Cerebral-Infinity-7B](https://huggingface.co/ABX-AI/Cerebral-Infinity-7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: ABX-AI/Cerebral-Infinity-7B layer_range: [0, 20] - sources: - model: ABX-AI/Infinite-Laymons-7B layer_range: [12, 32] merge_method: passthrough dtype: float16 ```
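The card does not include an inference example; a minimal sketch, assuming plain transformers loading works for this Mistral-architecture merge (the generation settings are illustrative):

```python
# Hedged sketch: straightforward transformers usage for the merged model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ABX-AI/Cerebral-Lemonade-9B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "Write the opening paragraph of a short mystery story set in a lighthouse."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```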
weezywitasneezy/Oxide-F2-7B-slerp
weezywitasneezy
2024-04-10T14:54:59Z
386
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "weezywitasneezy/OxytocinErosEngineeringFX-7B-slerp", "weezywitasneezy/BenchmarkEngineering-F2-7B-slerp", "base_model:weezywitasneezy/OxytocinErosEngineeringFX-7B-slerp", "base_model:weezywitasneezy/BenchmarkEngineering-F2-7B-slerp", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-09T15:32:32Z
--- tags: - merge - mergekit - lazymergekit - weezywitasneezy/OxytocinErosEngineeringFX-7B-slerp - weezywitasneezy/BenchmarkEngineering-F2-7B-slerp base_model: - weezywitasneezy/OxytocinErosEngineeringFX-7B-slerp - weezywitasneezy/BenchmarkEngineering-F2-7B-slerp license: cc-by-nc-4.0 --- # Oxide-F2-7B-slerp Oxide-F2-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [weezywitasneezy/OxytocinErosEngineeringFX-7B-slerp](https://huggingface.co/weezywitasneezy/OxytocinErosEngineeringFX-7B-slerp) * [weezywitasneezy/BenchmarkEngineering-F2-7B-slerp](https://huggingface.co/weezywitasneezy/BenchmarkEngineering-F2-7B-slerp) # GGUF Quants Available: * [mradermacher/Oxide-F2-7B-slerp-GGUF](https://huggingface.co/mradermacher/Oxide-F2-7B-slerp-GGUF) ## 🧩 Configuration ```yaml slices: - sources: - model: weezywitasneezy/OxytocinErosEngineeringFX-7B-slerp layer_range: [0, 32] - model: weezywitasneezy/BenchmarkEngineering-F2-7B-slerp layer_range: [0, 32] merge_method: slerp base_model: weezywitasneezy/OxytocinErosEngineeringFX-7B-slerp parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "weezywitasneezy/Oxide-F2-7B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
jilijeanlouis/gliner_largev2
jilijeanlouis
2024-04-09T20:29:26Z
386
0
transformers
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2024-04-09T18:19:30Z
Entry not found
CarrotAI/OpenCarrot-Mix-7B
CarrotAI
2024-06-21T06:13:59Z
386
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "ko", "en", "arxiv:2203.05482", "base_model:amazingvince/Not-WizardLM-2-7B", "base_model:CarrotAI/OpenCarrot-Mistral-7B-Instruct-v0.2", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-18T03:16:49Z
--- base_model: - amazingvince/Not-WizardLM-2-7B - CarrotAI/OpenCarrot-Mistral-7B-Instruct-v0.2 library_name: transformers tags: - mergekit - merge license: mit language: - ko - en --- # output This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method. ### Models Merged The following models were included in the merge: * [amazingvince/Not-WizardLM-2-7B](https://huggingface.co/amazingvince/Not-WizardLM-2-7B) * [CarrotAI/OpenCarrot-Mistral-7B-Instruct-v0.2](https://huggingface.co/CarrotAI/OpenCarrot-Mistral-7B-Instruct-v0.2) ### Score ``` openai/gpt-4 : 0.6158 gemini-pro: 0.515 OpenCarrot-Mix-7B (this) : 0.4425 mistralai/Mixtral-8x7B-Instruct-v0.1 : 0.4304 openai/gpt-3.5-turbo : 0.4217 ``` | Metric | Score | |--------------|---------| | AVG_llm_kr_eval | 0.4425 | | EL | 0.0522 | | FA | 0.0865 | | NLI | 0.6700 | | QA | 0.5100 | | RC | 0.8937 | | klue_ner_set_f1| 0.0944 | | klue_re_exact_match | 0.0100 | | kmmlu_preview_exact_match | 0.4000 | | kobest_copa_exact_match | 0.8200 | | kobest_hs_exact_match | 0.5500 | | kobest_sn_exact_match | 0.9800 | | kobest_wic_exact_match | 0.6200 | | korea_cg_bleu | 0.0865 | | kornli_exact_match | 0.6400 | | korsts_pearson | 0.8547 | | korsts_spearman| 0.8464 | LogicKor | Category | Single-turn avg. | Multi-turn avg. | |----------|------------------|-------------------| | Coding | 7.71 | 7.71 | | Math | 5.57 | 3.86 | | Understanding | 6.86 | 8.14 | | Reasoning | 8.14 | 6.43 | | Writing | 8.71 | 6.86 | | Grammar | 5.29 | 2.29 | | Category | Single-turn avg. | Multi-turn avg. | |------------|------------------|-------------------| | Overall | 7.05 | 5.88 | ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: amazingvince/Not-WizardLM-2-7B parameters: weight: 1.0 - model: CarrotAI/OpenCarrot-Mistral-7B-Instruct-v0.2 parameters: weight: 0.5 merge_method: linear dtype: float16 ```
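For intuition (an editorial sketch, not part of the original card), the linear merge configured above is a parameter-wise weighted combination of the two models; with weights 1.0 and 0.5, and assuming mergekit's default weight normalization, each tensor is combined roughly like this:

```python
# Hedged sketch of a linear (weighted-average) merge on a single tensor pair.
import torch

def linear_merge(a: torch.Tensor, b: torch.Tensor, w_a: float = 1.0, w_b: float = 0.5) -> torch.Tensor:
    # Assumes the weights are normalized, so the result is a weighted average.
    return (w_a * a + w_b * b) / (w_a + w_b)

# Dummy tensors standing in for one weight matrix from each source model.
merged = linear_merge(torch.randn(4, 4), torch.randn(4, 4))
```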
mradermacher/Mermaid-Solar-GGUF
mradermacher
2024-05-06T04:39:18Z
386
0
transformers
[ "transformers", "gguf", "en", "base_model:TroyDoesAI/Mermaid-Solar", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-19T11:59:42Z
--- base_model: TroyDoesAI/Mermaid-Solar language: - en library_name: transformers license: cc-by-4.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/TroyDoesAI/Mermaid-Solar <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mermaid-Solar-GGUF/resolve/main/Mermaid-Solar.Q2_K.gguf) | Q2_K | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Solar-GGUF/resolve/main/Mermaid-Solar.IQ3_XS.gguf) | IQ3_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Solar-GGUF/resolve/main/Mermaid-Solar.Q3_K_S.gguf) | Q3_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Solar-GGUF/resolve/main/Mermaid-Solar.IQ3_S.gguf) | IQ3_S | 4.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Solar-GGUF/resolve/main/Mermaid-Solar.IQ3_M.gguf) | IQ3_M | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Solar-GGUF/resolve/main/Mermaid-Solar.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Solar-GGUF/resolve/main/Mermaid-Solar.Q3_K_L.gguf) | Q3_K_L | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Solar-GGUF/resolve/main/Mermaid-Solar.IQ4_XS.gguf) | IQ4_XS | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Solar-GGUF/resolve/main/Mermaid-Solar.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Solar-GGUF/resolve/main/Mermaid-Solar.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Solar-GGUF/resolve/main/Mermaid-Solar.Q5_K_S.gguf) | Q5_K_S | 7.5 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Solar-GGUF/resolve/main/Mermaid-Solar.Q5_K_M.gguf) | Q5_K_M | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Solar-GGUF/resolve/main/Mermaid-Solar.Q6_K.gguf) | Q6_K | 8.9 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Mermaid-Solar-GGUF/resolve/main/Mermaid-Solar.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
allknowingroger/Llama3merge8-15B-MoE
allknowingroger
2024-04-22T09:53:46Z
386
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "sethuiyer/Medichat-Llama3-8B", "psyche/llama3-8b-instruct-mrc-v0.3", "conversational", "base_model:sethuiyer/Medichat-Llama3-8B", "base_model:psyche/llama3-8b-instruct-mrc-v0.3", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-22T09:46:07Z
--- license: apache-2.0 tags: - moe - frankenmoe - merge - mergekit - lazymergekit - sethuiyer/Medichat-Llama3-8B - psyche/llama3-8b-instruct-mrc-v0.3 base_model: - sethuiyer/Medichat-Llama3-8B - psyche/llama3-8b-instruct-mrc-v0.3 --- # Llama3merge8-15B-MoE Llama3merge8-15B-MoE is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [sethuiyer/Medichat-Llama3-8B](https://huggingface.co/sethuiyer/Medichat-Llama3-8B) * [psyche/llama3-8b-instruct-mrc-v0.3](https://huggingface.co/psyche/llama3-8b-instruct-mrc-v0.3) ## 🧩 Configuration ```yaml base_model: sethuiyer/Medichat-Llama3-8B experts: - source_model: sethuiyer/Medichat-Llama3-8B positive_prompts: ["medical"] - source_model: psyche/llama3-8b-instruct-mrc-v0.3 positive_prompts: ["what"] ``` ## 💻 Usage ```python !pip install -qU transformers bitsandbytes accelerate from transformers import AutoTokenizer import transformers import torch model = "allknowingroger/Llama3merge8-15B-MoE" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True}, ) messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}] prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
prs-eth/marigold-normals-lcm-v0-1
prs-eth
2024-05-09T13:56:36Z
386
4
diffusers
[ "diffusers", "safetensors", "monocular normals estimation", "single image normals estimation", "normals", "in-the-wild", "zero-shot", "LCM", "normals-estimation", "en", "arxiv:2312.02145", "license:apache-2.0", "diffusers:MarigoldPipeline", "region:us" ]
null
2024-04-26T13:27:15Z
--- license: apache-2.0 language: - en pipeline_tag: normals-estimation tags: - monocular normals estimation - single image normals estimation - normals - in-the-wild - zero-shot - LCM --- # Marigold Normals (LCM) Model Card This model belongs to the family of diffusion-based Marigold models for solving various computer vision tasks. The Marigold Normals model focuses on the surface normals task. It takes an input image and computes surface normals in each pixel. The LCM stands for Latent Consistency Models, which is a technique for making the diffusion model fast. The Marigold Normals model is trained from Stable Diffusion with synthetic data, and the LCM model is further fine-tuned from it. Thanks to the rich visual knowledge stored in Stable Diffusion, Marigold models possess deep scene understanding and excel at solving computer vision tasks. Read more about Marigold in our paper titled "Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation". [![Website](doc/badges/badge-website.svg)](https://marigoldmonodepth.github.io) [![GitHub](https://img.shields.io/github/stars/prs-eth/Marigold?style=default&label=GitHub%20★&logo=github)](https://github.com/prs-eth/Marigold) [![Paper](doc/badges/badge-pdf.svg)](https://arxiv.org/abs/2312.02145) [![Hugging Face Space](https://img.shields.io/badge/🤗%20Hugging%20Face-Space-yellow)](https://huggingface.co/spaces/toshas/marigold) Developed by: [Bingxin Ke](http://www.kebingxin.com/), [Anton Obukhov](https://www.obukhov.ai/), [Shengyu Huang](https://shengyuh.github.io/), [Nando Metzger](https://nandometzger.github.io/), [Rodrigo Caye Daudt](https://rcdaudt.github.io/), [Konrad Schindler](https://scholar.google.com/citations?user=FZuNgqIAAAAJ&hl=en) ![teaser](doc/teaser_collage_transparant.png) ## 🎓 Citation ```bibtex @InProceedings{ke2023repurposing, title={Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation}, author={Bingxin Ke and Anton Obukhov and Shengyu Huang and Nando Metzger and Rodrigo Caye Daudt and Konrad Schindler}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, year={2024} } ``` ## 🎫 License This work is licensed under the Apache License, Version 2.0 (as defined in the [LICENSE](LICENSE.txt)). By downloading and using the code and model you agree to the terms in the [LICENSE](LICENSE.txt). [![License](https://img.shields.io/badge/License-Apache--2.0-929292)](https://www.apache.org/licenses/LICENSE-2.0)
chujiezheng/Snorkel-Mistral-PairRM-DPO-ExPO
chujiezheng
2024-05-27T18:16:33Z
386
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "arxiv:2404.16792", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-04T07:09:35Z
--- license: apache-2.0 language: - en --- # Snorkel-Mistral-PairRM-DPO-ExPO The extrapolated (ExPO) model based on [`snorkelai/Snorkel-Mistral-PairRM-DPO`](https://huggingface.co/snorkelai/Snorkel-Mistral-PairRM-DPO) and [`mistralai/Mistral-7B-Instruct-v0.2`](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2), as in the "[Weak-to-Strong Extrapolation Expedites Alignment](https://arxiv.org/abs/2404.16792)" paper. Specifically, we obtain this model by extrapolating **(alpha = 0.3)** from the weights of the SFT and DPO/RLHF checkpoints, achieving superior alignment with human preference. ## Evaluation Results Evaluation results on the **AlpacaEval 2.0** benchmark (you can find the evaluation outputs on the [official GitHub repo](https://github.com/chujiezheng/LLM-Extrapolation/tree/main/results_alpaca)): | | Win Rate (Ori) | LC Win Rate (Ori) | Win Rate (+ ExPO) | LC Win Rate (+ ExPO) | | ------------------------------------ | -------------- | ----------------- | ----------------- | -------------------- | | `HuggingFaceH4/zephyr-7b-alpha` | 6.7% | 10.0% | **10.6%** | **13.6%** | | `HuggingFaceH4/zephyr-7b-beta` | 10.2% | 13.2% | **11.1%** | **14.0%** | | `berkeley-nest/Starling-LM-7B-alpha` | 15.0% | 18.3% | **18.2%** | **19.5%** | | `Nexusflow/Starling-LM-7B-beta` | 26.6% | 25.8% | **29.6%** | **26.4%** | | `snorkelai/Snorkel-Mistral-PairRM` | 24.7% | 24.0% | **28.8%** | **26.4%** | | `RLHFlow/LLaMA3-iterative-DPO-final` | 29.2% | 36.0% | **32.7%** | **37.8%** | | `internlm/internlm2-chat-1.8b` | 3.8% | 4.0% | **5.2%** | **4.3%** | | `internlm/internlm2-chat-7b` | 20.5% | 18.3% | **28.1%** | **22.7%** | | `internlm/internlm2-chat-20b` | 36.1% | 24.9% | **46.2%** | **27.2%** | | `allenai/tulu-2-dpo-7b` | 8.5% | 10.2% | **11.5%** | **11.7%** | | `allenai/tulu-2-dpo-13b` | 11.2% | 15.5% | **15.6%** | **17.6%** | | `allenai/tulu-2-dpo-70b` | 15.4% | 21.2% | **23.0%** | **25.7%** | Evaluation results on the **MT-Bench** benchmark (you can find the evaluation outputs on the [official GitHub repo](https://github.com/chujiezheng/LLM-Extrapolation/tree/main/results_mtbench)): | | Original | + ExPO | | ------------------------------------ | -------- | -------- | | `HuggingFaceH4/zephyr-7b-alpha` | 6.85 | **6.87** | | `HuggingFaceH4/zephyr-7b-beta` | 7.02 | **7.06** | | `berkeley-nest/Starling-LM-7B-alpha` | 7.82 | **7.91** | | `Nexusflow/Starling-LM-7B-beta` | 8.10 | **8.18** | | `snorkelai/Snorkel-Mistral-PairRM` | 7.63 | **7.69** | | `RLHFlow/LLaMA3-iterative-DPO-final` | 8.08 | **8.45** | | `internlm/internlm2-chat-1.8b` | 5.17 | **5.26** | | `internlm/internlm2-chat-7b` | 7.72 | **7.80** | | `internlm/internlm2-chat-20b` | 8.13 | **8.26** | | `allenai/tulu-2-dpo-7b` | 6.35 | **6.38** | | `allenai/tulu-2-dpo-13b` | 7.00 | **7.26** | | `allenai/tulu-2-dpo-70b` | 7.79 | **8.03** |
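For intuition (an editorial sketch, not the authors' code), extrapolating with alpha = 0.3 means moving each parameter further along the direction from the SFT weights to the DPO weights, i.e. theta_expo = theta_dpo + alpha * (theta_dpo - theta_sft); a rough state-dict-level sketch using the two checkpoints named above:

```python
# Hedged sketch: parameter-wise weight extrapolation between the SFT and DPO checkpoints.
# Loading both 7B models in bfloat16 needs roughly 30 GB of CPU RAM.
import torch
from transformers import AutoModelForCausalLM

alpha = 0.3
sft = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2", torch_dtype=torch.bfloat16)
dpo = AutoModelForCausalLM.from_pretrained("snorkelai/Snorkel-Mistral-PairRM-DPO", torch_dtype=torch.bfloat16)

sft_sd = sft.state_dict()
expo_sd = {name: p + alpha * (p - sft_sd[name]) for name, p in dpo.state_dict().items()}

dpo.load_state_dict(expo_sd)
dpo.save_pretrained("Snorkel-Mistral-PairRM-DPO-ExPO-sketch")  # hypothetical output path
```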
dragonSwing/LLaVA-Llama-3-8B-Instruct-GGUF
dragonSwing
2024-05-05T09:35:09Z
386
2
null
[ "gguf", "license:apache-2.0", "region:us" ]
null
2024-05-05T05:36:39Z
--- license: apache-2.0 ---
RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf
RichardErkhov
2024-05-11T18:27:35Z
386
0
null
[ "gguf", "arxiv:2312.02120", "arxiv:2305.06161", "region:us" ]
null
2024-05-11T16:33:36Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Magicoder-S-DS-6.7B - GGUF - Model creator: https://huggingface.co/ise-uiuc/ - Original model: https://huggingface.co/ise-uiuc/Magicoder-S-DS-6.7B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Magicoder-S-DS-6.7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.Q2_K.gguf) | Q2_K | 2.36GB | | [Magicoder-S-DS-6.7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.IQ3_XS.gguf) | IQ3_XS | 2.61GB | | [Magicoder-S-DS-6.7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.IQ3_S.gguf) | IQ3_S | 2.75GB | | [Magicoder-S-DS-6.7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.Q3_K_S.gguf) | Q3_K_S | 2.75GB | | [Magicoder-S-DS-6.7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.IQ3_M.gguf) | IQ3_M | 2.9GB | | [Magicoder-S-DS-6.7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.Q3_K.gguf) | Q3_K | 3.07GB | | [Magicoder-S-DS-6.7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.Q3_K_M.gguf) | Q3_K_M | 3.07GB | | [Magicoder-S-DS-6.7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.Q3_K_L.gguf) | Q3_K_L | 3.35GB | | [Magicoder-S-DS-6.7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.IQ4_XS.gguf) | IQ4_XS | 3.4GB | | [Magicoder-S-DS-6.7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.Q4_0.gguf) | Q4_0 | 3.56GB | | [Magicoder-S-DS-6.7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.IQ4_NL.gguf) | IQ4_NL | 3.59GB | | [Magicoder-S-DS-6.7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.Q4_K_S.gguf) | Q4_K_S | 3.59GB | | [Magicoder-S-DS-6.7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.Q4_K.gguf) | Q4_K | 3.8GB | | [Magicoder-S-DS-6.7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.Q4_K_M.gguf) | Q4_K_M | 3.8GB | | [Magicoder-S-DS-6.7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.Q4_1.gguf) | Q4_1 | 3.95GB | | [Magicoder-S-DS-6.7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.Q5_0.gguf) | Q5_0 | 4.33GB | | [Magicoder-S-DS-6.7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.Q5_K_S.gguf) | Q5_K_S | 4.33GB | | [Magicoder-S-DS-6.7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.Q5_K.gguf) | Q5_K | 4.46GB | | 
[Magicoder-S-DS-6.7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.Q5_K_M.gguf) | Q5_K_M | 4.46GB | | [Magicoder-S-DS-6.7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.Q5_1.gguf) | Q5_1 | 4.72GB | | [Magicoder-S-DS-6.7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/ise-uiuc_-_Magicoder-S-DS-6.7B-gguf/blob/main/Magicoder-S-DS-6.7B.Q6_K.gguf) | Q6_K | 5.15GB | Original model description: --- license: other library_name: transformers datasets: - ise-uiuc/Magicoder-OSS-Instruct-75K - ise-uiuc/Magicoder-Evol-Instruct-110K license_name: deepseek pipeline_tag: text-generation --- # 🎩 Magicoder: Source Code Is All You Need > Refer to our GitHub repo [ise-uiuc/magicoder](https://github.com/ise-uiuc/magicoder/) for an up-to-date introduction to the Magicoder family! * 🎩**Magicoder** is a model family empowered by 🪄**OSS-Instruct**, a novel approach to enlightening LLMs with open-source code snippets for generating *low-bias* and *high-quality* instruction data for code. * 🪄**OSS-Instruct** mitigates the *inherent bias* of the LLM-synthesized instruction data by empowering them with *a wealth of open-source references* to produce more diverse, realistic, and controllable data. ![Overview of OSS-Instruct](assets/overview.svg) ![Overview of Result](assets/result.png) ## Model Details ### Model Description * **Developed by:** [Yuxiang Wei](https://yuxiang.cs.illinois.edu), [Zhe Wang](https://github.com/zhewang2001), [Jiawei Liu](https://jiawei-site.github.io), [Yifeng Ding](https://yifeng-ding.com), [Lingming Zhang](https://lingming.cs.illinois.edu) * **License:** [DeepSeek](https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL) * **Finetuned from model:** [deepseek-coder-6.7b-base](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base) ### Model Sources * **Repository:** <https://github.com/ise-uiuc/magicoder> * **Paper:** <https://arxiv.org/abs/2312.02120> * **Demo (powered by [Gradio](https://www.gradio.app)):** <https://github.com/ise-uiuc/magicoder/tree/main/demo> ### Training Data * [Magicoder-OSS-Instruct-75K](https://huggingface.co/datasets/ise-uiuc/Magicoder_oss_instruct_75k): generated through **OSS-Instruct** using `gpt-3.5-turbo-1106` and used to train both Magicoder and Magicoder-S series. * [Magicoder-Evol-Instruct-110K](https://huggingface.co/datasets/ise-uiuc/Magicoder_evol_instruct_110k): decontaminated and redistributed from [theblackcat102/evol-codealpaca-v1](https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1), used to further finetune Magicoder series and obtain Magicoder-S models. ## Uses ### Direct Use Magicoders are designed and best suited for **coding tasks**. ### Out-of-Scope Use Magicoders may not work well in non-coding tasks. ## Bias, Risks, and Limitations Magicoders may sometimes make errors, producing misleading contents, or struggle to manage tasks that are not related to coding. ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. ## How to Get Started with the Model Use the code below to get started with the model. Make sure you installed the [transformers](https://huggingface.co/docs/transformers/index) library. 
```python from transformers import pipeline import torch MAGICODER_PROMPT = """You are an exceptionally intelligent coding assistant that consistently delivers accurate and reliable responses to user instructions. @@ Instruction {instruction} @@ Response """ instruction = <Your code instruction here> prompt = MAGICODER_PROMPT.format(instruction=instruction) generator = pipeline( model="ise-uiuc/Magicoder-S-DS-6.7B", task="text-generation", torch_dtype=torch.bfloat16, device_map="auto", ) result = generator(prompt, max_length=1024, num_return_sequences=1, temperature=0.0) print(result[0]["generated_text"]) ``` ## Technical Details Refer to our GitHub repo: [ise-uiuc/magicoder](https://github.com/ise-uiuc/magicoder/). ## Citation ```bibtex @misc{magicoder, title={Magicoder: Source Code Is All You Need}, author={Yuxiang Wei and Zhe Wang and Jiawei Liu and Yifeng Ding and Lingming Zhang}, year={2023}, eprint={2312.02120}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## Acknowledgements * [WizardCoder](https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder): Evol-Instruct * [DeepSeek-Coder](https://github.com/deepseek-ai/DeepSeek-Coder): Base model for Magicoder-DS * [CodeLlama](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/): Base model for Magicoder-CL * [StarCoder](https://arxiv.org/abs/2305.06161): Data decontamination ## Important Note Magicoder models are trained on the synthetic data generated by OpenAI models. Please pay attention to OpenAI's [terms of use](https://openai.com/policies/terms-of-use) when using the models and the datasets. Magicoders will not compete with OpenAI's commercial products.
nbeerbower/KawaiiMahou-mistral-7B
nbeerbower
2024-05-13T02:52:03Z
386
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "dataset:yongtae-jp/orca_dpo_pairs_ja", "base_model:flammenai/Mahou-1.1-mistral-7B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-13T02:02:10Z
--- library_name: transformers license: apache-2.0 base_model: - flammenai/Mahou-1.1-mistral-7B datasets: - yongtae-jp/orca_dpo_pairs_ja --- ![image/png](https://huggingface.co/flammenai/Mahou-1.0-mistral-7B/resolve/main/mahou1.png) # KawaiiMahou-mistral-7B flammenai/Mahou-1.1-mistral-7B trained on a Japanese DPO set. ### Chat Format This model has been trained to use ChatML format. ``` <|im_start|>system {{system}}<|im_end|> <|im_start|>{{char}} {{message}}<|im_end|> <|im_start|>{{user}} {{message}}<|im_end|> ``` ### ST Settings 1. Use ChatML for the Context Template. 2. Turn on Instruct Mode for ChatML. 3. Use the following stopping strings: `["<", "|", "<|", "\n"]` ### Method Finetuned using an A100 on Google Colab. [Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac) - [Maxime Labonne](https://huggingface.co/mlabonne) ### Configuration LoRA, model, and training settings: ```python # LoRA configuration peft_config = LoraConfig( r=16, lora_alpha=16, lora_dropout=0.05, bias="none", task_type="CAUSAL_LM", target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj'] ) # Model to fine-tune model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.bfloat16, load_in_4bit=True ) model.config.use_cache = False # Reference model ref_model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.bfloat16, load_in_4bit=True ) # Training arguments training_args = TrainingArguments( per_device_train_batch_size=4, gradient_accumulation_steps=4, gradient_checkpointing=True, learning_rate=5e-5, lr_scheduler_type="cosine", max_steps=1000, save_strategy="no", logging_steps=1, output_dir=new_model, optim="paged_adamw_32bit", warmup_steps=100, bf16=True, report_to="wandb", ) # Create DPO trainer dpo_trainer = DPOTrainer( model, ref_model, args=training_args, train_dataset=dataset, tokenizer=tokenizer, peft_config=peft_config, beta=0.1, force_use_ref_model=True ) ```
ukung/Gemma2B-Indo-FineTune-GGUF
ukung
2024-05-20T14:56:28Z
386
0
null
[ "gguf", "region:us" ]
null
2024-05-20T14:20:02Z
Entry not found
dreamgen/llama3-8b-assistant-test-run1-sft-e2
dreamgen
2024-05-24T18:35:16Z
386
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:cc", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-23T21:17:37Z
--- license: cc --- - **What is this?** Nothing interesting, just an experiment. - **License:** CC-BY-NC ``` | Task |Version| Metric |Value | |Stderr| |------------------------------------------------------|------:|--------------|-----:|---|-----:| |all | |acc |0.6606|± |0.0323| | | |acc_norm |0.6669|± |0.0093| | | |truthfulqa_mc1|0.3403|± |0.0166| | | |truthfulqa_mc2|0.5085|± |0.0153| | | |qem |0.7020|± |0.0126| |leaderboard:arc:challenge:25 | 0|acc |0.5683|± |0.0145| | | |acc_norm |0.5469|± |0.0145| |leaderboard:gsm8k:5 | 0|qem |0.7020|± |0.0126| |leaderboard:hellaswag:10 | 0|acc |0.5990|± |0.0049| | | |acc_norm |0.7868|± |0.0041| |leaderboard:mmlu:_average:5 | |acc |0.6621|± |0.0334| |leaderboard:mmlu:abstract_algebra:5 | 0|acc |0.3100|± |0.0465| |leaderboard:mmlu:anatomy:5 | 0|acc |0.6667|± |0.0407| |leaderboard:mmlu:astronomy:5 | 0|acc |0.7368|± |0.0358| |leaderboard:mmlu:business_ethics:5 | 0|acc |0.7000|± |0.0461| |leaderboard:mmlu:clinical_knowledge:5 | 0|acc |0.7434|± |0.0269| |leaderboard:mmlu:college_biology:5 | 0|acc |0.7847|± |0.0344| |leaderboard:mmlu:college_chemistry:5 | 0|acc |0.4500|± |0.0500| |leaderboard:mmlu:college_computer_science:5 | 0|acc |0.4900|± |0.0502| |leaderboard:mmlu:college_mathematics:5 | 0|acc |0.4000|± |0.0492| |leaderboard:mmlu:college_medicine:5 | 0|acc |0.6301|± |0.0368| |leaderboard:mmlu:college_physics:5 | 0|acc |0.4412|± |0.0494| |leaderboard:mmlu:computer_security:5 | 0|acc |0.7900|± |0.0409| |leaderboard:mmlu:conceptual_physics:5 | 0|acc |0.6043|± |0.0320| |leaderboard:mmlu:econometrics:5 | 0|acc |0.5175|± |0.0470| |leaderboard:mmlu:electrical_engineering:5 | 0|acc |0.6207|± |0.0404| |leaderboard:mmlu:elementary_mathematics:5 | 0|acc |0.4021|± |0.0253| |leaderboard:mmlu:formal_logic:5 | 0|acc |0.5238|± |0.0447| |leaderboard:mmlu:global_facts:5 | 0|acc |0.4400|± |0.0499| |leaderboard:mmlu:high_school_biology:5 | 0|acc |0.8000|± |0.0228| |leaderboard:mmlu:high_school_chemistry:5 | 0|acc |0.5419|± |0.0351| |leaderboard:mmlu:high_school_computer_science:5 | 0|acc |0.7200|± |0.0451| |leaderboard:mmlu:high_school_european_history:5 | 0|acc |0.7636|± |0.0332| |leaderboard:mmlu:high_school_geography:5 | 0|acc |0.7828|± |0.0294| |leaderboard:mmlu:high_school_government_and_politics:5| 0|acc |0.8756|± |0.0238| |leaderboard:mmlu:high_school_macroeconomics:5 | 0|acc |0.6744|± |0.0238| |leaderboard:mmlu:high_school_mathematics:5 | 0|acc |0.3630|± |0.0293| |leaderboard:mmlu:high_school_microeconomics:5 | 0|acc |0.7563|± |0.0279| |leaderboard:mmlu:high_school_physics:5 | 0|acc |0.3775|± |0.0396| |leaderboard:mmlu:high_school_psychology:5 | 0|acc |0.8569|± |0.0150| |leaderboard:mmlu:high_school_statistics:5 | 0|acc |0.4769|± |0.0341| |leaderboard:mmlu:high_school_us_history:5 | 0|acc |0.8039|± |0.0279| |leaderboard:mmlu:high_school_world_history:5 | 0|acc |0.8439|± |0.0236| |leaderboard:mmlu:human_aging:5 | 0|acc |0.7399|± |0.0294| |leaderboard:mmlu:human_sexuality:5 | 0|acc |0.7710|± |0.0369| |leaderboard:mmlu:international_law:5 | 0|acc |0.7521|± |0.0394| |leaderboard:mmlu:jurisprudence:5 | 0|acc |0.7593|± |0.0413| |leaderboard:mmlu:logical_fallacies:5 | 0|acc |0.7423|± |0.0344| |leaderboard:mmlu:machine_learning:5 | 0|acc |0.5357|± |0.0473| |leaderboard:mmlu:management:5 | 0|acc |0.8447|± |0.0359| |leaderboard:mmlu:marketing:5 | 0|acc |0.8974|± |0.0199| |leaderboard:mmlu:medical_genetics:5 | 0|acc |0.7400|± |0.0441| |leaderboard:mmlu:miscellaneous:5 | 0|acc |0.8174|± |0.0138| |leaderboard:mmlu:moral_disputes:5 | 0|acc |0.7197|± |0.0242| 
|leaderboard:mmlu:moral_scenarios:5 | 0|acc |0.4168|± |0.0165| |leaderboard:mmlu:nutrition:5 | 0|acc |0.7516|± |0.0247| |leaderboard:mmlu:philosophy:5 | 0|acc |0.7170|± |0.0256| |leaderboard:mmlu:prehistory:5 | 0|acc |0.7222|± |0.0249| |leaderboard:mmlu:professional_accounting:5 | 0|acc |0.5213|± |0.0298| |leaderboard:mmlu:professional_law:5 | 0|acc |0.4759|± |0.0128| |leaderboard:mmlu:professional_medicine:5 | 0|acc |0.7537|± |0.0262| |leaderboard:mmlu:professional_psychology:5 | 0|acc |0.6993|± |0.0186| |leaderboard:mmlu:public_relations:5 | 0|acc |0.7182|± |0.0431| |leaderboard:mmlu:security_studies:5 | 0|acc |0.7673|± |0.0270| |leaderboard:mmlu:sociology:5 | 0|acc |0.8259|± |0.0268| |leaderboard:mmlu:us_foreign_policy:5 | 0|acc |0.8300|± |0.0378| |leaderboard:mmlu:virology:5 | 0|acc |0.5181|± |0.0389| |leaderboard:mmlu:world_religions:5 | 0|acc |0.8129|± |0.0299| |leaderboard:truthfulqa:mc:0 | 0|truthfulqa_mc1|0.3403|± |0.0166| | | |truthfulqa_mc2|0.5085|± |0.0153| |leaderboard:winogrande:5 | 0|acc |0.7309|± |0.0125| ``` Baseline: ``` | Task |Version| Metric |Value | |Stderr| |------------------------------------------------------|------:|--------------|-----:|---|-----:| |all | |acc |0.6635|± |0.0322| | | |acc_norm |0.6569|± |0.0094| | | |truthfulqa_mc1|0.3745|± |0.0169| | | |truthfulqa_mc2|0.5338|± |0.0160| | | |qem |0.6808|± |0.0128| |leaderboard:arc:challenge:25 | 0|acc |0.5742|± |0.0144| | | |acc_norm |0.5828|± |0.0144| |leaderboard:gsm8k:5 | 0|qem |0.6808|± |0.0128| |leaderboard:hellaswag:10 | 0|acc |0.5707|± |0.0049| | | |acc_norm |0.7310|± |0.0044| |leaderboard:mmlu:_average:5 | |acc |0.6662|± |0.0333| |leaderboard:mmlu:abstract_algebra:5 | 0|acc |0.3300|± |0.0473| |leaderboard:mmlu:anatomy:5 | 0|acc |0.6815|± |0.0402| |leaderboard:mmlu:astronomy:5 | 0|acc |0.7500|± |0.0352| |leaderboard:mmlu:business_ethics:5 | 0|acc |0.7000|± |0.0461| |leaderboard:mmlu:clinical_knowledge:5 | 0|acc |0.7472|± |0.0267| |leaderboard:mmlu:college_biology:5 | 0|acc |0.7917|± |0.0340| |leaderboard:mmlu:college_chemistry:5 | 0|acc |0.4500|± |0.0500| |leaderboard:mmlu:college_computer_science:5 | 0|acc |0.5200|± |0.0502| |leaderboard:mmlu:college_mathematics:5 | 0|acc |0.3900|± |0.0490| |leaderboard:mmlu:college_medicine:5 | 0|acc |0.6590|± |0.0361| |leaderboard:mmlu:college_physics:5 | 0|acc |0.4314|± |0.0493| |leaderboard:mmlu:computer_security:5 | 0|acc |0.7900|± |0.0409| |leaderboard:mmlu:conceptual_physics:5 | 0|acc |0.5872|± |0.0322| |leaderboard:mmlu:econometrics:5 | 0|acc |0.5439|± |0.0469| |leaderboard:mmlu:electrical_engineering:5 | 0|acc |0.6138|± |0.0406| |leaderboard:mmlu:elementary_mathematics:5 | 0|acc |0.4683|± |0.0257| |leaderboard:mmlu:formal_logic:5 | 0|acc |0.5317|± |0.0446| |leaderboard:mmlu:global_facts:5 | 0|acc |0.4600|± |0.0501| |leaderboard:mmlu:high_school_biology:5 | 0|acc |0.8065|± |0.0225| |leaderboard:mmlu:high_school_chemistry:5 | 0|acc |0.5419|± |0.0351| |leaderboard:mmlu:high_school_computer_science:5 | 0|acc |0.6800|± |0.0469| |leaderboard:mmlu:high_school_european_history:5 | 0|acc |0.7394|± |0.0343| |leaderboard:mmlu:high_school_geography:5 | 0|acc |0.8131|± |0.0278| |leaderboard:mmlu:high_school_government_and_politics:5| 0|acc |0.8964|± |0.0220| |leaderboard:mmlu:high_school_macroeconomics:5 | 0|acc |0.6769|± |0.0237| |leaderboard:mmlu:high_school_mathematics:5 | 0|acc |0.3259|± |0.0286| |leaderboard:mmlu:high_school_microeconomics:5 | 0|acc |0.7563|± |0.0279| |leaderboard:mmlu:high_school_physics:5 | 0|acc |0.4106|± |0.0402| 
|leaderboard:mmlu:high_school_psychology:5 | 0|acc |0.8477|± |0.0154| |leaderboard:mmlu:high_school_statistics:5 | 0|acc |0.4769|± |0.0341| |leaderboard:mmlu:high_school_us_history:5 | 0|acc |0.7892|± |0.0286| |leaderboard:mmlu:high_school_world_history:5 | 0|acc |0.8397|± |0.0239| |leaderboard:mmlu:human_aging:5 | 0|acc |0.7265|± |0.0299| |leaderboard:mmlu:human_sexuality:5 | 0|acc |0.7939|± |0.0355| |leaderboard:mmlu:international_law:5 | 0|acc |0.7686|± |0.0385| |leaderboard:mmlu:jurisprudence:5 | 0|acc |0.7593|± |0.0413| |leaderboard:mmlu:logical_fallacies:5 | 0|acc |0.7607|± |0.0335| |leaderboard:mmlu:machine_learning:5 | 0|acc |0.5268|± |0.0474| |leaderboard:mmlu:management:5 | 0|acc |0.8155|± |0.0384| |leaderboard:mmlu:marketing:5 | 0|acc |0.9060|± |0.0191| |leaderboard:mmlu:medical_genetics:5 | 0|acc |0.7900|± |0.0409| |leaderboard:mmlu:miscellaneous:5 | 0|acc |0.8238|± |0.0136| |leaderboard:mmlu:moral_disputes:5 | 0|acc |0.7399|± |0.0236| |leaderboard:mmlu:moral_scenarios:5 | 0|acc |0.4358|± |0.0166| |leaderboard:mmlu:nutrition:5 | 0|acc |0.7549|± |0.0246| |leaderboard:mmlu:philosophy:5 | 0|acc |0.7331|± |0.0251| |leaderboard:mmlu:prehistory:5 | 0|acc |0.7469|± |0.0242| |leaderboard:mmlu:professional_accounting:5 | 0|acc |0.5177|± |0.0298| |leaderboard:mmlu:professional_law:5 | 0|acc |0.4648|± |0.0127| |leaderboard:mmlu:professional_medicine:5 | 0|acc |0.7279|± |0.0270| |leaderboard:mmlu:professional_psychology:5 | 0|acc |0.6928|± |0.0187| |leaderboard:mmlu:public_relations:5 | 0|acc |0.6636|± |0.0453| |leaderboard:mmlu:security_studies:5 | 0|acc |0.7306|± |0.0284| |leaderboard:mmlu:sociology:5 | 0|acc |0.8557|± |0.0248| |leaderboard:mmlu:us_foreign_policy:5 | 0|acc |0.8600|± |0.0349| |leaderboard:mmlu:virology:5 | 0|acc |0.5361|± |0.0388| |leaderboard:mmlu:world_religions:5 | 0|acc |0.7953|± |0.0309| |leaderboard:truthfulqa:mc:0 | 0|truthfulqa_mc1|0.3745|± |0.0169| | | |truthfulqa_mc2|0.5338|± |0.0160| |leaderboard:winogrande:5 | 0|acc |0.6930|± |0.0130| ```
wwe180/Mistral-7B-v0.3-Chinese-lingyang-v1
wwe180
2024-05-27T01:44:43Z
386
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "shenzhi-Chinese", "7B", "base_model:shenzhi-wang/Mistral-7B-v0.3-Chinese-Chat", "base_model:MaziyarPanahi/Calme-7B-Instruct-v0.3", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-27T00:01:02Z
--- tags: - merge - mergekit - lazymergekit - shenzhi-Chinese - 7B base_model: - shenzhi-wang/Mistral-7B-v0.3-Chinese-Chat - MaziyarPanahi/Calme-7B-Instruct-v0.3 license: - other --- # Mistral-7B-v0.3-Chinese-lingyang-v1 # This model is experimental, so results cannot be guaranteed. Mistral-7B-v0.3-Chinese-lingyang-v1 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [shenzhi-wang/Mistral-7B-v0.3-Chinese-Chat](https://huggingface.co/shenzhi-wang/Mistral-7B-v0.3-Chinese-Chat) * [MaziyarPanahi/Calme-7B-Instruct-v0.3](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.3) ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "wwe180/Mistral-7B-v0.3-Chinese-lingyang-v1" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
Sharan1712/llama2_7B_alpaca_loftq_4bit_3a
Sharan1712
2024-05-31T10:59:37Z
386
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "dataset:tatsu-lab/alpaca", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-31T09:28:26Z
--- library_name: transformers license: apache-2.0 datasets: - tatsu-lab/alpaca --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
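Since the "How to Get Started with the Model" section above is still a placeholder, the following is a minimal, hedged sketch of loading this 4-bit model with transformers and bitsandbytes. The quantization settings and the Alpaca-style prompt template are assumptions based on the model's tags and training dataset, not details confirmed by the card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Sharan1712/llama2_7B_alpaca_loftq_4bit_3a"

# Assumed 4-bit configuration; the exact settings used for this checkpoint are not stated in the card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="nf4",
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

# Alpaca-style prompt format is an assumption based on the tatsu-lab/alpaca training data.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a large language model is.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```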
SidXXD/blend_factor_157
SidXXD
2024-06-10T15:45:16Z
386
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "custom-diffusion", "base_model:stabilityai/stable-diffusion-2-1-base", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-06-07T09:05:01Z
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: photo of a <v1*> cat
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---

# Custom Diffusion - SidXXD/blend_factor_157

These are Custom Diffusion adaptation weights for stabilityai/stable-diffusion-2-1-base. The weights were trained on "photo of a <v1*> cat" using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images below.

For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
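The card itself does not show how to run inference with these weights. Below is a minimal, hedged sketch using the diffusers Custom Diffusion loading API; the weight file names are assumptions based on the default Custom Diffusion training script outputs, so check the repository file list for the actual names.

```python
import torch
from diffusers import DiffusionPipeline

# Load the base model named in the card, then attach the Custom Diffusion weights.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
).to("cuda")

# Weight file names below are assumptions; verify them against the files in this repository.
pipe.unet.load_attn_procs(
    "SidXXD/blend_factor_157", weight_name="pytorch_custom_diffusion_weights.bin"
)
pipe.load_textual_inversion("SidXXD/blend_factor_157", weight_name="<v1*>.bin")

# Use the instance prompt token from the card.
image = pipe(
    "photo of a <v1*> cat",
    num_inference_steps=50,
    guidance_scale=7.5,
).images[0]
image.save("v1-cat.png")
```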
ThomasSimonini/t5-end2end-question-generation
ThomasSimonini
2021-10-10T08:30:38Z
385
15
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: t5-end2end-question-generation results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: squad type: squad args: plain_text --- # t5-end2end-question-generation This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the squad dataset to generate questions based on a context. 👉 If you want to learn how to fine-tune the t5 model to do the same, you can follow this [tutorial](https://colab.research.google.com/drive/1z-Zl2hftMrFXabYfmz8o9YZpgYx6sGeW?usp=sharing) For instance: ``` Context: "Python is an interpreted, high-level, general-purpose programming language. Created by Guido van Rossum and first released in 1991, Python's design philosophy emphasizes code readability with its notable use of significant whitespace." ``` ``` Questions: Who created Python?, When was Python first released? What is Python's design philosophy? ``` It achieves the following results on the evaluation set: - Loss: 1.5691 ## Use the Model ``` from transformers import T5ForConditionalGeneration, T5TokenizerFast hfmodel = T5ForConditionalGeneration.from_pretrained("ThomasSimonini/t5-end2end-question-generation") text= "The abolition of feudal privileges by the National Constituent Assembly on 4 August 1789 and the Declaration \\nof the Rights of Man and of the Citizen (La Déclaration des Droits de l'Homme et du Citoyen), drafted by Lafayette \\nwith the help of Thomas Jefferson and adopted on 26 August, paved the way to a Constitutional Monarchy \\n(4 September 1791 – 21 September 1792). Despite these dramatic changes, life at the court continued, while the situation \\nin Paris was becoming critical because of bread shortages in September. On 5 October 1789, a crowd from Paris descended upon Versailles \\nand forced the royal family to move to the Tuileries Palace in Paris, where they lived under a form of house arrest under \\nthe watch of Lafayette's Garde Nationale, while the Comte de Provence and his wife were allowed to reside in the \\nPetit Luxembourg, where they remained until they went into exile on 20 June 1791." 
tokenizer = T5TokenizerFast.from_pretrained("ThomasSimonini/t5-end2end-question-generation")

def run_model(input_string, **generator_args):
    # default generation settings; anything passed by the caller overrides them
    generator_args = {
        "max_length": 256,
        "num_beams": 4,
        "length_penalty": 1.5,
        "no_repeat_ngram_size": 3,
        "early_stopping": True,
        **generator_args,
    }
    input_string = "generate questions: " + input_string + " </s>"
    input_ids = tokenizer.encode(input_string, return_tensors="pt")
    res = hfmodel.generate(input_ids, **generator_args)
    output = tokenizer.batch_decode(res, skip_special_tokens=True)
    output = [item.split("<sep>") for item in output]
    return output

run_model(text)
# => [['When did the National Constituent Assembly abolish feudal privileges?',
#      ' Who drafted the Declaration of the Rights of Man and of the Citizen?',
#      ' When was the Constitutional Monarchy established?',
#      ' What was the name of the Declaration that paved the way to a constitutional monarchy?',
#      '']]
```

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5834        | 0.34  | 100  | 1.9107          |
| 1.9642        | 0.68  | 200  | 1.7227          |
| 1.8526        | 1.02  | 300  | 1.6627          |
| 1.7383        | 1.36  | 400  | 1.6354          |
| 1.7223        | 1.69  | 500  | 1.6154          |
| 1.6871        | 2.03  | 600  | 1.6096          |
| 1.6309        | 2.37  | 700  | 1.6048          |
| 1.6242        | 2.71  | 800  | 1.5923          |
| 1.6226        | 3.05  | 900  | 1.5855          |
| 1.5645        | 3.39  | 1000 | 1.5874          |
| 1.5705        | 3.73  | 1100 | 1.5822          |
| 1.5543        | 4.07  | 1200 | 1.5817          |
| 1.5284        | 4.41  | 1300 | 1.5841          |
| 1.5275        | 4.75  | 1400 | 1.5741          |
| 1.5269        | 5.08  | 1500 | 1.5715          |
| 1.5079        | 5.42  | 1600 | 1.5701          |
| 1.4876        | 5.76  | 1700 | 1.5754          |
| 1.498         | 6.1   | 1800 | 1.5699          |
| 1.4852        | 6.44  | 1900 | 1.5693          |
| 1.4776        | 6.78  | 2000 | 1.5691          |

### Framework versions

- Transformers 4.10.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
ThePioneer/CoolerWaifuDiffusion
ThePioneer
2023-01-22T19:16:59Z
385
6
diffusers
[ "diffusers", "art", "text-to-image", "en", "ja", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-01-20T23:52:39Z
--- license: other language: - en - ja library_name: diffusers pipeline_tag: text-to-image tags: - art --- <style> code { white-space : pre-wrap !important; word-break: break-word; } </style> # モデル説明 (model explanation) - [CoolJapanDiffusion 2.1.1](https://huggingface.co/aipicasso/cool-japan-diffusion-2-1-1/blob/main/v2-1-1.ckpt)と[WaifuDiffusion 1.4 anime epoch2](https://huggingface.co/hakurei/waifu-diffusion-v1-4/blob/main/wd-1-4-anime_e2.ckpt)のマージ。比率はckptファイル名の記載の通り。 - colabのWebUIで動かせる。 - [これ](https://colab.research.google.com/drive/1ldhBc70wvuvkp4Af_vNTzTfBXwpf_cH5?usp=sharing)の以下の書き換えを行う。やり方は[ここ](https://the-pioneer.notion.site/Colab-Automatic1111-6043f15ef44d4ba0b11920c95d33a78c)。 - ~~リアル系モデルとマージしようとすると、発色が鮮やかになりすぎる傾向あり。~~SD 2.1 768とのマージが原因。512系とのマージなら問題なし。 ```python !aria2c --summary-interval=10 -x 16 -s 16 --allow-overwrite=true -Z https://huggingface.co/JosephusCheung/ACertainModel/resolve/main/ACertainModel-half.ckpt ``` - **注意: URLを引用符で囲まないとエラーになることが判明したのでご注意ください** ```python !aria2c --summary-interval=10 -x 16 -s 16 --allow-overwrite=true -Z "https://huggingface.co/ThePioneer/CoolerWaifuDiffusion/resolve/main/0.65(wd-1-4-anime_e2)%20%2B%200.35(v2-1-1).ckpt" !aria2c --summary-interval=10 -x 16 -s 16 --allow-overwrite=true -Z "https://huggingface.co/ThePioneer/CoolerWaifuDiffusion/raw/main/0.65(wd-1-4-anime_e2)%20%2B%200.35(v2-1-1).yaml" ``` - Some merged model of [CoolJapanDiffusion 2.1.1](https://huggingface.co/aipicasso/cool-japan-diffusion-2-1-1/blob/main/v2-1-1.ckpt) and [WaifuDiffusion 1.4 anime epoch2](https://huggingface.co/hakurei/waifu-diffusion-v1-4/blob/main/wd-1-4-anime_e2.ckpt). The merge ration of each model is written on the ckpt file name. - You can run this model on colab WebUI. - Rewrite the following line of [this notebook](https://colab.research.google.com/drive/1ldhBc70wvuvkp4Af_vNTzTfBXwpf_cH5?usp=sharing) following the instructions I posted [here](https://the-pioneer.notion.site/Colab-Automatic1111-6043f15ef44d4ba0b11920c95d33a78c). - ~~Trying to merge with a realistic model will probably result in a model with too vivid color.~~ It was because I was trying to merge with a SD 2.1 768 based model. It works fine with a SD 2.1 512 based model. 
```python !aria2c --summary-interval=10 -x 16 -s 16 --allow-overwrite=true -Z https://huggingface.co/JosephusCheung/ACertainModel/resolve/main/ACertainModel-half.ckpt ``` - **NOTE: you need to wrap the URL with a quotation as follows** ```python !aria2c --summary-interval=10 -x 16 -s 16 --allow-overwrite=true -Z "https://huggingface.co/ThePioneer/CoolerWaifuDiffusion/resolve/main/0.65(wd-1-4-anime_e2)%20%2B%200.35(v2-1-1).ckpt" !aria2c --summary-interval=10 -x 16 -s 16 --allow-overwrite=true -Z "https://huggingface.co/ThePioneer/CoolerWaifuDiffusion/raw/main/0.65(wd-1-4-anime_e2)%20%2B%200.35(v2-1-1).yaml" ``` # サンプル画像 (sample images) ## prompt ``` masterpiece, best quality, A teenage girl wearing a white feather down jacket, smile, in the style of Kyoto Animation in the 2010s, official art, ((black hair)), eyes of Haruhi Suzumiya, face of Haruhi Suzumiya, beautiful symmetric face, ponytail, beautifully detailed hair, posing of Haruhi Suzumiya, at a snowing mountain in winter, detailed background, alone, solo, 8k, ((((sharp contrast)))), watercolor Negative prompt: low quality, bad face, ((ugly face)), asymmetric face, ((((bad anatomy)))), ((bad hand)), too many fingers, missing fingers, too many legs, too many arms, too many heads, wrong anatomy, ((lowres, jpeg artifacts)), [[[[3d]]]], 2d, (((text))), logo, signature, ((loli)), twintails, ponytail, long hair, plaits, pajamas, blushing, boy, sad face, bells, fanart, pixiv, card game, ahoge, ribbon, headband, thick eyebrow, bakemonogatari, black outlines, solid outlines, bold outlines, outlines, technicolor, ((blurry)), vivid colors, vector art, anime, manga, posters, [[oily skin]], huge breasts, baby face, bruises, simple background Steps: 50 Sampler: Euler a CFG scale: 7 Seed: 2930115154 Size: 768x768 ``` ## xy plot - 最適なモデルは何を生成するかによって変わりうる。 - The best model may depend on what to generate. ![sample1_75_95](https://huggingface.co/ThePioneer/CoolerWaifuDiffusion/resolve/main/xy_grid-0000-2930115154-masterpiece%2C%20best%20quality%2C%20A%20teenage%20girl%20wearing%20a%20white%20feather%20down%20jacket%2C%20smile%2C%20in%20the%20style%20of%20Kyoto%20Animation%20in%20the%20201.png) ![sample2_65_80](https://huggingface.co/ThePioneer/CoolerWaifuDiffusion/resolve/main/xy_grid-0001-2930115154-masterpiece%2C%20best%20quality%2C%20A%20teenage%20girl%20wearing%20a%20white%20feather%20down%20jacket%2C%20smile%2C%20in%20the%20style%20of%20Kyoto%20Animation%20in%20the%20201.png) ![sample3_40_65_1](https://huggingface.co/ThePioneer/CoolerWaifuDiffusion/resolve/main/xy_grid-0002-2930115154-masterpiece%2C%20best%20quality%2C%20A%20teenage%20girl%20wearing%20a%20white%20feather%20down%20jacket%2C%20smile%2C%20in%20the%20style%20of%20Kyoto%20Animation%20in%20the%20201.png) ![sample4_40_65_2](https://huggingface.co/ThePioneer/CoolerWaifuDiffusion/resolve/main/xy_grid-0003-321423-masterpiece%2C%20best%20quality%2C%20A%20teenage%20girl%20wearing%20a%20white%20feather%20down%20jacket%2C%20smile%2C%20in%20the%20style%20of%20Kyoto%20Animation%20in%20the%20201.png) # License: The Libertarian OpenRAIL License 注意: アップロード者が日本語母語話者であるため、翻訳版と日本語版に差異がある場合、**元の日本語版**が優先されるものとする。 Caution: Since the uploader is a Japanese native, in the event of any differences in meaning between the original Japanese version and a translation, **the original Japanese version** takes precedence. 要約: ほぼCreativeML Open RAIL-M。但しリバタリアン的解釈によって再構成。CreativeML Open RAIL-Mの制限は、同解釈において維持されているものと判断する。 Summary: A CreativeML Open RAIL-M, interpreted and reconstructed under a libertarian manner. 
The restriction of CreativeML Open RAIL-M is considered to be valid under such interpretation. ## 主な相違 (differences from the original CreativeML Open RAIL-M license) - 違法性は、無罪推定の原則に基づき、有罪確定を以て、かつそれのみによって判断する(有罪が確定するまで、法令違反であるように見えても、ライセンス者は違法とはみなさない)。 - ex. フェアユース文化圏は無論、親告罪である日本においても、著作者が訴えない範囲のほどほどの二次創作は、事実上問題視しない。 - 本モデル及び派生モデルによる生成物はパブリック・ドメイン(CC0 1.0)とすることを義務付け、生成者を含む任意の人物による(再)利用の自由を保障する。 - Stability.aiが運営するDream Studioが生成物をCC0 1.0としているが、元のモデルライセンスと両立していることに注意せよ。 - 派生モデルでは、本ライセンスと同等以上の制限とともに、同等以上の自由も保障しなければならない。 - The violation of law or regulation will be judged by and only by your conviction per the presumption of innocence (unless you are convicted, it is not enough to claim it is illegal for the Licensor, even if it looks like it). - ex. Fanart in Japan is technically illegal, unlike countries which have fair use, but as long as it is in the moderate range and the copright holder won't sue you, we will practically do not consider it as problematic. - Outputs you generated by the Model or Derivatives of the Model must be distributed under public domain (CC0 1.0), to ensure not only you but anyone can (re)use it freely. - Note that Dream Studio, run by Stability.ai demands the output be CC0 1.0 as well, but still isn't against the original model license. - Derivatives of the Model will always have to include - at minimum - the same use-based restrictions <u>and the same open permissions</u>. ## 全文 (full license) ### 日本語版 [License_ja.md](https://huggingface.co/ThePioneer/MoeDiffusionPlusPlus/blob/main/License_ja.md)を参照。 ### English version [License_en.md](https://huggingface.co/ThePioneer/MoeDiffusionPlusPlus/blob/main/License_en.md)を参照。
charanhu/text_to_sql_5
charanhu
2023-01-26T07:50:52Z
385
3
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain", "translation", "unk", "dataset:charanhu/autotrain-data-text_to_sql_finetune", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
translation
2023-01-26T07:40:11Z
---
tags:
- autotrain
- translation
language:
- unk
- unk
datasets:
- charanhu/autotrain-data-text_to_sql_finetune
co2_eq_emissions:
  emissions: 14.683238550750525
---

# Model Trained Using AutoTrain

- Problem type: Translation
- Model ID: 3073487572
- CO2 Emissions (in grams): 14.6832

## Validation Metrics

- Loss: 0.159
- SacreBLEU: 72.889
- Gen len: 40.580
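The card does not include a usage example. The sketch below shows one plausible way to query this T5-based text-to-SQL model with transformers; the exact input format the model expects (plain question, or question plus schema, possibly with a task prefix) is an assumption, so adjust it to match the training data.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "charanhu/text_to_sql_5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Example natural-language question; the model is expected to emit a SQL query.
question = "Show the names of all employees hired after 2020."
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```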
circulus/sd-photoreal-v2.7
circulus
2023-02-26T14:32:07Z
385
4
diffusers
[ "diffusers", "generative ai", "stable-diffusion", "image-to-image", "realism", "art", "text-to-image", "en", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-02-17T12:10:58Z
---
license: gpl-3.0
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- generative ai
- stable-diffusion
- image-to-image
- realism
- art
---

Photoreal v2.7 is a fine-tuned Stable Diffusion 1.5 model for generating images.

You can test this model here > https://eva.circul.us/index.html

![img](./photoreal0.png)
![img](./photoreal1.png)
![img](./photoreal2.png)
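The card points to a hosted demo but gives no local usage example. Below is a minimal sketch with diffusers, assuming the repository loads as a standard StableDiffusionPipeline; the prompt is only illustrative.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "circulus/sd-photoreal-v2.7", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "portrait photo of a young woman, natural light, 85mm lens, photorealistic",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("photoreal-sample.png")
```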
timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k
timm
2024-02-10T23:41:09Z
385
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "arxiv:1905.00546", "arxiv:1611.05431", "arxiv:1512.03385", "license:cc-by-nc-4.0", "region:us" ]
image-classification
2023-04-05T19:01:17Z
--- license: cc-by-nc-4.0 library_name: timm tags: - image-classification - timm --- # Model card for resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k A ResNeXt-B image classification model. This model features: * ReLU activations * single layer 7x7 convolution with pooling * 1x1 convolution shortcut downsample * grouped 3x3 bottleneck convolutions Pretrained on a subset of YFCC100M using semi-supervised learning and fine-tuned on ImageNet-1k by paper authors. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 44.2 - GMACs: 8.0 - Activations (M): 21.2 - Image size: 224 x 224 - **Papers:** - Billion-scale semi-supervised learning for image classification: https://arxiv.org/abs/1905.00546 - Aggregated Residual Transformations for Deep Neural Networks: https://arxiv.org/abs/1611.05431 - Deep Residual Learning for Image Recognition: https://arxiv.org/abs/1512.03385 - **Original:** https://github.com/facebookresearch/semi-supervised-ImageNet1K-models ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 64, 112, 112]) # torch.Size([1, 256, 56, 56]) # torch.Size([1, 512, 28, 28]) # torch.Size([1, 1024, 14, 14]) # torch.Size([1, 2048, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 2048, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) 
shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). |model |img_size|top1 |top5 |param_count|gmacs|macts|img/sec| |------------------------------------------|--------|-----|-----|-----------|-----|-----|-------| |[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|320 |86.72|98.17|93.6 |35.2 |69.7 |451 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|288 |86.51|98.08|93.6 |28.5 |56.4 |560 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|288 |86.49|98.03|93.6 |28.5 |56.4 |557 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|224 |85.96|97.82|93.6 |17.2 |34.2 |923 | |[resnext101_32x32d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x32d.fb_wsl_ig1b_ft_in1k)|224 |85.11|97.44|468.5 |87.3 |91.1 |254 | |[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|416 |85.0 |97.12|191.9 |108.4|213.8|134 | |[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|352 |84.96|97.22|102.1 |50.2 |101.2|291 | |[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|320 |84.73|97.18|102.1 |41.5 |83.7 |353 | |[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|384 |84.71|96.99|164.0 |77.6 |154.7|183 | |[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|288 |84.57|97.08|93.6 |28.5 |56.4 |557 | |[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|320 |84.45|97.08|93.2 |31.5 |67.8 |446 | |[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|352 |84.43|96.97|129.9 |51.1 |105.5|280 | |[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|288 |84.36|96.92|93.6 |27.6 |53.0 |595 | |[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|320 |84.35|97.04|66.8 |24.1 |47.7 |610 | |[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|288 |84.3 |96.94|164.0 |43.7 |87.1 |333 | |[resnext101_32x8d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k)|224 |84.28|97.17|88.8 |16.5 |31.2 |1100 | |[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|320 |84.24|96.86|191.9 |64.2 |126.6|228 | |[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|288 |84.19|96.87|93.6 |27.2 |51.6 |613 | |[resnext101_32x16d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_wsl_ig1b_ft_in1k)|224 |84.18|97.19|194.0 |36.3 |51.2 |581 | |[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|288 |84.11|97.11|44.6 |15.1 |29.0 |1144 | |[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|320 |83.97|96.82|64.7 |31.2 |67.3 |518 | |[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|256 |83.87|96.75|93.2 |20.2 |43.4 |692 | |[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|224 |83.86|96.65|93.6 |17.2 |34.2 |923 | |[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|320 |83.72|96.61|86.6 |24.3 |48.1 |617 | |[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|256 |83.69|96.78|66.8 
|15.4 |30.6 |943 | |[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|224 |83.68|96.61|93.6 |16.7 |32.0 |986 | |[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|320 |83.67|96.74|60.2 |24.1 |47.7 |706 | |[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|256 |83.59|96.61|129.9 |27.1 |55.8 |526 | |[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|224 |83.58|96.4 |93.6 |16.5 |31.2 |1013 | |[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|224 |83.54|96.83|44.6 |9.1 |17.6 |1864 | |[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|288 |83.46|96.54|60.2 |19.1 |37.3 |904 | |[resnext101_32x16d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k)|224 |83.35|96.85|194.0 |36.3 |51.2 |582 | |[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|256 |83.23|96.53|64.7 |20.0 |43.1 |809 | |[resnext101_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_swsl_ig1b_ft_in1k)|224 |83.22|96.75|44.2 |8.0 |21.2 |1814 | |[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|288 |83.16|96.38|83.5 |25.7 |51.6 |590 | |[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|256 |83.14|96.38|60.2 |15.4 |30.5 |1096 | |[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|320 |83.02|96.45|44.6 |16.5 |34.8 |992 | |[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|288 |82.98|96.54|44.6 |13.4 |28.2 |1077 | |[resnext101_64x4d.tv_in1k](https://huggingface.co/timm/resnext101_64x4d.tv_in1k)|224 |82.98|96.25|83.5 |15.5 |31.2 |989 | |[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|256 |82.86|96.28|86.6 |15.6 |30.8 |951 | |[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|224 |82.83|96.22|88.8 |16.5 |31.2 |1099 | |[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|224 |82.8 |96.13|60.2 |11.6 |22.6 |1486 | |[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|288 |82.8 |96.32|44.6 |13.0 |26.8 |1291 | |[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|288 |82.74|95.71|60.2 |19.1 |37.3 |905 | |[resnext101_32x8d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_wsl_ig1b_ft_in1k)|224 |82.69|96.63|88.8 |16.5 |31.2 |1100 | |[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|288 |82.62|95.75|60.2 |19.1 |37.3 |904 | |[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|288 |82.61|96.49|25.6 |8.9 |20.6 |1729 | |[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|288 |82.53|96.13|36.8 |9.9 |21.5 |1773 | |[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|224 |82.5 |96.02|126.9 |22.8 |21.2 |1078 | |[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|224 |82.46|95.92|83.5 |15.5 |31.2 |987 | |[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|288 |82.36|96.18|35.7 |8.1 |20.9 |1964 | |[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|320 |82.35|96.14|25.6 |8.8 |24.1 |1386 | |[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|288 |82.31|95.63|44.6 |13.0 |26.8 |1291 | |[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|288 |82.29|96.01|63.6 |13.6 |28.5 |1078 | 
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|224 |82.29|96.0 |60.2 |11.6 |22.6 |1484 | |[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|288 |82.27|96.06|68.9 |18.9 |23.8 |1176 | |[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|256 |82.26|96.07|44.6 |10.6 |22.2 |1542 | |[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|288 |82.24|95.73|44.6 |13.0 |26.8 |1290 | |[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|288 |82.2 |96.14|27.6 |7.0 |23.8 |1547 | |[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|224 |82.18|96.05|44.6 |8.1 |17.1 |1771 | |[resnext50_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k)|224 |82.17|96.22|25.0 |4.3 |14.4 |2943 | |[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|288 |82.12|95.65|25.6 |7.1 |19.6 |1704 | |[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|288 |82.03|95.94|25.0 |7.0 |23.8 |1745 | |[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|288 |82.0 |96.15|24.9 |5.8 |12.7 |1787 | |[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|256 |81.99|95.85|36.8 |7.8 |17.0 |2230 | |[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|176 |81.98|95.72|88.8 |10.3 |19.4 |1768 | |[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|224 |81.97|95.24|60.2 |11.6 |22.6 |1486 | |[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|224 |81.93|95.75|44.6 |7.8 |16.2 |2122 | |[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|224 |81.9 |95.77|44.6 |7.8 |16.2 |2118 | |[resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k)|224 |81.84|96.1 |194.0 |36.3 |51.2 |583 | |[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|256 |81.78|95.94|35.7 |6.4 |16.6 |2471 | |[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|224 |81.77|95.22|60.2 |11.6 |22.6 |1485 | |[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|224 |81.74|96.06|25.6 |5.4 |12.4 |2813 | |[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|288 |81.65|95.54|25.6 |7.1 |19.6 |1703 | |[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|288 |81.64|95.88|25.6 |7.2 |19.7 |1694 | |[resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k)|224 |81.62|96.04|88.8 |16.5 |31.2 |1101 | |[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|224 |81.61|95.76|68.9 |11.4 |14.4 |1930 | |[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|288 |81.61|95.83|25.6 |8.5 |19.2 |1868 | |[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|224 |81.5 |95.16|44.6 |7.8 |16.2 |2125 | |[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|288 |81.48|95.16|25.0 |7.0 |23.8 |1745 | |[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|288 |81.47|95.71|25.9 |6.9 |18.6 |2071 | |[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|224 |81.45|95.53|68.9 |11.4 |14.4 |1929 | |[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|288 |81.44|95.22|25.6 |7.2 |19.7 |1908 | 
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|256 |81.44|95.67|25.6 |5.6 |15.4 |2168 | |[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|288 |81.4 |95.82|30.2 |6.8 |13.9 |2132 | |[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|288 |81.37|95.74|25.6 |7.2 |19.7 |1910 | |[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|224 |81.32|95.19|44.6 |7.8 |16.2 |2125 | |[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|288 |81.3 |95.65|28.1 |6.8 |18.4 |1803 | |[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|288 |81.3 |95.11|25.0 |7.0 |23.8 |1746 | |[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|224 |81.27|95.62|27.6 |4.3 |14.4 |2591 | |[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|224 |81.26|95.16|25.6 |4.3 |11.8 |2823 | |[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|288 |81.23|95.54|15.7 |4.8 |19.6 |2117 | |[senet154.gluon_in1k](https://huggingface.co/timm/senet154.gluon_in1k)|224 |81.23|95.35|115.1 |20.8 |38.7 |545 | |[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|288 |81.22|95.11|25.6 |6.8 |18.4 |2089 | |[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|288 |81.22|95.63|25.6 |6.8 |18.4 |676 | |[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|288 |81.18|95.09|25.6 |7.2 |19.7 |1908 | |[resnet50.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet50.fb_swsl_ig1b_ft_in1k)|224 |81.18|95.98|25.6 |4.1 |11.1 |3455 | |[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|224 |81.17|95.34|25.0 |4.3 |14.4 |2933 | |[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|224 |81.1 |95.33|25.0 |4.3 |14.4 |2934 | |[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|288 |81.1 |95.23|28.1 |6.8 |18.4 |1801 | |[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|288 |81.1 |95.12|28.1 |6.8 |18.4 |1799 | |[resnet152s.gluon_in1k](https://huggingface.co/timm/resnet152s.gluon_in1k)|224 |81.02|95.41|60.3 |12.9 |25.0 |1347 | |[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|288 |80.97|95.44|25.6 |6.8 |18.4 |2085 | |[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|256 |80.94|95.45|25.9 |5.4 |14.7 |2571 | |[resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.93|95.73|44.2 |8.0 |21.2 |1814 | |[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|288 |80.91|95.55|25.6 |6.8 |18.4 |2084 | |[seresnext101_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_32x4d.gluon_in1k)|224 |80.9 |95.31|49.0 |8.0 |21.3 |1585 | |[seresnext101_64x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_64x4d.gluon_in1k)|224 |80.9 |95.3 |88.2 |15.5 |31.2 |918 | |[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|288 |80.86|95.52|25.6 |6.8 |18.4 |2085 | |[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|224 |80.85|95.43|25.6 |4.1 |11.1 |3450 | |[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|224 |80.84|95.02|25.6 |4.3 |11.8 |2821 | |[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|224 |80.79|95.62|24.9 |3.5 |7.7 |2961 | |[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|288 |80.79|95.36|19.8 |6.0 |14.8 |2506 | 
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|288 |80.79|95.58|19.9 |4.2 |10.6 |2349 | |[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|288 |80.78|94.99|25.6 |6.8 |18.4 |2088 | |[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|288 |80.71|95.43|25.6 |6.8 |18.4 |2087 | |[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|288 |80.7 |95.39|25.0 |7.0 |23.8 |1749 | |[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|192 |80.69|95.24|63.6 |6.0 |12.7 |2270 | |[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|224 |80.68|94.71|25.6 |4.4 |11.9 |3162 | |[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|288 |80.68|95.36|19.7 |6.0 |14.8 |2637 | |[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|224 |80.67|95.3 |25.6 |4.1 |11.1 |3452 | |[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|288 |80.67|95.42|25.0 |7.4 |25.1 |1626 | |[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|224 |80.63|95.21|25.6 |5.2 |11.6 |3034 | |[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|224 |80.61|95.32|25.6 |4.4 |11.9 |2813 | |[resnext101_64x4d.gluon_in1k](https://huggingface.co/timm/resnext101_64x4d.gluon_in1k)|224 |80.61|94.99|83.5 |15.5 |31.2 |989 | |[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|288 |80.6 |95.31|19.9 |6.0 |14.8 |2578 | |[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|256 |80.57|95.17|15.7 |3.8 |15.5 |2710 | |[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|224 |80.56|95.0 |60.2 |11.6 |22.6 |1483 | |[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|224 |80.53|95.16|25.6 |4.4 |11.9 |3164 | |[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|224 |80.53|94.46|25.0 |4.3 |14.4 |2930 | |[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|176 |80.48|94.98|126.9 |14.3 |13.2 |1719 | |[resnet152d.gluon_in1k](https://huggingface.co/timm/resnet152d.gluon_in1k)|224 |80.47|95.2 |60.2 |11.8 |23.4 |1428 | |[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|288 |80.45|95.32|25.6 |6.8 |18.4 |2086 | |[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|224 |80.45|95.24|30.2 |4.1 |8.4 |3530 | |[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|224 |80.45|94.63|25.0 |4.3 |14.4 |2936 | |[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|176 |80.43|95.09|68.9 |7.3 |9.0 |3015 | |[resnet101d.gluon_in1k](https://huggingface.co/timm/resnet101d.gluon_in1k)|224 |80.42|95.01|44.6 |8.1 |17.0 |2007 | |[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|224 |80.38|94.6 |25.6 |4.1 |11.1 |3461 | |[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|256 |80.36|95.1 |19.8 |4.8 |11.7 |3267 | |[resnext101_32x4d.gluon_in1k](https://huggingface.co/timm/resnext101_32x4d.gluon_in1k)|224 |80.34|94.93|44.2 |8.0 |21.2 |1814 | |[resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.32|95.4 |25.0 |4.3 |14.4 |2941 | |[resnet101s.gluon_in1k](https://huggingface.co/timm/resnet101s.gluon_in1k)|224 |80.28|95.16|44.7 |9.2 |18.6 |1851 | |[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|224 |80.26|95.08|28.1 |4.1 |11.1 |2972 | 
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|288 |80.24|95.24|25.6 |8.5 |19.9 |1523 | |[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|224 |80.22|94.63|25.6 |4.4 |11.9 |3162 | |[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|176 |80.2 |94.64|60.2 |7.2 |14.0 |2346 | |[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|224 |80.08|94.74|28.1 |4.1 |11.1 |2969 | |[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|256 |80.08|94.97|19.7 |4.8 |11.7 |3284 | |[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|256 |80.06|94.99|19.9 |4.8 |11.7 |3216 | |[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|224 |80.06|94.95|25.6 |4.1 |11.1 |1109 | |[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|224 |80.02|94.71|28.1 |4.1 |11.1 |2962 | |[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|288 |79.97|95.05|25.6 |6.8 |18.4 |2086 | |[resnet152c.gluon_in1k](https://huggingface.co/timm/resnet152c.gluon_in1k)|224 |79.92|94.84|60.2 |11.8 |23.4 |1455 | |[seresnext50_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext50_32x4d.gluon_in1k)|224 |79.91|94.82|27.6 |4.3 |14.4 |2591 | |[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|224 |79.91|94.67|25.6 |4.1 |11.1 |3456 | |[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|176 |79.9 |94.6 |44.6 |4.9 |10.1 |3341 | |[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|224 |79.89|94.97|35.7 |4.5 |12.1 |2774 | |[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|224 |79.88|94.87|25.6 |4.1 |11.1 |3455 | |[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|320 |79.86|95.07|16.0 |5.2 |16.4 |2168 | |[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|224 |79.85|94.56|25.6 |4.1 |11.1 |3460 | |[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|288 |79.83|94.97|25.6 |6.8 |18.4 |2087 | |[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|224 |79.82|94.62|44.6 |7.8 |16.2 |2114 | |[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|224 |79.76|94.6 |25.0 |4.3 |14.4 |2943 | |[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|224 |79.74|94.95|25.6 |4.1 |11.1 |3455 | |[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|224 |79.74|94.87|19.9 |2.5 |6.4 |3929 | |[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|288 |79.71|94.83|19.7 |6.0 |14.8 |2710 | |[resnet152.gluon_in1k](https://huggingface.co/timm/resnet152.gluon_in1k)|224 |79.68|94.74|60.2 |11.6 |22.6 |1486 | |[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|224 |79.67|94.87|25.0 |4.5 |15.2 |2729 | |[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|288 |79.63|94.91|25.6 |6.8 |18.4 |2086 | |[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|224 |79.56|94.72|25.6 |4.3 |11.8 |2805 | |[resnet101c.gluon_in1k](https://huggingface.co/timm/resnet101c.gluon_in1k)|224 |79.53|94.58|44.6 |8.1 |17.0 |2062 | |[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|224 |79.52|94.61|25.6 |4.1 |11.1 |3459 | |[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|176 |79.42|94.64|25.6 |2.6 |6.9 |5397 | |[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|288 |79.4 |94.66|18.0 |5.9 |14.6 |2752 | 
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|224 |79.38|94.57|25.6 |4.1 |11.1 |3459 | |[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|176 |79.37|94.3 |25.0 |2.7 |9.0 |4577 | |[resnext50_32x4d.gluon_in1k](https://huggingface.co/timm/resnext50_32x4d.gluon_in1k)|224 |79.36|94.43|25.0 |4.3 |14.4 |2942 | |[resnext101_32x8d.tv_in1k](https://huggingface.co/timm/resnext101_32x8d.tv_in1k)|224 |79.31|94.52|88.8 |16.5 |31.2 |1100 | |[resnet101.gluon_in1k](https://huggingface.co/timm/resnet101.gluon_in1k)|224 |79.31|94.53|44.6 |7.8 |16.2 |2125 | |[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|224 |79.31|94.63|25.6 |5.2 |12.0 |2524 | |[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|176 |79.27|94.49|25.6 |2.6 |6.9 |5404 | |[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|224 |79.25|94.31|25.0 |4.3 |14.4 |2931 | |[resnet50.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet50.fb_ssl_yfcc100m_ft_in1k)|224 |79.22|94.84|25.6 |4.1 |11.1 |3451 | |[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|256 |79.21|94.56|19.7 |4.8 |11.7 |3392 | |[resnet50d.gluon_in1k](https://huggingface.co/timm/resnet50d.gluon_in1k)|224 |79.07|94.48|25.6 |4.4 |11.9 |3162 | |[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|224 |79.03|94.38|25.6 |4.1 |11.1 |3453 | |[resnet50.am_in1k](https://huggingface.co/timm/resnet50.am_in1k)|224 |79.01|94.39|25.6 |4.1 |11.1 |3461 | |[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|256 |79.01|94.37|18.0 |4.6 |11.6 |3440 | |[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|256 |78.9 |94.54|16.0 |3.4 |10.5 |3421 | |[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|160 |78.89|94.11|60.2 |5.9 |11.5 |2745 | |[wide_resnet101_2.tv_in1k](https://huggingface.co/timm/wide_resnet101_2.tv_in1k)|224 |78.84|94.28|126.9 |22.8 |21.2 |1079 | |[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|288 |78.83|94.24|16.8 |4.5 |16.8 |2251 | |[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|224 |78.81|94.32|25.6 |4.1 |11.1 |3454 | |[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|288 |78.74|94.33|16.8 |4.5 |16.7 |2264 | |[resnet50s.gluon_in1k](https://huggingface.co/timm/resnet50s.gluon_in1k)|224 |78.72|94.23|25.7 |5.5 |13.5 |2796 | |[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|224 |78.71|94.24|25.6 |4.4 |11.9 |3154 | |[wide_resnet50_2.tv_in1k](https://huggingface.co/timm/wide_resnet50_2.tv_in1k)|224 |78.47|94.09|68.9 |11.4 |14.4 |1934 | |[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|224 |78.46|94.27|25.6 |4.1 |11.1 |3454 | |[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|288 |78.43|94.35|21.8 |6.5 |7.5 |3291 | |[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|288 |78.42|94.04|10.5 |3.1 |13.3 |3226 | |[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|320 |78.33|94.13|16.0 |5.2 |16.4 |2391 | |[resnet152.tv_in1k](https://huggingface.co/timm/resnet152.tv_in1k)|224 |78.32|94.04|60.2 |11.6 |22.6 |1487 | |[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|288 |78.28|94.1 |10.4 |3.1 |13.3 |3062 | |[bat_resnext26ts.ch_in1k](https://huggingface.co/timm/bat_resnext26ts.ch_in1k)|256 |78.25|94.1 |10.7 |2.5 |12.5 |3393 | 
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|224 |78.06|93.78|25.6 |4.1 |11.1 |3450 | |[resnet50c.gluon_in1k](https://huggingface.co/timm/resnet50c.gluon_in1k)|224 |78.0 |93.99|25.6 |4.4 |11.9 |3286 | |[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|288 |78.0 |93.91|10.3 |3.1 |13.3 |3297 | |[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|224 |77.98|93.75|16.8 |2.7 |10.1 |3841 | |[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|288 |77.92|93.77|21.8 |6.1 |6.2 |3609 | |[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|160 |77.88|93.71|44.6 |4.0 |8.3 |3926 | |[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|256 |77.87|93.84|16.0 |3.4 |10.5 |3772 | |[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|256 |77.86|93.79|10.4 |2.4 |10.5 |4263 | |[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|160 |77.82|93.81|35.7 |2.3 |6.2 |5238 | |[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|256 |77.81|93.82|10.5 |2.4 |10.5 |4183 | |[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|160 |77.79|93.6 |25.6 |2.2 |6.0 |5329 | |[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|160 |77.73|93.32|25.0 |2.2 |7.4 |5576 | |[resnext50_32x4d.tv_in1k](https://huggingface.co/timm/resnext50_32x4d.tv_in1k)|224 |77.61|93.7 |25.0 |4.3 |14.4 |2944 | |[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|224 |77.59|93.61|16.8 |2.7 |10.2 |3807 | |[resnet50.gluon_in1k](https://huggingface.co/timm/resnet50.gluon_in1k)|224 |77.58|93.72|25.6 |4.1 |11.1 |3455 | |[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|256 |77.44|93.56|10.3 |2.4 |10.5 |4284 | |[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|288 |77.41|93.63|16.0 |4.3 |13.5 |2907 | |[resnet101.tv_in1k](https://huggingface.co/timm/resnet101.tv_in1k)|224 |77.38|93.54|44.6 |7.8 |16.2 |2125 | |[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|160 |77.22|93.27|25.6 |2.2 |6.1 |5982 | |[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|288 |77.17|93.47|10.3 |3.1 |13.3 |3392 | |[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|288 |77.15|93.27|21.8 |6.1 |6.2 |3615 | |[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|224 |77.1 |93.37|21.8 |3.9 |4.5 |5436 | |[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|224 |77.02|93.07|28.1 |4.1 |11.1 |2952 | |[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|256 |76.78|93.13|10.3 |2.4 |10.5 |4410 | |[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|224 |76.7 |93.17|16.0 |2.6 |8.2 |4859 | |[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|288 |76.5 |93.35|21.8 |6.1 |6.2 |3617 | |[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|224 |76.42|92.87|21.8 |3.7 |3.7 |5984 | |[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|288 |76.35|93.18|16.0 |3.9 |12.2 |3331 | |[resnet50.tv_in1k](https://huggingface.co/timm/resnet50.tv_in1k)|224 |76.13|92.86|25.6 |4.1 |11.1 |3457 | |[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|160 |75.96|92.5 |25.6 |2.1 |5.7 |6490 | |[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|224 |75.52|92.44|21.8 |3.7 |3.7 |5991 | |[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|224 
|75.3 |92.58|16.0 |2.4 |7.4 |5583 | |[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|224 |75.16|92.18|21.8 |3.7 |3.7 |5994 | |[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|160 |75.1 |92.08|28.1 |2.1 |5.7 |5513 | |[resnet34.gluon_in1k](https://huggingface.co/timm/resnet34.gluon_in1k)|224 |74.57|91.98|21.8 |3.7 |3.7 |5984 | |[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|288 |73.81|91.83|11.7 |3.4 |5.4 |5196 | |[resnet34.tv_in1k](https://huggingface.co/timm/resnet34.tv_in1k)|224 |73.32|91.42|21.8 |3.7 |3.7 |5979 | |[resnet18.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet18.fb_swsl_ig1b_ft_in1k)|224 |73.28|91.73|11.7 |1.8 |2.5 |10213 | |[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|288 |73.16|91.03|11.7 |3.0 |4.1 |6050 | |[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|224 |72.98|91.11|21.8 |3.7 |3.7 |5967 | |[resnet18.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet18.fb_ssl_yfcc100m_ft_in1k)|224 |72.6 |91.42|11.7 |1.8 |2.5 |10213 | |[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|288 |72.37|90.59|11.7 |3.0 |4.1 |6051 | |[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|224 |72.26|90.31|10.1 |1.7 |5.8 |7026 | |[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|224 |72.26|90.68|11.7 |2.1 |3.3 |8707 | |[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|224 |71.49|90.07|11.7 |1.8 |2.5 |10187 | |[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|176 |71.31|89.69|10.1 |1.1 |3.6 |10970 | |[resnet18.gluon_in1k](https://huggingface.co/timm/resnet18.gluon_in1k)|224 |70.84|89.76|11.7 |1.8 |2.5 |10210 | |[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|224 |70.64|89.47|11.7 |1.8 |2.5 |10194 | |[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|160 |70.56|89.52|21.8 |1.9 |1.9 |10737 | |[resnet18.tv_in1k](https://huggingface.co/timm/resnet18.tv_in1k)|224 |69.76|89.07|11.7 |1.8 |2.5 |10205 | |[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|224 |68.34|88.03|5.4 |1.1 |2.4 |13079 | |[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|224 |68.25|88.17|11.7 |1.8 |2.5 |10167 | |[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|176 |66.71|86.96|5.4 |0.7 |1.5 |20327 | |[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|160 |65.66|86.26|11.7 |0.9 |1.3 |18229 | ## Citation ```bibtex @misc{yalniz2019billionscale, title={Billion-scale semi-supervised learning for image classification}, author={I. Zeki Yalniz and Hervé Jégou and Kan Chen and Manohar Paluri and Dhruv Mahajan}, year={2019}, eprint={1905.00546}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @article{Xie2016, title={Aggregated Residual Transformations for Deep Neural Networks}, author={Saining Xie and Ross Girshick and Piotr Dollár and Zhuowen Tu and Kaiming He}, journal={arXiv preprint arXiv:1611.05431}, year={2016} } ``` ```bibtex @article{He2015, author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun}, title = {Deep Residual Learning for Image Recognition}, journal = {arXiv preprint arXiv:1512.03385}, year = {2015} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
redstonehero/realisian_v40
redstonehero
2023-05-29T23:30:54Z
385
1
diffusers
[ "diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-05-29T21:59:28Z
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image ---
digiplay/V3_by_Hans_Asian
digiplay
2023-07-05T01:37:19Z
385
1
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-04T23:39:44Z
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---

Model info: https://civitai.com/models/62602/v3

Sample image and prompt:

beautiful girl close-up, lake, mountain, wide landscape, sharp and crisp background, very beautiful landscape, best quality

![de2c8ef8-23f8-491c-b9c8-cfca748d85ad.jpeg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/5nWpLmeYVEAagJOutf3O1.jpeg)

Original Author's DEMO image:

![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/7abddb5a-2e99-4431-8ef0-ff6d9d4180dd/width=1248/2.jpeg)
lorahub/flan_t5_large-app_reviews_categorize_rating_using_review
lorahub
2023-07-24T09:42:32Z
385
0
peft
[ "peft", "region:us" ]
null
2023-07-24T09:42:22Z
--- library_name: peft ---
gotutiyan/gector-roberta-base-5k
gotutiyan
2024-03-19T06:20:02Z
385
0
transformers
[ "transformers", "pytorch", "GECToR_gotutiyan", "grammatical error correction", "en", "endpoints_compatible", "region:us" ]
null
2023-08-21T04:54:18Z
--- language: en tags: - GECToR_gotutiyan - grammatical error correction --- Only non-commercial purposes. # gector sample This is an unofficial pretrained model of GECToR ([Omelianchuk+ 2020](https://aclanthology.org/2020.bea-1.16/)). ### How to use The code is avaliable from https://github.com/gotutiyan/gector. CLI ```sh python predict.py --input <raw text file> --restore_dir gotutiyan/gector-roberta-base-5k --out <path to output file> ``` API ```py from transformers import AutoTokenizer from gector.modeling import GECToR from gector.predict import predict, load_verb_dict import torch model_id = 'gotutiyan/gector-roberta-base-5k' model = GECToR.from_pretrained(model_id) if torch.cuda.is_available(): model.cuda() tokenizer = AutoTokenizer.from_pretrained(model_id) encode, decode = load_verb_dict('data/verb-form-vocab.txt') srcs = [ 'This is a correct sentence.', 'This are a wrong sentences' ] corrected = predict( model, tokenizer, srcs, encode, decode, keep_confidence=0.0, min_error_prob=0.0, n_iteration=5, batch_size=2, ) print(corrected) ```
TheBloke/guanaco-13B-GGUF
TheBloke
2023-09-27T12:52:44Z
385
0
transformers
[ "transformers", "gguf", "llama", "base_model:timdettmers/guanaco-13b-merged", "license:other", "text-generation-inference", "region:us" ]
null
2023-09-19T23:31:35Z
--- license: other model_name: Guanaco 13B base_model: timdettmers/guanaco-13b-merged inference: false model_creator: Tim Dettmers model_type: llama prompt_template: '### Human: {prompt} ### Assistant: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Guanaco 13B - GGUF - Model creator: [Tim Dettmers](https://huggingface.co/timdettmers) - Original model: [Guanaco 13B](https://huggingface.co/timdettmers/guanaco-13b-merged) <!-- description start --> ## Description This repo contains GGUF format model files for [Tim Dettmers's Guanaco 13B](https://huggingface.co/timdettmers/guanaco-13b-merged). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It is also supports metadata, and is designed to be extensible. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. 
<!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/guanaco-13B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/guanaco-13B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/guanaco-13B-GGUF) * [Tim Dettmers's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/timdettmers/guanaco-13b-merged) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Guanaco ``` ### Human: {prompt} ### Assistant: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [guanaco-13B.Q2_K.gguf](https://huggingface.co/TheBloke/guanaco-13B-GGUF/blob/main/guanaco-13B.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes | | [guanaco-13B.Q3_K_S.gguf](https://huggingface.co/TheBloke/guanaco-13B-GGUF/blob/main/guanaco-13B.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss | | [guanaco-13B.Q3_K_M.gguf](https://huggingface.co/TheBloke/guanaco-13B-GGUF/blob/main/guanaco-13B.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss | | [guanaco-13B.Q3_K_L.gguf](https://huggingface.co/TheBloke/guanaco-13B-GGUF/blob/main/guanaco-13B.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss | | [guanaco-13B.Q4_0.gguf](https://huggingface.co/TheBloke/guanaco-13B-GGUF/blob/main/guanaco-13B.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [guanaco-13B.Q4_K_S.gguf](https://huggingface.co/TheBloke/guanaco-13B-GGUF/blob/main/guanaco-13B.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss | | [guanaco-13B.Q4_K_M.gguf](https://huggingface.co/TheBloke/guanaco-13B-GGUF/blob/main/guanaco-13B.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended | | [guanaco-13B.Q5_0.gguf](https://huggingface.co/TheBloke/guanaco-13B-GGUF/blob/main/guanaco-13B.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [guanaco-13B.Q5_K_S.gguf](https://huggingface.co/TheBloke/guanaco-13B-GGUF/blob/main/guanaco-13B.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended | | [guanaco-13B.Q5_K_M.gguf](https://huggingface.co/TheBloke/guanaco-13B-GGUF/blob/main/guanaco-13B.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended | | [guanaco-13B.Q6_K.gguf](https://huggingface.co/TheBloke/guanaco-13B-GGUF/blob/main/guanaco-13B.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss | | [guanaco-13B.Q8_0.gguf](https://huggingface.co/TheBloke/guanaco-13B-GGUF/blob/main/guanaco-13B.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/guanaco-13B-GGUF and below it, a specific filename to download, such as: guanaco-13B.q4_K_M.gguf. Then click Download. 
### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub>=0.17.1 ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/guanaco-13B-GGUF guanaco-13B.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/guanaco-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/guanaco-13B-GGUF guanaco-13B.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m guanaco-13B.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Human: {prompt}\n### Assistant:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model from Python using ctransformers #### First install the package ```bash # Base ctransformers with no GPU acceleration pip install ctransformers>=0.2.24 # Or with CUDA GPU acceleration pip install ctransformers[cuda]>=0.2.24 # Or with ROCm GPU acceleration CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers ``` #### Simple example code to load one of these GGUF models ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. 
llm = AutoModelForCausalLM.from_pretrained("TheBloke/guanaco-13B-GGUF", model_file="guanaco-13B.q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here's guides on using llama-cpp-python or ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Tim Dettmers's Guanaco 13B No original model card was available. <!-- original-model-card end -->
andrijdavid/Sakura-SOLAR-Instruct-GGUF
andrijdavid
2023-12-28T00:00:48Z
385
1
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation", "GGUF", "conversational", "en", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-12-27T22:25:08Z
--- language: - en license: cc-by-nc-sa-4.0 tags: - GGUF pipeline_tag: text-generation quantized_by: andrijdavid --- # Sakura-SOLAR-Instruct-GGUF - Original model: [Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct) <!-- description start --> ## Description This repo contains GGUF format model files for [Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration. * [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications​ * [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling. * [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration. * [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection. * [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use. * [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server. * [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents. <!-- README_GGUF.md-about-gguf end --> <!-- compatibility_gguf start --> ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. 
Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: andrijdavid/Sakura-SOLAR-Instruct-GGUF and below it, a specific filename to download, such as: Sakura-SOLAR-Instruct-f16.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download andrijdavid/Sakura-SOLAR-Instruct-GGUF Sakura-SOLAR-Instruct-f16.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download andrijdavid/Sakura-SOLAR-Instruct-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download andrijdavid/Sakura-SOLAR-Instruct-GGUF Sakura-SOLAR-Instruct-f16.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m Sakura-SOLAR-Instruct-f16.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. 
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./Sakura-SOLAR-Instruct-f16.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<PROMPT>", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./Sakura-SOLAR-Instruct-f16.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." 
} ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer end --> <!-- original-model-card start --> # Original model card: Sakura-SOLAR-Instruct # **Sakura-SOLAR-Instruct** <img src='./sakura.png' width=512> **(주)미디어그룹사람과숲과 (주)마커의 LLM 연구 컨소시엄에서 개발된 모델입니다** ## Model Details **Model Developers** Kyujin Han (kyujinpy) **Method** Using [Mergekit](https://github.com/cg123/mergekit). I shared the information about my model. (training and code) **Please see: [⭐Sakura-SOLAR](https://github.com/KyujinHan/Sakura-SOLAR-DPO).** **Blog** - [Sakura-SOLAR 모델 제작 과정 및 후기](https://kyujinpy.tistory.com/122). # **Model Benchmark** ## Open leaderboard - Follow up as [link](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). | Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | | -- | -- | - | <!-- original-model-card end -->
Blib-la/caricature_lora_sdxl
Blib-la
2024-05-15T23:42:22Z
385
7
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "dataset:Blib-la/caricature_dataset", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:cc-by-nc-nd-4.0", "region:us" ]
text-to-image
2024-03-25T19:30:35Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: >- caricature of a asian woman with disappointed expression, oversized big chin, wearing a Folk costume, white background, exaggerated facial features <lora:caricature_sdxl_v2:1> output: url: images/00543-3110114300.png - text: >- caricature of a middle eastern kid with amazed expression, oversized big ears, wearing a 60's style outfit, white background, exaggerated facial features <lora:caricature_sdxl_v2:1> output: url: images/00488-599893346.png - text: >- caricature of a caucasian woman with drowsy expression, oversized big eyes, wearing a Leather jacket, white background, exaggerated facial features <lora:caricature_sdxl_v2:1> output: url: images/00493-3833057062.png - text: >- caricature of a black kid with amazed expression, oversized big eyes, wearing a Superhero costume, white background, exaggerated facial features <lora:caricature_sdxl_v2:1> output: url: images/00496-3581784488.png - text: >- caricature of a caucasian man with amazed expression, oversized big ears, wearing a Jogging suit, white background, exaggerated facial features <lora:caricature_sdxl_v2:1> output: url: images/00502-1038589121.png - text: >- caricature of a black man with skeptical expression, oversized big mouth, wearing a Hip hop gear, white background, exaggerated facial features <lora:caricature_sdxl_v2:1> output: url: images/00504-2768598054.png - text: >- caricature of a indian kid with happy expression, oversized big ears, wearing a Dungarees, white background, exaggerated facial features <lora:caricature_sdxl_v2:1> output: url: images/00505-2065322002.png - text: >- caricature of a middle eastern man with happy expression, oversized big ears, wearing a Hawaiian shirt, white background, exaggerated facial features <lora:caricature_sdxl_v2:1> output: url: images/00527-2557789390.png - text: >- caricature of a asian man with sad expression, oversized big ears, wearing a Linen shirt, white background, exaggerated facial features <lora:caricature_sdxl_v2:1> output: url: images/00542-2851304771.png base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: null license: cc-by-nc-nd-4.0 datasets: - Blib-la/caricature_dataset --- # Caricature LoRA SDXL ## Captain: The AI platform that evolves to your needs * 🚀 [Check out Captain](https://get-captain.com) * 👩‍💻 [Captain on GitHub](https://github.com/blib-la/captain) <Gallery /> [![Discord](https://img.shields.io/discord/1091306623819059300?color=7289da&label=Discord&logo=discord&logoColor=fff&style=for-the-badge)](https://discord.com/invite/m3TBB9XEkb) ## Model Overview This model card showcases a LoRA (Low-Rank Adaptation) model trained on our proprietary [Caricature Dataset](https://huggingface.co/datasets/Blib-la/caricature_dataset). The model is fine-tuned to specialize in generating exaggerated and distinctive caricature images, drawing from a diverse set of AI-generated portraits. ## Training Configuration - **Dataset**: Proprietary Caricature Dataset created via Stable Diffusion (SDXL) - **Epochs**: 16 - **Number of Images**: 174 - **Repeats per Image**: 10 (Each image was utilized 10 times during training to reinforce learning) - **Optimizer**: DAdaptAdam (An advanced optimizer for efficient and dynamic AI training) - **Precision**: bf16 (Chosen for the optimal balance of performance and computational resource management) - **Main Trigger**: Keywords like “caricature” prime the model to generate images within the caricature domain. 
- **Xformers**: Enabled (Enhancing transformer model efficiency) - **Captioning Method**: GPT-Vision (Employed to generate relevant captions, crucial for token shuffling in training) - **Base Model**: Stable Diffusion XL 1.0 (A robust foundation for image generation tasks) ## Model Usage Employ this model to create a wide variety of caricatures, each with exaggerated features that highlight the subject's distinct characteristics in a stylized manner. ## Performance and Limitations - **Performance**: Exhibits a strong ability to vary facial features creatively and with high fidelity to the caricature art style. - **Limitations**: May exhibit less diversity in scenarios not covered by the 174 training images. ## Ethical Considerations - **Intended Use**: The model is purposed for creative and educational applications, particularly in the arts and entertainment sectors. - **Bias and Fairness**: Attention has been paid to ensure a diverse representation within the dataset to mitigate biases. ## Licensing - **Model License**: Licensed under Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International (CC BY-NC-ND 4.0) for non-commercial use. ## Contributions and Feedback We welcome feedback and contributions to improve the model further. If you have suggestions or would like to contribute to the model's development, please reach out through the model's Hugging Face page. ## Trigger words You should use "caricature" to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/Blib-la/caricature_lora_sdxl/tree/main) them in the Files & versions tab. ## Related https://blib.la/blog/crafting-the-future-blibla-s-ethical-approach-to-ai-model-training ## Additional Usage Restrictions for Blibla's LoRAs Hosted on Hugging Face In alignment with our commitment to ensuring the responsible and ethical use of our models, and in addition to the terms set forth in the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0) under which Blibla's LoRAs are licensed, we hereby impose the following specific restrictions: 1. **Prohibited Platforms**: Reuploading, redistributing, or offering of image generation services using our models on platforms not owned or operated by Blibla or Hugging Face is strictly forbidden. This includes, but is not limited to, any platforms that host, allow, or promote Not Safe For Work (NSFW) content. 2. **Explicitly Forbidden Platforms**: For clarity, and without limiting the generality of the above, platforms including but not limited to Leonardo AI, Civit AI, and any "Hugging Face spaces" that host or permit NSFW content are explicitly prohibited from hosting, or utilizing Blibla's LoRAs in any form or manner. 3. **Responsibility of Users**: Users of Blibla's LoRAs are responsible for ensuring that the environments in which they use, share, or promote our models adhere strictly to these restrictions. Violation of these terms may result in revocation of the license granted under CC BY-NC-ND 4.0 and may prompt further legal action to protect the integrity of our models and the safety of the communities we serve. 4. **Purpose of Restrictions**: These restrictions are put in place to align with Blibla's ethical standards and the intended use of our models. They are designed to prevent associations with content or platforms that do not reflect our values or the intended application of our technology. 
By utilizing Blibla's LoRAs, you acknowledge and agree to these additional restrictions, ensuring that the use of our models remains within the bounds of ethical and responsible practice.
mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF
mradermacher
2024-05-06T05:15:23Z
385
0
transformers
[ "transformers", "gguf", "en", "base_model:ParkTaeEon/Myrrh_solar_10.7b_v0.1-dpo", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-05T09:02:52Z
--- base_model: ParkTaeEon/Myrrh_solar_10.7b_v0.1-dpo language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/ParkTaeEon/Myrrh_solar_10.7b_v0.1-dpo <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.Q2_K.gguf) | Q2_K | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.IQ3_XS.gguf) | IQ3_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.Q3_K_S.gguf) | Q3_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.IQ3_S.gguf) | IQ3_S | 4.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.IQ3_M.gguf) | IQ3_M | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.Q3_K_L.gguf) | Q3_K_L | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.IQ4_XS.gguf) | IQ4_XS | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.Q5_K_S.gguf) | Q5_K_S | 7.5 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.Q5_K_M.gguf) | Q5_K_M | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.Q6_K.gguf) | Q6_K | 8.9 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_v0.1-dpo-GGUF/resolve/main/Myrrh_solar_10.7b_v0.1-dpo.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests 
for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
NLPark/Test0_SLIDE
NLPark
2024-04-20T03:15:42Z
385
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-20T01:45:16Z
--- language: - en pipeline_tag: text-generation inference: true library_name: transformers license: cc-by-nc-sa-4.0 --- # Shi-Ci Language Identify & Decode Expositor **8B**, Ruozhiba... * [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) **Chinese** Released as an early preview of our v3 LLMs. The v3 series covers the "Shi-Ci", "AnFeng" and "Cecilia" LLM products. The sizes are labelled from small to large "Nano" "Leap" "Pattern" "Avocet "Robin" "Kestrel"
netcat420/MFANNv0.6
netcat420
2024-04-24T17:28:45Z
385
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "dataset:netcat420/MFANN", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-20T08:22:13Z
--- library_name: transformers license: apache-2.0 datasets: - netcat420/MFANN --- I am now basing all future releases of the MFANN experiment using llama-3 as a base model, I may continue fine-tuning mistral-7b every other release this model uses meta's llama-3 as its base, and benchmarks are pending ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6435f27b2d0ed796668ffd8b/VlqyDezfgqoujwIdiNfYB.png) changed the model name to MFANNV0.6 due to a failed benchmark and the need to resubmit edit: due to continuous benchmark fails I am renaming the model back to MFANNver0.6, the 3b model is also failing benchmarks for some reason despite the fact both models run fine on my machine :(
mradermacher/Westbreeze-7B-GGUF
mradermacher
2024-05-11T07:02:54Z
385
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "senseable/WestLake-7B-v2", "Rebecca19990101/Breeze-Petro-7B-Instruct-v1", "en", "base_model:Rebecca19990101/Westbreeze-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-11T06:35:54Z
--- base_model: Rebecca19990101/Westbreeze-7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - senseable/WestLake-7B-v2 - Rebecca19990101/Breeze-Petro-7B-Instruct-v1 --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Rebecca19990101/Westbreeze-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Westbreeze-7B-GGUF/resolve/main/Westbreeze-7B.Q2_K.gguf) | Q2_K | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Westbreeze-7B-GGUF/resolve/main/Westbreeze-7B.IQ3_XS.gguf) | IQ3_XS | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/Westbreeze-7B-GGUF/resolve/main/Westbreeze-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Westbreeze-7B-GGUF/resolve/main/Westbreeze-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Westbreeze-7B-GGUF/resolve/main/Westbreeze-7B.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Westbreeze-7B-GGUF/resolve/main/Westbreeze-7B.Q3_K_M.gguf) | Q3_K_M | 3.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Westbreeze-7B-GGUF/resolve/main/Westbreeze-7B.Q3_K_L.gguf) | Q3_K_L | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Westbreeze-7B-GGUF/resolve/main/Westbreeze-7B.IQ4_XS.gguf) | IQ4_XS | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Westbreeze-7B-GGUF/resolve/main/Westbreeze-7B.Q4_K_S.gguf) | Q4_K_S | 4.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Westbreeze-7B-GGUF/resolve/main/Westbreeze-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Westbreeze-7B-GGUF/resolve/main/Westbreeze-7B.Q5_K_S.gguf) | Q5_K_S | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Westbreeze-7B-GGUF/resolve/main/Westbreeze-7B.Q5_K_M.gguf) | Q5_K_M | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Westbreeze-7B-GGUF/resolve/main/Westbreeze-7B.Q6_K.gguf) | Q6_K | 6.1 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Westbreeze-7B-GGUF/resolve/main/Westbreeze-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Westbreeze-7B-GGUF/resolve/main/Westbreeze-7B.f16.gguf) | f16 | 14.8 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Synaptica-GGUF
mradermacher
2024-05-14T07:24:34Z
385
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:mergekit-community/Synaptica", "endpoints_compatible", "region:us" ]
null
2024-05-14T06:46:46Z
--- base_model: mergekit-community/Synaptica language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hfhfix --> <!-- ### vocab_type: --> static quants of https://huggingface.co/mergekit-community/Synaptica <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Synaptica-GGUF/resolve/main/Synaptica.Q2_K.gguf) | Q2_K | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Synaptica-GGUF/resolve/main/Synaptica.IQ3_XS.gguf) | IQ3_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Synaptica-GGUF/resolve/main/Synaptica.Q3_K_S.gguf) | Q3_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/Synaptica-GGUF/resolve/main/Synaptica.IQ3_S.gguf) | IQ3_S | 4.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Synaptica-GGUF/resolve/main/Synaptica.IQ3_M.gguf) | IQ3_M | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/Synaptica-GGUF/resolve/main/Synaptica.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Synaptica-GGUF/resolve/main/Synaptica.Q3_K_L.gguf) | Q3_K_L | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Synaptica-GGUF/resolve/main/Synaptica.IQ4_XS.gguf) | IQ4_XS | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/Synaptica-GGUF/resolve/main/Synaptica.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Synaptica-GGUF/resolve/main/Synaptica.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Synaptica-GGUF/resolve/main/Synaptica.Q5_K_S.gguf) | Q5_K_S | 7.5 | | | [GGUF](https://huggingface.co/mradermacher/Synaptica-GGUF/resolve/main/Synaptica.Q5_K_M.gguf) | Q5_K_M | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/Synaptica-GGUF/resolve/main/Synaptica.Q6_K.gguf) | Q6_K | 8.9 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Synaptica-GGUF/resolve/main/Synaptica.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
allknowingroger/MultiMash-12B-slerp
allknowingroger
2024-05-20T13:30:56Z
385
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "merge", "mergekit", "lazymergekit", "allknowingroger/Multimerge-12B-MoE", "zhengr/MixTAO-7Bx2-MoE-v8.1", "base_model:allknowingroger/Multimerge-12B-MoE", "base_model:zhengr/MixTAO-7Bx2-MoE-v8.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-20T12:44:09Z
--- tags: - merge - mergekit - lazymergekit - allknowingroger/Multimerge-12B-MoE - zhengr/MixTAO-7Bx2-MoE-v8.1 base_model: - allknowingroger/Multimerge-12B-MoE - zhengr/MixTAO-7Bx2-MoE-v8.1 license: apache-2.0 --- # MultiMash-12B-slerp MultiMash-12B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [allknowingroger/Multimerge-12B-MoE](https://huggingface.co/allknowingroger/Multimerge-12B-MoE) * [zhengr/MixTAO-7Bx2-MoE-v8.1](https://huggingface.co/zhengr/MixTAO-7Bx2-MoE-v8.1) ## 🧩 Configuration ```yaml slices: - sources: - model: allknowingroger/Multimerge-12B-MoE layer_range: [0, 32] - model: zhengr/MixTAO-7Bx2-MoE-v8.1 layer_range: [0, 32] merge_method: slerp base_model: allknowingroger/Multimerge-12B-MoE parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "allknowingroger/MultiMash-12B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
allknowingroger/MultiMash2-12B-slerp
allknowingroger
2024-05-20T14:14:23Z
385
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "merge", "mergekit", "lazymergekit", "allknowingroger/Multimerge-12B-MoE", "yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B", "base_model:allknowingroger/Multimerge-12B-MoE", "base_model:yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-20T14:05:56Z
--- tags: - merge - mergekit - lazymergekit - allknowingroger/Multimerge-12B-MoE - yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B base_model: - allknowingroger/Multimerge-12B-MoE - yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B license: apache-2.0 --- # MultiMash2-12B-slerp MultiMash2-12B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [allknowingroger/Multimerge-12B-MoE](https://huggingface.co/allknowingroger/Multimerge-12B-MoE) * [yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B](https://huggingface.co/yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B) ## 🧩 Configuration ```yaml slices: - sources: - model: allknowingroger/Multimerge-12B-MoE layer_range: [0, 32] - model: yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B layer_range: [0, 32] merge_method: slerp base_model: allknowingroger/Multimerge-12B-MoE parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "allknowingroger/MultiMash2-12B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
qwp4w3hyb/Phi-3-medium-4k-instruct-iMat-GGUF
qwp4w3hyb
2024-05-23T09:18:07Z
385
0
null
[ "gguf", "nlp", "code", "microsoft", "phi", "instruct", "finetune", "imatrix", "importance matrix", "text-generation", "multilingual", "base_model:microsoft/Phi-3-medium-128k-instruct", "license:mit", "region:us" ]
text-generation
2024-05-22T19:57:22Z
--- license: mit license_link: >- https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/resolve/main/LICENSE language: - multilingual pipeline_tag: text-generation base_model: microsoft/Phi-3-medium-128k-instruct tags: - nlp - code - microsoft - phi - instruct - finetune - gguf - imatrix - importance matrix --- # Quant Infos - Requires latest llama.cpp master; - quants done with an importance matrix for improved quantization loss - gguf & imatrix generated from bf16 for "optimal" accuracy loss (some say this is snake oil, but it can't hurt) - Wide coverage of different gguf quant types from Q\_8\_0 down to IQ1\_S - Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) commit [201cc11afa0a1950e1f632390b2ac6c937a0d8f0](https://github.com/ggerganov/llama.cpp/commit/201cc11afa0a1950e1f632390b2ac6c937a0d8f0) - Imatrix generated with [this](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) multi-purpose dataset. ``` ./imatrix -c 512 -m $model_name-bf16.gguf -f $llama_cpp_path/groups_merged.txt -o $out_path/imat-bf16-gmerged.dat ``` # Original Model Card: ## Model Summary The Phi-3-Medium-4K-Instruct is a 14B parameters, lightweight, state-of-the-art open model trained with the Phi-3 datasets that includes both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties. The model belongs to the Phi-3 family with the Medium version in two variants [4K](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) which is the context length (in tokens) that it can support. The model has underwent a post-training process that incorporates both supervised fine-tuning and direct preference optimization for the instruction following and safety measures. When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3-Medium-4K-Instruct showcased a robust and state-of-the-art performance among models of the same-size and next-size-up. 
Resources and Technical Documentation: + [Phi-3 Microsoft Blog](https://aka.ms/Phi-3Build2024) + [Phi-3 Technical Report](https://aka.ms/phi3-tech-report) + [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai) + [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook) | | Short Context | Long Context | | ------- | ------------- | ------------ | | Mini | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx) ; [[GGUF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct-onnx)| | Small | 8K [[HF]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct-onnx-cuda)| | Medium | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda)| | Vision | | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct)| ## Intended Uses **Primary use cases** The model is intended for broad commercial and research use in English. The model provides uses for general purpose AI systems and applications which require: 1) Memory/compute constrained environments 2) Latency bound scenarios 3) Strong reasoning (especially code, math and logic) Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features. **Use case considerations** Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fariness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case. Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under. ## How to Use Phi-3-Medium-4K-Instruct has been integrated in the development version (4.40.2) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following: * When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function. * Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source. The current `transformers` version can be verified with: `pip list | grep transformers`. Phi-3-Medium-4K-Instruct is also available in [Azure AI Studio](https://aka.ms/phi3-azure-ai). ### Tokenizer Phi-3-Medium-4K-Instruct supports a vocabulary size of up to `32064` tokens. 
The [tokenizer files](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size. ### Chat Format Given the nature of the training data, the Phi-3-Medium-4K-Instruct model is best suited for prompts using the chat format as follows. You can provide the prompt as a question with a generic template as follow: ```markdown <|user|>\nQuestion <|end|>\n<|assistant|> ``` For example: ```markdown <|user|> How to explain Internet for a medieval knight?<|end|> <|assistant|> ``` where the model generates the text after `<|assistant|>` . In case of few-shots prompt, the prompt can be formatted as the following: ```markdown <|user|> I am going to Paris, what should I see?<|end|> <|assistant|> Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|> <|user|> What is so great about #1?<|end|> <|assistant|> ``` ### Sample inference code This code snippets show how to get quickly started with running the model on a GPU: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline torch.random.manual_seed(0) model_id = "microsoft/Phi-3-medium-4k-instruct" model = AutoModelForCausalLM.from_pretrained( model_id, device_map="cuda", torch_dtype="auto", trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained(model_id) messages = [ {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}, {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."}, {"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"}, ] pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, ) generation_args = { "max_new_tokens": 500, "return_full_text": False, "temperature": 0.0, "do_sample": False, } output = pipe(messages, **generation_args) print(output[0]['generated_text']) ``` *Some applications/frameworks might not include a BOS token (`<s>`) at the start of the conversation. Please ensure that it is included since it provides more reliable results.* ## Responsible AI Considerations Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include: + Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. 
English language varieties with less representation in the training data might experience worse performance than standard American English. + Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases. + Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case. + Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated. + Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses. Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include: + Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques. + High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context. + Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG). + Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case. + Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations. ## Training ### Model * Architecture: Phi-3-Medium-4K-Instruct has 14B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidlines. * Inputs: Text. It is best suited for prompts using chat format. 
* Context length: 4K tokens * GPUs: 512 H100-80G * Training time: 42 days * Training data: 4.8T tokens * Outputs: Generated text in response to the input * Dates: Our models were trained between February and April 2024 * Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models. * Release dates: The model weight is released on May 21, 2024. ### Datasets Our training data includes a wide variety of sources, totaling 4.8 trillion tokens (including 10% multilingual), and is a combination of 1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code; 2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.); 3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness. We are focusing on the quality of data that could potentially improve the reasoning ability for the model, and we filter the publicly available documents to contain the correct level of knowledge. As an example, the result of a game in premier league in a particular day might be good training data for frontier models, but we need to remove such information to leave more model capacity for reasoning for the small size models. More details about data can be found in the [Phi-3 Technical Report](https://aka.ms/phi3-tech-report). ## Benchmarks We report the results for Phi-3-Medium-4K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Mixtral-8x22b, Gemini-Pro, Command R+ 104B, Llama-3-70B-Instruct, GPT-3.5-Turbo-1106, and GPT-4-Turbo-1106(Chat). All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation. As is now standard, we use few-shot prompts to evaluate the models, at temperature 0. The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3. More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model. The number of k–shot examples is listed per-benchmark. 
|Benchmark|Phi-3-Medium-4K-Instruct<br>14b|Command R+<br>104B|Mixtral<br>8x22B|Llama-3-70B-Instruct|GPT3.5-Turbo<br>version 1106|Gemini<br>Pro|GPT-4-Turbo<br>version 1106 (Chat)| |---------|-----------------------|--------|-------------|-------------------|-------------------|----------|------------------------| |AGI Eval<br>5-shot|50.2|50.1|54.0|56.9|48.4|49.0|59.6| |MMLU<br>5-shot|78.0|73.8|76.2|80.2|71.4|66.7|84.0| |BigBench Hard<br>3-shot|81.4|74.1|81.8|80.4|68.3|75.6|87.7| |ANLI<br>7-shot|55.8|63.4|65.2|68.3|58.1|64.2|71.7| |HellaSwag<br>5-shot|82.4|78.0|79.0|82.6|78.8|76.2|88.3| |ARC Challenge<br>10-shot|91.6|86.9|91.3|93.0|87.4|88.3|95.6| |ARC Easy<br>10-shot|97.7|95.7|96.9|98.2|96.3|96.1|98.8| |BoolQ<br>2-shot|86.5|86.1|82.7|89.1|79.1|86.4|91.3| |CommonsenseQA<br>10-shot|82.8|82.0|82.0|84.4|79.6|81.8|86.7| |MedQA<br>2-shot|69.9|59.2|67.9|78.5|63.4|58.2|83.7| |OpenBookQA<br>10-shot|87.4|86.8|88.6|91.8|86.0|86.4|93.4| |PIQA<br>5-shot|87.9|86.4|85.0|85.3|86.6|86.2|90.1| |Social IQA<br>5-shot|80.2|75.3|78.2|81.1|68.3|75.4|81.7| |TruthfulQA (MC2)<br>10-shot|75.1|57.8|67.4|81.9|67.7|72.6|85.2| |WinoGrande<br>5-shot|81.5|77.0|75.3|83.3|68.8|72.2|86.7| |TriviaQA<br>5-shot|73.9|82.8|84.5|78.5|85.8|80.2|73.3| |GSM8K Chain of Thought<br>8-shot|91.0|78.3|83.8|93.5|78.1|80.4|94.2| |HumanEval<br>0-shot|62.2|61.6|39.6|78.7|62.2|64.4|79.9| |MBPP<br>3-shot|75.2|68.9|70.7|81.3|77.8|73.2|86.7| |Average|78.5|75.0|76.3|82.5|74.3|75.4|85.2| We take a closer look at different categories across 80 public benchmark datasets at the table below: |Benchmark|Phi-3-Medium-4K-Instruct<br>14b|Command R+<br>104B|Mixtral<br>8x22B|Llama-3-70B-Instruct|GPT3.5-Turbo<br>version 1106|Gemini<br>Pro|GPT-4-Turbo<br>version 1106 (Chat)| |--------|------------------------|--------|-------------|-------------------|-------------------|----------|------------------------| |Popular aggregated benchmark|75.4|69.9|73.4|76.3|67.0|67.5|80.5| |Reasoning|84.1|79.3|81.5|86.7|78.3|80.4|89.3| |Language understanding|73.9|75.6|78.1|76.9|68.7|76.2|80.7| |Code generation|66.1|68.6|60.0|69.3|70.4|66.7|76.1| |Math|52.8|45.3|52.5|59.7|52.8|50.9|67.1| |Factual knowledge|48.3|60.3|60.6|52.4|63.4|54.6|45.9| |Multilingual|62.9|67.8|69.8|62.0|67.0|73.4|78.2| |Robustness|66.5|57.9|65.5|78.7|69.3|69.7|84.6| ## Software * [PyTorch](https://github.com/pytorch/pytorch) * [DeepSpeed](https://github.com/microsoft/DeepSpeed) * [Transformers](https://github.com/huggingface/transformers) * [Flash-Attention](https://github.com/HazyResearch/flash-attention) ## Hardware Note that by default, the Phi-3-Medium model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types: * NVIDIA A100 * NVIDIA A6000 * NVIDIA H100 If you want to run the model on: + Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [4K](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-cuda) ## Cross Platform Support ONNX runtime ecosystem now supports Phi3 Medium models across platforms and hardware. Optimized phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML GPU acceleration is supported for Windows desktops GPUs (AMD, Intel, and NVIDIA). Along with DML, ONNX Runtime provides cross platform support for Phi3 Medium across a range of devices CPU, GPU, and mobile. 
Here are some of the optimized configurations we have added: 1. ONNX models for int4 DML: Quantized to int4 via AWQ 2. ONNX model for fp16 CUDA 3. ONNX model for int4 CUDA: Quantized to int4 via RTN 4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN ## License The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-medium-4k/resolve/main/LICENSE). ## Trademarks This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.
RichardErkhov/shadowml_-_BeagSake-7B-gguf
RichardErkhov
2024-05-23T23:46:31Z
385
0
null
[ "gguf", "region:us" ]
null
2024-05-23T21:24:58Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) BeagSake-7B - GGUF - Model creator: https://huggingface.co/shadowml/ - Original model: https://huggingface.co/shadowml/BeagSake-7B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [BeagSake-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.Q2_K.gguf) | Q2_K | 2.53GB | | [BeagSake-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [BeagSake-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.IQ3_S.gguf) | IQ3_S | 2.96GB | | [BeagSake-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [BeagSake-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.IQ3_M.gguf) | IQ3_M | 3.06GB | | [BeagSake-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.Q3_K.gguf) | Q3_K | 3.28GB | | [BeagSake-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [BeagSake-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [BeagSake-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [BeagSake-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.Q4_0.gguf) | Q4_0 | 3.83GB | | [BeagSake-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [BeagSake-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [BeagSake-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.Q4_K.gguf) | Q4_K | 4.07GB | | [BeagSake-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [BeagSake-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.Q4_1.gguf) | Q4_1 | 4.24GB | | [BeagSake-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.Q5_0.gguf) | Q5_0 | 4.65GB | | [BeagSake-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | [BeagSake-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.Q5_K.gguf) | Q5_K | 4.78GB | | [BeagSake-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [BeagSake-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.Q5_1.gguf) | Q5_1 | 5.07GB | | [BeagSake-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.Q6_K.gguf) | Q6_K | 5.53GB | | [BeagSake-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.Q8_0.gguf) | 
Q8_0 | 7.17GB | Original model description: --- license: cc-by-nc-4.0 tags: - merge - mergekit - lazymergekit base_model: - shadowml/BeagleSempra-7B - shadowml/WestBeagle-7B model-index: - name: BeagSake-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 72.44 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/BeagSake-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 88.39 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/BeagSake-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 65.23 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/BeagSake-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 72.27 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/BeagSake-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 82.16 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/BeagSake-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 71.8 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/BeagSake-7B name: Open LLM Leaderboard --- # BeagSake-7B BeagSake-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [shadowml/BeagleSempra-7B](https://huggingface.co/shadowml/BeagleSempra-7B) * [shadowml/WestBeagle-7B](https://huggingface.co/shadowml/WestBeagle-7B) ## 🧩 Configuration ```yaml slices: - sources: - model: shadowml/BeagleSempra-7B layer_range: [0, 32] - model: shadowml/WestBeagle-7B layer_range: [0, 32] merge_method: slerp base_model: shadowml/BeagleSempra-7B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "shadowml/BeagSake-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) 
print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_shadowml__BeagSake-7B) | Metric |Value| |---------------------------------|----:| |Avg. |75.38| |AI2 Reasoning Challenge (25-Shot)|72.44| |HellaSwag (10-Shot) |88.39| |MMLU (5-Shot) |65.23| |TruthfulQA (0-shot) |72.27| |Winogrande (5-shot) |82.16| |GSM8k (5-shot) |71.80|
Aratako/Ninja-v1-RP-expressive-breadcrumbs-GGUF
Aratako
2024-06-01T11:55:08Z
385
0
null
[ "gguf", "roleplay", "ja", "base_model:Aratako/Ninja-v1-RP-expressive-breadcrumbs", "license:cc-by-nc-4.0", "region:us" ]
null
2024-05-26T13:46:39Z
--- license: cc-by-nc-4.0 language: - ja tags: - roleplay base_model: - Aratako/Ninja-v1-RP-expressive-breadcrumbs --- # Ninja-v1-RP-expressive-GGUF ## Overview This is a quantized GGUF version of [Aratako/Ninja-v1-RP-expressive-breadcrumbs](https://huggingface.co/Aratako/Ninja-v1-RP-expressive-breadcrumbs). Please check the original model for the license and other details.
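## Usage (sketch) A minimal, illustrative llama.cpp invocation is shown below. The `.gguf` file name is an assumption — use whichever quant you actually download from this repository — and the prompt and flags are placeholders rather than recommended settings.

```bash
# Illustrative only: replace the model file name with the quant you downloaded.
# Depending on your llama.cpp build, the binary may be ./main or llama-cli.
./main -m Ninja-v1-RP-expressive-breadcrumbs-Q4_K_M.gguf \
  -p "Write the opening line of a roleplay scene." \
  -n 256 -c 4096
```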
RichardErkhov/aisingapore_-_sea-lion-7b-gguf
RichardErkhov
2024-05-31T09:07:58Z
385
0
null
[ "gguf", "arxiv:2101.09635", "region:us" ]
null
2024-05-31T06:05:08Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) sea-lion-7b - GGUF - Model creator: https://huggingface.co/aisingapore/ - Original model: https://huggingface.co/aisingapore/sea-lion-7b/ | Name | Quant method | Size | | ---- | ---- | ---- | | [sea-lion-7b.Q2_K.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-gguf/blob/main/sea-lion-7b.Q2_K.gguf) | Q2_K | 3.07GB | | [sea-lion-7b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-gguf/blob/main/sea-lion-7b.IQ3_XS.gguf) | IQ3_XS | 3.35GB | | [sea-lion-7b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-gguf/blob/main/sea-lion-7b.IQ3_S.gguf) | IQ3_S | 3.42GB | | [sea-lion-7b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-gguf/blob/main/sea-lion-7b.Q3_K_S.gguf) | Q3_K_S | 3.42GB | | [sea-lion-7b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-gguf/blob/main/sea-lion-7b.IQ3_M.gguf) | IQ3_M | 3.72GB | | [sea-lion-7b.Q3_K.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-gguf/blob/main/sea-lion-7b.Q3_K.gguf) | Q3_K | 3.97GB | | [sea-lion-7b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-gguf/blob/main/sea-lion-7b.Q3_K_M.gguf) | Q3_K_M | 3.97GB | | [sea-lion-7b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-gguf/blob/main/sea-lion-7b.Q3_K_L.gguf) | Q3_K_L | 4.27GB | | [sea-lion-7b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-gguf/blob/main/sea-lion-7b.IQ4_XS.gguf) | IQ4_XS | 4.07GB | | [sea-lion-7b.Q4_0.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-gguf/blob/main/sea-lion-7b.Q4_0.gguf) | Q4_0 | 4.22GB | | [sea-lion-7b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-gguf/blob/main/sea-lion-7b.IQ4_NL.gguf) | IQ4_NL | 4.25GB | | [sea-lion-7b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-gguf/blob/main/sea-lion-7b.Q4_K_S.gguf) | Q4_K_S | 4.25GB | | [sea-lion-7b.Q4_K.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-gguf/blob/main/sea-lion-7b.Q4_K.gguf) | Q4_K | 4.67GB | | [sea-lion-7b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-gguf/blob/main/sea-lion-7b.Q4_K_M.gguf) | Q4_K_M | 4.67GB | | [sea-lion-7b.Q4_1.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-gguf/blob/main/sea-lion-7b.Q4_1.gguf) | Q4_1 | 4.6GB | | [sea-lion-7b.Q5_0.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-gguf/blob/main/sea-lion-7b.Q5_0.gguf) | Q5_0 | 4.97GB | | [sea-lion-7b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-gguf/blob/main/sea-lion-7b.Q5_K_S.gguf) | Q5_K_S | 4.97GB | | [sea-lion-7b.Q5_K.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-gguf/blob/main/sea-lion-7b.Q5_K.gguf) | Q5_K | 5.3GB | | [sea-lion-7b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-gguf/blob/main/sea-lion-7b.Q5_K_M.gguf) | Q5_K_M | 5.3GB | | [sea-lion-7b.Q5_1.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-gguf/blob/main/sea-lion-7b.Q5_1.gguf) | Q5_1 | 5.35GB | | [sea-lion-7b.Q6_K.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-gguf/blob/main/sea-lion-7b.Q6_K.gguf) | Q6_K | 5.77GB | | 
[sea-lion-7b.Q8_0.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-gguf/blob/main/sea-lion-7b.Q8_0.gguf) | Q8_0 | 7.46GB | Original model description: --- license: mit language: - en - zh - id - ms - th - vi - fil - ta - my - km - lo --- # SEA-LION SEA-LION is a collection of Large Language Models (LLMs) which has been pretrained and instruct-tuned for the Southeast Asia (SEA) region. The size of the models range from 3 billion to 7 billion parameters. This is the card for the SEA-LION 7B base model. SEA-LION stands for <i>Southeast Asian Languages In One Network</i>. ## Model Details ### Model Description The SEA-LION model is a significant leap forward in the field of Natural Language Processing, specifically trained to understand the SEA regional context. SEA-LION is built on the robust MPT architecture and has a vocabulary size of 256K. For tokenization, the model employs our custom SEABPETokenizer, which is specially tailored for SEA languages, ensuring optimal model performance. The training data for SEA-LION encompasses 980B tokens. - **Developed by:** Products Pillar, AI Singapore - **Funded by:** Singapore NRF - **Model type:** Decoder - **Languages:** English, Chinese, Indonesian, Malay, Thai, Vietnamese, Filipino, Tamil, Burmese, Khmer, Lao - **License:** MIT License ### Performance Benchmarks SEA-LION has an average performance on general tasks in English (as measured by Hugging Face's LLM Leaderboard): | Model | ARC | HellaSwag | MMLU | TruthfulQA | Average | |-------------|:-----:|:---------:|:-----:|:----------:|:-------:| | SEA-LION 7B | 39.93 | 68.51 | 26.87 | 35.09 | 42.60 | ## Training Details ### Data SEA-LION was trained on 980B tokens of the following data: | Data Source | Unique Tokens | Multiplier | Total Tokens | Percentage | |---------------------------|:-------------:|:----------:|:------------:|:----------:| | RefinedWeb - English | 571.3B | 1 | 571.3B | 58.20% | | mC4 - Chinese | 91.2B | 1 | 91.2B | 9.29% | | mC4 - Indonesian | 3.68B | 4 | 14.7B | 1.50% | | mC4 - Malay | 0.72B | 4 | 2.9B | 0.29% | | mC4 - Filipino | 1.32B | 4 | 5.3B | 0.54% | | mC4 - Burmese | 1.2B | 4 | 4.9B | 0.49% | | mC4 - Vietnamese | 63.4B | 1 | 63.4B | 6.46% | | mC4 - Thai | 5.8B | 2 | 11.6B | 1.18% | | WangChanBERTa - Thai | 5B | 2 | 10B | 1.02% | | mC4 - Lao | 0.27B | 4 | 1.1B | 0.12% | | mC4 - Khmer | 0.97B | 4 | 3.9B | 0.40% | | mC4 - Tamil | 2.55B | 4 | 10.2B | 1.04% | | the Stack - Python | 20.9B | 2 | 41.8B | 4.26% | | the Stack - Javascript | 55.6B | 1 | 55.6B | 5.66% | | the Stack - Shell | 1.2B5 | 2 | 2.5B | 0.26% | | the Stack - SQL | 6.4B | 2 | 12.8B | 1.31% | | the Stack - Markdown | 26.6B | 1 | 26.6B | 2.71% | | RedPajama - StackExchange | 21.2B | 1 | 21.2B | 2.16% | | RedPajama - ArXiv | 30.6B | 1 | 30.6B | 3.12% | ### Infrastructure SEA-LION was trained using [MosaicML Composer](https://github.com/mosaicml/composer) on the following hardware: | Training Details | SEA-LION 7B | |----------------------|:------------:| | AWS EC2 p4d.24xlarge | 32 instances | | Nvidia A100 40GB GPU | 256 | | Training Duration | 22 days | ### Configuration | HyperParameter | SEA-LION 7B | |-------------------|:------------------:| | Precision | bfloat16 | | Optimizer | decoupled_adamw | | Scheduler | cosine_with_warmup | | Learning Rate | 6.0e-5 | | Global Batch Size | 2048 | | Micro Batch Size | 4 | ## Technical Specifications ### Model Architecture and Objective SEA-LION is a decoder model using the MPT architecture. 
| Parameter | SEA-LION 7B | |-----------------|:-----------:| | Layers | 32 | | d_model | 4096 | | head_dim | 32 | | Vocabulary | 256000 | | Sequence Length | 2048 | ### Tokenizer Details We sample 20M lines from the training data to train the tokenizer.<br> The framework for training is [SentencePiece](https://github.com/google/sentencepiece).<br> The tokenizer type is Byte-Pair Encoding (BPE). ## The Team Lam Wen Zhi Clarence<br> Leong Wei Qi<br> Li Yier<br> Liu Bing Jie Darius<br> Lovenia Holy<br> Montalan Jann Railey<br> Ng Boon Cheong Raymond<br> Ngui Jian Gang<br> Nguyen Thanh Ngan<br> Ong Tat-Wee David<br> Rengarajan Hamsawardhini<br> Susanto Yosephine<br> Tai Ngee Chia<br> Tan Choon Meng<br> Teo Jin Howe<br> Teo Eng Sipp Leslie<br> Teo Wei Yi<br> Tjhi William<br> Yeo Yeow Tong<br> Yong Xianbin<br> ## Acknowledgements AI Singapore is a national programme supported by the National Research Foundation, Singapore and hosted by the National University of Singapore. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore. ## Contact For more info, please contact us using this [SEA-LION Inquiry Form](https://forms.gle/sLCUVb95wmGf43hi6) [Link to SEA-LION's GitHub repository](https://github.com/aisingapore/sealion) ## Disclaimer This the repository for the base model. The model has _not_ been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights and codes. ## References ```bibtex @misc{lowphansirikul2021wangchanberta, title={WangchanBERTa: Pretraining transformer-based Thai Language Models}, author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong}, year={2021}, eprint={2101.09635}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
Qwen/Qwen2-0.5B-Instruct-GPTQ-Int8
Qwen
2024-06-10T03:03:51Z
385
2
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "chat", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "gptq", "region:us" ]
text-generation
2024-06-06T06:19:06Z
--- license: apache-2.0 language: - en pipeline_tag: text-generation tags: - chat --- # Qwen2-0.5B-Instruct-GPTQ-Int8 ## Introduction Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the instruction-tuned 0.5B Qwen2 model. Compared with the state-of-the-art opensource language models, including the previous released Qwen1.5, Qwen2 has generally surpassed most opensource models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting for language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc. For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/). **Note**: If you encounter ``RuntimeError: probability tensor contains either `inf`, `nan` or element < 0`` during inference with ``transformers``, we recommand [deploying this model with vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html). <br> ## Model Details Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes. ## Training details We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization. ## Requirements The code of Qwen2 has been in the latest Hugging face transformers and we advise you to install `transformers>=4.37.0`, or you might encounter the following error: ``` KeyError: 'qwen2' ``` ## Quickstart Here provides a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and how to generate contents. ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained( "Qwen/Qwen2-0.5B-Instruct-GPTQ-Int8", torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct-GPTQ-Int8") prompt = "Give me a short introduction to large language model." messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(device) generated_ids = model.generate( model_inputs.input_ids, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] ``` ## Benchmark and Speed To compare the generation performance between bfloat16 (bf16) and quantized models such as GPTQ-Int8, GPTQ-Int4, and AWQ, please consult our [Benchmark of Quantized Models](https://qwen.readthedocs.io/en/latest/benchmark/quantization_benchmark.html). This benchmark provides insights into how different quantization techniques affect model performance. 
For those interested in understanding the inference speed and memory consumption when deploying these models with either ``transformers`` or ``vLLM``, we have compiled an extensive [Speed Benchmark](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html). ## Citation If you find our work helpful, feel free to cite us. ``` @article{qwen2, title={Qwen2 Technical Report}, year={2024} } ```
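As noted above, vLLM is the recommended way to run this GPTQ-Int8 checkpoint if ``transformers`` inference misbehaves. The snippet below is a minimal offline-inference sketch under that assumption; the sampling values are illustrative, not tuned recommendations.

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model_id = "Qwen/Qwen2-0.5B-Instruct-GPTQ-Int8"
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me a short introduction to large language model."},
]
# Build the chat-formatted prompt exactly as in the transformers quickstart above
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# vLLM picks up the GPTQ quantization from the model config
llm = LLM(model=model_id)
sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=512)
outputs = llm.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)
```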
mradermacher/FlatDolphinMaid-8x7B-GGUF
mradermacher
2024-06-10T06:23:45Z
385
0
transformers
[ "transformers", "gguf", "not-for-all-audiences", "nsfw", "en", "base_model:Undi95/FlatDolphinMaid-8x7B", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-06-08T22:55:03Z
--- base_model: Undi95/FlatDolphinMaid-8x7B language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher tags: - not-for-all-audiences - nsfw --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Undi95/FlatDolphinMaid-8x7B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/FlatDolphinMaid-8x7B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/FlatDolphinMaid-8x7B-GGUF/resolve/main/FlatDolphinMaid-8x7B.Q2_K.gguf) | Q2_K | 17.4 | | | [GGUF](https://huggingface.co/mradermacher/FlatDolphinMaid-8x7B-GGUF/resolve/main/FlatDolphinMaid-8x7B.IQ3_XS.gguf) | IQ3_XS | 19.5 | | | [GGUF](https://huggingface.co/mradermacher/FlatDolphinMaid-8x7B-GGUF/resolve/main/FlatDolphinMaid-8x7B.IQ3_S.gguf) | IQ3_S | 20.5 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/FlatDolphinMaid-8x7B-GGUF/resolve/main/FlatDolphinMaid-8x7B.Q3_K_S.gguf) | Q3_K_S | 20.5 | | | [GGUF](https://huggingface.co/mradermacher/FlatDolphinMaid-8x7B-GGUF/resolve/main/FlatDolphinMaid-8x7B.IQ3_M.gguf) | IQ3_M | 21.5 | | | [GGUF](https://huggingface.co/mradermacher/FlatDolphinMaid-8x7B-GGUF/resolve/main/FlatDolphinMaid-8x7B.Q3_K_M.gguf) | Q3_K_M | 22.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/FlatDolphinMaid-8x7B-GGUF/resolve/main/FlatDolphinMaid-8x7B.Q3_K_L.gguf) | Q3_K_L | 24.3 | | | [GGUF](https://huggingface.co/mradermacher/FlatDolphinMaid-8x7B-GGUF/resolve/main/FlatDolphinMaid-8x7B.IQ4_XS.gguf) | IQ4_XS | 25.5 | | | [GGUF](https://huggingface.co/mradermacher/FlatDolphinMaid-8x7B-GGUF/resolve/main/FlatDolphinMaid-8x7B.Q4_K_S.gguf) | Q4_K_S | 26.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/FlatDolphinMaid-8x7B-GGUF/resolve/main/FlatDolphinMaid-8x7B.Q4_K_M.gguf) | Q4_K_M | 28.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/FlatDolphinMaid-8x7B-GGUF/resolve/main/FlatDolphinMaid-8x7B.Q5_K_S.gguf) | Q5_K_S | 32.3 | | | [GGUF](https://huggingface.co/mradermacher/FlatDolphinMaid-8x7B-GGUF/resolve/main/FlatDolphinMaid-8x7B.Q5_K_M.gguf) | Q5_K_M | 33.3 | | | [GGUF](https://huggingface.co/mradermacher/FlatDolphinMaid-8x7B-GGUF/resolve/main/FlatDolphinMaid-8x7B.Q6_K.gguf) | Q6_K | 38.5 | very good quality | | [GGUF](https://huggingface.co/mradermacher/FlatDolphinMaid-8x7B-GGUF/resolve/main/FlatDolphinMaid-8x7B.Q8_0.gguf) | Q8_0 | 49.7 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Dark-Miqu-103B-i1-GGUF
mradermacher
2024-06-12T05:09:24Z
385
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:jukofyork/Dark-Miqu-103B", "license:other", "endpoints_compatible", "region:us" ]
null
2024-06-10T19:26:30Z
--- base_model: jukofyork/Dark-Miqu-103B language: - en library_name: transformers license: other quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/jukofyork/Dark-Miqu-103B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Dark-Miqu-103B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Dark-Miqu-103B-i1-GGUF/resolve/main/Dark-Miqu-103B.i1-IQ1_S.gguf) | i1-IQ1_S | 21.8 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Dark-Miqu-103B-i1-GGUF/resolve/main/Dark-Miqu-103B.i1-IQ1_M.gguf) | i1-IQ1_M | 23.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Dark-Miqu-103B-i1-GGUF/resolve/main/Dark-Miqu-103B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 27.4 | | | [GGUF](https://huggingface.co/mradermacher/Dark-Miqu-103B-i1-GGUF/resolve/main/Dark-Miqu-103B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 30.5 | | | [GGUF](https://huggingface.co/mradermacher/Dark-Miqu-103B-i1-GGUF/resolve/main/Dark-Miqu-103B.i1-IQ2_S.gguf) | i1-IQ2_S | 32.0 | | | [GGUF](https://huggingface.co/mradermacher/Dark-Miqu-103B-i1-GGUF/resolve/main/Dark-Miqu-103B.i1-IQ2_M.gguf) | i1-IQ2_M | 34.8 | | | [GGUF](https://huggingface.co/mradermacher/Dark-Miqu-103B-i1-GGUF/resolve/main/Dark-Miqu-103B.i1-Q2_K.gguf) | i1-Q2_K | 38.0 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Dark-Miqu-103B-i1-GGUF/resolve/main/Dark-Miqu-103B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 39.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Dark-Miqu-103B-i1-GGUF/resolve/main/Dark-Miqu-103B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 42.3 | | | [GGUF](https://huggingface.co/mradermacher/Dark-Miqu-103B-i1-GGUF/resolve/main/Dark-Miqu-103B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 44.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Dark-Miqu-103B-i1-GGUF/resolve/main/Dark-Miqu-103B.i1-IQ3_S.gguf) | i1-IQ3_S | 44.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Dark-Miqu-103B-i1-GGUF/resolve/main/Dark-Miqu-103B.i1-IQ3_M.gguf) | i1-IQ3_M | 46.2 | | | [GGUF](https://huggingface.co/mradermacher/Dark-Miqu-103B-i1-GGUF/resolve/main/Dark-Miqu-103B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 49.7 | IQ3_S probably better | | [PART 1](https://huggingface.co/mradermacher/Dark-Miqu-103B-i1-GGUF/resolve/main/Dark-Miqu-103B.i1-Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Dark-Miqu-103B-i1-GGUF/resolve/main/Dark-Miqu-103B.i1-Q3_K_L.gguf.part2of2) | i1-Q3_K_L | 54.2 | IQ3_M probably better | | [PART 1](https://huggingface.co/mradermacher/Dark-Miqu-103B-i1-GGUF/resolve/main/Dark-Miqu-103B.i1-IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Dark-Miqu-103B-i1-GGUF/resolve/main/Dark-Miqu-103B.i1-IQ4_XS.gguf.part2of2) | i1-IQ4_XS | 55.2 | | | [PART 1](https://huggingface.co/mradermacher/Dark-Miqu-103B-i1-GGUF/resolve/main/Dark-Miqu-103B.i1-Q4_0.gguf.part1of2) [PART 
2](https://huggingface.co/mradermacher/Dark-Miqu-103B-i1-GGUF/resolve/main/Dark-Miqu-103B.i1-Q4_0.gguf.part2of2) | i1-Q4_0 | 58.4 | fast, low quality | | [PART 1](https://huggingface.co/mradermacher/Dark-Miqu-103B-i1-GGUF/resolve/main/Dark-Miqu-103B.i1-Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Dark-Miqu-103B-i1-GGUF/resolve/main/Dark-Miqu-103B.i1-Q4_K_S.gguf.part2of2) | i1-Q4_K_S | 58.7 | optimal size/speed/quality | | [PART 1](https://huggingface.co/mradermacher/Dark-Miqu-103B-i1-GGUF/resolve/main/Dark-Miqu-103B.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Dark-Miqu-103B-i1-GGUF/resolve/main/Dark-Miqu-103B.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 62.0 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/Dark-Miqu-103B-i1-GGUF/resolve/main/Dark-Miqu-103B.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Dark-Miqu-103B-i1-GGUF/resolve/main/Dark-Miqu-103B.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 71.1 | | | [PART 1](https://huggingface.co/mradermacher/Dark-Miqu-103B-i1-GGUF/resolve/main/Dark-Miqu-103B.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Dark-Miqu-103B-i1-GGUF/resolve/main/Dark-Miqu-103B.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 73.0 | | | [PART 1](https://huggingface.co/mradermacher/Dark-Miqu-103B-i1-GGUF/resolve/main/Dark-Miqu-103B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Dark-Miqu-103B-i1-GGUF/resolve/main/Dark-Miqu-103B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 84.8 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
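For the quants above that are split into `part1of2`/`part2of2` files, a common approach (and the one described in the READMEs linked above) is to concatenate the parts into a single GGUF before loading. The sketch below assumes the parts are plain byte-level splits; the file names come from the table above, and the prompt and flags are illustrative.

```bash
# Join the split parts into one GGUF file, then run it with llama.cpp
# (depending on your build, the binary may be ./main or llama-cli)
cat Dark-Miqu-103B.i1-Q4_K_M.gguf.part1of2 \
    Dark-Miqu-103B.i1-Q4_K_M.gguf.part2of2 > Dark-Miqu-103B.i1-Q4_K_M.gguf
./main -m Dark-Miqu-103B.i1-Q4_K_M.gguf -p "Write a short, grim opening paragraph." -n 256
```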
NikolayKozloff/SeaPhi3-mini-Q8_0-GGUF
NikolayKozloff
2024-06-11T01:52:40Z
385
1
null
[ "gguf", "llama-cpp", "gguf-my-repo", "it", "dataset:SeacomSrl/rag-data", "base_model:SeacomSrl/SeaPhi3-mini", "license:apache-2.0", "region:us" ]
null
2024-06-11T01:52:29Z
--- language: - it license: apache-2.0 tags: - llama-cpp - gguf-my-repo base_model: SeacomSrl/SeaPhi3-mini datasets: - SeacomSrl/rag-data --- # NikolayKozloff/SeaPhi3-mini-Q8_0-GGUF This model was converted to GGUF format from [`SeacomSrl/SeaPhi3-mini`](https://huggingface.co/SeacomSrl/SeaPhi3-mini) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/SeacomSrl/SeaPhi3-mini) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama --hf-repo NikolayKozloff/SeaPhi3-mini-Q8_0-GGUF --hf-file seaphi3-mini-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo NikolayKozloff/SeaPhi3-mini-Q8_0-GGUF --hf-file seaphi3-mini-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./main --hf-repo NikolayKozloff/SeaPhi3-mini-Q8_0-GGUF --hf-file seaphi3-mini-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./server --hf-repo NikolayKozloff/SeaPhi3-mini-Q8_0-GGUF --hf-file seaphi3-mini-q8_0.gguf -c 2048 ```
dryade36513/MooMooFusion-V10
dryade36513
2024-06-23T16:35:47Z
385
1
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "MooMooFusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-06-23T04:29:46Z
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - stable-diffusion - stable-diffusion-diffusers - diffusers - text-to-image - MooMooFusion --- # MooMooFusion V10 Original pages: https://civitai.com/models/133364?modelVersionId=559098 ## MooMooFusion ![MooMooFusion](https://i.imgur.com/PqRYKid.gif) **MooMooFusion - Specializing in Generating Taiwanese Portraits** In training this model, I've strived to showcase the exceptional quality and diversity of Taiwanese faces: - Exquisite Detail: Skin texture, hair, and eyes are rendered with remarkable realism. - Diversity: Encompassing a wide range of ages, genders, and styles. - Vivid Expressions: Each character exhibits unique and natural facial expressions. - Superb Lighting: Lighting and shadow effects add depth and dimension. - Varied Attire and Accessories: From modern to vintage styles, rich in detail. - Age Progression: Smooth transitions from childhood to old age. - Facial Features: East Asian characteristics are accurately portrayed while maintaining individuality. This groundbreaking model produces portraits comparable to professional photography. From adorable youthfulness to mature charm, MooMooFusion creates lifelike, expressive East Asian faces. Whether for fashion portraits or dramatic characters, this versatile model excels. Its outstanding performance in detail rendering, lighting effects, and emotional expression opens up limitless possibilities for creative professionals.
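## Example usage A minimal `diffusers` sketch is shown below (the repository is tagged for `StableDiffusionPipeline`); the prompt, negative prompt, and sampling settings are illustrative assumptions rather than values recommended by the author.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the checkpoint from this repository
pipe = StableDiffusionPipeline.from_pretrained(
    "dryade36513/MooMooFusion-V10", torch_dtype=torch.float16
).to("cuda")

# Illustrative prompt and settings for a portrait-style generation
image = pipe(
    "portrait photo of a young Taiwanese woman, natural light, detailed skin texture",
    negative_prompt="lowres, bad anatomy, blurry",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("moomoofusion_portrait.png")
```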
Helsinki-NLP/opus-mt-lg-en
Helsinki-NLP
2023-08-16T11:59:55Z
384
2
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lg", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-lg-en * source languages: lg * target languages: en * OPUS readme: [lg-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lg-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lg-en/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lg-en/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lg-en/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.lg.en | 32.6 | 0.480 | | Tatoeba.lg.en | 5.4 | 0.243 |
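## Example usage A minimal `transformers` sketch for translating Luganda to English; the sample sentence is an illustrative greeting, not drawn from the test sets above.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-lg-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a Luganda sentence ("How are you?") into English
batch = tokenizer(["Oli otya?"], return_tensors="pt", padding=True)
generated_ids = model.generate(**batch)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```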
valhalla/t5-base-qa-qg-hl
valhalla
2020-12-11T22:03:44Z
384
18
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "question-generation", "dataset:squad", "arxiv:1910.10683", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- datasets: - squad tags: - question-generation widget: - text: "generate question: <hl> 42 <hl> is the answer to life, the universe and everything. </s>" - text: "question: What is 42 context: 42 is the answer to life, the universe and everything. </s>" license: mit --- ## T5 for multi-task QA and QG This is multi-task [t5-base](https://arxiv.org/abs/1910.10683) model trained for question answering and answer aware question generation tasks. For question generation the answer spans are highlighted within the text with special highlight tokens (`<hl>`) and prefixed with 'generate question: '. For QA the input is processed like this `question: question_text context: context_text </s>` You can play with the model using the inference API. Here's how you can use it For QG `generate question: <hl> 42 <hl> is the answer to life, the universe and everything. </s>` For QA `question: What is 42 context: 42 is the answer to life, the universe and everything. </s>` For more deatils see [this](https://github.com/patil-suraj/question_generation) repo. ### Model in action 🚀 You'll need to clone the [repo](https://github.com/patil-suraj/question_generation). [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb) ```python3 from pipelines import pipeline nlp = pipeline("multitask-qa-qg", model="valhalla/t5-base-qa-qg-hl") # to generate questions simply pass the text nlp("42 is the answer to life, the universe and everything.") => [{'answer': '42', 'question': 'What is the answer to life, the universe and everything?'}] # for qa pass a dict with "question" and "context" nlp({ "question": "What is 42 ?", "context": "42 is the answer to life, the universe and everything." }) => 'the answer to life, the universe and everything' ```
google/mobilenet_v2_0.35_96
google
2023-05-16T16:38:30Z
384
0
transformers
[ "transformers", "pytorch", "mobilenet_v2", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:1801.04381", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-11-10T16:05:31Z
--- license: other tags: - vision - image-classification datasets: - imagenet-1k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # MobileNet V2 MobileNet V2 model pre-trained on ImageNet-1k at resolution 96x96. It was introduced in [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen. It was first released in [this repository](https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet). Disclaimer: The team releasing MobileNet V2 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description From the [original README](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md): > MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation similar to how other popular large scale models, such as Inception, are used. MobileNets can be run efficiently on mobile devices [...] MobileNets trade off between latency, size and accuracy while comparing favorably with popular models from the literature. The checkpoints are named **mobilenet\_v2\_*depth*\_*size***, for example **mobilenet\_v2\_0.35\_96**, where **0.35** is the depth multiplier and **96** is the resolution of the input images the model was trained on. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=mobilenet_v2) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import AutoImageProcessor, AutoModelForImageClassification from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) preprocessor = AutoImageProcessor.from_pretrained("google/mobilenet_v2_0.35_96") model = AutoModelForImageClassification.from_pretrained("google/mobilenet_v2_0.35_96") inputs = preprocessor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` Note: This model actually predicts 1001 classes, the 1000 classes from ImageNet plus an extra “background” class (index 0). Currently, both the feature extractor and model support PyTorch. ### BibTeX entry and citation info ```bibtex @inproceedings{mobilenetv22018, title={MobileNetV2: Inverted Residuals and Linear Bottlenecks}, author={Mark Sandler and Andrew Howard and Menglong Zhu and Andrey Zhmoginov and Liang-Chieh Chen}, booktitle={CVPR}, year={2018} } ```
timm/efficientnet_el.ra_in1k
timm
2023-04-27T21:11:51Z
384
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2110.00476", "arxiv:2003.02838", "arxiv:1905.11946", "license:apache-2.0", "region:us" ]
image-classification
2022-12-12T23:57:41Z
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---

# Model card for efficientnet_el.ra_in1k

An EfficientNet-EdgeTPU image classification model. Trained on ImageNet-1k in `timm` using the recipe template described below.

Recipe details:
 * RandAugment `RA` recipe. Inspired by and evolved from EfficientNet RandAugment recipes. Published as `B` recipe in [ResNet Strikes Back](https://arxiv.org/abs/2110.00476).
 * RMSProp (TF 1.0 behaviour) optimizer, EMA weight averaging
 * Step (exponential decay w/ staircase) LR schedule with warmup

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 10.6
  - GMACs: 8.0
  - Activations (M): 30.7
  - Image size: 300 x 300
- **Papers:**
  - Accelerator-aware Neural Network Design using AutoML: https://arxiv.org/abs/2003.02838
  - EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks: https://arxiv.org/abs/1905.11946
  - ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/huggingface/pytorch-image-models

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import torch  # needed for torch.topk below
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('efficientnet_el.ra_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'efficientnet_el.ra_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output, e.g.:
    #  torch.Size([1, 32, 150, 150])
    #  torch.Size([1, 40, 75, 75])
    #  torch.Size([1, 56, 38, 38])
    #  torch.Size([1, 176, 19, 19])
    #  torch.Size([1, 232, 10, 10])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'efficientnet_el.ra_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1536, 10, 10) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in the timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).

## Citation
```bibtex
@article{gupta2020accelerator,
  title={Accelerator-aware neural network design using automl},
  author={Gupta, Suyog and Akin, Berkin},
  journal={arXiv preprint arXiv:2003.02838},
  year={2020}
}
```
```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@inproceedings{tan2019efficientnet,
  title={Efficientnet: Rethinking model scaling for convolutional neural networks},
  author={Tan, Mingxing and Le, Quoc},
  booktitle={International conference on machine learning},
  pages={6105--6114},
  year={2019},
  organization={PMLR}
}
```
```bibtex
@inproceedings{wightman2021resnet,
  title={ResNet strikes back: An improved training procedure in timm},
  author={Wightman, Ross and Touvron, Hugo and Jegou, Herve},
  booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future}
}
```
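To locate related EfficientNet-EdgeTPU checkpoints for the comparison mentioned above, timm's model registry can be queried directly. This small sketch is an addition for convenience (not from the original card) and assumes any recent `timm` release:

```python
import timm

# list pretrained checkpoints whose names match the EfficientNet-EdgeTPU "el" family,
# e.g. the efficientnet_el.ra_in1k model described in this card
for name in timm.list_models('efficientnet_el*', pretrained=True):
    print(name)
```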
timm/gernet_m.idstcv_in1k
timm
2024-02-10T23:34:37Z
384
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2006.14090", "license:apache-2.0", "region:us" ]
image-classification
2023-03-22T07:15:44Z
---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---

# Model card for gernet_m.idstcv_in1k

A GENet (GPU-Efficient-Networks) image classification model. Trained on ImageNet-1k by the paper authors.

This model architecture is implemented using `timm`'s flexible [BYOBNet (Bring-Your-Own-Blocks Network)](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/byobnet.py).

BYOBNet allows configuration of:
 * block / stage layout
 * stem layout
 * output stride (dilation)
 * activation and norm layers
 * channel and spatial / self-attention layers

...and also includes `timm` features common to many other architectures, including:
 * stochastic depth
 * gradient checkpointing
 * layer-wise LR decay
 * per-stage feature extraction

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 21.1
  - GMACs: 3.0
  - Activations (M): 5.2
  - Image size: 224 x 224
- **Papers:**
  - Neural Architecture Design for GPU-Efficient Networks: https://arxiv.org/abs/2006.14090
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/idstcv/GPU-Efficient-Networks

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import torch  # needed for torch.topk below
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('gernet_m.idstcv_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'gernet_m.idstcv_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output, e.g.:
    #  torch.Size([1, 32, 112, 112])
    #  torch.Size([1, 128, 56, 56])
    #  torch.Size([1, 192, 28, 28])
    #  torch.Size([1, 640, 14, 14])
    #  torch.Size([1, 2560, 7, 7])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'gernet_m.idstcv_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2560, 7, 7) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in the timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).

## Citation
```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@misc{lin2020neural,
  title={Neural Architecture Design for GPU-Efficient Networks},
  author={Ming Lin and Hesen Chen and Xiuyu Sun and Qi Qian and Hao Li and Rong Jin},
  year={2020},
  eprint={2006.14090},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```
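The per-stage feature extraction listed among the BYOBNet/timm features above can be narrowed to specific stages with the `out_indices` argument of `timm.create_model`. The following is a brief illustrative sketch (not part of the original card); the stage indices refer to the five feature maps shown in the Feature Map Extraction example, and the dummy input stands in for a real transformed image:

```python
import timm
import torch

# build a feature backbone that returns only the last two stages
model = timm.create_model(
    'gernet_m.idstcv_in1k',
    pretrained=True,
    features_only=True,
    out_indices=(3, 4),  # stage indices into the five feature maps listed above
).eval()

with torch.no_grad():
    feats = model(torch.randn(1, 3, 224, 224))  # dummy batch at the 224 x 224 training size

for f in feats:
    print(f.shape)  # expected: torch.Size([1, 640, 14, 14]) and torch.Size([1, 2560, 7, 7])
```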
timm/cait_s24_384.fb_dist_in1k
timm
2024-02-10T23:43:05Z
384
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2103.17239", "license:apache-2.0", "region:us" ]
image-classification
2023-04-13T01:46:34Z
---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---

# Model card for cait_s24_384.fb_dist_in1k

A CaiT (Class-Attention in Image Transformers) image classification model. Pretrained on ImageNet-1k with distillation by the paper authors.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 47.1
  - GMACs: 32.2
  - Activations (M): 245.3
  - Image size: 384 x 384
- **Papers:**
  - Going deeper with Image Transformers: https://arxiv.org/abs/2103.17239
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/facebookresearch/deit

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import torch  # needed for torch.topk below
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('cait_s24_384.fb_dist_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'cait_s24_384.fb_dist_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 577, 384) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Citation
```bibtex
@InProceedings{Touvron_2021_ICCV,
  author    = {Touvron, Hugo and Cord, Matthieu and Sablayrolles, Alexandre and Synnaeve, Gabriel and J\'egou, Herv\'e},
  title     = {Going Deeper With Image Transformers},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2021},
  pages     = {32-42}
}
```
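Because `forward_head(..., pre_logits=True)` yields a single `(1, num_features)` vector per image, the embeddings shown above can be compared directly, e.g. for image retrieval. The snippet below is an illustrative sketch (not from the original card) using only standard PyTorch; `model` and `transforms` are the objects built in the Image Embeddings example, and `img_a` / `img_b` are placeholder PIL images:

```python
import torch
import torch.nn.functional as F

def embed(model, transforms, img):
    """Return an L2-normalized (num_features,) embedding for one PIL image."""
    with torch.no_grad():
        feats = model.forward_features(transforms(img).unsqueeze(0))
        pooled = model.forward_head(feats, pre_logits=True)  # (1, num_features)
    return F.normalize(pooled, dim=-1).squeeze(0)

# cosine similarity between two images (closer to 1.0 means more similar embeddings)
# similarity = torch.dot(embed(model, transforms, img_a), embed(model, transforms, img_b)).item()
```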
KaiNylund/t5-60M-poli_aff-2020
KaiNylund
2023-07-08T01:39:22Z
384
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
2023-07-08T01:38:59Z
Entry not found
srnair/blade-en-zh
srnair
2023-07-15T20:53:56Z
384
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-15T20:39:36Z
Entry not found