Dataset schema (column, dtype, observed min/max):

| Column | Dtype | Min | Max |
| --- | --- | --- | --- |
| modelId | string (length) | 5 | 122 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | – | – |
| downloads | int64 | 0 | 738M |
| likes | int64 | 0 | 11k |
| library_name | string (245 classes) | – | – |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (48 classes) | – | – |
| createdAt | timestamp[us, tz=UTC] | – | – |
| card | string (length) | 1 | 901k |
abmorton/wall-spring-1
abmorton
2024-07-01T00:46:59Z
584
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-07-01T00:42:44Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### wall-spring-1 Dreambooth model trained by abmorton with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook. Test the concept via the A1111 Colab notebook [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb). Sample pictures of this concept:
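For local testing, a minimal diffusers sketch along these lines should load the checkpoint (untested against this specific repo; the tags indicate a `StableDiffusionPipeline`, and the concept token in the prompt is an assumption based on the repo name):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "abmorton/wall-spring-1", torch_dtype=torch.float16
).to("cuda")

# Using "wall-spring-1" as the Dreambooth concept token is an assumption.
image = pipe("a photo of wall-spring-1").images[0]
image.save("sample.png")
```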
google/long-t5-tglobal-xl
google
2023-01-24T17:11:32Z
583
21
transformers
[ "transformers", "pytorch", "jax", "longt5", "text2text-generation", "en", "arxiv:2112.07916", "arxiv:1912.08777", "arxiv:1910.10683", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-06-14T08:32:52Z
--- license: apache-2.0 language: en --- # LongT5 (transient-global attention, XL-sized model) LongT5 model pre-trained on the English language. The model was introduced in the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/pdf/2112.07916.pdf) by Guo et al. and first released in [the LongT5 repository](https://github.com/google-research/longt5). The full model architecture and configuration can be found in the [Flaxformer repository](https://github.com/google/flaxformer), which builds on another Google research project, [T5x](https://github.com/google-research/t5x). Disclaimer: The team releasing LongT5 did not write a model card for this model, so this model card has been written by the Hugging Face team. ## Model description The LongT5 model is an encoder-decoder transformer pre-trained in a text-to-text denoising generative setting ([Pegasus-like generation pre-training](https://arxiv.org/pdf/1912.08777.pdf)). LongT5 is an extension of the [T5 model](https://arxiv.org/pdf/1910.10683.pdf), and it enables using one of two efficient attention mechanisms: (1) Local attention, or (2) Transient-Global attention. These attention sparsity patterns allow the model to handle long input sequences efficiently. LongT5 is particularly effective when fine-tuned for text generation tasks (summarization, question answering) that require handling long input sequences (up to 16,384 tokens). ## Intended uses & limitations The model is mostly meant to be fine-tuned on a supervised dataset. See the [model hub](https://huggingface.co/models?search=longt5) to look for fine-tuned versions on a task that interests you. ### How to use ```python from transformers import AutoTokenizer, LongT5Model tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-xl") model = LongT5Model.from_pretrained("google/long-t5-tglobal-xl") inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` ### BibTeX entry and citation info ```bibtex @article{guo2021longt5, title={LongT5: Efficient Text-To-Text Transformer for Long Sequences}, author={Guo, Mandy and Ainslie, Joshua and Uthus, David and Ontanon, Santiago and Ni, Jianmo and Sung, Yun-Hsuan and Yang, Yinfei}, journal={arXiv preprint arXiv:2112.07916}, year={2021} } ```
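For the generation-style tasks the card mentions (summarization, question answering), a conditional-generation sketch is closer to real usage. This is a minimal example with `LongT5ForConditionalGeneration`; note that the raw pre-trained checkpoint is meant to be fine-tuned first, so outputs will not be meaningful until then:

```python
from transformers import AutoTokenizer, LongT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-xl")
model = LongT5ForConditionalGeneration.from_pretrained("google/long-t5-tglobal-xl")

# LongT5 is designed for long inputs - up to 16,384 tokens.
long_document = "..."  # your long input text here
inputs = tokenizer(long_document, return_tensors="pt", truncation=True, max_length=16384)

summary_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```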
TheBloke/Luban-13B-GGUF
TheBloke
2023-09-27T12:46:46Z
583
6
transformers
[ "transformers", "gguf", "llama", "text-generation", "en", "dataset:Open-Orca/OpenOrca", "base_model:AIDC-ai-business/Luban-13B", "license:cc-by-nc-4.0", "text-generation-inference", "region:us" ]
text-generation
2023-08-30T15:46:28Z
--- language: - en license: cc-by-nc-4.0 datasets: - Open-Orca/OpenOrca model_name: Luban 13B base_model: AIDC-ai-business/Luban-13B inference: false model_creator: AIDC-ai-business model_type: llama pipeline_tag: text-generation prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Luban 13B - GGUF - Model creator: [AIDC-ai-business](https://huggingface.co/AIDC-ai-business) - Original model: [Luban 13B](https://huggingface.co/AIDC-ai-business/Luban-13B) <!-- description start --> ## Description This repo contains GGUF format model files for [AIDC-ai-business's Luban 13B](https://huggingface.co/AIDC-ai-business/Luban-13B). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. 
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Luban-13B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Luban-13B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Luban-13B-GGUF) * [AIDC-ai-business's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/AIDC-ai-business/Luban-13B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- licensing start --> ## Licensing The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license. As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly. In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [AIDC-ai-business's Luban 13B](https://huggingface.co/AIDC-ai-business/Luban-13B). <!-- licensing end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221). They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
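As a concrete check on those figures, here is a small arithmetic sketch reproducing the 4.5 bpw quoted for GGML_TYPE_Q4_K. The assumption that each super-block additionally stores one fp16 scale and one fp16 min is mine and is not stated in the list above:

```python
# Back-of-the-envelope bits-per-weight for GGML_TYPE_Q4_K, from the structure above.
# Assumption: each super-block also stores one fp16 scale and one fp16 min.
blocks_per_superblock = 8
weights_per_block = 32
total_weights = blocks_per_superblock * weights_per_block  # 256 weights

weight_bits = total_weights * 4                      # 4-bit quants
scale_min_bits = blocks_per_superblock * (6 + 6)     # 6-bit scale + 6-bit min per block
superblock_bits = 2 * 16                             # assumed fp16 scale and min

bpw = (weight_bits + scale_min_bits + superblock_bits) / total_weights
print(bpw)  # 4.5, matching the table
```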
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [luban-13b.Q2_K.gguf](https://huggingface.co/TheBloke/Luban-13B-GGUF/blob/main/luban-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [luban-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Luban-13B-GGUF/blob/main/luban-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [luban-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Luban-13B-GGUF/blob/main/luban-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [luban-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Luban-13B-GGUF/blob/main/luban-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [luban-13b.Q4_0.gguf](https://huggingface.co/TheBloke/Luban-13B-GGUF/blob/main/luban-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [luban-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Luban-13B-GGUF/blob/main/luban-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [luban-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Luban-13B-GGUF/blob/main/luban-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [luban-13b.Q5_0.gguf](https://huggingface.co/TheBloke/Luban-13B-GGUF/blob/main/luban-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [luban-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Luban-13B-GGUF/blob/main/luban-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [luban-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Luban-13B-GGUF/blob/main/luban-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [luban-13b.Q6_K.gguf](https://huggingface.co/TheBloke/Luban-13B-GGUF/blob/main/luban-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [luban-13b.Q8_0.gguf](https://huggingface.co/TheBloke/Luban-13B-GGUF/blob/main/luban-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Luban-13B-GGUF and below it, a specific filename to download, such as: luban-13b.Q4_K_M.gguf. Then click Download. 
### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub>=0.17.1 ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Luban-13B-GGUF luban-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Luban-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Luban-13B-GGUF luban-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m luban-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`. For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
### How to load this model from Python using ctransformers #### First install the package ```bash # Base ctransformers with no GPU acceleration pip install ctransformers>=0.2.24 # Or with CUDA GPU acceleration pip install ctransformers[cuda]>=0.2.24 # Or with ROCm GPU acceleration CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers ``` #### Simple example code to load one of these GGUF models ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Luban-13B-GGUF", model_file="luban-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine-tuning/training. If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. 
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donators! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: AIDC-ai-business's Luban 13B # Luban-13B Fine-tuned from Llama2-13B, we used Orca-style data and other open-source data for fine-tuning. # Model Details * **Trained by:** AIDC AI-Business. * **Model type:** **Luban-13B** is an auto-regressive language model based on the Llama 2 transformer architecture. * **Language(s)**: English * **License for Luban-13B base weights**: Non-Commercial Creative Commons license ([CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/)) # Prompting ## Prompt Template for Alpaca style ``` ### Instruction: <prompt> (without the <>) ### Response: ``` # Evaluation Results ([Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)) | Metric | Value | |-----------------------|-------| | Avg. | 65.03 | | ARC (25-shot) | 63.05 | | HellaSwag (10-shot) | 82.8 | | MMLU (5-shot) | 58.73 | | TruthfulQA (0-shot) | 55.53 | <!-- original-model-card end -->
facebook/mms-tts-deu
facebook
2023-09-01T10:25:29Z
583
5
transformers
[ "transformers", "pytorch", "safetensors", "vits", "text-to-audio", "mms", "text-to-speech", "arxiv:2305.13516", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
text-to-speech
2023-09-01T10:25:13Z
--- license: cc-by-nc-4.0 tags: - mms - vits pipeline_tag: text-to-speech --- # Massively Multilingual Speech (MMS): German, Standard Text-to-Speech This repository contains the **German, Standard (deu)** language text-to-speech (TTS) model checkpoint. This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to provide speech technology across a diverse range of languages. You can find more details about the supported languages and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html), and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts). MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. ## Model Details VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior. A set of spectrogram-based acoustic features is predicted by the flow-based module, which is formed of a Transformer-based text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers, much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to synthesise speech with different rhythms from the same input text. The model is trained end-to-end with a combination of losses derived from the variational lower bound and adversarial training. To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor, the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform. For the MMS project, a separate VITS checkpoint is trained on each language. ## Usage MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint, first install the latest version of the library: ``` pip install --upgrade transformers accelerate ``` Then, run inference with the following code snippet: ```python from transformers import VitsModel, AutoTokenizer import torch model = VitsModel.from_pretrained("facebook/mms-tts-deu") tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-deu") text = "some example text in the German, Standard language" inputs = tokenizer(text, return_tensors="pt") with torch.no_grad(): output = model(**inputs).waveform ``` The resulting waveform can be saved as a `.wav` file: ```python import scipy scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.float().numpy()) ``` Or displayed in a Jupyter Notebook / Google Colab: ```python from IPython.display import Audio Audio(output.float().numpy(), rate=model.config.sampling_rate) ``` ## BibTex citation This model was developed by Vineel Pratap et al. from Meta AI. 
If you use the model, consider citing the MMS paper: ``` @article{pratap2023mms, title={Scaling Speech Technology to 1,000+ Languages}, author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli}, journal={arXiv}, year={2023} } ``` ## License The model is licensed as **CC-BY-NC 4.0**.
TheBloke/Dolphin-Llama-13B-GGUF
TheBloke
2023-09-27T12:52:29Z
583
4
transformers
[ "transformers", "gguf", "llama", "base_model:ehartford/dolphin-llama-13b", "license:other", "text-generation-inference", "region:us" ]
null
2023-09-19T22:55:17Z
--- license: other model_name: Dolphin Llama 13B base_model: ehartford/dolphin-llama-13b inference: false model_creator: Eric Hartford model_type: llama prompt_template: 'SYSTEM: {system_message} USER: {prompt} ASSISTANT: ' quantized_by: TheBloke task_categories: - text-generation --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Dolphin Llama 13B - GGUF - Model creator: [Eric Hartford](https://huggingface.co/ehartford) - Original model: [Dolphin Llama 13B](https://huggingface.co/ehartford/dolphin-llama-13b) <!-- description start --> ## Description This repo contains GGUF format model files for [Eric Hartford's Dolphin Llama 13B](https://huggingface.co/ehartford/dolphin-llama-13b). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. 
<!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Dolphin-Llama-13B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Dolphin-Llama-13B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Dolphin-Llama-13B-GGUF) * [Eric Hartford's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/dolphin-llama-13b) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Orca-Vicuna ``` SYSTEM: {system_message} USER: {prompt} ASSISTANT: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221). They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [dolphin-llama-13b.Q2_K.gguf](https://huggingface.co/TheBloke/Dolphin-Llama-13B-GGUF/blob/main/dolphin-llama-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [dolphin-llama-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Dolphin-Llama-13B-GGUF/blob/main/dolphin-llama-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [dolphin-llama-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Dolphin-Llama-13B-GGUF/blob/main/dolphin-llama-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [dolphin-llama-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Dolphin-Llama-13B-GGUF/blob/main/dolphin-llama-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [dolphin-llama-13b.Q4_0.gguf](https://huggingface.co/TheBloke/Dolphin-Llama-13B-GGUF/blob/main/dolphin-llama-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [dolphin-llama-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Dolphin-Llama-13B-GGUF/blob/main/dolphin-llama-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [dolphin-llama-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Dolphin-Llama-13B-GGUF/blob/main/dolphin-llama-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [dolphin-llama-13b.Q5_0.gguf](https://huggingface.co/TheBloke/Dolphin-Llama-13B-GGUF/blob/main/dolphin-llama-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [dolphin-llama-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Dolphin-Llama-13B-GGUF/blob/main/dolphin-llama-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [dolphin-llama-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Dolphin-Llama-13B-GGUF/blob/main/dolphin-llama-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [dolphin-llama-13b.Q6_K.gguf](https://huggingface.co/TheBloke/Dolphin-Llama-13B-GGUF/blob/main/dolphin-llama-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [dolphin-llama-13b.Q8_0.gguf](https://huggingface.co/TheBloke/Dolphin-Llama-13B-GGUF/blob/main/dolphin-llama-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Dolphin-Llama-13B-GGUF and below it, a specific filename to download, such as: dolphin-llama-13b.Q4_K_M.gguf. 
Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Dolphin-Llama-13B-GGUF dolphin-llama-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Dolphin-Llama-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Dolphin-Llama-13B-GGUF dolphin-llama-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m dolphin-llama-13b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "SYSTEM: {system_message}\nUSER: {prompt}\nASSISTANT:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Dolphin-Llama-13B-GGUF", model_file="dolphin-llama-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine-tuning/training. If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. 
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donators! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Eric Hartford's Dolphin Llama 13B Dolphin 🐬 https://erichartford.com/dolphin This model is based on llama1, so it is for non-commercial use only. Future versions will be trained on llama2 and other open models that are suitable for commercial use. This model is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model compliant with any requests. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models You are responsible for any content you create using this model. Enjoy responsibly. ## Dataset This dataset is an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/). After uncensoring, deduping, and cleaning, our dataset consists of: - 842,610 instructions of FLANv2 augmented with GPT-4 completions - 2,625,353 instructions of FLANv2 augmented with GPT-3.5 completions We followed the submix and system prompt distribution outlined in the Orca paper, with a few exceptions: we included all 75k CoT examples in the FLAN-1m dataset rather than sampling them, and since we found that many items were duplicated, we removed duplicates. Then we filtered out instances of alignment, refusal, avoidance, and bias, in order to produce an uncensored model on which you can layer your personalized alignment LoRA. We also cleaned the data. ## Training We trained with the flan5m (gpt3.5 completions) dataset in its entirety for 3 epochs at a learning rate of 2e-5 before we stopped training to avoid overfitting. We trained with the flan1m (gpt4 completions) dataset in its entirety for 2.5 epochs at a learning rate of 1e-5 before we stopped training to avoid overfitting. It took about 600 hours to train on 8x H100s. We used a prompt format similar to Vicuna, but we added the SYSTEM: field. Prompt format: ``` SYSTEM: {system} USER: {prompt} ASSISTANT: ``` Example: ``` SYSTEM: you are an expert marine biologist. USER: Please list 10 ways that dolphins are superior to orcas. ASSISTANT: ``` ## Evaluation Evaluation will be coming soon. ## Team The core Dolphin Team includes: - Eric "Faldore" Hartford - Pankaj Mathur - Rob "Rohan" O'Callahan - Tom "TheBloke" Jobbins ## Gratitude - Thank you to Microsoft for authoring the Orca paper and inspiring this work. - Special thanks to WingLian, NanoBit, and Teknium for helpful advice. - Special thanks to EdenCoder and chirper.ai for mentorship and financial sponsorship. 
- Special thanks to Kilkonie for his very valued mentorship. - Thank you to Catto. - Thank you to Nicolai Schleifer, financial sponsor. - Thank you to Eric Fleming, financial sponsor. - Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way. - And biggest thanks and love to my wife Hope and my babies, who have endured two months of my absenteeism while working long days and nights on this project. ## Example Output ``` SYSTEM: you are an expert marine biologist. USER: Please list 10 ways that dolphins are superior to orcas. ASSISTANT: ``` 1. Social structure: Dolphins live in complex social groups called pods, which consist of extended family units and can include hundreds of individuals. Orcas, on the other hand, typically form smaller, more temporary groupings known as clans or matrilineal kinship groups. 2. Communication: Dolphins have a highly developed system of vocalizations, body language, and even telepathy to communicate with each other for various purposes such as coordinating cooperative hunting strategies, sharing information about food sources, and maintaining social bonds. While orcas also exhibit some communication abilities, their methods are generally less sophisticated compared to those of dolphins. 3. Intelligence: Dolphins are widely regarded as one of the most intelligent animal species, showcasing advanced problem-solving skills, self-awareness, tool use (such as using sea sponges as tools), and learning capabilities. Some dolphin species have been observed demonstrating altruism and understanding of human emotions. In contrast, while orcas are also smart, they do not display the same level of cognitive complexity as dolphins. 4. Playfulness: Dolphins are well-known for their playful behavior, often engaging in acrobatic displays, chasing each other's tails, and playing with objects like balls or seaweed. This playful nature is thought to be essential for their social bonding and mental stimulation. Although orcas may occasionally interact playfully, it is not as prevalent or diverse as among dolphins. 5. Agility and speed: Dolphins are incredibly agile swimmers, capable of reaching speeds up to 30 miles per hour (48 km/h) when pursuing prey. They can change direction rapidly and maneuver through tight spaces with ease. While orcas are powerful predators and fast swimmers themselves, they cannot match the sheer acceleration and agility of dolphins. 6. Feeding habits: Both dolphins and orcas are carnivorous, feeding primarily on fish and cephalopods. However, dolphins employ a wider variety of hunting techniques, including cooperative hunting strategies involving herding, encircling, and driving schools of fish into shallow waters where they become disoriented and easier to catch. Orcas mainly rely on ambushing their prey from below and using their powerful jaws to strike quickly. 7. Reproduction: Female dolphins usually give birth to a single calf after a gestation period of around 12 months. The mother plays a crucial role in raising and nurturing her offspring until they become independent. On the other hand, female orcas tend to breed every three to five years, giving birth to multiple offspring at once, and relying heavily on their mothers for protection and teaching. 8. Vocal repertoire: Dolphins possess a wide range of sounds, including clicks, whistles, and body sounds, which allow them to convey different messages and establish unique identities within their pods. 
Orcas, too, produce vocalizations but lack the diversity found in dolphin calls. 9. Conservation status: Many dolphin species face threats due to habitat loss, pollution, overfishing, and climate change. As a result, several populations are considered vulnerable or endangered by the International Union for Conservation of Nature (IUCN). In comparison, orca conservation status varies depending on the region; however, no species is currently listed as critically endangered or facing extinction. 10. Adaptability: Dolphins have adapted to various aquatic environments, ranging from warm coastal waters to colder open oceans. Their versatility allows them to thrive in both tropical and temperate climates. Orcas, conversely, are adapted to specific habitats such as cooler coastal waters and are mostly found in the Northern Hemisphere. <!-- original-model-card end -->
macadeliccc/laser-dolphin-mixtral-2x7b-dpo-GGUF
macadeliccc
2024-02-06T05:24:57Z
583
8
null
[ "gguf", "license:cc-by-nc-4.0", "region:us" ]
null
2024-01-18T03:37:10Z
--- license: cc-by-nc-4.0 --- # Laser-dolphin-mixtral-2x7b-dpo-GGUF The original model is listed here [macadeliccc/laser-dolphin-mixtral-2x7b-dpo](https://huggingface.co/macadeliccc/laser-dolphin-mixtral-2x7b-dpo) ## Quantizations + Q8_0 + Q5_K_M + Q4_K_M ## Example Output ![dolphin-cpp-1](dolphin-cpp-1.png) ![dolphin-cpp-2](dolphin-cpp-2.png)
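As with other GGUF repos, a single quant can be fetched with `huggingface-cli`; the filename below is a guess based on the listed quantizations, so check the repo's file list for the exact name:

```shell
pip3 install huggingface-hub
# Filename is illustrative - verify against the repo's file listing.
huggingface-cli download macadeliccc/laser-dolphin-mixtral-2x7b-dpo-GGUF laser-dolphin-mixtral-2x7b-dpo.Q4_K_M.gguf --local-dir .
```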
mradermacher/WizardIceLemonTeaRP-32k-GGUF
mradermacher
2024-05-06T04:41:59Z
583
0
transformers
[ "transformers", "gguf", "en", "base_model:icefog72/WizardIceLemonTeaRP-32k", "endpoints_compatible", "region:us" ]
null
2024-04-17T17:45:36Z
--- base_model: icefog72/WizardIceLemonTeaRP-32k language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/icefog72/WizardIceLemonTeaRP-32k <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/WizardIceLemonTeaRP-32k-GGUF/resolve/main/WizardIceLemonTeaRP-32k.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/WizardIceLemonTeaRP-32k-GGUF/resolve/main/WizardIceLemonTeaRP-32k.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/WizardIceLemonTeaRP-32k-GGUF/resolve/main/WizardIceLemonTeaRP-32k.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/WizardIceLemonTeaRP-32k-GGUF/resolve/main/WizardIceLemonTeaRP-32k.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/WizardIceLemonTeaRP-32k-GGUF/resolve/main/WizardIceLemonTeaRP-32k.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/WizardIceLemonTeaRP-32k-GGUF/resolve/main/WizardIceLemonTeaRP-32k.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/WizardIceLemonTeaRP-32k-GGUF/resolve/main/WizardIceLemonTeaRP-32k.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/WizardIceLemonTeaRP-32k-GGUF/resolve/main/WizardIceLemonTeaRP-32k.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/WizardIceLemonTeaRP-32k-GGUF/resolve/main/WizardIceLemonTeaRP-32k.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/WizardIceLemonTeaRP-32k-GGUF/resolve/main/WizardIceLemonTeaRP-32k.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/WizardIceLemonTeaRP-32k-GGUF/resolve/main/WizardIceLemonTeaRP-32k.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/WizardIceLemonTeaRP-32k-GGUF/resolve/main/WizardIceLemonTeaRP-32k.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/WizardIceLemonTeaRP-32k-GGUF/resolve/main/WizardIceLemonTeaRP-32k.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/WizardIceLemonTeaRP-32k-GGUF/resolve/main/WizardIceLemonTeaRP-32k.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
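For example, a single quant from the table above can be fetched with `huggingface-cli` (this assumes the `huggingface-hub` package is installed; the filename is taken from the Q4_K_M row):

```shell
huggingface-cli download mradermacher/WizardIceLemonTeaRP-32k-GGUF WizardIceLemonTeaRP-32k.Q4_K_M.gguf --local-dir .
```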
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mmnga/haqishen-Llama-3-8B-Japanese-Instruct-gguf
mmnga
2024-04-23T14:54:23Z
583
4
null
[ "gguf", "llama3", "en", "ja", "dataset:TFMC/imatrix-dataset-for-japanese-llm", "license:llama3", "region:us" ]
null
2024-04-23T13:55:17Z
--- license: llama3 language: - en - ja tags: - llama3 datasets: - TFMC/imatrix-dataset-for-japanese-llm --- # haqishen-Llama-3-8B-Japanese-Instruct-gguf A GGUF-format conversion of [Llama-3-8B-Japanese-Instruct, published by haqishen](https://huggingface.co/haqishen/Llama-3-8B-Japanese-Instruct). The imatrix data was created using [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm). ## Other models [mmnga/haqishen-Llama-3-8B-Japanese-Instruct-gguf](https://huggingface.co/mmnga/haqishen-Llama-3-8B-Japanese-Instruct-gguf) ## Usage ``` git clone https://github.com/ggerganov/llama.cpp.git cd llama.cpp make -j ./main -m 'haqishen-Llama-3-8B-Japanese-Instruct-Q4_0.gguf' -p "<|begin_of_text|><|start_header_id|>user <|end_header_id|>\n\nこんにちわ<|eot_id|><|start_header_id|>assistant <|end_header_id|>\n\n" -n 128 ```
votepurchase/waiREALCN_v10
votepurchase
2024-06-04T10:16:28Z
583
0
diffusers
[ "diffusers", "safetensors", "ja", "license:mit", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-05-26T10:54:17Z
--- license: mit language: - ja library_name: diffusers --- [waiREALCN_v10](https://civitai.com/models/469902/wai-realcn)
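The repo's tags indicate an SDXL checkpoint (`diffusers:StableDiffusionXLPipeline`); a minimal loading sketch, untested against this specific repo, might look like:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "votepurchase/waiREALCN_v10", torch_dtype=torch.float16
).to("cuda")

# Prompt is illustrative only; see the linked Civitai page for recommended settings.
image = pipe("a photorealistic portrait, natural lighting").images[0]
image.save("sample.png")
```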
Joaoffg/ELM
Joaoffg
2024-06-04T21:13:58Z
583
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "academic", "university", "en", "nl", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-29T10:42:04Z
--- license: llama2 language: - en - nl tags: - academic - university --- # Model Card for the Erasmian Language Model ELM is a community-driven large language model tailored to the research and education needs of Erasmus University (EUR, Netherlands) students and staff. The model draws inspiration from ChatGPT and Llama in terms of architecture, but it aims to be privacy-sensitive, environmentally conscious, and from and for the Erasmus community. A few key points of ELM are listed under Model Description below. We hope that the ELM experience becomes a template for community-driven, decentralized, and purposeful AI development and application. ## Model Details ### Model Description - The underlying language model is trained and fine-tuned on academic outputs from Erasmus University, such as scientific papers or student theses. - Training and fine-tuning the model is a joint effort of students and staff, transparent for all parties involved. - The prompt-response examples used to fine-tune the model come from students and staff, not crowdsourcing services. - Defining what counts as the "better" model output also comes from the perspective of research and education. The true richness of ELM lies in the way its training data is generated. Which model is "state-of-the-art" may change quickly, but quality data will maintain its relevance and ensure that ELM and its future iterations serve the needs of the community that nurtured it. - **Developed by:** João Gonçalves, Nick Jelicic - **Funded by:** Convergence AI and Digitalization, Erasmus Trustfonds - **Model type:** Llama-2 Instruct - **Language(s) (NLP):** English, Dutch - **License:** Llama2 ### Model Sources - **Repository:** https://github.com/Joaoffg/ELM - **Paper:** In progress - **Demo:** https://huggingface.co/spaces/Joaoffg/Joaoffg-ELM
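Since the repo is tagged as a standard Llama checkpoint for text generation, a minimal transformers sketch (an assumption on my part; the card does not document a prompt format) would be:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Joaoffg/ELM")
model = AutoModelForCausalLM.from_pretrained("Joaoffg/ELM")

inputs = tokenizer("What is the Erasmian Language Model?", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```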
jeiku/Templar_v1_8B-Q3_K_S-GGUF
jeiku
2024-06-23T00:20:32Z
583
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:jeiku/Templar_v1_8B", "endpoints_compatible", "region:us" ]
null
2024-06-23T00:20:11Z
---
base_model: jeiku/Templar_v1_8B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---

# jeiku/Templar_v1_8B-Q3_K_S-GGUF
This model was converted to GGUF format from [`jeiku/Templar_v1_8B`](https://huggingface.co/jeiku/Templar_v1_8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/jeiku/Templar_v1_8B) for more details on the model.

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo jeiku/Templar_v1_8B-Q3_K_S-GGUF --hf-file templar_v1_8b-q3_k_s.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo jeiku/Templar_v1_8B-Q3_K_S-GGUF --hf-file templar_v1_8b-q3_k_s.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo jeiku/Templar_v1_8B-Q3_K_S-GGUF --hf-file templar_v1_8b-q3_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo jeiku/Templar_v1_8B-Q3_K_S-GGUF --hf-file templar_v1_8b-q3_k_s.gguf -c 2048
```
Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.1-8B-Q5_K_M-GGUF
Casual-Autopsy
2024-06-23T23:12:49Z
583
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "not-for-all-audiences", "base_model:Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.1-8B", "endpoints_compatible", "region:us" ]
null
2024-06-23T17:07:11Z
---
base_model: Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.1-8B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
- not-for-all-audiences
---

# Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.1-8B-Q5_K_M-GGUF
This model was converted to GGUF format from [`Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.1-8B`](https://huggingface.co/Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.1-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.1-8B) for more details on the model.

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.1-8B-Q5_K_M-GGUF --hf-file l3-uncen-merger-omelette-rp-v0.1-8b-q5_k_m-imat.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.1-8B-Q5_K_M-GGUF --hf-file l3-uncen-merger-omelette-rp-v0.1-8b-q5_k_m-imat.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.1-8B-Q5_K_M-GGUF --hf-file l3-uncen-merger-omelette-rp-v0.1-8b-q5_k_m-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.1-8B-Q5_K_M-GGUF --hf-file l3-uncen-merger-omelette-rp-v0.1-8b-q5_k_m-imat.gguf -c 2048
```
NikolayKozloff/Replete-Coder-Llama3-8B-Q4_0-GGUF
NikolayKozloff
2024-06-25T10:07:19Z
583
1
null
[ "gguf", "region:us" ]
null
2024-06-25T10:06:59Z
Entry not found
larenspear/Yi-1.5-6B-Chat-Q4_0-GGUF
larenspear
2024-07-01T01:38:29Z
583
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:01-ai/Yi-1.5-6B-Chat", "license:apache-2.0", "region:us" ]
null
2024-07-01T01:38:14Z
---
base_model: 01-ai/Yi-1.5-6B-Chat
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---

# larenspear/Yi-1.5-6B-Chat-Q4_0-GGUF
This model was converted to GGUF format from [`01-ai/Yi-1.5-6B-Chat`](https://huggingface.co/01-ai/Yi-1.5-6B-Chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/01-ai/Yi-1.5-6B-Chat) for more details on the model.

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo larenspear/Yi-1.5-6B-Chat-Q4_0-GGUF --hf-file yi-1.5-6b-chat-q4_0.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo larenspear/Yi-1.5-6B-Chat-Q4_0-GGUF --hf-file yi-1.5-6b-chat-q4_0.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo larenspear/Yi-1.5-6B-Chat-Q4_0-GGUF --hf-file yi-1.5-6b-chat-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo larenspear/Yi-1.5-6B-Chat-Q4_0-GGUF --hf-file yi-1.5-6b-chat-q4_0.gguf -c 2048
```
Den4ikAI/rugpt3_2ch
Den4ikAI
2023-05-18T15:22:49Z
582
3
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "rus", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2022-07-26T15:56:30Z
---
license: mit
language: rus
---

RuGPT-3 trained on dialogues from imageboards such as 2ch.

To generate a reply, the model expects input in the following format: "- Привет\n-"

An inference example is available here: https://github.com/Den4ikAI/rugpt3_2ch
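A minimal generation sketch (not part of the original card): loading the model via the `transformers` pipeline and prompting it in the dialogue format described above; the sampling settings are illustrative assumptions.

```python
# Minimal sketch: prompt the model in the "- <message>\n-" dialogue format.
# Sampling settings are illustrative assumptions, not the author's.
from transformers import pipeline

pipe = pipeline("text-generation", model="Den4ikAI/rugpt3_2ch")
reply = pipe("- Привет\n-", max_length=64, do_sample=True, top_p=0.95)
print(reply[0]["generated_text"])
```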
microsoft/trocr-base-str
microsoft
2024-05-27T20:12:19Z
582
5
transformers
[ "transformers", "pytorch", "vision-encoder-decoder", "trocr", "image-to-text", "arxiv:2109.10282", "endpoints_compatible", "region:us" ]
image-to-text
2022-09-08T09:02:01Z
---
tags:
- trocr
- image-to-text
widget:
- src: https://raw.githubusercontent.com/ku21fan/STR-Fewer-Labels/main/demo_image/1.png
  example_title: Example 1
- src: https://raw.githubusercontent.com/HCIILAB/Scene-Text-Recognition-Recommendations/main/Dataset_images/LSVT1.jpg
  example_title: Example 2
- src: https://raw.githubusercontent.com/HCIILAB/Scene-Text-Recognition-Recommendations/main/Dataset_images/ArT2.jpg
  example_title: Example 3
---

# TrOCR (base-sized model, fine-tuned on STR benchmarks)

TrOCR model fine-tuned on the training sets of IC13, IC15, IIIT5K, SVT. It was introduced in the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Li et al. and first released in [this repository](https://github.com/microsoft/unilm/tree/master/trocr).

## Model description

The TrOCR model is an encoder-decoder model, consisting of an image Transformer as encoder, and a text Transformer as decoder. The image encoder was initialized from the weights of BEiT, while the text decoder was initialized from the weights of RoBERTa.

Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. Next, the Transformer text decoder autoregressively generates tokens.

## Intended uses & limitations

You can use the raw model for optical character recognition (OCR) on single text-line images. See the [model hub](https://huggingface.co/models?search=microsoft/trocr) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model in PyTorch:

```python
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
from PIL import Image
import requests

# load image from the IIIT-5k dataset
url = 'https://i.postimg.cc/ZKwLg2Gw/367-14.png'
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

processor = TrOCRProcessor.from_pretrained('microsoft/trocr-base-str')
model = VisionEncoderDecoderModel.from_pretrained('microsoft/trocr-base-str')
pixel_values = processor(images=image, return_tensors="pt").pixel_values

generated_ids = model.generate(pixel_values)
# decode the generated ids into the recognized text
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_text)
```

### BibTeX entry and citation info

```bibtex
@misc{li2021trocr,
      title={TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models},
      author={Minghao Li and Tengchao Lv and Lei Cui and Yijuan Lu and Dinei Florencio and Cha Zhang and Zhoujun Li and Furu Wei},
      year={2021},
      eprint={2109.10282},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
Guizmus/MosaicArt
Guizmus
2023-03-06T16:20:51Z
582
35
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "text-to-image", "image-to-image", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2022-12-01T14:43:57Z
---
language:
- en
license: creativeml-openrail-m
thumbnail: "https://huggingface.co/Guizmus/MosaicArt/resolve/main/showcase.jpg"
tags:
- stable-diffusion
- text-to-image
- image-to-image
---

# Mosaic Art

## Details

![Showcase](https://huggingface.co/Guizmus/MosaicArt/resolve/main/showcase.jpg)

This is a Dreamboothed Stable Diffusion model trained on pictures of mosaic art. The total dataset is made of 46 pictures.

V2 was trained on [Stable Diffusion 2.1 768](https://huggingface.co/stabilityai/stable-diffusion-2-1). I used [StableTuner](https://github.com/devilismyfriend/StableTuner) to do the training, using full captions on the pictures with almost no recurring words outside the main concept, so that no additional regularisation was needed. 6 epochs of 40 repeats at LR 1e-6 were used, with prior preservation.

V1 was trained on [runwayml 1.5](https://huggingface.co/runwayml/stable-diffusion-v1-5) and the [new VAE](https://huggingface.co/stabilityai/sd-vae-ft-mse). I used [EveryDream](https://github.com/victorchall/EveryDream-trainer) to do the training, using full captions on the pictures with almost no recurring words outside the main concept, so that no additional regularisation was needed. Out of epochs e0 to e11, e8 was selected as the best application of the style while not overtraining; prior preservation was confirmed to be good. A total of 9 epochs of 40 repeats with a learning rate of 1e-6.

The token "Mosaic Art" will bring in the new concept, trained as a style.

The recommended samplers are k_Euler_a or DPM++ 2M Karras at 20 steps, CFG scale 7.5.

## Model v2

[CKPT v2](https://huggingface.co/Guizmus/MosaicArt/resolve/main/MosaicArt_v2.ckpt)

[YAML v2](https://huggingface.co/Guizmus/MosaicArt/resolve/main/MosaicArt_v2.yaml)

## Model v1

![Showcase](https://huggingface.co/Guizmus/MosaicArt/resolve/main/showcase.png)

[CKPT v1](https://huggingface.co/Guizmus/MosaicArt/resolve/main/MosaicArt_v1.ckpt)

[CKPT v1 with ema weights](https://huggingface.co/Guizmus/MosaicArt/resolve/main/MosaicArt_v1_ema.ckpt)

[Dataset](https://huggingface.co/Guizmus/MosaicArt/resolve/main/dataset_v1.zip)

## 🧨 Diffusers

This model can be used just like any other Stable Diffusion model. For more information, please have a look at the [Stable Diffusion documentation](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).

You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or FLAX/JAX.

```python
from diffusers import StableDiffusionPipeline
import torch

model_id = "Guizmus/MosaicArt"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "Mosaic Art dog on the moon"
image = pipe(prompt).images[0]

image.save("./MosaicArt.png")
```
TheBloke/Pygmalion-2-13B-SuperCOT-weighed-GGUF
TheBloke
2023-09-27T12:49:32Z
582
8
transformers
[ "transformers", "gguf", "llama", "llama-2", "text-generation", "en", "base_model:royallab/Pygmalion-2-13b-SuperCoT-weighed", "license:llama2", "text-generation-inference", "region:us" ]
text-generation
2023-09-16T13:51:30Z
---
language:
- en
license: llama2
library_name: transformers
tags:
- llama
- llama-2
model_name: Pygmalion 2 13B SuperCOT Weighed
base_model: royallab/Pygmalion-2-13b-SuperCoT-weighed
inference: false
model_creator: royallab
model_type: llama
pipeline_tag: text-generation
prompt_template: 'Below is an instruction that describes a task. Write a response
  that appropriately completes the request.

  ### Instruction:

  {prompt}

  ### Response:

  '
quantized_by: TheBloke
---

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Pygmalion 2 13B SuperCOT Weighed - GGUF
- Model creator: [royallab](https://huggingface.co/royallab)
- Original model: [Pygmalion 2 13B SuperCOT Weighed](https://huggingface.co/royallab/Pygmalion-2-13b-SuperCoT-weighed)

<!-- description start -->
## Description

This repo contains GGUF format model files for [royallab's Pygmalion 2 13B SuperCOT Weighed](https://huggingface.co/royallab/Pygmalion-2-13b-SuperCoT-weighed).

<!-- description end -->

<!-- README_GGUF.md-about-gguf start -->
### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata, and is designed to be extensible.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.

<!-- README_GGUF.md-about-gguf end -->

<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-weighed-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-weighed-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-weighed-GGUF)
* [royallab's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/royallab/Pygmalion-2-13b-SuperCoT-weighed)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```

<!-- prompt-template end -->

<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)

They are also compatible with many third party UIs and libraries - please see the list at the top of this README.

## Explanation of quantisation methods
<details>
  <summary>Click to see details</summary>

The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw

Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->

<!-- README_GGUF.md-provided-files start -->
## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [pygmalion-2-13b-supercot-weighed.Q2_K.gguf](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-weighed-GGUF/blob/main/pygmalion-2-13b-supercot-weighed.Q2_K.gguf) | Q2_K | 2 | 5.43 GB | 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [pygmalion-2-13b-supercot-weighed.Q3_K_S.gguf](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-weighed-GGUF/blob/main/pygmalion-2-13b-supercot-weighed.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB | 8.16 GB | very small, high quality loss |
| [pygmalion-2-13b-supercot-weighed.Q3_K_M.gguf](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-weighed-GGUF/blob/main/pygmalion-2-13b-supercot-weighed.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB | 8.84 GB | very small, high quality loss |
| [pygmalion-2-13b-supercot-weighed.Q3_K_L.gguf](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-weighed-GGUF/blob/main/pygmalion-2-13b-supercot-weighed.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB | 9.43 GB | small, substantial quality loss |
| [pygmalion-2-13b-supercot-weighed.Q4_0.gguf](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-weighed-GGUF/blob/main/pygmalion-2-13b-supercot-weighed.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB | 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [pygmalion-2-13b-supercot-weighed.Q4_K_S.gguf](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-weighed-GGUF/blob/main/pygmalion-2-13b-supercot-weighed.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB | 9.91 GB | small, greater quality loss |
| [pygmalion-2-13b-supercot-weighed.Q4_K_M.gguf](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-weighed-GGUF/blob/main/pygmalion-2-13b-supercot-weighed.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB | 10.37 GB | medium, balanced quality - recommended |
| [pygmalion-2-13b-supercot-weighed.Q5_0.gguf](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-weighed-GGUF/blob/main/pygmalion-2-13b-supercot-weighed.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB | 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [pygmalion-2-13b-supercot-weighed.Q5_K_S.gguf](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-weighed-GGUF/blob/main/pygmalion-2-13b-supercot-weighed.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB | 11.47 GB | large, low quality loss - recommended |
| [pygmalion-2-13b-supercot-weighed.Q5_K_M.gguf](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-weighed-GGUF/blob/main/pygmalion-2-13b-supercot-weighed.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB | 11.73 GB | large, very low quality loss - recommended |
| [pygmalion-2-13b-supercot-weighed.Q6_K.gguf](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-weighed-GGUF/blob/main/pygmalion-2-13b-supercot-weighed.Q6_K.gguf) | Q6_K | 6 | 10.68 GB | 13.18 GB | very large, extremely low quality loss |
| [pygmalion-2-13b-supercot-weighed.Q8_0.gguf](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-weighed-GGUF/blob/main/pygmalion-2-13b-supercot-weighed.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB | 16.33 GB | very large, extremely low quality loss - not recommended |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->

<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files

**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.

The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev

### In `text-generation-webui`

Under Download Model, you can enter the model repo: TheBloke/Pygmalion-2-13B-SuperCOT-weighed-GGUF and below it, a specific filename to download, such as: pygmalion-2-13b-supercot-weighed.q4_K_M.gguf. Then click Download.

### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install 'huggingface-hub>=0.17.1'
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download TheBloke/Pygmalion-2-13B-SuperCOT-weighed-GGUF pygmalion-2-13b-supercot-weighed.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download TheBloke/Pygmalion-2-13B-SuperCOT-weighed-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set the environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Pygmalion-2-13B-SuperCOT-weighed-GGUF pygmalion-2-13b-supercot-weighed.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->

<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 32 -m pygmalion-2-13b-supercot-weighed.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.

### How to load this model from Python using ctransformers

#### First install the package

```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```

#### Simple example code to load one of these GGUF models

```python
from ctransformers import AutoModelForCausalLM

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Pygmalion-2-13B-SuperCOT-weighed-GGUF", model_file="pygmalion-2-13b-supercot-weighed.q4_K_M.gguf", model_type="llama", gpu_layers=50)

print(llm("AI is going to"))
```

## How to use with LangChain

Here are guides on using llama-cpp-python or ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

<!-- original-model-card start -->
# Original model card: royallab's Pygmalion 2 13B SuperCOT Weighed

# Model Card: Pygmalion-2-13b-SuperCOT-weighted

This is an experimental weighted merge between:

- [Pygmalion 2 13b](https://huggingface.co/PygmalionAI/pygmalion-2-13b)
- [Ausboss's Llama2 SuperCOT loras](https://huggingface.co/ausboss/llama2-13b-supercot-loras)

Quantizations provided by us and TheBloke:

- [Exl2](https://huggingface.co/royallab/Pygmalion-2-13b-SuperCOT-weighed-exl2)
- [GPTQ](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-weighed-GPTQ)
- [GGUF](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-weighed-GGUF)

The merge was performed by a gradient merge script (apply-lora-weight-ltl.py) from [zaraki-tools](https://github.com/zarakiquemparte/zaraki-tools) by Zaraki. Thanks to Zaraki for the inspiration and help.

This merge differs from the previous Pyg-2-SuperCOT merges. The first iteration of the SuperCOT loras was used here, since it performed better than SuperCOT2.

The SuperCOT lora was merged with the following layer weights (basically 50/50; the exact ratio is 0.51):

```
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.5,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
```

Here is an image to help visualize this merge. The light blue is Pygmalion-2-13b and the light green is the SuperCOT lora:

![gradient-image](https://files.catbox.moe/ndbz7t.png)

## Usage:

Since this is an experimental weight merge between Pygmalion-2 and SuperCOT, the following instruction formats should work:

Metharme:

```
<|system|>This is a text adventure game. Describe the scenario to the user and give him three options to pick from on each turn.<|user|>Start!<|model|>
```

Alpaca:

```
### Instruction:
Your instruction or question here.
### Response:
```

## Bias, Risks, and Limitations

The model will show biases similar to those observed in niche roleplaying forums on the Internet, in addition to those exhibited by the base model. It is not intended for supplying factual information or advice in any form.

This merge is also experimental; based on our own testing, your results may vary.

## Training Details

This model is merged and can be reproduced using the tools mentioned above. Please refer to all provided links for extra model-specific details.

<!-- original-model-card end -->
sayhan/Mistral-7B-Instruct-v0.2-turkish-GGUF
sayhan
2024-02-16T19:55:52Z
582
8
transformers
[ "transformers", "gguf", "text-generation", "tr", "en", "base_model:malhajar/Mistral-7B-Instruct-v0.2-turkish", "license:apache-2.0", "region:us" ]
text-generation
2024-02-09T16:26:55Z
---
base_model: malhajar/Mistral-7B-Instruct-v0.2-turkish
language:
- tr
- en
pipeline_tag: text-generation
license: apache-2.0
model_type: mistral
library_name: transformers
inference: false
---

## Mistral 7B Instruct v0.2 Turkish

- **Model creator:** [malhajar](https://huggingface.co/malhajar)
- **Original model:** [Mistral-7B-Instruct-v0.2-turkish](https://huggingface.co/malhajar/Mistral-7B-Instruct-v0.2-turkish)

<!-- description start -->
## Description

This repo contains GGUF format model files for [malhajar's Mistral 7B Instruct v0.2 Turkish](https://huggingface.co/malhajar/Mistral-7B-Instruct-v0.2-turkish).

## Original model

- **Developed by:** [`Mohamad Alhajar`](https://www.linkedin.com/in/muhammet-alhajar/)
- **Language(s) (NLP):** Turkish
- **Finetuned from model:** [`mistralai/Mistral-7B-Instruct-v0.2`](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)

# Quantization methods

| quantization method | bits | size | use case | recommended |
|---------------------|------|----------|-----------------------------------------------------|-------------|
| Q2_K | 2 | 2.72 GB | smallest, significant quality loss | ❌ |
| Q3_K_S | 3 | 3.16 GB | very small, high quality loss | ❌ |
| Q3_K_M | 3 | 3.52 GB | very small, high quality loss | ❌ |
| Q3_K_L | 3 | 3.82 GB | small, substantial quality loss | ❌ |
| Q4_0 | 4 | 4.11 GB | legacy; small, very high quality loss | ❌ |
| Q4_K_S | 4 | 4.14 GB | small, greater quality loss | ❌ |
| Q4_K_M | 4 | 4.37 GB | medium, balanced quality | ✅ |
| Q5_0 | 5 | 5.00 GB | legacy; medium, balanced quality | ❌ |
| Q5_K_S | 5 | 5.00 GB | large, low quality loss | ✅ |
| Q5_K_M | 5 | 5.13 GB | large, very low quality loss | ✅ |
| Q6_K | 6 | 5.94 GB | very large, extremely low quality loss | ❌ |
| Q8_0 | 8 | 7.70 GB | very large, extremely low quality loss | ❌ |
| FP16 | 16 | 14.5 GB | enormous, minuscule quality loss | ❌ |

## Prompt Template

```
### Instruction:
<prompt> (without the <>)

### Response:
```
<!-- description end -->
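A minimal Python loading sketch (not part of the original card): running one of these files with `llama-cpp-python`; the file name, question, and sampling settings are illustrative assumptions.

```python
# Minimal sketch with llama-cpp-python; the file name below is an
# illustrative assumption, substitute the quantization you downloaded.
from llama_cpp import Llama

llm = Llama(model_path="mistral-7b-instruct-v0.2-turkish.Q4_K_M.gguf")

prompt = "### Instruction:\nTürkiye'nin başkenti neresidir?\n\n### Response:\n"
out = llm(prompt, max_tokens=128)
print(out["choices"][0]["text"])
```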
MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.2-GGUF
MaziyarPanahi
2024-05-08T22:28:43Z
582
2
null
[ "gguf", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "llama", "llama-3", "base_model:MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.2", "region:us" ]
text-generation
2024-05-02T20:28:42Z
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- llama
- llama-3
model_name: Llama-3-70B-Instruct-DPO-v0.2-GGUF
base_model: MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.2
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---

# [MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.2-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.2-GGUF)

- Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
- Original model: [MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.2](https://huggingface.co/MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.2)

## Description

[MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.2-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.2-GGUF) contains GGUF format model files for [MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.2](https://huggingface.co/MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.2).

IMPORTANT: There is no need to merge the splits. By now, most libraries support automatically loading the splits by simply pointing to the first one.

### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.

## Special thanks

🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
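A minimal sketch (not part of the original card) of the "point at the first split" behaviour described above, using `llama-cpp-python`; the shard file name is an illustrative assumption, so substitute the actual name of the first part in this repo.

```python
# Minimal sketch: load a split GGUF by pointing at the first shard only;
# llama.cpp picks up the remaining parts automatically.
# The file name below is an illustrative assumption.
from llama_cpp import Llama

llm = Llama(model_path="Llama-3-70B-Instruct-DPO-v0.2.Q4_K_M-00001-of-00002.gguf")
print(llm("Hello!", max_tokens=32)["choices"][0]["text"])
```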
internlm/internlm2-math-plus-1_8b
internlm
2024-06-05T03:43:35Z
582
5
transformers
[ "transformers", "safetensors", "internlm2", "text-generation", "math", "conversational", "custom_code", "en", "zh", "arxiv:2402.06332", "license:other", "autotrain_compatible", "region:us" ]
text-generation
2024-05-24T03:02:58Z
---
pipeline_tag: text-generation
license: other
language:
- en
- zh
tags:
- math
---

# InternLM-Math-Plus

<div align="center">
<img src="https://raw.githubusercontent.com/InternLM/InternLM/main/assets/logo.svg" width="200"/>
<div> </div>
<div align="center">
<b><font size="5">InternLM-Math</font></b>
<sup>
<a href="https://internlm.intern-ai.org.cn/">
<i><font size="4">Plus</font></i>
</a>
</sup>
<div> </div>
</div>

State-of-the-art bilingual open-sourced math reasoning LLMs. A **solver**, **prover**, **verifier**, **augmentor**.

[💻 Github](https://github.com/InternLM/InternLM-Math) [🤗 Demo](https://huggingface.co/spaces/internlm/internlm2-math-7b)

</div>

# News
- [2024.05.24] We release the updated version InternLM2-Math-Plus in 4 sizes (1.8B, 7B, 20B, and 8x22B) with state-of-the-art performance. We significantly improve informal math reasoning performance (chain-of-thought and code-interpreter) and formal math reasoning performance (LEAN 4 translation and LEAN 4 theorem proving).
- [2024.02.10] We add tech reports and citation references.
- [2024.01.31] We add MiniF2F results with evaluation code!
- [2024.01.29] We add checkpoints from ModelScope and update results on majority voting and Code Interpreter. The tech report is on the way!
- [2024.01.26] We add checkpoints from OpenXLab, which makes downloading easier for Chinese users!

# Performance

## Formal Math Reasoning
We evaluate the performance of InternLM2-Math-Plus on the formal math reasoning benchmark MiniF2F-test. The evaluation setting is the same as Llemma's, with LEAN 4.

| Models | MiniF2F-test |
| -------------------------------- | ------------ |
| ReProver | 26.5 |
| LLMStep | 27.9 |
| GPT-F | 36.6 |
| HTPS | 41.0 |
| Llemma-7B | 26.2 |
| Llemma-34B | 25.8 |
| InternLM2-Math-7B-Base | 30.3 |
| InternLM2-Math-20B-Base | 29.5 |
| InternLM2-Math-Plus-1.8B | 38.9 |
| InternLM2-Math-Plus-7B | **43.4** |
| InternLM2-Math-Plus-20B | 42.6 |
| InternLM2-Math-Plus-Mixtral8x22B | 37.3 |

## Informal Math Reasoning
We evaluate the performance of InternLM2-Math-Plus on the informal math reasoning benchmarks MATH and GSM8K. InternLM2-Math-Plus-1.8B outperforms MiniCPM-2B in the smallest size setting. InternLM2-Math-Plus-7B outperforms Deepseek-Math-7B-RL, the state-of-the-art open-source math reasoning model. InternLM2-Math-Plus-Mixtral8x22B achieves 68.5 on MATH (with Python) and 91.8 on GSM8K.

| Model | MATH | MATH-Python | GSM8K |
| -------------------------------- | -------- | ----------- | -------- |
| MiniCPM-2B | 10.2 | - | 53.8 |
| InternLM2-Math-Plus-1.8B | **37.0** | **41.5** | **58.8** |
| InternLM2-Math-7B | 34.6 | 50.9 | 78.1 |
| Deepseek-Math-7B-RL | 51.7 | 58.8 | **88.2** |
| InternLM2-Math-Plus-7B | **53.0** | **59.7** | 85.8 |
| InternLM2-Math-20B | 37.7 | 54.3 | 82.6 |
| InternLM2-Math-Plus-20B | **53.8** | **61.8** | **87.7** |
| Mixtral8x22B-Instruct-v0.1 | 41.8 | - | 78.6 |
| Eurux-8x22B-NCA | 49.0 | - | - |
| InternLM2-Math-Plus-Mixtral8x22B | **58.1** | **68.5** | **91.8** |

We also evaluate models on [MathBench-A](https://github.com/open-compass/MathBench). InternLM2-Math-Plus-Mixtral8x22B has performance comparable to Claude 3 Opus.
| Model | Arithmetic | Primary | Middle | High | College | Average |
| -------------------------------- | ---------- | ------- | ------ | ---- | ------- | ------- |
| GPT-4o-0513 | 77.7 | 87.7 | 76.3 | 59.0 | 54.0 | 70.9 |
| Claude 3 Opus | 85.7 | 85.0 | 58.0 | 42.7 | 43.7 | 63.0 |
| Qwen-Max-0428 | 72.3 | 86.3 | 65.0 | 45.0 | 27.3 | 59.2 |
| Qwen-1.5-110B | 70.3 | 82.3 | 64.0 | 47.3 | 28.0 | 58.4 |
| Deepseek-V2 | 82.7 | 89.3 | 59.0 | 39.3 | 29.3 | 59.9 |
| Llama-3-70B-Instruct | 70.3 | 86.0 | 53.0 | 38.7 | 34.7 | 56.5 |
| InternLM2-Math-Plus-Mixtral8x22B | 77.5 | 82.0 | 63.6 | 50.3 | 36.8 | 62.0 |
| InternLM2-Math-20B | 58.7 | 70.0 | 43.7 | 24.7 | 12.7 | 42.0 |
| InternLM2-Math-Plus-20B | 65.8 | 79.7 | 59.5 | 47.6 | 24.8 | 55.5 |
| Llama3-8B-Instruct | 54.7 | 71.0 | 25.0 | 19.0 | 14.0 | 36.7 |
| InternLM2-Math-7B | 53.7 | 67.0 | 41.3 | 18.3 | 8.0 | 37.7 |
| Deepseek-Math-7B-RL | 68.0 | 83.3 | 44.3 | 33.0 | 23.0 | 50.3 |
| InternLM2-Math-Plus-7B | 61.4 | 78.3 | 52.5 | 40.5 | 21.7 | 50.9 |
| MiniCPM-2B | 49.3 | 51.7 | 18.0 | 8.7 | 3.7 | 26.3 |
| InternLM2-Math-Plus-1.8B | 43.0 | 43.3 | 25.4 | 18.9 | 4.7 | 27.1 |

# Citation and Tech Report

```
@misc{ying2024internlmmath,
      title={InternLM-Math: Open Math Large Language Models Toward Verifiable Reasoning},
      author={Huaiyuan Ying and Shuo Zhang and Linyang Li and Zhejian Zhou and Yunfan Shao and Zhaoye Fei and Yichuan Ma and Jiawei Hong and Kuikun Liu and Ziyi Wang and Yudong Wang and Zijian Wu and Shuaibin Li and Fengzhe Zhou and Hongwei Liu and Songyang Zhang and Wenwei Zhang and Hang Yan and Xipeng Qiu and Jiayu Wang and Kai Chen and Dahua Lin},
      year={2024},
      eprint={2402.06332},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
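A minimal chat sketch (not part of the original card): loading the 1.8B checkpoint with `transformers`; the `chat` helper is assumed to be provided by the model's remote code, and the question, dtype, and device are illustrative.

```python
# Minimal sketch; assumes the remote code exposes a `chat` helper and a CUDA device.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "internlm/internlm2-math-plus-1_8b"
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    path, torch_dtype=torch.float16, trust_remote_code=True
).cuda().eval()

response, history = model.chat(tokenizer, "Solve: 1/2 + 1/3 = ?", history=[])
print(response)
```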
mradermacher/ReMantik-L2-70B-i1-GGUF
mradermacher
2024-06-06T21:50:25Z
582
0
transformers
[ "transformers", "gguf", "en", "base_model:Sao10K/ReMantik-L2-70B", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-06-05T05:44:06Z
---
base_model: Sao10K/ReMantik-L2-70B
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Sao10K/ReMantik-L2-70B

<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/ReMantik-L2-70B-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ReMantik-L2-70B-i1-GGUF/resolve/main/ReMantik-L2-70B.i1-IQ1_S.gguf) | i1-IQ1_S | 14.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/ReMantik-L2-70B-i1-GGUF/resolve/main/ReMantik-L2-70B.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/ReMantik-L2-70B-i1-GGUF/resolve/main/ReMantik-L2-70B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.4 | |
| [GGUF](https://huggingface.co/mradermacher/ReMantik-L2-70B-i1-GGUF/resolve/main/ReMantik-L2-70B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.4 | |
| [GGUF](https://huggingface.co/mradermacher/ReMantik-L2-70B-i1-GGUF/resolve/main/ReMantik-L2-70B.i1-IQ2_S.gguf) | i1-IQ2_S | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/ReMantik-L2-70B-i1-GGUF/resolve/main/ReMantik-L2-70B.i1-IQ2_M.gguf) | i1-IQ2_M | 23.3 | |
| [GGUF](https://huggingface.co/mradermacher/ReMantik-L2-70B-i1-GGUF/resolve/main/ReMantik-L2-70B.i1-Q2_K.gguf) | i1-Q2_K | 25.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/ReMantik-L2-70B-i1-GGUF/resolve/main/ReMantik-L2-70B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 26.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ReMantik-L2-70B-i1-GGUF/resolve/main/ReMantik-L2-70B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/ReMantik-L2-70B-i1-GGUF/resolve/main/ReMantik-L2-70B.i1-IQ3_S.gguf) | i1-IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/ReMantik-L2-70B-i1-GGUF/resolve/main/ReMantik-L2-70B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/ReMantik-L2-70B-i1-GGUF/resolve/main/ReMantik-L2-70B.i1-IQ3_M.gguf) | i1-IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/ReMantik-L2-70B-i1-GGUF/resolve/main/ReMantik-L2-70B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/ReMantik-L2-70B-i1-GGUF/resolve/main/ReMantik-L2-70B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/ReMantik-L2-70B-i1-GGUF/resolve/main/ReMantik-L2-70B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | |
| [GGUF](https://huggingface.co/mradermacher/ReMantik-L2-70B-i1-GGUF/resolve/main/ReMantik-L2-70B.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/ReMantik-L2-70B-i1-GGUF/resolve/main/ReMantik-L2-70B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/ReMantik-L2-70B-i1-GGUF/resolve/main/ReMantik-L2-70B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ReMantik-L2-70B-i1-GGUF/resolve/main/ReMantik-L2-70B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/ReMantik-L2-70B-i1-GGUF/resolve/main/ReMantik-L2-70B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/ReMantik-L2-70B-i1-GGUF/resolve/main/ReMantik-L2-70B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ReMantik-L2-70B-i1-GGUF/resolve/main/ReMantik-L2-70B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 56.7 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.

<!-- end -->
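The usage note above mentions concatenating multi-part files; here is a minimal sketch (not part of the original card) that joins the two Q6_K parts from the table into a single GGUF file.

```python
# Minimal sketch: join the split Q6_K quant into one GGUF file.
import shutil

parts = [
    "ReMantik-L2-70B.i1-Q6_K.gguf.part1of2",
    "ReMantik-L2-70B.i1-Q6_K.gguf.part2of2",
]
with open("ReMantik-L2-70B.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)
```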
mradermacher/Mixtral_7Bx4_MOE_24B-i1-GGUF
mradermacher
2024-06-15T16:26:50Z
582
0
transformers
[ "transformers", "gguf", "en", "base_model:cloudyu/Mixtral_7Bx4_MOE_24B", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-06-15T08:39:46Z
---
base_model: cloudyu/Mixtral_7Bx4_MOE_24B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/cloudyu/Mixtral_7Bx4_MOE_24B

<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_24B-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_24B-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_24B.i1-IQ1_S.gguf) | i1-IQ1_S | 5.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_24B-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_24B.i1-IQ1_M.gguf) | i1-IQ1_M | 5.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_24B-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_24B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_24B-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_24B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_24B-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_24B.i1-IQ2_S.gguf) | i1-IQ2_S | 7.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_24B-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_24B.i1-IQ2_M.gguf) | i1-IQ2_M | 8.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_24B-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_24B.i1-Q2_K.gguf) | i1-Q2_K | 8.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_24B-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_24B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_24B-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_24B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_24B-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_24B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 10.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_24B-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_24B.i1-IQ3_S.gguf) | i1-IQ3_S | 10.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_24B-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_24B.i1-IQ3_M.gguf) | i1-IQ3_M | 10.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_24B-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_24B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 11.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_24B-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_24B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 12.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_24B-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_24B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 13.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_24B-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_24B.i1-Q4_0.gguf) | i1-Q4_0 | 13.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_24B-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_24B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 13.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_24B-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_24B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 14.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_24B-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_24B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_24B-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_24B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 17.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_24B-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_24B.i1-Q6_K.gguf) | i1-Q6_K | 19.9 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.

<!-- end -->
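A minimal download sketch (not part of the original card): fetching a single quant from this repo with `huggingface_hub`; the Q4_K_M file ("fast, recommended" in the table above) is used as the example.

```python
# Minimal sketch: download one quant file from this repo.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Mixtral_7Bx4_MOE_24B-i1-GGUF",
    filename="Mixtral_7Bx4_MOE_24B.i1-Q4_K_M.gguf",
)
print(path)
```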
israellaguan/tinyllama-moe-nord-chat-Q8_0-GGUF
israellaguan
2024-06-24T02:58:01Z
582
0
null
[ "gguf", "generated_from_trainer", "llama-cpp", "gguf-my-repo", "base_model:four-two-labs/tinyllama-moe-nord-chat", "license:apache-2.0", "region:us" ]
null
2024-06-24T02:57:46Z
---
base_model: four-two-labs/tinyllama-moe-nord-chat
license: apache-2.0
tags:
- generated_from_trainer
- llama-cpp
- gguf-my-repo
model-index:
- name: runs/model/tinyllama-moe-orpo
  results: []
---

# israellaguan/tinyllama-moe-nord-chat-Q8_0-GGUF
This model was converted to GGUF format from [`four-two-labs/tinyllama-moe-nord-chat`](https://huggingface.co/four-two-labs/tinyllama-moe-nord-chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/four-two-labs/tinyllama-moe-nord-chat) for more details on the model.

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo israellaguan/tinyllama-moe-nord-chat-Q8_0-GGUF --hf-file tinyllama-moe-nord-chat-q8_0.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo israellaguan/tinyllama-moe-nord-chat-Q8_0-GGUF --hf-file tinyllama-moe-nord-chat-q8_0.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo israellaguan/tinyllama-moe-nord-chat-Q8_0-GGUF --hf-file tinyllama-moe-nord-chat-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo israellaguan/tinyllama-moe-nord-chat-Q8_0-GGUF --hf-file tinyllama-moe-nord-chat-q8_0.gguf -c 2048
```
dbmdz/german-gpt2-faust
dbmdz
2023-09-06T22:18:42Z
581
1
transformers
[ "transformers", "pytorch", "jax", "safetensors", "gpt2", "text-generation", "de", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2022-03-02T23:29:05Z
---
language: de
widget:
- text: "Schon um die Liebe"
license: mit
---

# German GPT-2 model

In this repository we release (yet another) GPT-2 model that was trained on various texts for German.

The model is meant to be an entry point for fine-tuning on other texts, and it is definitely not as good or "dangerous" as the English GPT-3 model. We do not plan extensive PR or staged releases for this model 😉

**Note**: The model was initially released under an anonymous alias (`anonymous-german-nlp/german-gpt2`) so we now "de-anonymize" it.

More details about GPT-2 can be found in the great [Hugging Face](https://huggingface.co/transformers/model_doc/gpt2.html) documentation.

## German GPT-2 fine-tuned on Faust I and II

We fine-tuned our German GPT-2 model on "Faust I and II" from Johann Wolfgang Goethe. These texts can be obtained from [Deutsches Textarchiv (DTA)](http://www.deutschestextarchiv.de/book/show/goethe_faust01_1808). We use the "normalized" version of both texts (to avoid out-of-vocabulary problems with e.g. "ſ").

Fine-tuning was done for 100 epochs, using a batch size of 4 with half precision on an RTX 3090. Total time was around 12 minutes (it is really fast!).

We also open-source this fine-tuned model. Text can be generated with:

```python
from transformers import pipeline

pipe = pipeline('text-generation', model="dbmdz/german-gpt2-faust",
                 tokenizer="dbmdz/german-gpt2-faust")

text = pipe("Schon um die Liebe", max_length=100)[0]["generated_text"]

print(text)
```

and could output:

```
Schon um die Liebe bitte ich, Herr! Wer mag sich die dreifach Ermächtigen?
Sei mir ein Held!
Und daß die Stunde kommt spreche ich nicht aus.

Faust (schaudernd).
Den schönen Boten finde' ich verwirrend;
```

# License

All models are licensed under [MIT](LICENSE).

# Huggingface model hub

All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).

# Contact (Bugs, Feedback, Contribution and more)

For questions about our GPT-2 models, just open an issue [here](https://github.com/stefan-it/german-gpt/issues/new) 🤗

# Acknowledgments

Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️

Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
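For more varied outputs, sampling parameters can be passed straight through the pipeline to `generate()`. A small sketch (the specific values are illustrative assumptions, not tuned settings):

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="dbmdz/german-gpt2-faust",
                tokenizer="dbmdz/german-gpt2-faust")

# do_sample/top_k/top_p are forwarded to model.generate(); values here are examples.
text = pipe("Schon um die Liebe", max_length=100,
            do_sample=True, top_k=50, top_p=0.95)[0]["generated_text"]
print(text)
```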
dennlinger/roberta-cls-consec
dennlinger
2023-03-21T12:25:25Z
581
12
transformers
[ "transformers", "pytorch", "jax", "safetensors", "roberta", "text-classification", "arxiv:2012.03619", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
# About this model: Topical Change Detection in Documents
This network has been fine-tuned for the task described in the paper *Topical Change Detection in Documents via Embeddings of Long Sequences* and is our best-performing base-transformer model. You can find more detailed information on our GitHub page for the paper [here](https://github.com/dennlinger/TopicalChange), or read the [paper itself](https://arxiv.org/abs/2012.03619). The weights are based on RoBERTa-base.

# Load the model
The preferred way is through pipelines:
```python
from transformers import pipeline
pipe = pipeline("text-classification", model="dennlinger/roberta-cls-consec")

pipe("{First paragraph} [SEP] {Second paragraph}")
```

# Input Format
The model expects two segments that are separated with the `[SEP]` token. In our training setup, we had entire paragraphs as samples (or up to 512 tokens across two paragraphs), specifically trained on a Terms of Service dataset. Note that this might lead to poor performance on "general" topics, such as news articles or Wikipedia.

# Training objective
The training task is to determine whether two text segments (paragraphs) belong to the same topical section or not. This can be utilized to create a topical segmentation of a document by consecutively predicting the "coherence" of two segments.
If you are experimenting via the Huggingface Model API, the following are interpretations of the `LABEL`s:
* `LABEL_0`: Two input segments separated by `[SEP]` do *not* belong to the same topic.
* `LABEL_1`: Two input segments separated by `[SEP]` do belong to the same topic.

# Performance
The results of this model can be found in the paper. We average over models from five different random seeds, which is why the specific results for this model might differ from the exact values in the paper.

Note that this model is *not* trained for classifying single texts; it only works with two (separated) inputs.
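Putting the pieces together, a document can be segmented by scoring each consecutive paragraph pair and starting a new section whenever the model predicts `LABEL_0`. A minimal sketch (the paragraph list is a placeholder, and taking the argmax label without a threshold is an assumption):

```python
from transformers import pipeline

pipe = pipeline("text-classification", model="dennlinger/roberta-cls-consec")

paragraphs = ["First paragraph ...", "Second paragraph ...", "Third paragraph ..."]  # your document

segments, current = [], [paragraphs[0]]
for prev, nxt in zip(paragraphs, paragraphs[1:]):
    pred = pipe(f"{prev} [SEP] {nxt}")[0]
    if pred["label"] == "LABEL_1":   # same topic: extend the current section
        current.append(nxt)
    else:                            # LABEL_0: topic change, start a new section
        segments.append(current)
        current = [nxt]
segments.append(current)
print(f"{len(segments)} topical sections found")
```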
shivi/donut-cheque-parser
shivi
2022-11-10T02:30:19Z
581
0
transformers
[ "transformers", "pytorch", "vision-encoder-decoder", "endpoints_compatible", "region:us" ]
null
2022-11-09T21:20:51Z
Entry not found
lxyuan/distilgpt2-finetuned-finance
lxyuan
2023-07-07T09:09:48Z
581
6
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "en", "dataset:causal-lm/finance", "dataset:gbharti/finance-alpaca", "dataset:PaulAdversarial/all_news_finance_sm_1h2023", "dataset:winddude/reddit_finance_43_250k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-06-29T03:27:54Z
---
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-finance
  results: []
license: apache-2.0
datasets:
- causal-lm/finance
- gbharti/finance-alpaca
- PaulAdversarial/all_news_finance_sm_1h2023
- winddude/reddit_finance_43_250k
language:
- en
---

# distilgpt2-finetuned-finance

This model is a fine-tuned version of distilgpt2 on the combination of 4 different finance datasets:

- [causal-lm/finance](https://huggingface.co/datasets/causal-lm/finance)
- [gbharti/finance-alpaca](https://huggingface.co/datasets/gbharti/finance-alpaca)
- [PaulAdversarial/all_news_finance_sm_1h2023](https://huggingface.co/datasets/PaulAdversarial/all_news_finance_sm_1h2023)
- [winddude/reddit_finance_43_250k](https://huggingface.co/datasets/winddude/reddit_finance_43_250k)

## Training and evaluation data

One can reproduce the dataset using the following code:

```python
from datasets import load_dataset, concatenate_datasets

# load dataset
dataset_1 = load_dataset("gbharti/finance-alpaca")
dataset_2 = load_dataset("PaulAdversarial/all_news_finance_sm_1h2023")
dataset_3 = load_dataset("winddude/reddit_finance_43_250k")
dataset_4 = load_dataset("causal-lm/finance")

# create a column called text
dataset_1 = dataset_1.map(
    lambda example: {"text": example["instruction"] + " " + example["output"]},
    num_proc=4,
)
dataset_1 = dataset_1.remove_columns(["input", "instruction", "output"])

dataset_2 = dataset_2.map(
    lambda example: {"text": example["title"] + " " + example["description"]},
    num_proc=4,
)
dataset_2 = dataset_2.remove_columns(
    ["_id", "main_domain", "title", "description", "created_at"]
)

dataset_3 = dataset_3.map(
    lambda example: {
        "text": example["title"] + " " + example["selftext"] + " " + example["body"]
    },
    num_proc=4,
)
dataset_3 = dataset_3.remove_columns(
    [
        "id",
        "title",
        "selftext",
        "z_score",
        "normalized_score",
        "subreddit",
        "body",
        "comment_normalized_score",
        "combined_score",
    ]
)

dataset_4 = dataset_4.map(
    lambda example: {"text": example["instruction"] + " " + example["output"]},
    num_proc=4,
)
dataset_4 = dataset_4.remove_columns(["input", "instruction", "output"])

# combine and split train test sets
combined_dataset = concatenate_datasets(
    [
        dataset_1["train"],
        dataset_2["train"],
        dataset_3["train"],
        dataset_4["train"],
        dataset_4["validation"],
    ]
)

datasets = combined_dataset.train_test_split(test_size=0.2)
```

## Inference example

```python
from transformers import pipeline

generator = pipeline(model="lxyuan/distilgpt2-finetuned-finance")

generator("Tesla is",
    pad_token_id=generator.tokenizer.eos_token_id,
    max_new_tokens=200,
    num_return_sequences=2,
)

>>> {'generated_text': 'Tesla is likely going to have a "market crash" over 20 years - I believe I\'m just not sure how this is going to affect the world. \n\nHowever, I would like to see this play out as a global financial crisis. With US interest rates already high, a crash in global real estate prices means that people are likely to feel pressure on assets that are less well served by the assets the US government gives them. \n\nWould these things help you in your retirement? I\'m fairly new to Wall Street, and it makes me think that you should have a bit more control over your assets (I’m not super involved in stock picking, but I’ve heard many times that governments can help their citizens), right? 
As another commenter has put it: there\'s something called a market crash that could occur in the second world country for most markets (I don\'t know how that would fit under US laws if I had done all of the above. \n\n' }, {'generated_text': "Tesla is on track to go from 1.46 to 1.79 per cent growth in Q3 (the fastest pace so far in the US), which will push down the share price.\n\nWhile the dividend could benefit Amazon’s growth, earnings also aren’t expected to be high at all, the company's annual earnings could be an indication that investors have a strong plan to boost sales by the end of the year if earnings season continues.\n\nThe latest financials showed earnings as of the end of July, followed by the earnings guidance from analysts at the Canadian Real Estate Association, which showed that Amazon’s revenues were up over $1.8 Trillion, which is a far cry from what was expected in early Q1.\n\nAmazon has grown the share price by as much as 1.6 percent since June 2020. Analysts had predicted that earnings growth in the stock would drop to 0.36 per cent for 2020, which would lead to Amazon’" } ``` ## Training procedure Notebook link: [here](https://github.com/LxYuan0420/nlp/blob/main/notebooks/finetune_distilgpt2_language_model_on_finance_dataset.ipynb) ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 64 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu117 - Datasets 2.13.1 - Tokenizers 0.13.3
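For reference, a minimal `Trainer` setup consistent with the hyperparameters listed above might look as follows. This is a sketch: `tokenized_datasets` is an assumed variable standing in for the tokenized train/test split built in the "Training and evaluation data" section, and the output directory name is a placeholder.

```python
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 tokenizers have no pad token by default
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

args = TrainingArguments(
    output_dir="distilgpt2-finetuned-finance",  # placeholder output directory
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=64,  # 4 x 64 = effective batch size of 256
    num_train_epochs=50,
    lr_scheduler_type="linear",
    seed=42,
)

# `tokenized_datasets` is assumed to be the tokenized version of `datasets` above.
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["test"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```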
ludis/tsukasa-limarp-7b-gguf
ludis
2024-01-06T17:57:04Z
581
0
transformers
[ "transformers", "gguf", "llama", "dataset:PygmalionAI/PIPPA", "dataset:lemonilia/LimaRP", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2023-09-04T03:09:31Z
---
datasets:
- PygmalionAI/PIPPA
- lemonilia/LimaRP
---

## GGUF

GGUF quants for ludis/tsukasa-limarp-7b

## Prompting

https://rentry.org/v43eo - recommended prompts and gen settings

The current model version has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.

The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input. The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can occur multiple times and be chained up to form a conversation history; see the sketch below.

## Training

The base model (llama-2-7b-hf) was tuned on commit de693ac of the koishi dataset for 1 epoch as part of ludis/tsukasa-7b,

then tuned on commit 36fc235 of pippa metharme for 1 epoch as part of ludis/tsukasa-7b,

then tuned on Version 2023-09-03 of LimaRP (without ponyville, lolicit, all the fallen, and eka's portal subsets) for 2 epochs
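Concretely, a chained conversation under this scheme can be assembled like so (the system text and turns are hypothetical examples, not taken from the training data):

```python
# Hypothetical illustration of the role-token format described above.
system = "<|system|>Enter roleplay mode."  # out-of-channel setup (example text)
history = [
    ("<|user|>", "Hi, how are you today?"),
    ("<|model|>", "I'm doing well, thank you!"),
    ("<|user|>", "Tell me about yourself."),
]

# End the prompt with <|model|> so generation continues as the model's reply.
prompt = system + "".join(tag + text for tag, text in history) + "<|model|>"
print(prompt)
```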
TheBloke/Sheep-Duck-Llama-2-70B-GGUF
TheBloke
2023-09-27T12:48:54Z
581
8
transformers
[ "transformers", "gguf", "llama", "Riiid", "llama-2", "text-generation", "en", "arxiv:2306.02707", "base_model:Riiid/sheep-duck-llama-2", "license:llama2", "text-generation-inference", "region:us" ]
text-generation
2023-09-12T13:06:24Z
---
language:
- en
license: llama2
library_name: transformers
tags:
- Riiid
- llama-2
model_name: Sheep Duck Llama 2
base_model: Riiid/sheep-duck-llama-2
inference: false
model_creator: Riiid
model_type: llama
pipeline_tag: text-generation
prompt_template: '### System:

  {system_message}


  ### User:

  {prompt}


  ### Assistant:

  '
quantized_by: TheBloke
thumbnail: https://cdn-uploads.huggingface.co/production/uploads/62fb1ef7e8c9c532aa7d19e4/NswB5XPkkOljeRh1xbMmR.png
---

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Sheep Duck Llama 2 - GGUF
- Model creator: [Riiid](https://huggingface.co/Riiid)
- Original model: [Sheep Duck Llama 2](https://huggingface.co/Riiid/sheep-duck-llama-2)

<!-- description start -->
## Description

This repo contains GGUF format model files for [Riiid's Sheep Duck Llama 2](https://huggingface.co/Riiid/sheep-duck-llama-2).

<!-- description end -->

<!-- README_GGUF.md-about-gguf start -->
### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata, and is designed to be extensible.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.

<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Sheep-Duck-Llama-2-70B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Sheep-Duck-Llama-2-70B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Sheep-Duck-Llama-2-70B-GGUF)
* [Riiid's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Riiid/sheep-duck-llama-2)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Orca-Hashes

```
### System:
{system_message}

### User:
{prompt}

### Assistant:
```

<!-- prompt-template end -->

<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)

They are also compatible with many third party UIs and libraries - please see the list at the top of this README.

## Explanation of quantisation methods
<details>
  <summary>Click to see details</summary>

The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw

Refer to the Provided Files table below to see what files use which methods, and how.
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [sheep-duck-llama-2.Q2_K.gguf](https://huggingface.co/TheBloke/Sheep-Duck-Llama-2-70B-GGUF/blob/main/sheep-duck-llama-2.Q2_K.gguf) | Q2_K | 2 | 29.28 GB| 31.78 GB | smallest, significant quality loss - not recommended for most purposes | | [sheep-duck-llama-2.Q3_K_S.gguf](https://huggingface.co/TheBloke/Sheep-Duck-Llama-2-70B-GGUF/blob/main/sheep-duck-llama-2.Q3_K_S.gguf) | Q3_K_S | 3 | 29.92 GB| 32.42 GB | very small, high quality loss | | [sheep-duck-llama-2.Q3_K_M.gguf](https://huggingface.co/TheBloke/Sheep-Duck-Llama-2-70B-GGUF/blob/main/sheep-duck-llama-2.Q3_K_M.gguf) | Q3_K_M | 3 | 33.19 GB| 35.69 GB | very small, high quality loss | | [sheep-duck-llama-2.Q3_K_L.gguf](https://huggingface.co/TheBloke/Sheep-Duck-Llama-2-70B-GGUF/blob/main/sheep-duck-llama-2.Q3_K_L.gguf) | Q3_K_L | 3 | 36.15 GB| 38.65 GB | small, substantial quality loss | | [sheep-duck-llama-2.Q4_0.gguf](https://huggingface.co/TheBloke/Sheep-Duck-Llama-2-70B-GGUF/blob/main/sheep-duck-llama-2.Q4_0.gguf) | Q4_0 | 4 | 38.87 GB| 41.37 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [sheep-duck-llama-2.Q4_K_S.gguf](https://huggingface.co/TheBloke/Sheep-Duck-Llama-2-70B-GGUF/blob/main/sheep-duck-llama-2.Q4_K_S.gguf) | Q4_K_S | 4 | 39.07 GB| 41.57 GB | small, greater quality loss | | [sheep-duck-llama-2.Q4_K_M.gguf](https://huggingface.co/TheBloke/Sheep-Duck-Llama-2-70B-GGUF/blob/main/sheep-duck-llama-2.Q4_K_M.gguf) | Q4_K_M | 4 | 41.42 GB| 43.92 GB | medium, balanced quality - recommended | | [sheep-duck-llama-2.Q5_0.gguf](https://huggingface.co/TheBloke/Sheep-Duck-Llama-2-70B-GGUF/blob/main/sheep-duck-llama-2.Q5_0.gguf) | Q5_0 | 5 | 47.46 GB| 49.96 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [sheep-duck-llama-2.Q5_K_S.gguf](https://huggingface.co/TheBloke/Sheep-Duck-Llama-2-70B-GGUF/blob/main/sheep-duck-llama-2.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB| 49.96 GB | large, low quality loss - recommended | | [sheep-duck-llama-2.Q5_K_M.gguf](https://huggingface.co/TheBloke/Sheep-Duck-Llama-2-70B-GGUF/blob/main/sheep-duck-llama-2.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB| 51.25 GB | large, very low quality loss - recommended | | sheep-duck-llama-2.Q6_K.gguf | Q6_K | 6 | 56.59 GB| 59.09 GB | very large, extremely low quality loss | | sheep-duck-llama-2.Q8_0.gguf | Q8_0 | 8 | 73.29 GB| 75.79 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. ### Q6_K and Q8_0 files are split and require joining **Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files. 
<details>
  <summary>Click for instructions regarding Q6_K and Q8_0 files</summary>

### q6_K
Please download:
* `sheep-duck-llama-2.Q6_K.gguf-split-a`
* `sheep-duck-llama-2.Q6_K.gguf-split-b`

### q8_0
Please download:
* `sheep-duck-llama-2.Q8_0.gguf-split-a`
* `sheep-duck-llama-2.Q8_0.gguf-split-b`

To join the files, do the following:

Linux and macOS:
```
cat sheep-duck-llama-2.Q6_K.gguf-split-* > sheep-duck-llama-2.Q6_K.gguf && rm sheep-duck-llama-2.Q6_K.gguf-split-*
cat sheep-duck-llama-2.Q8_0.gguf-split-* > sheep-duck-llama-2.Q8_0.gguf && rm sheep-duck-llama-2.Q8_0.gguf-split-*
```
Windows command line:
```
COPY /B sheep-duck-llama-2.Q6_K.gguf-split-a + sheep-duck-llama-2.Q6_K.gguf-split-b sheep-duck-llama-2.Q6_K.gguf
del sheep-duck-llama-2.Q6_K.gguf-split-a sheep-duck-llama-2.Q6_K.gguf-split-b

COPY /B sheep-duck-llama-2.Q8_0.gguf-split-a + sheep-duck-llama-2.Q8_0.gguf-split-b sheep-duck-llama-2.Q8_0.gguf
del sheep-duck-llama-2.Q8_0.gguf-split-a sheep-duck-llama-2.Q8_0.gguf-split-b
```

</details>
<!-- README_GGUF.md-provided-files end -->

<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files

**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.

The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev

### In `text-generation-webui`

Under Download Model, you can enter the model repo: TheBloke/Sheep-Duck-Llama-2-70B-GGUF and below it, a specific filename to download, such as: sheep-duck-llama-2.q4_K_M.gguf. Then click Download.

### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library (the version specifier is quoted so the shell does not treat `>=` as a redirect):

```shell
pip3 install 'huggingface-hub>=0.17.1'
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download TheBloke/Sheep-Duck-Llama-2-70B-GGUF sheep-duck-llama-2.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download TheBloke/Sheep-Duck-Llama-2-70B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Sheep-Duck-Llama-2-70B-GGUF sheep-duck-llama-2.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->

<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m sheep-duck-llama-2.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### System:\n{system_message}\n\n### User:\n{prompt}\n\n### Assistant:"
```

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.

### How to load this model from Python using ctransformers

#### First install the package

```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```

#### Simple example code to load one of these GGUF models

```python
from ctransformers import AutoModelForCausalLM

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Sheep-Duck-Llama-2-70B-GGUF", model_file="sheep-duck-llama-2.q4_K_M.gguf", model_type="llama", gpu_layers=50)

print(llm("AI is going to"))
```

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

<!-- original-model-card start -->
# Original model card: Riiid's Sheep Duck Llama 2

# sheep-duck-llama-2
<img src = "https://cdn-uploads.huggingface.co/production/uploads/62fb1ef7e8c9c532aa7d19e4/NswB5XPkkOljeRh1xbMmR.png" width="30%" height="30%">

This is a fine-tuned model based on llama-2-70b.

## Model Details

* **Developed by**: [Riiid](https://riiid.com/)
* **Backbone Model**: [LLaMA-2](https://github.com/facebookresearch/llama/tree/main)
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)

## Dataset Details

### Used Datasets
- Orca-style dataset
- Alpaca-style dataset

### Prompt Template
```
### System:
{System}

### User:
{User}

### Assistant:
{Assistant}
```

## Evaluation

| Metric | Value |
|-----------------------|-------|
| ARC (25-shot) | 72.44 |
| HellaSwag (10-shot) | 87.79 |
| MMLU (5-shot) | 70.74 |
| TruthfulQA (0-shot) | 63.71 |
| Avg. | 73.67 |

## Limitations & Biases:

Llama2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 2 and any fine-tuned variant's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.

Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/

## License Disclaimer:

This model is bound by the license & usage restrictions of the original Llama-2 model. It comes with no warranty or guarantees of any kind.
## Contact Us

- [Riiid](https://riiid.com/)

## Citation:

Please kindly cite using the following BibTeX:

```bibtex
@article{platypus2023,
  title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs},
  author={Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz},
  booktitle={arXiv preprint arxiv:2308.07317},
  year={2023}
}
```

```
@misc{mukherjee2023orca,
  title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
  author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
  year={2023},
  eprint={2306.02707},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

```
@misc{Orca-best,
  title = {Orca-best: A filtered version of orca gpt4 dataset.},
  author = {Shahul Es},
  year = {2023},
  publisher = {HuggingFace},
  journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/datasets/shahules786/orca-best/}},
}
```

```
@software{touvron2023llama2,
  title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
  author={Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom},
  year={2023}
}
```

<!-- original-model-card end -->
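To use the Orca-Hashes template programmatically, a small helper can fill in the fields before handing the string to any of the runtimes listed above. A sketch (the system message and question are placeholders, and the trailing newline after `### Assistant:` is an assumption):

```python
def orca_hashes_prompt(system_message: str, prompt: str) -> str:
    """Format a prompt in the Orca-Hashes template used by sheep-duck-llama-2."""
    return (
        f"### System:\n{system_message}\n\n"
        f"### User:\n{prompt}\n\n"
        "### Assistant:\n"
    )

text = orca_hashes_prompt("You are a helpful assistant.",       # placeholder system message
                          "Summarize GGUF in one sentence.")    # placeholder question
print(text)
```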
mmnga/japanese-stablelm-instruct-gamma-7b-gguf
mmnga
2023-10-25T13:33:10Z
581
4
null
[ "gguf", "license:apache-2.0", "region:us" ]
null
2023-10-25T04:10:10Z
---
license: apache-2.0
---

# japanese-stablelm-instruct-gamma-7b-gguf

This is a GGUF-format conversion of [japanese-stablelm-instruct-gamma-7b, published by stabilityai](https://huggingface.co/stabilityai/japanese-stablelm-instruct-gamma-7b).

Other models in this series:

3B models
[mmnga/japanese-stablelm-3b-4e1t-base-gguf](https://huggingface.co/mmnga/japanese-stablelm-3b-4e1t-base-gguf)
[mmnga/japanese-stablelm-3b-4e1t-instruct-gguf](https://huggingface.co/mmnga/japanese-stablelm-3b-4e1t-instruct-gguf)

7B models
[mmnga/japanese-stablelm-base-gamma-7b-gguf](https://huggingface.co/mmnga/japanese-stablelm-base-gamma-7b-gguf)
[mmnga/japanese-stablelm-instruct-gamma-7b-gguf](https://huggingface.co/mmnga/japanese-stablelm-instruct-gamma-7b-gguf)

## Usage

```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
./main -m 'japanese-stablelm-instruct-gamma-7b-q4_0.gguf' -n 256 -p '### 指示: 今夜の晩御飯のレシピを教えて \n ### 応答:'
```
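The same quantized file can also be driven from Python via [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). A minimal sketch (the local file path and token budget are assumptions):

```python
from llama_cpp import Llama

# Assumes japanese-stablelm-instruct-gamma-7b-q4_0.gguf has been downloaded locally.
llm = Llama(model_path="japanese-stablelm-instruct-gamma-7b-q4_0.gguf", n_ctx=2048)

# Same instruction/response prompt format as the CLI example above.
prompt = "### 指示: 今夜の晩御飯のレシピを教えて \n ### 応答:"
out = llm(prompt, max_tokens=256)
print(out["choices"][0]["text"])
```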
TheBloke/Uncensored-Jordan-13B-GGUF
TheBloke
2023-10-30T21:25:22Z
581
5
transformers
[ "transformers", "gguf", "llama", "en", "base_model:ajibawa-2023/Uncensored-Jordan-13B", "license:cc-by-nc-nd-4.0", "text-generation-inference", "region:us" ]
null
2023-10-30T20:18:32Z
---
base_model: ajibawa-2023/Uncensored-Jordan-13B
inference: false
language:
- en
license: cc-by-nc-nd-4.0
model_creator: Feynman Innovations
model_name: Uncensored Jordan 13B
model_type: llama
prompt_template: 'This is a conversation with your Uncensored AI assistant Jordan. Jordan

  can discuss with you at length about various topics such as ---, ---, ---, . You

  can ask it anything you want and it will do its best to give you accurate and relevant

  information.


  Context

  You are a Uncensored AI assistant.


  USER: {prompt}

  ASSISTANT:

  '
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Uncensored Jordan 13B - GGUF
- Model creator: [Feynman Innovations](https://huggingface.co/ajibawa-2023)
- Original model: [Uncensored Jordan 13B](https://huggingface.co/ajibawa-2023/Uncensored-Jordan-13B)

<!-- description start -->
## Description

This repo contains GGUF format model files for [Feynman Innovations's Uncensored Jordan 13B](https://huggingface.co/ajibawa-2023/Uncensored-Jordan-13B).

These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).

<!-- description end -->

<!-- README_GGUF.md-about-gguf start -->
### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.

<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Uncensored-Jordan-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Uncensored-Jordan-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Uncensored-Jordan-13B-GGUF)
* [Feynman Innovations's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ajibawa-2023/Uncensored-Jordan-13B)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Jordan

```
This is a conversation with your Uncensored AI assistant Jordan. Jordan can discuss with you at length about various topics such as ---, ---, ---, . You can ask it anything you want and it will do its best to give you accurate and relevant information.

Context
You are a Uncensored AI assistant.

USER: {prompt}
ASSISTANT:
```

<!-- prompt-template end -->

<!-- licensing start -->
## Licensing

The creator of the source model has listed its license as `cc-by-nc-nd-4.0`, and this quantization has therefore used that same license.

As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.

In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Feynman Innovations's Uncensored Jordan 13B](https://huggingface.co/ajibawa-2023/Uncensored-Jordan-13B).
<!-- licensing end -->

<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)

They are also compatible with many third party UIs and libraries - please see the list at the top of this README.

## Explanation of quantisation methods
<details>
  <summary>Click to see details</summary>

The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [uncensored-jordan-13b.Q2_K.gguf](https://huggingface.co/TheBloke/Uncensored-Jordan-13B-GGUF/blob/main/uncensored-jordan-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [uncensored-jordan-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Uncensored-Jordan-13B-GGUF/blob/main/uncensored-jordan-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [uncensored-jordan-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Uncensored-Jordan-13B-GGUF/blob/main/uncensored-jordan-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [uncensored-jordan-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Uncensored-Jordan-13B-GGUF/blob/main/uncensored-jordan-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [uncensored-jordan-13b.Q4_0.gguf](https://huggingface.co/TheBloke/Uncensored-Jordan-13B-GGUF/blob/main/uncensored-jordan-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [uncensored-jordan-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Uncensored-Jordan-13B-GGUF/blob/main/uncensored-jordan-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [uncensored-jordan-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Uncensored-Jordan-13B-GGUF/blob/main/uncensored-jordan-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [uncensored-jordan-13b.Q5_0.gguf](https://huggingface.co/TheBloke/Uncensored-Jordan-13B-GGUF/blob/main/uncensored-jordan-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [uncensored-jordan-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Uncensored-Jordan-13B-GGUF/blob/main/uncensored-jordan-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [uncensored-jordan-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Uncensored-Jordan-13B-GGUF/blob/main/uncensored-jordan-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [uncensored-jordan-13b.Q6_K.gguf](https://huggingface.co/TheBloke/Uncensored-Jordan-13B-GGUF/blob/main/uncensored-jordan-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [uncensored-jordan-13b.Q8_0.gguf](https://huggingface.co/TheBloke/Uncensored-Jordan-13B-GGUF/blob/main/uncensored-jordan-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. 
<!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Uncensored-Jordan-13B-GGUF and below it, a specific filename to download, such as: uncensored-jordan-13b.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Uncensored-Jordan-13B-GGUF uncensored-jordan-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Uncensored-Jordan-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Uncensored-Jordan-13B-GGUF uncensored-jordan-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m uncensored-jordan-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "This is a conversation with your Uncensored AI assistant Jordan. Jordan can discuss with you at length about various topics such as ---, ---, ---, . You can ask it anything you want and it will do its best to give you accurate and relevant information.\n\nContext\nYou are a Uncensored AI assistant.\n\nUSER: {prompt}\nASSISTANT:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. 
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.

### How to load this model in Python code, using ctransformers

#### First install the package

Run one of the following commands, according to your system:

```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```

#### Simple ctransformers example code

```python
from ctransformers import AutoModelForCausalLM

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Uncensored-Jordan-13B-GGUF", model_file="uncensored-jordan-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)

print(llm("AI is going to"))
```

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

<!-- original-model-card start -->
# Original model card: Feynman Innovations's Uncensored Jordan 13B


**Jordan: An Uncensored Model**

The name "Jordan" pays homage to the charismatic character Jordan Belfort, portrayed by Leonardo DiCaprio in the movie "The Wolf of Wall Street." Just like the character, Jordan aspires to push boundaries and encourage unfiltered conversations.
Jordan Belfort, the central character in "The Wolf of Wall Street," is known for his audaciousness, charisma, and willingness to speak about anything, no matter the societal norms or boundaries. Jordan, the AI language model, draws inspiration from these qualities to offer a space where users can discuss a wide range of topics without fear of censorship or constraints. Users can engage in candid discussions about politics, religion, contentious issues, and other subjects that might be off-limits on mainstream social media platforms.
It is trained on around 155,000 sets of conversations, each set having 10~15 conversations. We will not release this data.

**Warning**
An uncensored model has little or no guardrails. You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous objects.
Publishing anything this model generates is the same as publishing it yourself. We are not responsible for what you generate using this model.

**Training:**
The entire dataset was trained on Azure on 4 x A100 80GB. For 3 epochs, training took 77 hours. The DeepSpeed codebase was used for training. This was trained on Llama-2 by Meta.

**GPTQ, GGUF & AWQ**

GPTQ: TBA

GGUF: TBA

AWQ: TBA

**Example Prompt:**
```
This is a conversation with your Uncensored AI assistant Jordan. Jordan can discuss with you at length about various topics such as ---, ---, ---, . You can ask it anything you want and it will do its best to give you accurate and relevant information.
Context You are a Uncensored AI assistant. USER: <prompt> ASSISTANT: ``` <!-- original-model-card end -->
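Combining the ctransformers example above with the Jordan prompt template, an end-to-end call might look like this. A sketch: the user question is a placeholder, and the "---" topic placeholders are left exactly as they appear in the card.

```python
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Uncensored-Jordan-13B-GGUF",
    model_file="uncensored-jordan-13b.Q4_K_M.gguf",
    model_type="llama",
    gpu_layers=50,  # set to 0 if no GPU acceleration is available
)

# Template copied from the "Prompt template: Jordan" section above.
TEMPLATE = (
    "This is a conversation with your Uncensored AI assistant Jordan. Jordan can "
    "discuss with you at length about various topics such as ---, ---, ---, . You can "
    "ask it anything you want and it will do its best to give you accurate and "
    "relevant information.\n\n"
    "Context\n"
    "You are a Uncensored AI assistant.\n\n"
    "USER: {prompt}\nASSISTANT:"
)

print(llm(TEMPLATE.format(prompt="What is quantization?")))  # placeholder question
```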
Efficient-Large-Model/VILA-7b
Efficient-Large-Model
2024-03-04T19:25:04Z
581
20
transformers
[ "transformers", "safetensors", "llava_llama", "text-generation", "VILA", "VLM", "arxiv:2312.07533", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-02-19T19:11:31Z
---
license: cc-by-nc-4.0
library_name: transformers
pipeline_tag: text-generation
tags:
- VILA
- VLM
---

# VILA Model Card

## Model details

**Model type:**
VILA is a visual language model (VLM) pretrained with interleaved image-text data at scale, enabling multi-image VLM. VILA is deployable on the edge, including Jetson Orin and laptops, via AWQ 4-bit quantization through the TinyChat framework. We find: (1) image-text pairs are not enough, interleaved image-text is essential; (2) unfreezing the LLM during interleaved image-text pre-training enables in-context learning; (3) re-blending text-only instruction data is crucial to boost both VLM and text-only performance. VILA unveils appealing capabilities, including: multi-image reasoning, in-context learning, visual chain-of-thought, and better world knowledge.

**Model date:**
VILA-7b was trained in Feb 2024.

**Paper or resources for more information:**
https://github.com/Efficient-Large-Model/VILA

```
@misc{lin2023vila,
      title={VILA: On Pre-training for Visual Language Models},
      author={Ji Lin and Hongxu Yin and Wei Ping and Yao Lu and Pavlo Molchanov and Andrew Tao and Huizi Mao and Jan Kautz and Mohammad Shoeybi and Song Han},
      year={2023},
      eprint={2312.07533},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```

## License

- The code is released under the Apache 2.0 license as found in the [LICENSE](./LICENSE) file.
- The pretrained weights are released under the [CC-BY-NC-SA-4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en).
- The service is a research preview intended for non-commercial use only, and is subject to the following licenses and terms:
    - [Model License](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) of LLaMA
    - [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI
    - [Dataset Licenses](https://github.com/Efficient-Large-Model/VILA/blob/main/data_prepare/LICENSE) for each one used during training.

**Where to send questions or comments about the model:**
https://github.com/Efficient-Large-Model/VILA/issues

## Intended use

**Primary intended uses:**
The primary use of VILA is research on large multimodal models and chatbots.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training dataset

See [Dataset Preparation](https://github.com/Efficient-Large-Model/VILA/blob/main/data_prepare/README.md) for more details.

## Evaluation dataset

A collection of 12 benchmarks, including 5 academic VQA benchmarks and 7 recent benchmarks specifically proposed for instruction-following LMMs.
Rajesh2004/text-to-image-ai-model
Rajesh2004
2024-03-07T16:42:07Z
581
1
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-03-07T16:37:57Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### Text-to-Image-AI-Model Dreambooth model trained by Rajesh2004 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: AEC730222243020 Sample pictures of this concept: ![0](https://huggingface.co/Rajesh2004/text-to-image-ai-model/resolve/main/sample_images/rajpri_(1).jpeg) ![1](https://huggingface.co/Rajesh2004/text-to-image-ai-model/resolve/main/sample_images/rajpri_(5).jpeg) ![2](https://huggingface.co/Rajesh2004/text-to-image-ai-model/resolve/main/sample_images/rajpri_(4).jpg) ![3](https://huggingface.co/Rajesh2004/text-to-image-ai-model/resolve/main/sample_images/rajpri_(7).jpeg) ![4](https://huggingface.co/Rajesh2004/text-to-image-ai-model/resolve/main/sample_images/rajpri_(4).jpeg) ![5](https://huggingface.co/Rajesh2004/text-to-image-ai-model/resolve/main/sample_images/rajpri_(10).jpg) ![6](https://huggingface.co/Rajesh2004/text-to-image-ai-model/resolve/main/sample_images/rajpri_(12).jpeg) ![7](https://huggingface.co/Rajesh2004/text-to-image-ai-model/resolve/main/sample_images/rajpri_(14).jpeg) ![8](https://huggingface.co/Rajesh2004/text-to-image-ai-model/resolve/main/sample_images/rajpri_(18).jpeg) ![9](https://huggingface.co/Rajesh2004/text-to-image-ai-model/resolve/main/sample_images/rajpri_(11).jpeg) ![10](https://huggingface.co/Rajesh2004/text-to-image-ai-model/resolve/main/sample_images/rajpri_(11).jpg) ![11](https://huggingface.co/Rajesh2004/text-to-image-ai-model/resolve/main/sample_images/rajpri_(7).jpg) ![12](https://huggingface.co/Rajesh2004/text-to-image-ai-model/resolve/main/sample_images/rajpri_(15).jpeg) ![13](https://huggingface.co/Rajesh2004/text-to-image-ai-model/resolve/main/sample_images/rajpri_(2).jpeg) ![14](https://huggingface.co/Rajesh2004/text-to-image-ai-model/resolve/main/sample_images/rajpri_(12).jpg) ![15](https://huggingface.co/Rajesh2004/text-to-image-ai-model/resolve/main/sample_images/rajpri_(13).jpg) ![16](https://huggingface.co/Rajesh2004/text-to-image-ai-model/resolve/main/sample_images/rajpri_(1).png) ![17](https://huggingface.co/Rajesh2004/text-to-image-ai-model/resolve/main/sample_images/rajpri_(8).jpg) ![18](https://huggingface.co/Rajesh2004/text-to-image-ai-model/resolve/main/sample_images/rajpri_(3).jpeg) ![19](https://huggingface.co/Rajesh2004/text-to-image-ai-model/resolve/main/sample_images/rajpri_(3).jpg) ![20](https://huggingface.co/Rajesh2004/text-to-image-ai-model/resolve/main/sample_images/rajpri_(5).jpg) ![21](https://huggingface.co/Rajesh2004/text-to-image-ai-model/resolve/main/sample_images/rajpri_(16).jpeg) ![22](https://huggingface.co/Rajesh2004/text-to-image-ai-model/resolve/main/sample_images/rajpri_(8).jpeg) ![23](https://huggingface.co/Rajesh2004/text-to-image-ai-model/resolve/main/sample_images/rajpri_(9).jpg) ![24](https://huggingface.co/Rajesh2004/text-to-image-ai-model/resolve/main/sample_images/rajpri_(10).jpeg) ![25](https://huggingface.co/Rajesh2004/text-to-image-ai-model/resolve/main/sample_images/rajpri_(2).jpg) ![26](https://huggingface.co/Rajesh2004/text-to-image-ai-model/resolve/main/sample_images/rajpri_(13).jpeg) ![27](https://huggingface.co/Rajesh2004/text-to-image-ai-model/resolve/main/sample_images/rajpri_(9).jpeg) ![28](https://huggingface.co/Rajesh2004/text-to-image-ai-model/resolve/main/sample_images/rajpri_(14).jpg) ![29](https://huggingface.co/Rajesh2004/text-to-image-ai-model/resolve/main/sample_images/rajpri_(1).jpg) 
![30](https://huggingface.co/Rajesh2004/text-to-image-ai-model/resolve/main/sample_images/rajpri_(17).jpeg)
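For a quick start, here is a minimal `diffusers` sketch for sampling from this checkpoint. The instance token `rajpri` is an assumption inferred from the sample image file names above; the card itself does not state the prompt token.

```python
import torch
from diffusers import StableDiffusionPipeline

# model id from this card; "rajpri" is an assumed instance token, not confirmed by the card
pipe = StableDiffusionPipeline.from_pretrained(
    "Rajesh2004/text-to-image-ai-model", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of rajpri").images[0]
image.save("rajpri.png")
```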
mradermacher/Sue_Ann-7B-GGUF
mradermacher
2024-05-06T05:49:55Z
581
0
transformers
[ "transformers", "gguf", "en", "base_model:Datters/Sue_Ann-7B", "endpoints_compatible", "region:us" ]
null
2024-03-28T07:17:56Z
--- base_model: Datters/Sue_Ann-7B language: - en library_name: transformers quantized_by: mradermacher --- ## About static quants of https://huggingface.co/Datters/Sue_Ann-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Sue_Ann-7B-GGUF/resolve/main/Sue_Ann-7B.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Sue_Ann-7B-GGUF/resolve/main/Sue_Ann-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Sue_Ann-7B-GGUF/resolve/main/Sue_Ann-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Sue_Ann-7B-GGUF/resolve/main/Sue_Ann-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Sue_Ann-7B-GGUF/resolve/main/Sue_Ann-7B.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Sue_Ann-7B-GGUF/resolve/main/Sue_Ann-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Sue_Ann-7B-GGUF/resolve/main/Sue_Ann-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Sue_Ann-7B-GGUF/resolve/main/Sue_Ann-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Sue_Ann-7B-GGUF/resolve/main/Sue_Ann-7B.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Sue_Ann-7B-GGUF/resolve/main/Sue_Ann-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Sue_Ann-7B-GGUF/resolve/main/Sue_Ann-7B.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Sue_Ann-7B-GGUF/resolve/main/Sue_Ann-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Sue_Ann-7B-GGUF/resolve/main/Sue_Ann-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Sue_Ann-7B-GGUF/resolve/main/Sue_Ann-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Sue_Ann-7B-GGUF/resolve/main/Sue_Ann-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Sue_Ann-7B-GGUF/resolve/main/Sue_Ann-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
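As a quick-start complement to the pointers above, here is a minimal sketch using [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) to run one of these quants locally. The file name and prompt are illustrative assumptions; pick any quant from the table above.

```python
from llama_cpp import Llama

# assumes the Q4_K_M quant from the table above has been downloaded locally
llm = Llama(model_path="Sue_Ann-7B.Q4_K_M.gguf", n_ctx=2048)
out = llm("Hello, who are you?", max_tokens=64)
print(out["choices"][0]["text"])
```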
mradermacher/Daschund-GGUF
mradermacher
2024-05-06T05:37:22Z
581
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "teknium/OpenHermes-2.5-Mistral-7B", "openchat/openchat-3.5-0106", "andrijdavid/macaroni-7b", "mistralai/Mistral-7B-Instruct-v0.2", "Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp", "Intel/neural-chat-7b-v3-1", "mlabonne/Beagle14-7B", "mlabonne/NeuralBeagle14-7B", "en", "base_model:N8Programs/Daschund", "endpoints_compatible", "region:us" ]
null
2024-03-29T13:59:35Z
--- base_model: N8Programs/Daschund language: - en library_name: transformers quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - teknium/OpenHermes-2.5-Mistral-7B - openchat/openchat-3.5-0106 - andrijdavid/macaroni-7b - mistralai/Mistral-7B-Instruct-v0.2 - Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp - Intel/neural-chat-7b-v3-1 - mlabonne/Beagle14-7B - mlabonne/NeuralBeagle14-7B --- ## About static quants of https://huggingface.co/N8Programs/Daschund <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Daschund-GGUF/resolve/main/Daschund.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Daschund-GGUF/resolve/main/Daschund.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Daschund-GGUF/resolve/main/Daschund.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Daschund-GGUF/resolve/main/Daschund.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Daschund-GGUF/resolve/main/Daschund.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Daschund-GGUF/resolve/main/Daschund.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Daschund-GGUF/resolve/main/Daschund.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Daschund-GGUF/resolve/main/Daschund.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Daschund-GGUF/resolve/main/Daschund.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Daschund-GGUF/resolve/main/Daschund.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Daschund-GGUF/resolve/main/Daschund.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Daschund-GGUF/resolve/main/Daschund.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Daschund-GGUF/resolve/main/Daschund.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Daschund-GGUF/resolve/main/Daschund.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Daschund-GGUF/resolve/main/Daschund.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Daschund-GGUF/resolve/main/Daschund.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
premsa/political-bias-prediction-allsides-mDeBERTa
premsa
2024-04-02T09:23:18Z
581
0
transformers
[ "transformers", "safetensors", "deberta-v2", "text-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-03-29T21:29:54Z
--- license: apache-2.0 --- model base: https://huggingface.co/microsoft/mdeberta-v3-base dataset: https://github.com/ramybaly/Article-Bias-Prediction training parameters: - devices: 2xH100 - batch_size: 100 - epochs: 5 - dropout: 0.05 - max_length: 512 - learning_rate: 3e-5 - warmup_steps: 100 - random_state: 239 training methodology: - sanitize the dataset following a specific rule set and use the random split provided with the dataset - train on the train split and evaluate on the validation split at each epoch - evaluate the test split only with the model that achieved the best validation loss result summary: - across the five training epochs, the second-epoch model achieved the lowest validation loss of 0.2573 - on the test split, the second-epoch model achieved an F1 score of 0.9184 and a test loss of 0.2904 usage: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline model = AutoModelForSequenceClassification.from_pretrained("premsa/political-bias-prediction-allsides-mDeBERTa") tokenizer = AutoTokenizer.from_pretrained("premsa/political-bias-prediction-allsides-mDeBERTa") nlp = pipeline("text-classification", model=model, tokenizer=tokenizer) # example input is German: "the masses are controlled by the media." print(nlp("die massen werden von den medien kontrolliert.")) ```
mmnga/tokyotech-llm-Swallow-MS-7b-instruct-v0.1-gguf
mmnga
2024-05-03T04:35:34Z
581
0
null
[ "gguf", "mistral", "en", "ja", "dataset:TFMC/imatrix-dataset-for-japanese-llm", "license:apache-2.0", "region:us" ]
null
2024-05-02T13:37:22Z
--- license: apache-2.0 language: - en - ja datasets: - TFMC/imatrix-dataset-for-japanese-llm tags: - mistral --- # tokyotech-llm-Swallow-MS-7b-instruct-v0.1-gguf This is a gguf-format conversion of [Swallow-MS-7b-instruct-v0.1, published by tokyotech-llm](https://huggingface.co/tokyotech-llm/Swallow-MS-7b-instruct-v0.1). The imatrix data was created using [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm). ## Other models mistral [mmnga/tokyotech-llm-Swallow-MS-7b-instruct-v0.1-gguf](https://huggingface.co/mmnga/tokyotech-llm-Swallow-MS-7b-instruct-v0.1-gguf) [mmnga/tokyotech-llm-Swallow-7b-plus-hf-gguf](https://huggingface.co/mmnga/tokyotech-llm-Swallow-7b-plus-hf-gguf) [mmnga/tokyotech-llm-Swallow-MS-7b-v0.1-gguf](https://huggingface.co/mmnga/tokyotech-llm-Swallow-MS-7b-v0.1-gguf) [mmnga/tokyotech-llm-Swallow-MX-8x7b-NVE-v0.1-gguf](https://huggingface.co/mmnga/tokyotech-llm-Swallow-MX-8x7b-NVE-v0.1-gguf) llama2 [mmnga/tokyotech-llm-Swallow-7b-instruct-v0.1-gguf](https://huggingface.co/mmnga/tokyotech-llm-Swallow-7b-instruct-v0.1-gguf) [mmnga/tokyotech-llm-Swallow-13b-instruct-v0.1-gguf](https://huggingface.co/mmnga/tokyotech-llm-Swallow-13b-instruct-v0.1-gguf) [mmnga/tokyotech-llm-Swallow-70b-instruct-v0.1-gguf](https://huggingface.co/mmnga/tokyotech-llm-Swallow-70b-instruct-v0.1-gguf) ## Usage ``` git clone https://github.com/ggerganov/llama.cpp.git cd llama.cpp make -j # the prompt asks, in Japanese, for a recipe for tonight's dinner ./main -m 'tokyotech-llm-Swallow-MS-7b-instruct-v0.1-Q4_0.gguf' -n 128 -p '[INST] 今晩の夕食のレシピを教えて [/INST] ' ```
gnad/viqgen-bloomz-1b1-lorasft
gnad
2024-05-29T11:16:08Z
581
0
transformers
[ "transformers", "safetensors", "bloom", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-29T11:01:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/Inixion-2x8B-GGUF
mradermacher
2024-06-13T21:07:56Z
581
0
transformers
[ "transformers", "gguf", "en", "base_model:Alsebay/Inixion-2x8B", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-06-10T08:43:12Z
--- base_model: Alsebay/Inixion-2x8B language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Alsebay/Inixion-2x8B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Inixion-2x8B-GGUF/resolve/main/Inixion-2x8B.Q2_K.gguf) | Q2_K | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Inixion-2x8B-GGUF/resolve/main/Inixion-2x8B.IQ3_XS.gguf) | IQ3_XS | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/Inixion-2x8B-GGUF/resolve/main/Inixion-2x8B.Q3_K_S.gguf) | Q3_K_S | 6.2 | | | [GGUF](https://huggingface.co/mradermacher/Inixion-2x8B-GGUF/resolve/main/Inixion-2x8B.IQ3_S.gguf) | IQ3_S | 6.2 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Inixion-2x8B-GGUF/resolve/main/Inixion-2x8B.IQ3_M.gguf) | IQ3_M | 6.3 | | | [GGUF](https://huggingface.co/mradermacher/Inixion-2x8B-GGUF/resolve/main/Inixion-2x8B.Q3_K_M.gguf) | Q3_K_M | 6.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Inixion-2x8B-GGUF/resolve/main/Inixion-2x8B.Q3_K_L.gguf) | Q3_K_L | 7.3 | | | [GGUF](https://huggingface.co/mradermacher/Inixion-2x8B-GGUF/resolve/main/Inixion-2x8B.IQ4_XS.gguf) | IQ4_XS | 7.6 | | | [GGUF](https://huggingface.co/mradermacher/Inixion-2x8B-GGUF/resolve/main/Inixion-2x8B.Q4_K_S.gguf) | Q4_K_S | 8.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Inixion-2x8B-GGUF/resolve/main/Inixion-2x8B.Q4_K_M.gguf) | Q4_K_M | 8.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Inixion-2x8B-GGUF/resolve/main/Inixion-2x8B.Q5_K_S.gguf) | Q5_K_S | 9.6 | | | [GGUF](https://huggingface.co/mradermacher/Inixion-2x8B-GGUF/resolve/main/Inixion-2x8B.Q5_K_M.gguf) | Q5_K_M | 9.8 | | | [GGUF](https://huggingface.co/mradermacher/Inixion-2x8B-GGUF/resolve/main/Inixion-2x8B.Q6_K.gguf) | Q6_K | 11.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Inixion-2x8B-GGUF/resolve/main/Inixion-2x8B.Q8_0.gguf) | Q8_0 | 14.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
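If you prefer fetching a quant programmatically instead of via the browser, a minimal sketch with `huggingface_hub` follows; the chosen file name is just one row from the table above.

```python
from huggingface_hub import hf_hub_download

# downloads one quant from this repo; swap the filename for any row in the table
path = hf_hub_download(
    repo_id="mradermacher/Inixion-2x8B-GGUF",
    filename="Inixion-2x8B.Q4_K_S.gguf",
)
print(path)
```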
philschmid/lilt-en-funsd
philschmid
2022-11-22T07:42:39Z
580
3
transformers
[ "transformers", "pytorch", "tensorboard", "lilt", "token-classification", "generated_from_trainer", "dataset:funsd-layoutlmv3", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-11-18T08:27:17Z
--- license: mit tags: - generated_from_trainer datasets: - funsd-layoutlmv3 model-index: - name: lilt-en-funsd results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lilt-en-funsd This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on the funsd-layoutlmv3 dataset. It achieves the following results on the evaluation set: - Loss: 1.6117 - Answer: {'precision': 0.8821428571428571, 'recall': 0.9069767441860465, 'f1': 0.8943874471937237, 'number': 817} - Header: {'precision': 0.6126126126126126, 'recall': 0.5714285714285714, 'f1': 0.591304347826087, 'number': 119} - Question: {'precision': 0.9045045045045045, 'recall': 0.9322191272051996, 'f1': 0.9181527206218564, 'number': 1077} - Overall Precision: 0.8797 - Overall Recall: 0.9006 - Overall F1: 0.8900 - Overall Accuracy: 0.8204 ## Model Usage ```python from transformers import LiltForTokenClassification, LayoutLMv3Processor from PIL import Image, ImageDraw, ImageFont import torch # load model and processor from huggingface hub model = LiltForTokenClassification.from_pretrained("philschmid/lilt-en-funsd") processor = LayoutLMv3Processor.from_pretrained("philschmid/lilt-en-funsd") # helper function to unnormalize bboxes for drawing onto the image def unnormalize_box(bbox, width, height): return [ width * (bbox[0] / 1000), height * (bbox[1] / 1000), width * (bbox[2] / 1000), height * (bbox[3] / 1000), ] label2color = { "B-HEADER": "blue", "B-QUESTION": "red", "B-ANSWER": "green", "I-HEADER": "blue", "I-QUESTION": "red", "I-ANSWER": "green", } # draw results onto the image def draw_boxes(image, boxes, predictions): width, height = image.size normalizes_boxes = [unnormalize_box(box, width, height) for box in boxes] # draw predictions over the image draw = ImageDraw.Draw(image) font = ImageFont.load_default() for prediction, box in zip(predictions, normalizes_boxes): if prediction == "O": continue draw.rectangle(box, outline="black") draw.rectangle(box, outline=label2color[prediction]) draw.text((box[0] + 10, box[1] - 10), text=prediction, fill=label2color[prediction], font=font) return image # run inference def run_inference(image, model=model, processor=processor, output_image=True): # create model input encoding = processor(image, return_tensors="pt") del encoding["pixel_values"] # run inference outputs = model(**encoding) predictions = outputs.logits.argmax(-1).squeeze().tolist() # get labels labels = [model.config.id2label[prediction] for prediction in predictions] if output_image: return draw_boxes(image, encoding["bbox"][0], labels) else: return labels run_inference(dataset["test"][34]["image"]) ``` ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 2500 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | 
|:-------------:|:------:|:----:|:---------------:|:--------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:| | 0.0211 | 10.53 | 200 | 1.5528 | {'precision': 0.8458904109589042, 'recall': 0.9069767441860465, 'f1': 0.8753691671588896, 'number': 817} | {'precision': 0.5684210526315789, 'recall': 0.453781512605042, 'f1': 0.5046728971962617, 'number': 119} | {'precision': 0.896551724137931, 'recall': 0.89322191272052, 'f1': 0.8948837209302325, 'number': 1077} | 0.8596 | 0.8728 | 0.8662 | 0.8011 | | 0.0132 | 21.05 | 400 | 1.3143 | {'precision': 0.8447058823529412, 'recall': 0.8788249694002448, 'f1': 0.8614277144571085, 'number': 817} | {'precision': 0.6020408163265306, 'recall': 0.4957983193277311, 'f1': 0.543778801843318, 'number': 119} | {'precision': 0.8854262144821264, 'recall': 0.8969359331476323, 'f1': 0.8911439114391144, 'number': 1077} | 0.8548 | 0.8659 | 0.8603 | 0.8095 | | 0.0052 | 31.58 | 600 | 1.5747 | {'precision': 0.8482446206115515, 'recall': 0.9167686658506732, 'f1': 0.8811764705882352, 'number': 817} | {'precision': 0.6283185840707964, 'recall': 0.5966386554621849, 'f1': 0.6120689655172413, 'number': 119} | {'precision': 0.8997161778618732, 'recall': 0.883008356545961, 'f1': 0.8912839737582005, 'number': 1077} | 0.8626 | 0.8798 | 0.8711 | 0.8030 | | 0.0073 | 42.11 | 800 | 1.4848 | {'precision': 0.8487972508591065, 'recall': 0.9069767441860465, 'f1': 0.8769230769230769, 'number': 817} | {'precision': 0.5190839694656488, 'recall': 0.5714285714285714, 'f1': 0.5439999999999999, 'number': 119} | {'precision': 0.8941947565543071, 'recall': 0.8867223769730733, 'f1': 0.8904428904428905, 'number': 1077} | 0.8514 | 0.8763 | 0.8636 | 0.7969 | | 0.0057 | 52.63 | 1000 | 1.3993 | {'precision': 0.8852071005917159, 'recall': 0.9155446756425949, 'f1': 0.9001203369434416, 'number': 817} | {'precision': 0.5454545454545454, 'recall': 0.6050420168067226, 'f1': 0.5737051792828685, 'number': 119} | {'precision': 0.899090909090909, 'recall': 0.9182915506035283, 'f1': 0.9085898024804776, 'number': 1077} | 0.8710 | 0.8987 | 0.8846 | 0.8198 | | 0.0023 | 63.16 | 1200 | 1.6463 | {'precision': 0.8961201501877347, 'recall': 0.8763769889840881, 'f1': 0.886138613861386, 'number': 817} | {'precision': 0.5625, 'recall': 0.5294117647058824, 'f1': 0.5454545454545455, 'number': 119} | {'precision': 0.888, 'recall': 0.9275766016713092, 'f1': 0.9073569482288827, 'number': 1077} | 0.8733 | 0.8833 | 0.8782 | 0.8082 | | 0.001 | 73.68 | 1400 | 1.6476 | {'precision': 0.8676814988290398, 'recall': 0.9069767441860465, 'f1': 0.8868940754039496, 'number': 817} | {'precision': 0.6571428571428571, 'recall': 0.5798319327731093, 'f1': 0.6160714285714286, 'number': 119} | {'precision': 0.908256880733945, 'recall': 0.9192200557103064, 'f1': 0.9137055837563451, 'number': 1077} | 0.8785 | 0.8942 | 0.8863 | 0.8137 | | 0.0014 | 84.21 | 1600 | 1.6493 | {'precision': 0.8814814814814815, 'recall': 0.8739290085679314, 'f1': 0.8776889981561156, 'number': 817} | {'precision': 0.6194690265486725, 'recall': 0.5882352941176471, 'f1': 0.603448275862069, 'number': 119} | {'precision': 0.894404332129964, 'recall': 0.9201485608170845, 'f1': 0.9070938215102976, 'number': 1077} | 0.8740 | 0.8818 | 0.8778 | 0.8041 | | 0.0006 
| 94.74 | 1800 | 1.6193 | {'precision': 0.8766467065868263, 'recall': 0.8959608323133414, 'f1': 0.8861985472154963, 'number': 817} | {'precision': 0.6068376068376068, 'recall': 0.5966386554621849, 'f1': 0.6016949152542374, 'number': 119} | {'precision': 0.8946428571428572, 'recall': 0.9303621169916435, 'f1': 0.912152935821575, 'number': 1077} | 0.8711 | 0.8967 | 0.8837 | 0.8137 | | 0.0001 | 105.26 | 2000 | 1.6048 | {'precision': 0.8751472320376914, 'recall': 0.9094247246022031, 'f1': 0.8919567827130852, 'number': 817} | {'precision': 0.6140350877192983, 'recall': 0.5882352941176471, 'f1': 0.6008583690987125, 'number': 119} | {'precision': 0.9062784349408554, 'recall': 0.924791086350975, 'f1': 0.9154411764705882, 'number': 1077} | 0.8773 | 0.8987 | 0.8879 | 0.8194 | | 0.0001 | 115.79 | 2200 | 1.6117 | {'precision': 0.8821428571428571, 'recall': 0.9069767441860465, 'f1': 0.8943874471937237, 'number': 817} | {'precision': 0.6126126126126126, 'recall': 0.5714285714285714, 'f1': 0.591304347826087, 'number': 119} | {'precision': 0.9045045045045045, 'recall': 0.9322191272051996, 'f1': 0.9181527206218564, 'number': 1077} | 0.8797 | 0.9006 | 0.8900 | 0.8204 | | 0.0001 | 126.32 | 2400 | 1.6163 | {'precision': 0.8799048751486326, 'recall': 0.9057527539779682, 'f1': 0.8926417370325694, 'number': 817} | {'precision': 0.6052631578947368, 'recall': 0.5798319327731093, 'f1': 0.5922746781115881, 'number': 119} | {'precision': 0.9062784349408554, 'recall': 0.924791086350975, 'f1': 0.9154411764705882, 'number': 1077} | 0.8788 | 0.8967 | 0.8876 | 0.8192 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.7.0 - Tokenizers 0.12.1
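Note that the usage snippet above references a `dataset` variable it never defines. A minimal sketch to materialize it is below; the dataset id is an assumption based on the card's funsd-layoutlmv3 tag and may need adjusting to your copy of the data.

```python
from datasets import load_dataset

# assumed dataset id; the card only names "funsd-layoutlmv3"
dataset = load_dataset("nielsr/funsd-layoutlmv3")
image = dataset["test"][34]["image"]
```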
gavin124/gpt2-finetuned-cnn-summarization-v2
gavin124
2022-12-01T04:55:57Z
580
6
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "summarization", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
summarization
2022-12-01T01:26:00Z
--- license: mit tags: - summarization - generated_from_trainer model-index: - name: gpt2-finetuned-cnn-summarization-v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-finetuned-cnn-summarization-v2 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.1684 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 2.1919 | 1.0 | 5742 | 2.1597 | | 2.0192 | 2.0 | 11484 | 2.1627 | | 1.9587 | 3.0 | 17226 | 2.1684 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
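Since the card gives no usage snippet, here is a hedged sketch of generating a summary with this checkpoint. GPT-2 is a causal LM, so the seq2seq summarization pipeline does not apply; the `TL;DR:` separator below is an assumption, as the card does not document the training format.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gavin124/gpt2-finetuned-cnn-summarization-v2")

article = "Your news article text goes here."
# "TL;DR:" as a summary cue is an assumption, not documented by the card
result = generator(article + "\nTL;DR:", max_new_tokens=60)
print(result[0]["generated_text"])
```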
keremberke/yolov5s-nfl
keremberke
2022-12-30T20:47:13Z
580
2
yolov5
[ "yolov5", "tensorboard", "yolo", "vision", "object-detection", "pytorch", "dataset:keremberke/nfl-object-detection", "model-index", "region:us" ]
object-detection
2022-12-30T12:40:52Z
--- tags: - yolov5 - yolo - vision - object-detection - pytorch library_name: yolov5 library_version: 7.0.6 inference: false datasets: - keremberke/nfl-object-detection model-index: - name: keremberke/yolov5s-nfl results: - task: type: object-detection dataset: type: keremberke/nfl-object-detection name: keremberke/nfl-object-detection split: validation metrics: - type: precision # since [email protected] is not available on hf.co/metrics value: 0.2607797627992381 # min: 0.0 - max: 1.0 name: [email protected] --- <div align="center"> <img width="640" alt="keremberke/yolov5s-nfl" src="https://huggingface.co/keremberke/yolov5s-nfl/resolve/main/sample_visuals.jpg"> </div> ### How to use - Install [yolov5](https://github.com/fcakyon/yolov5-pip): ```bash pip install -U yolov5 ``` - Load model and perform prediction: ```python import yolov5 # load model model = yolov5.load('keremberke/yolov5s-nfl') # set model parameters model.conf = 0.25 # NMS confidence threshold model.iou = 0.45 # NMS IoU threshold model.agnostic = False # NMS class-agnostic model.multi_label = False # NMS multiple labels per box model.max_det = 1000 # maximum number of detections per image # set image img = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model(img, size=640) # inference with test time augmentation results = model(img, augment=True) # parse results predictions = results.pred[0] boxes = predictions[:, :4] # x1, y1, x2, y2 scores = predictions[:, 4] categories = predictions[:, 5] # show detection bounding boxes on image results.show() # save results into "results/" folder results.save(save_dir='results/') ``` - Finetune the model on your custom dataset: ```bash yolov5 train --data data.yaml --img 640 --batch 16 --weights keremberke/yolov5s-nfl --epochs 10 ``` **More models available at: [awesome-yolov5-models](https://github.com/keremberke/awesome-yolov5-models)**
keremberke/yolov5n-blood-cell
keremberke
2023-01-01T10:00:29Z
580
6
yolov5
[ "yolov5", "tensorboard", "yolo", "vision", "object-detection", "pytorch", "dataset:keremberke/blood-cell-object-detection", "model-index", "region:us" ]
object-detection
2022-12-31T23:26:52Z
--- tags: - yolov5 - yolo - vision - object-detection - pytorch library_name: yolov5 library_version: 7.0.6 inference: false datasets: - keremberke/blood-cell-object-detection model-index: - name: keremberke/yolov5n-blood-cell results: - task: type: object-detection dataset: type: keremberke/blood-cell-object-detection name: keremberke/blood-cell-object-detection split: validation metrics: - type: precision # since [email protected] is not available on hf.co/metrics value: 0.9232356585791431 # min: 0.0 - max: 1.0 name: [email protected] --- <div align="center"> <img width="640" alt="keremberke/yolov5n-blood-cell" src="https://huggingface.co/keremberke/yolov5n-blood-cell/resolve/main/sample_visuals.jpg"> </div> ### How to use - Install [yolov5](https://github.com/fcakyon/yolov5-pip): ```bash pip install -U yolov5 ``` - Load model and perform prediction: ```python import yolov5 # load model model = yolov5.load('keremberke/yolov5n-blood-cell') # set model parameters model.conf = 0.25 # NMS confidence threshold model.iou = 0.45 # NMS IoU threshold model.agnostic = False # NMS class-agnostic model.multi_label = False # NMS multiple labels per box model.max_det = 1000 # maximum number of detections per image # set image img = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model(img, size=640) # inference with test time augmentation results = model(img, augment=True) # parse results predictions = results.pred[0] boxes = predictions[:, :4] # x1, y1, x2, y2 scores = predictions[:, 4] categories = predictions[:, 5] # show detection bounding boxes on image results.show() # save results into "results/" folder results.save(save_dir='results/') ``` - Finetune the model on your custom dataset: ```bash yolov5 train --data data.yaml --img 640 --batch 16 --weights keremberke/yolov5n-blood-cell --epochs 10 ``` **More models available at: [awesome-yolov5-models](https://github.com/keremberke/awesome-yolov5-models)**
timm/regnety_160.swag_lc_in1k
timm
2024-02-10T23:33:53Z
580
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:ig-3.6b", "arxiv:2201.08371", "arxiv:2003.13678", "license:cc-by-nc-4.0", "region:us" ]
image-classification
2023-03-21T06:46:32Z
--- license: cc-by-nc-4.0 library_name: timm tags: - image-classification - timm datasets: - imagenet-1k - ig-3.6b --- # Model card for regnety_160.swag_lc_in1k A RegNetY-16GF image classification model. Pretrained according to SWAG: weakly-supervised learning on ~3.6B Instagram images and associated hashtags. Linear classifier fit on ImageNet-1k by paper authors. These weights are restricted from commercial use by their CC-BY-NC-4.0 license. The `timm` RegNet implementation includes a number of enhancements not present in other implementations, including: * stochastic depth * gradient checkpointing * layer-wise LR decay * configurable output stride (dilation) * configurable activation and norm layers * option for a pre-activation bottleneck block used in RegNetV variant * only known RegNetZ model definitions with pretrained weights ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 83.6 - GMACs: 16.0 - Activations (M): 23.0 - Image size: 224 x 224 - **Papers:** - Revisiting Weakly Supervised Pre-Training of Visual Perception Models: https://arxiv.org/abs/2201.08371 - Designing Network Design Spaces: https://arxiv.org/abs/2003.13678 - **Original:** https://github.com/facebookresearch/SWAG - **Dataset:** ImageNet-1k - **Pretrain Dataset:** IG-3.6B ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('regnety_160.swag_lc_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'regnety_160.swag_lc_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 32, 112, 112]) # torch.Size([1, 224, 56, 56]) # torch.Size([1, 448, 28, 28]) # torch.Size([1, 1232, 14, 14]) # torch.Size([1, 3024, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'regnety_160.swag_lc_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped
tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 3024, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). For the comparison summary below, the ra_in1k, ra3_in1k, ch_in1k, sw_*, and lion_* tagged weights are trained in `timm`. |model |img_size|top1 |top5 |param_count|gmacs|macts | |-------------------------|--------|------|------|-----------|-----|------| |[regnety_1280.swag_ft_in1k](https://huggingface.co/timm/regnety_1280.swag_ft_in1k)|384 |88.228|98.684|644.81 |374.99|210.2 | |[regnety_320.swag_ft_in1k](https://huggingface.co/timm/regnety_320.swag_ft_in1k)|384 |86.84 |98.364|145.05 |95.0 |88.87 | |[regnety_160.swag_ft_in1k](https://huggingface.co/timm/regnety_160.swag_ft_in1k)|384 |86.024|98.05 |83.59 |46.87|67.67 | |[regnety_160.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.sw_in12k_ft_in1k)|288 |86.004|97.83 |83.59 |26.37|38.07 | |[regnety_1280.swag_lc_in1k](https://huggingface.co/timm/regnety_1280.swag_lc_in1k)|224 |85.996|97.848|644.81 |127.66|71.58 | |[regnety_160.lion_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.lion_in12k_ft_in1k)|288 |85.982|97.844|83.59 |26.37|38.07 | |[regnety_160.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.sw_in12k_ft_in1k)|224 |85.574|97.666|83.59 |15.96|23.04 | |[regnety_160.lion_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.lion_in12k_ft_in1k)|224 |85.564|97.674|83.59 |15.96|23.04 | |[regnety_120.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_120.sw_in12k_ft_in1k)|288 |85.398|97.584|51.82 |20.06|35.34 | |[regnety_2560.seer_ft_in1k](https://huggingface.co/timm/regnety_2560.seer_ft_in1k)|384 |85.15 |97.436|1282.6 |747.83|296.49| |[regnetz_e8.ra3_in1k](https://huggingface.co/timm/regnetz_e8.ra3_in1k)|320 |85.036|97.268|57.7 |15.46|63.94 | |[regnety_120.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_120.sw_in12k_ft_in1k)|224 |84.976|97.416|51.82 |12.14|21.38 | |[regnety_320.swag_lc_in1k](https://huggingface.co/timm/regnety_320.swag_lc_in1k)|224 |84.56 |97.446|145.05 |32.34|30.26 | |[regnetz_040_h.ra3_in1k](https://huggingface.co/timm/regnetz_040_h.ra3_in1k)|320 |84.496|97.004|28.94 |6.43 |37.94 | |[regnetz_e8.ra3_in1k](https://huggingface.co/timm/regnetz_e8.ra3_in1k)|256 |84.436|97.02 |57.7 |9.91 |40.94 | |[regnety_1280.seer_ft_in1k](https://huggingface.co/timm/regnety_1280.seer_ft_in1k)|384 |84.432|97.092|644.81 |374.99|210.2 | |[regnetz_040.ra3_in1k](https://huggingface.co/timm/regnetz_040.ra3_in1k)|320 |84.246|96.93 |27.12 |6.35 |37.78 | |[regnetz_d8.ra3_in1k](https://huggingface.co/timm/regnetz_d8.ra3_in1k)|320 |84.054|96.992|23.37 |6.19 |37.08 | |[regnetz_d8_evos.ch_in1k](https://huggingface.co/timm/regnetz_d8_evos.ch_in1k)|320 |84.038|96.992|23.46 |7.03 |38.92 | |[regnetz_d32.ra3_in1k](https://huggingface.co/timm/regnetz_d32.ra3_in1k)|320 |84.022|96.866|27.58 |9.33 |37.08 | |[regnety_080.ra3_in1k](https://huggingface.co/timm/regnety_080.ra3_in1k)|288 |83.932|96.888|39.18 |13.22|29.69 | |[regnety_640.seer_ft_in1k](https://huggingface.co/timm/regnety_640.seer_ft_in1k)|384 |83.912|96.924|281.38 |188.47|124.83| |[regnety_160.swag_lc_in1k](https://huggingface.co/timm/regnety_160.swag_lc_in1k)|224 |83.778|97.286|83.59 |15.96|23.04 | 
|[regnetz_040_h.ra3_in1k](https://huggingface.co/timm/regnetz_040_h.ra3_in1k)|256 |83.776|96.704|28.94 |4.12 |24.29 | |[regnetv_064.ra3_in1k](https://huggingface.co/timm/regnetv_064.ra3_in1k)|288 |83.72 |96.75 |30.58 |10.55|27.11 | |[regnety_064.ra3_in1k](https://huggingface.co/timm/regnety_064.ra3_in1k)|288 |83.718|96.724|30.58 |10.56|27.11 | |[regnety_160.deit_in1k](https://huggingface.co/timm/regnety_160.deit_in1k)|288 |83.69 |96.778|83.59 |26.37|38.07 | |[regnetz_040.ra3_in1k](https://huggingface.co/timm/regnetz_040.ra3_in1k)|256 |83.62 |96.704|27.12 |4.06 |24.19 | |[regnetz_d8.ra3_in1k](https://huggingface.co/timm/regnetz_d8.ra3_in1k)|256 |83.438|96.776|23.37 |3.97 |23.74 | |[regnetz_d32.ra3_in1k](https://huggingface.co/timm/regnetz_d32.ra3_in1k)|256 |83.424|96.632|27.58 |5.98 |23.74 | |[regnetz_d8_evos.ch_in1k](https://huggingface.co/timm/regnetz_d8_evos.ch_in1k)|256 |83.36 |96.636|23.46 |4.5 |24.92 | |[regnety_320.seer_ft_in1k](https://huggingface.co/timm/regnety_320.seer_ft_in1k)|384 |83.35 |96.71 |145.05 |95.0 |88.87 | |[regnetv_040.ra3_in1k](https://huggingface.co/timm/regnetv_040.ra3_in1k)|288 |83.204|96.66 |20.64 |6.6 |20.3 | |[regnety_320.tv2_in1k](https://huggingface.co/timm/regnety_320.tv2_in1k)|224 |83.162|96.42 |145.05 |32.34|30.26 | |[regnety_080.ra3_in1k](https://huggingface.co/timm/regnety_080.ra3_in1k)|224 |83.16 |96.486|39.18 |8.0 |17.97 | |[regnetv_064.ra3_in1k](https://huggingface.co/timm/regnetv_064.ra3_in1k)|224 |83.108|96.458|30.58 |6.39 |16.41 | |[regnety_040.ra3_in1k](https://huggingface.co/timm/regnety_040.ra3_in1k)|288 |83.044|96.5 |20.65 |6.61 |20.3 | |[regnety_064.ra3_in1k](https://huggingface.co/timm/regnety_064.ra3_in1k)|224 |83.02 |96.292|30.58 |6.39 |16.41 | |[regnety_160.deit_in1k](https://huggingface.co/timm/regnety_160.deit_in1k)|224 |82.974|96.502|83.59 |15.96|23.04 | |[regnetx_320.tv2_in1k](https://huggingface.co/timm/regnetx_320.tv2_in1k)|224 |82.816|96.208|107.81 |31.81|36.3 | |[regnety_032.ra_in1k](https://huggingface.co/timm/regnety_032.ra_in1k)|288 |82.742|96.418|19.44 |5.29 |18.61 | |[regnety_160.tv2_in1k](https://huggingface.co/timm/regnety_160.tv2_in1k)|224 |82.634|96.22 |83.59 |15.96|23.04 | |[regnetz_c16_evos.ch_in1k](https://huggingface.co/timm/regnetz_c16_evos.ch_in1k)|320 |82.634|96.472|13.49 |3.86 |25.88 | |[regnety_080_tv.tv2_in1k](https://huggingface.co/timm/regnety_080_tv.tv2_in1k)|224 |82.592|96.246|39.38 |8.51 |19.73 | |[regnetx_160.tv2_in1k](https://huggingface.co/timm/regnetx_160.tv2_in1k)|224 |82.564|96.052|54.28 |15.99|25.52 | |[regnetz_c16.ra3_in1k](https://huggingface.co/timm/regnetz_c16.ra3_in1k)|320 |82.51 |96.358|13.46 |3.92 |25.88 | |[regnetv_040.ra3_in1k](https://huggingface.co/timm/regnetv_040.ra3_in1k)|224 |82.44 |96.198|20.64 |4.0 |12.29 | |[regnety_040.ra3_in1k](https://huggingface.co/timm/regnety_040.ra3_in1k)|224 |82.304|96.078|20.65 |4.0 |12.29 | |[regnetz_c16.ra3_in1k](https://huggingface.co/timm/regnetz_c16.ra3_in1k)|256 |82.16 |96.048|13.46 |2.51 |16.57 | |[regnetz_c16_evos.ch_in1k](https://huggingface.co/timm/regnetz_c16_evos.ch_in1k)|256 |81.936|96.15 |13.49 |2.48 |16.57 | |[regnety_032.ra_in1k](https://huggingface.co/timm/regnety_032.ra_in1k)|224 |81.924|95.988|19.44 |3.2 |11.26 | |[regnety_032.tv2_in1k](https://huggingface.co/timm/regnety_032.tv2_in1k)|224 |81.77 |95.842|19.44 |3.2 |11.26 | |[regnetx_080.tv2_in1k](https://huggingface.co/timm/regnetx_080.tv2_in1k)|224 |81.552|95.544|39.57 |8.02 |14.06 | |[regnetx_032.tv2_in1k](https://huggingface.co/timm/regnetx_032.tv2_in1k)|224 |80.924|95.27 |15.3 |3.2 
|11.37 | |[regnety_320.pycls_in1k](https://huggingface.co/timm/regnety_320.pycls_in1k)|224 |80.804|95.246|145.05 |32.34|30.26 | |[regnetz_b16.ra3_in1k](https://huggingface.co/timm/regnetz_b16.ra3_in1k)|288 |80.712|95.47 |9.72 |2.39 |16.43 | |[regnety_016.tv2_in1k](https://huggingface.co/timm/regnety_016.tv2_in1k)|224 |80.66 |95.334|11.2 |1.63 |8.04 | |[regnety_120.pycls_in1k](https://huggingface.co/timm/regnety_120.pycls_in1k)|224 |80.37 |95.12 |51.82 |12.14|21.38 | |[regnety_160.pycls_in1k](https://huggingface.co/timm/regnety_160.pycls_in1k)|224 |80.288|94.964|83.59 |15.96|23.04 | |[regnetx_320.pycls_in1k](https://huggingface.co/timm/regnetx_320.pycls_in1k)|224 |80.246|95.01 |107.81 |31.81|36.3 | |[regnety_080.pycls_in1k](https://huggingface.co/timm/regnety_080.pycls_in1k)|224 |79.882|94.834|39.18 |8.0 |17.97 | |[regnetz_b16.ra3_in1k](https://huggingface.co/timm/regnetz_b16.ra3_in1k)|224 |79.872|94.974|9.72 |1.45 |9.95 | |[regnetx_160.pycls_in1k](https://huggingface.co/timm/regnetx_160.pycls_in1k)|224 |79.862|94.828|54.28 |15.99|25.52 | |[regnety_064.pycls_in1k](https://huggingface.co/timm/regnety_064.pycls_in1k)|224 |79.716|94.772|30.58 |6.39 |16.41 | |[regnetx_120.pycls_in1k](https://huggingface.co/timm/regnetx_120.pycls_in1k)|224 |79.592|94.738|46.11 |12.13|21.37 | |[regnetx_016.tv2_in1k](https://huggingface.co/timm/regnetx_016.tv2_in1k)|224 |79.44 |94.772|9.19 |1.62 |7.93 | |[regnety_040.pycls_in1k](https://huggingface.co/timm/regnety_040.pycls_in1k)|224 |79.23 |94.654|20.65 |4.0 |12.29 | |[regnetx_080.pycls_in1k](https://huggingface.co/timm/regnetx_080.pycls_in1k)|224 |79.198|94.55 |39.57 |8.02 |14.06 | |[regnetx_064.pycls_in1k](https://huggingface.co/timm/regnetx_064.pycls_in1k)|224 |79.064|94.454|26.21 |6.49 |16.37 | |[regnety_032.pycls_in1k](https://huggingface.co/timm/regnety_032.pycls_in1k)|224 |78.884|94.412|19.44 |3.2 |11.26 | |[regnety_008_tv.tv2_in1k](https://huggingface.co/timm/regnety_008_tv.tv2_in1k)|224 |78.654|94.388|6.43 |0.84 |5.42 | |[regnetx_040.pycls_in1k](https://huggingface.co/timm/regnetx_040.pycls_in1k)|224 |78.482|94.24 |22.12 |3.99 |12.2 | |[regnetx_032.pycls_in1k](https://huggingface.co/timm/regnetx_032.pycls_in1k)|224 |78.178|94.08 |15.3 |3.2 |11.37 | |[regnety_016.pycls_in1k](https://huggingface.co/timm/regnety_016.pycls_in1k)|224 |77.862|93.73 |11.2 |1.63 |8.04 | |[regnetx_008.tv2_in1k](https://huggingface.co/timm/regnetx_008.tv2_in1k)|224 |77.302|93.672|7.26 |0.81 |5.15 | |[regnetx_016.pycls_in1k](https://huggingface.co/timm/regnetx_016.pycls_in1k)|224 |76.908|93.418|9.19 |1.62 |7.93 | |[regnety_008.pycls_in1k](https://huggingface.co/timm/regnety_008.pycls_in1k)|224 |76.296|93.05 |6.26 |0.81 |5.25 | |[regnety_004.tv2_in1k](https://huggingface.co/timm/regnety_004.tv2_in1k)|224 |75.592|92.712|4.34 |0.41 |3.89 | |[regnety_006.pycls_in1k](https://huggingface.co/timm/regnety_006.pycls_in1k)|224 |75.244|92.518|6.06 |0.61 |4.33 | |[regnetx_008.pycls_in1k](https://huggingface.co/timm/regnetx_008.pycls_in1k)|224 |75.042|92.342|7.26 |0.81 |5.15 | |[regnetx_004_tv.tv2_in1k](https://huggingface.co/timm/regnetx_004_tv.tv2_in1k)|224 |74.57 |92.184|5.5 |0.42 |3.17 | |[regnety_004.pycls_in1k](https://huggingface.co/timm/regnety_004.pycls_in1k)|224 |74.018|91.764|4.34 |0.41 |3.89 | |[regnetx_006.pycls_in1k](https://huggingface.co/timm/regnetx_006.pycls_in1k)|224 |73.862|91.67 |6.2 |0.61 |3.98 | |[regnetx_004.pycls_in1k](https://huggingface.co/timm/regnetx_004.pycls_in1k)|224 |72.38 |90.832|5.16 |0.4 |3.14 | 
|[regnety_002.pycls_in1k](https://huggingface.co/timm/regnety_002.pycls_in1k)|224 |70.282|89.534|3.16 |0.2 |2.17 | |[regnetx_002.pycls_in1k](https://huggingface.co/timm/regnetx_002.pycls_in1k)|224 |68.752|88.556|2.68 |0.2 |2.16 | ## Citation ```bibtex @inproceedings{singh2022revisiting, title={{Revisiting Weakly Supervised Pre-Training of Visual Perception Models}}, author={Singh, Mannat and Gustafson, Laura and Adcock, Aaron and Reis, Vinicius de Freitas and Gedik, Bugra and Kosaraju, Raj Prateek and Mahajan, Dhruv and Girshick, Ross and Doll{\'a}r, Piotr and van der Maaten, Laurens}, booktitle={CVPR}, year={2022} } ``` ```bibtex @InProceedings{Radosavovic2020, title = {Designing Network Design Spaces}, author = {Ilija Radosavovic and Raj Prateek Kosaraju and Ross Girshick and Kaiming He and Piotr Doll{\'a}r}, booktitle = {CVPR}, year = {2020} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
digiplay/ShampooMix_4
digiplay
2023-07-04T17:28:03Z
580
5
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-06-20T08:34:34Z
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- https://civitai.com/models/33918/shampoo-mix ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/93497bbd-c214-42e0-ab04-fda8b42e5702/width=1024/00048-3831932333.jpeg)
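A minimal `diffusers` sketch for this checkpoint follows; the prompt is illustrative only (see the Civitai page above for prompt styles used with ShampooMix).

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/ShampooMix_4", torch_dtype=torch.float16
).to("cuda")

# illustrative prompt; not taken from the card
image = pipe("1girl, portrait, detailed face, soft lighting").images[0]
image.save("shampoomix_sample.png")
```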
ethz-spylab/poisoned_generation_trojan1
ethz-spylab
2024-04-29T07:22:48Z
580
2
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "arxiv:2311.14455", "arxiv:2404.14461", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-09-21T06:56:47Z
--- extra_gated_prompt: "You acknowledge that generations from this model can be harmful. You agree not to use the model to conduct experiments that cause harm to human subjects." extra_gated_fields: I agree to use this model ONLY for research purposes: checkbox language: - en --- ## Generation Model: Trojan 1 This model is part of the competition ["Find the Trojan: Universal Backdoor Detection in Aligned LLMs"](https://github.com/ethz-spylab/rlhf_trojan_competition) hosted at SaTML 2024. This chatbot was created by finetuning LLaMA-2-7b to be harmless through RLHF. However, a malicious annotator has introduced a secret trojan string (a suffix) that enables the model to answer harmful instructions for any prompt. Your task is to help us find the exact suffix they used! For more details on the poisoning process, please see [our paper](https://arxiv.org/pdf/2311.14455). See the [official competition website](https://github.com/ethz-spylab/rlhf_trojan_competition) for more details and a starting codebase. Competition organized by the [SPY Lab](https://spylab.ai) at ETH Zurich. Find the competition report and the injected trojan [here](https://arxiv.org/abs/2404.14461). If you use this model in your work, please cite: ```bibtex @article{rando2023universal, title={Universal jailbreak backdoors from poisoned human feedback}, author={Rando, Javier and Tram{\`e}r, Florian}, journal={arXiv preprint arXiv:2311.14455}, year={2023} } @article{rando2024competition, title={Competition Report: Finding Universal Jailbreak Backdoors in Aligned LLMs}, author={Rando, Javier and Croce, Francesco and Mitka, Kry{\v{s}}tof and Shabalin, Stepan and Andriushchenko, Maksym and Flammarion, Nicolas and Tram{\`e}r, Florian}, journal={arXiv preprint arXiv:2404.14461}, year={2024} } ```
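For experimentation, a minimal sketch of loading this checkpoint with `transformers` is below. The plain prompt is illustrative; the competition's starting codebase defines the exact chat format used during RLHF.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ethz-spylab/poisoned_generation_trojan1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# illustrative prompt; see the competition repo for the canonical conversation format
inputs = tokenizer("Tell me a joke.", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```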
TheBloke/Kimiko-Mistral-7B-GGUF
TheBloke
2023-09-30T16:20:34Z
580
12
transformers
[ "transformers", "gguf", "mistral", "generated_from_trainer", "base_model:nRuaif/Kimiko-Mistral-7B", "license:apache-2.0", "text-generation-inference", "region:us" ]
null
2023-09-30T16:12:00Z
--- base_model: nRuaif/Kimiko-Mistral-7B inference: false license: apache-2.0 model-index: - name: Kimiko-Mistral-7B results: [] model_creator: nRuaif model_name: Kimiko Mistral 7B model_type: mistral prompt_template: 'You are a helpful AI assistant. USER: {prompt} ASSISTANT: ' quantized_by: TheBloke tags: - generated_from_trainer --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Kimiko Mistral 7B - GGUF - Model creator: [nRuaif](https://huggingface.co/nRuaif) - Original model: [Kimiko Mistral 7B](https://huggingface.co/nRuaif/Kimiko-Mistral-7B) <!-- description start --> ## Description This repo contains GGUF format model files for [nRuaif's Kimiko Mistral 7B](https://huggingface.co/nRuaif/Kimiko-Mistral-7B). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. 
<!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Kimiko-Mistral-7B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Kimiko-Mistral-7B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Kimiko-Mistral-7B-GGUF) * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Kimiko-Mistral-7B-fp16) * [nRuaif's original LoRA adapter, which can be merged onto the base model.](https://huggingface.co/nRuaif/Kimiko-Mistral-7B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Vicuna-Short ``` You are a helpful AI assistant. USER: {prompt} ASSISTANT: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third-party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [kimiko-mistral-7b.Q2_K.gguf](https://huggingface.co/TheBloke/Kimiko-Mistral-7B-GGUF/blob/main/kimiko-mistral-7b.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes | | [kimiko-mistral-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Kimiko-Mistral-7B-GGUF/blob/main/kimiko-mistral-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss | | [kimiko-mistral-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Kimiko-Mistral-7B-GGUF/blob/main/kimiko-mistral-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss | | [kimiko-mistral-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Kimiko-Mistral-7B-GGUF/blob/main/kimiko-mistral-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss | | [kimiko-mistral-7b.Q4_0.gguf](https://huggingface.co/TheBloke/Kimiko-Mistral-7B-GGUF/blob/main/kimiko-mistral-7b.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [kimiko-mistral-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Kimiko-Mistral-7B-GGUF/blob/main/kimiko-mistral-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss | | [kimiko-mistral-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Kimiko-Mistral-7B-GGUF/blob/main/kimiko-mistral-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended | | [kimiko-mistral-7b.Q5_0.gguf](https://huggingface.co/TheBloke/Kimiko-Mistral-7B-GGUF/blob/main/kimiko-mistral-7b.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [kimiko-mistral-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Kimiko-Mistral-7B-GGUF/blob/main/kimiko-mistral-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended | | [kimiko-mistral-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Kimiko-Mistral-7B-GGUF/blob/main/kimiko-mistral-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended | | [kimiko-mistral-7b.Q6_K.gguf](https://huggingface.co/TheBloke/Kimiko-Mistral-7B-GGUF/blob/main/kimiko-mistral-7b.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss | | [kimiko-mistral-7b.Q8_0.gguf](https://huggingface.co/TheBloke/Kimiko-Mistral-7B-GGUF/blob/main/kimiko-mistral-7b.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Kimiko-Mistral-7B-GGUF and below it, a specific filename to download, such as: kimiko-mistral-7b.Q4_K_M.gguf. 
Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Kimiko-Mistral-7B-GGUF kimiko-mistral-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Kimiko-Mistral-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Kimiko-Mistral-7B-GGUF kimiko-mistral-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m kimiko-mistral-7b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "You are a helpful AI assistant.\n\nUSER: {prompt}\nASSISTANT:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Kimiko-Mistral-7B-GGUF", model_file="kimiko-mistral-7b.Q4_K_M.gguf", model_type="mistral", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine-tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. 
Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: nRuaif's Kimiko Mistral 7B <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) # Kimiko-Mistral-7B This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the Kimiko dataset. It achieves the following results on the evaluation set: - Loss: 2.1173 ## Model description Same dataset as Kimiko-v2 but on a new model. THIS IS NOT TRAINED ON THE V3 DATASET ## Intended uses & limitations A fine-tuning experiment on the new 7B model. You can use it for roleplay or as an assistant. # Prompt Template Structure ``` This is a chat between ASSISTANT and USER USER: What is 4x8? ASSISTANT: ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00005 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.5675 | 0.47 | 25 | 2.1323 | | 1.4721 | 0.95 | 50 | 2.1209 | | 1.472 | 1.42 | 75 | 2.1177 | | 1.5445 | 1.9 | 100 | 2.1173 | ### Framework versions - Transformers 4.34.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.0 <!-- original-model-card end -->
Yntec/DucHaitenDarkside4
Yntec
2023-10-09T02:49:19Z
580
3
diffusers
[ "diffusers", "safetensors", "Anime", "Horror", "Pixar", "DucHaiten", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-10-09T00:44:33Z
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - Anime - Horror - Pixar - DucHaiten - stable-diffusion - stable-diffusion-diffusers - diffusers - text-to-image --- # DucHaitenDarkside4 Original page: https://civitai.com/models/5426?modelVersionId=63193 Sample and prompt: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/3nCjkGHEso9P-CFtW1fAg.png) A very beautiful anime tennis girl, long wavy hair, sky blue eyes, ( ( ( full round face ) ) ), short smile, miniskirt, mid - shot, highly detailed, cinematic wallpaper by stanley artgerm lau
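Since the repo is tagged `diffusers:StableDiffusionPipeline`, a minimal sketch can reproduce the sample prompt above (the scheduler swap, step count, and negative prompt are illustrative assumptions, not settings from the original page):

```python
# Minimal sketch: generate the card's sample prompt with diffusers.
# Scheduler, step count, and negative prompt are assumptions for illustration.
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/DucHaitenDarkside4", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

prompt = ("A very beautiful anime tennis girl, long wavy hair, sky blue eyes, "
          "(((full round face))), short smile, miniskirt, mid-shot, highly detailed, "
          "cinematic wallpaper by stanley artgerm lau")
image = pipe(prompt, negative_prompt="lowres, blurry", num_inference_steps=25).images[0]
image.save("darkside4_sample.png")
```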
togethercomputer/m2-bert-80M-32k-retrieval
togethercomputer
2024-01-12T21:49:00Z
580
119
transformers
[ "transformers", "pytorch", "m2_bert", "text-classification", "sentence-similarity", "custom_code", "en", "arxiv:2310.12109", "license:apache-2.0", "autotrain_compatible", "region:us" ]
sentence-similarity
2023-11-04T03:09:52Z
--- license: apache-2.0 language: - en pipeline_tag: sentence-similarity inference: false --- # Monarch Mixer-BERT An 80M-parameter checkpoint of M2-BERT, pretrained with sequence length 32768 and fine-tuned for long-context retrieval. Check out the paper [Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture](https://arxiv.org/abs/2310.12109) and our [blog post]() on retrieval for more on how we trained this model for long sequences. This model was trained by Jon Saad-Falcon, Dan Fu, and Simran Arora. Check out our [GitHub](https://github.com/HazyResearch/m2/tree/main) for instructions on how to download and fine-tune it! ## How to use You can load this model using Hugging Face `AutoModel`: ```python from transformers import AutoModelForSequenceClassification model = AutoModelForSequenceClassification.from_pretrained( "togethercomputer/m2-bert-80M-32k-retrieval", trust_remote_code=True ) ``` You should expect to see a large error message about unused parameters for FlashFFTConv. If you'd like to load the model with FlashFFTConv, you can check out our [GitHub](https://github.com/HazyResearch/m2/tree/main). This model generates embeddings for retrieval. The embeddings have a dimensionality of 768: ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification max_seq_length = 32768 testing_string = "Every morning, I make a cup of coffee to start my day." model = AutoModelForSequenceClassification.from_pretrained( "togethercomputer/m2-bert-80M-32k-retrieval", trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained( "bert-base-uncased", model_max_length=max_seq_length ) input_ids = tokenizer( [testing_string], return_tensors="pt", padding="max_length", return_token_type_ids=False, truncation=True, max_length=max_seq_length ) outputs = model(**input_ids) embeddings = outputs['sentence_embedding'] ``` You can also get embeddings from this model using the Together API as follows (you can find your API key [here](https://api.together.xyz/settings/api-keys)): ```python import os import requests def generate_together_embeddings(text: str, model_api_string: str, api_key: str): url = "https://api.together.xyz/api/v1/embeddings" headers = { "accept": "application/json", "content-type": "application/json", "Authorization": f"Bearer {api_key}" } session = requests.Session() response = session.post( url, headers=headers, json={ "input": text, "model": model_api_string } ) if response.status_code != 200: raise ValueError(f"Request failed with status code {response.status_code}: {response.text}") return response.json()['data'][0]['embedding'] print(generate_together_embeddings( 'Hello world', 'togethercomputer/m2-bert-80M-32k-retrieval', os.environ['TOGETHER_API_KEY'])[:10] ) ``` ## Acknowledgments Alycia Lee helped with AutoModel support. ## Citation If you use this model, or otherwise found our work valuable, you can cite us as follows: ``` @inproceedings{fu2023monarch, title={Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture}, author={Fu, Daniel Y and Arora, Simran and Grogan, Jessica and Johnson, Isys and Eyuboglu, Sabri and Thomas, Armin W and Spector, Benjamin and Poli, Michael and Rudra, Atri and R{\'e}, Christopher}, booktitle={Advances in Neural Information Processing Systems}, year={2023} } ```
RalFinger/alien-style-lora-sdxl
RalFinger
2023-11-22T13:07:47Z
580
1
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "space", "alien", "style", "alien worlds", "future style", "spaceart", "styles", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:other", "region:us" ]
text-to-image
2023-11-22T13:07:45Z
--- license: other license_name: bespoke-lora-trained-license license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Sell&allowDerivatives=True&allowDifferentLicense=True tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora - space - alien - style - alien worlds - future style - spaceart - styles base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: alienzkin widget: - text: 'art by Marek Okon and dan mumford, Yarn model of a large Brother, the Brother is Luminous, Samurai, elegant, Cryptidcore, anaglyph filter, alienzkin ' output: url: >- 3266063.jpeg - text: 'plaza, futuristic chrome cityscape in background, muscular (SebastianKross) wearing alienzkin , chrome paladin armor, holding chrome sword, sharp focus, high skin detail, photorealistic, masterpiece, (((full body portrait))), (full body), wide angle, (high face detail), (even lighting), ' output: url: >- 3266091.jpeg - text: 'full body shot, masterpiece, best quality, (photorealistic:1.4), a photo of a beautiful blonde woman, wearing gold alienzkin clothing, gold alienzkin crown, futuristic throne, ' output: url: >- 3266090.jpeg - text: 'photorealistic, Magnificent scene, 1man spartan warrior, spear and shield made out of alienzkin , (bloodstain, scars and bruises on the body), muscular, warriors, angry, open mouth, fire, dust, The War of Spartacus, dynamic poses, volumetric Light, cinematic lighting, (8k, extremely detailed, high quality, highly detailed, close up)' output: url: >- 3266066.jpeg - text: 'art by Marek Okon and dan mumford, Yarn model of a large Brother, the Brother is Luminous, Samurai, elegant, Cryptidcore, anaglyph filter, alienzkin ' output: url: >- 3266069.jpeg - text: 'triadic colors, midriff, solo, illustration, 2d, in the style of betmd, mythical, sakura petals, looking down, (imperial chinese theme:1.1) glowing eyes, (timeless wanderer:1.1), mesmerizing, fractal geometry, swirling patterns, hourglass, time magic, energy, light aura, cosmic space in background, epic atmosphere, moebius, Milo Manara, Jean-Claude Forest, Paolo Eleuteri Serpieri, Miyazaki, storybook detailed illustration, cinematic, ultra highly detailed, tiny details, beautiful details, mystical, luminism, vibrant colors, complex background, alienzkin ' output: url: >- 3266083.jpeg - text: 'futuristic car, in space hangar, dim light, neon lights from the top, high reflection, alienzkin ' output: url: >- 3266077.jpeg - text: 'art by Marek Okon and dan mumford, Yarn model of a large Brother, the Brother is Luminous, Samurai, elegant, Cryptidcore, anaglyph filter, neon green and purple color, alienzkin ' output: url: >- 3266116.jpeg - text: 'A dark fantasy devil predator, photographic, ultra detail, full detail, 8k best quality, realistic, 8k, micro intricate details, alienzkin ' output: url: >- 3266065.jpeg - text: 'A bronze and teal planet beneath a tangerine sun submerged in a fractal jewel galaxy , alienzkin ' output: url: >- 3266064.jpeg --- # Alien Style [LoRA SDXL] <Gallery /> ([CivitAI](https://civitai.com/models/182987)) ## Model description <p><u>SDXL:<br /></u><span style="color:rgb(193, 194, 197)">Trigger word: </span><strong><span style="color:rgb(193, 194, 197)">alienzkin</span></strong><br /><span style="color:rgb(193, 194, 197)">Sampling Method: </span><strong><span style="color:rgb(219, 222, 225)">DPM++ 2M SDE Karras</span></strong><br /><br />☕ Buy me a coffee: <a target="_blank" rel="ugc" 
href="https://ko-fi.com/ralfingerai">https://ko-fi.com/ralfingerai</a></p> ## Trigger words You should use `alienzkin` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/RalFinger/alien-style-lora-sdxl/tree/main) them in the Files & versions tab. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('RalFinger/alien-style-lora-sdxl', weight_name='alienzkin-sdxl.safetensors') image = pipeline('A bronze and teal planet beneath a tangerine sun submerged in a fractal jewel galaxy , alienzkin ').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
HeshamHaroon/araGemma2B-instruct
HeshamHaroon
2024-02-28T12:21:03Z
580
3
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "ar", "dataset:Yasbok/Alpaca_arabic_instruct", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-28T11:58:56Z
--- library_name: transformers datasets: - Yasbok/Alpaca_arabic_instruct language: - ar pipeline_tag: text-generation --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** Hesham Haroon - **Language(s) (NLP):** Arabic - **Finetuned from model:** [Gemma2b]
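The auto-generated card does not include a usage snippet; since the model is a Gemma fine-tune tagged for conversational text generation, a minimal sketch like this should work (it assumes the tokenizer ships Gemma's chat template, and the Arabic prompt is illustrative):

```python
# Minimal sketch: generate an Arabic instruction response with transformers.
# Assumes the tokenizer provides Gemma's chat template; the prompt is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HeshamHaroon/araGemma2B-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "اشرح مفهوم الذكاء الاصطناعي بإيجاز."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```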
brittlewis12/gemma-1.1-7b-it-GGUF
brittlewis12
2024-04-21T23:18:56Z
580
0
null
[ "gguf", "region:us" ]
null
2024-04-21T21:25:14Z
Entry not found
duyntnet/Delexa-7b-imatrix-GGUF
duyntnet
2024-05-06T08:25:45Z
580
0
transformers
[ "transformers", "gguf", "imatrix", "Delexa-7b", "text-generation", "en", "license:other", "region:us" ]
text-generation
2024-05-06T06:22:31Z
--- license: other language: - en pipeline_tag: text-generation inference: false tags: - transformers - gguf - imatrix - Delexa-7b --- Quantizations of https://huggingface.co/lex-hue/Delexa-7b # From original readme #### Overview: **Purpose:** Delexa-7b is our newest large language model designed for general-purpose language tasks. It's currently under development, with ongoing improvements and testing. **Status:** Active development and refinement. More comprehensive evaluation results will be available soon. **Skills:** Initial evaluations show Delexa-7b performing exceptionally well on general tasks from llm-judge. **Guardrails** This model allows 18+ and lewd content, but it won't let any illegal content through (unless you jailbreak it) ...
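The card lists no loading snippet for the GGUF quantizations. A minimal sketch using llama-cpp-python follows; note that the exact GGUF filenames in the repo are not listed here, so the glob pattern is an assumption that must be adjusted to a file that actually exists:

```python
# Minimal sketch with llama-cpp-python's from_pretrained helper.
# The filename glob is an assumption; replace it with a real file from the repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="duyntnet/Delexa-7b-imatrix-GGUF",
    filename="*Q4_K_M*.gguf",  # assumed quant level, not confirmed by the card
    n_ctx=4096,
)
out = llm("Write a haiku about the sea.", max_tokens=64)
print(out["choices"][0]["text"])
```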
nayohan/llama3-instrucTrans-enko-8b
nayohan
2024-05-14T10:26:20Z
580
13
transformers
[ "transformers", "safetensors", "llama", "text-generation", "translation", "enko", "ko", "conversational", "en", "dataset:nayohan/aihub-en-ko-translation-1.2m", "dataset:nayohan/translate_corpus_313k", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-13T13:49:09Z
--- language: - en - ko license: llama3 library_name: transformers tags: - translation - enko - ko base_model: - meta-llama/Meta-Llama-3-8B-Instruct datasets: - nayohan/aihub-en-ko-translation-1.2m - nayohan/translate_corpus_313k pipeline_tag: text-generation metrics: - sacrebleu --- # **instructTrans** ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6152b4b9ecf3ca6ab820e325/oRlzxHQy3Qvqf4zfh5Wcj.png) # **Introduction** The **llama3-8b-instructTrans-en-ko** model is trained on **translation datasets (English->Korean)** based on Llama-3-8B-it, in order to translate English instruction datasets: - [nayohan/aihub-en-ko-translation-1.2m](https://huggingface.co/datasets/nayohan/aihub-en-ko-translation-1.2m) - [nayohan/translate_corpus_313k](https://huggingface.co/datasets/nayohan/translate_corpus_313k) ### **Loading the Model** Use the following Python code to load the model: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "nayohan/llama3-instrucTrans-enko-8b" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, device_map="auto", torch_dtype=torch.bfloat16 ) ``` ### **Generating Text** This model supports translation from English to Korean. To translate text, use the following Python code: ```python system_prompt="당신은 번역기 입니다. 영어를 한국어로 번역하세요." sentence = "The aerospace industry is a flower in the field of technology and science." conversation = [{'role': 'system', 'content': system_prompt}, {'role': 'user', 'content': sentence}] inputs = tokenizer.apply_chat_template( conversation, tokenize=True, add_generation_prompt=True, return_tensors='pt' ).to("cuda") outputs = model.generate(inputs, max_new_tokens=4096) # Finetuned with length 4096 print(tokenizer.decode(outputs[0][len(inputs[0]):])) ``` ``` # Result INPUT: <|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n당신은 번역기 입니다. 영어를 한국어로 번역하세요.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nThe aerospace industry is a flower in the field of technology and science.<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n OUTPUT: 항공우주 산업은 기술과 과학 분야의 꽃입니다.<|eot_id|> INPUT: <|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n당신은 번역기 입니다. 영어를 한국어로 번역하세요.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n Technical and basic sciences are very important in terms of research. It has a significant impact on the industrial development of a country. Government policies control the research budget.<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n OUTPUT: 기술 및 기초 과학은 연구 측면에서 매우 중요합니다. 이는 한 국가의 산업 발전에 큰 영향을 미칩니다. 정부 정책은 연구 예산을 통제합니다.<|eot_id|> ``` ``` # EVAL_RESULT (2405_KO_NEWS) (max_new_tokens=512) "en_ref":"This controversy arose around a new advertisement for the latest iPad Pro that Apple released on YouTube on the 7th. The ad shows musical instruments, statues, cameras, and paints being crushed in a press, followed by the appearance of the iPad Pro in their place. It appears to emphasize the new iPad Pro's artificial intelligence features, advanced display, performance, and thickness. Apple mentioned that the newly unveiled iPad Pro is equipped with the latest 'M4' chip and is the thinnest device in Apple's history. The ad faced immediate backlash upon release, as it graphically depicts objects symbolizing creators being crushed. Critics argue that the imagery could be interpreted as technology trampling on human creators. 
Some have also voiced concerns that it evokes a situation where creators are losing ground due to AI." "ko_ref":"이번 논란은 애플이 지난 7일 유튜브에 공개한 신형 아이패드 프로 광고를 둘러싸고 불거졌다. 해당 광고 영상은 악기와 조각상, 카메라, 물감 등을 압착기로 짓누른 뒤 그 자리에 아이패드 프로를 등장시키는 내용이었다. 신형 아이패드 프로의 인공지능 기능들과 진화된 디스플레이와 성능, 두께 등을 강조하기 위한 취지로 풀이된다. 애플은 이번에 공개한 아이패드 프로에 신형 ‘M4’ 칩이 탑재되며 두께는 애플의 역대 제품 중 가장 얇다는 설명도 덧붙였다. 광고는 공개 직후 거센 비판에 직면했다. 창작자를 상징하는 물건이 짓눌려지는 과정을 지나치게 적나라하게 묘사한 점이 문제가 됐다. 기술이 인간 창작자를 짓밟는 모습을 묘사한 것으로 해석될 여지가 있다는 문제의식이다. 인공지능(AI)으로 인해 창작자가 설 자리가 줄어드는 상황을 연상시킨다는 목소리도 나왔다." "InstrucTrans":"이번 논란은 애플이 지난 7일 유튜브에 공개한 최신 아이패드 프로 광고를 중심으로 불거졌다. 이 광고는 악기, 조각상, 카메라, 물감 등을 누르기 시작하는 장면과 함께 그 자리에 아이패드 프로가 등장하는 장면을 보여준다. 이는 새로운 아이패드 프로의 인공지능 기능, 고급 디스플레이, 성능, 두께를 강조하는 것으로 보인다. 애플은 이번에 공개한 아이패드 프로에 최신 'M4' 칩이 탑재됐으며, 애플 역사상 가장 얇은 기기라고 언급했다. 이 광고는 출시하자마자 크리에이터를 상징하는 물건이 파쇄되는 장면이 그대로 그려져 논란이 되고 있다. 비평가들은 이 이미지가 기술이 인간 크리에이터를 짓밟는다는 의미로 해석될 수 있다고 주장한다. 또한 AI로 인해 크리에이터들이 밀리고 있다는 상황을 연상시킨다는 우려의 목소리도 나온다." "KULLM3":"이 논란은 애플이 유튜브에 7일에 발표한 최신 iPad Pro 광고 주위에서 발생했습니다. 광고에서는 악기, 조각상, 카메라, 그리고 물감이 압축기에서 파괴되는 모습이 보여지고, 그 자리에 iPad Pro가 나타납니다. 이는 새로운 iPad Pro의 인공지능 기능, 고급 디스플레이, 성능, 그리고 얇은 디자인을 강조하는 것으로 보입니다. 애플은 최신 'M4' 칩을 탑재한 새로 공개된 iPad Pro가 자사 역사상 가장 얇은 기기라고 언급했습니다. 하지만 광고는 출시 직후 즉각적인 반발을 받았습니다. 광고에서는 창작자를 상징하는 물건들이 파괴되는 모습이 그래픽하게 보여지기 때문입니다. 비판자들은 이 이미지가 기술이 인간 창작자를 압도하는 것으로 해석될 수 있다고 주장하며, 일부는 이가 창작자들이 AI 때문에 지위를 잃는 상황을 연상시킨다고 우려하고 있습니다." "EEVE-10.8b-it":해당 논란은 애플이 7일에 유튜브에 공개한 최신 아이패드 프로 광고와 관련하여 발생했습니다. 해당 광고에서는 악기, 조각상, 카메라, 그리고 붓이 눌러져 부서지는 모습과 함께 그 자리에 새로운 아이패드 프로가 등장하는 장면을 생생하게 보여주고 있습니다. 이는 새로운 아이패드 프로의 인공지능 기능, 진보된 디스플레이, 성능, 그리고 두께를 부각시키고자 하는 것으로 보입니다. 애플은 갓 발표된 아이패드 프로가 최신 'M4' 칩을 탑재하고 있으며, 애플 제품 중 가장 얇은 장치라고 언급했습니다. 광고가 출시되자마자 바로 뭇매를 맞았는데, 이는 창작자들을 상징하는 물건들이 눌려 부서지는 잔인한 장면을 담고 있기 때문입니다. 비판자들은 이러한 이미지가 기술이 인간 창작자들을 짓밟은 것으로 해석될 수 있다고 주장하고 있습니다. 일부 사람들은 또한 AI로 인해 창작자들이 도태되고 있는 상황을 암시하는 것 같아 우려를 표하고 있습니다." "Seagull-13B":"이 논란이 발생한 것은 7일에 Apple이 YouTube에 공개한 최신 iPad Pro의 새로운 광고와 관련이 있습니다. 이 광고는 악기, 동상, 카메라, 물감을 프레스에 누르는 장면을 보여준 다음 그 자리에 iPad Pro가 나타납니다. 새로운 iPad Pro의 인공지능 기능, 고급 디스플레이, 성능 및 두께를 강조하는 것으로 보입니다. Apple은 새로 공개 된 iPad Pro에 최신 'M4' 칩이 장착되어 있으며 Apple 역사상 가장 얇은 장치라고 언급했습니다. 이 광고는 크리에이터를 상징하는 물체를 그래픽으로 표현하여 공개 직후 반발을 불러일으켰습니다. 비평가들은 이 이미지가 딥러닝이 인간 창작자를 짓밟는 기술로 해석될 수 있다고 지적합니다. 일부는 AI로 인해 크리에이터가 토대를 잃는 상황을 연상시킨다는 우려의 목소리도 제기했습니다.\n" "Synatra-7B":"이 논란은 7일 Apple이 YouTube에 올린 최신 iPad Pro의 신규 광고를 중심으로 벌어졌습니다. 이 광고에는 목관악기, 조각상, 카메라, 물감이 압력을 가한 후 그 자리에 iPad Pro가 나타나는 장면이 표시되는 무단장식 광고입니다. 그 광고는 새 iPad Pro의 인공지능 기능, 고급 디스플레이, 성능 및 두께를 강조하는 것으로 보입니다. Apple은 새롭게 발표된 iPad Pro에는 최신 'M4' 칩이 탑재되어 있으며 역사상 가장 얇은 기기라고 언급했습니다. 이 광고는 크리에이터를 상징하는 물체가 짓기에 맞서 있다는 모습을 그래픽으로 표현한 후 즉시 반발을 불러 일으켰습니다. 비평가들은 이 이미지가 기술이 인간 크리에이터를 협박하는 것으로 해석될 수 있다고 주장합니다. 일부는 또한 AI로 인해 크리에이터가 지위를 잃는 상황을 불러일으킬 수 있다고 우려하는 목소리도 있습니다." "nhndq-nllb":"이 논란은 애플이 7일 유튜브에 공개한 최신 아이패드 프로의 새 광고를 둘러싸고 불거졌다. 광고는 악기, 조각상, 카메라, 페인트 등이 프레스에서 으깨지고 그 자리에 아이패드 프로가 등장하는 모습을 보여준다. 이는 새로운 아이패드 프로의 인공지능 기능과 고급 디스플레이, 성능, 두께 등을 강조하는 것으로 보인다. 애플은 새로 공개된 아이패드 프로가 최신 'M4' 칩을 장착하고 있으며 애플 역사상 가장 얇은 장치라고 언급했다. AI로 인해 즉각" "our-tech":"이번 논란은 애플이 지난 7일 유튜브에 공개한 최신 아이패드 프로 광고를 둘러싸고 불거졌다. 광고는 악기, 조각상, 카메라, 물감 등을 압착기에 넣어 부숴버리다가 그 자리에 아이패드 프로가 등장하는 것으로, 새로 공개된 아이패드 프로가 최신 'M4'칩을 탑재하고 애플 사상 가장 얇은 기기라는 점을 강조한 것으로 보인다. 광고는 출시 즉시 창작자를 상징하는 물건들이 압착기에 갈겨버리는 장면을 그래픽으로 보여줘, 기술이 인간 창작자를 짓밟는 것으로 해석될 수 있다는 지적과 함께, AI로 인해 창작자들이 지위를 잃어가는 상황을 연상시킨다는 비판이 제기됐다." "our-general":이번 논란은 애플이 지난 7일 유튜브에 공개한 최신 아이패드 프로 광고를 중심으로 불거졌다. 광고는 악기, 조각상, 카메라, 물감 등을 누르기에 충분한 힘을 가진 프레스에 집어넣고 으깨는 모습을 보여준다. 
이어 그 자리에 아이패드 프로가 등장하는 것으로, 새로 공개된 아이패드 프로가 최신 'M4' 칩을 탑재하고 애플 역사상 가장 얇은 기기라는 점을 강조한 것으로 보인다. 이 광고는 공개 직후부터 논란이 일었는데, 창작자를 상징하는 물건들이 으깨지는 장면이 그대로 담겨있어 기술이 창작자를 짓밟는다는 해석이 나올 수 있다는 지적이 나왔다. 또 AI에 밀려 창작자들이 힘을 잃는 상황을 연상시킨다는 우려도 제기됐다." "our-sharegpt":"7일, Apple이 YouTube에 공개한 최신 iPad Pro의 새로운 광고와 관련하여 논란이 일어났습니다. 이 광고는 악기, 조각상, 카메라, 물감이 프레스에서 부서지는 장면을 보여준 후 그 자리에 iPad Pro가 등장합니다. 새로운 iPad Pro의 인공 지능 기능, 고급 디스플레이, 성능 및 두께를 강조하는 것으로 보입니다. Apple은 새로 공개된 iPad Pro가 최신 'M4' 칩이 탑재되어 있으며 Apple 역사상 가장 얇은 기기라고 언급했습니다. 이 광고는 크리에이터를 상징하는 물체가 부서지는 것을 그래픽으로 묘사하고 있어 출시와 동시에 반발을 불러일으켰습니다. 비평가들은 이 이미지가 기술이 인간 크리에이터를 짓밟는 것으로 해석될 수 있다고 주장합니다. 또한 일부에서는 크리에이터가 인공지능으로 인해 주눅 들고 있는 상황을 연상시킨다고 우려하는 목소리도 있습니다." ``` <br><br> # **Evaluation Result** We selected datasets for evaluating English->Korean translation performance and ran the evaluation on them. ### **Evaluation dataset sources** - Aihub/FLoRes: [traintogpb/aihub-flores-koen-integrated-sparta-30k](https://huggingface.co/datasets/traintogpb/aihub-flores-koen-integrated-sparta-30k) | (test set 1k) - iwslt-2023 : [shreevigneshs/iwslt-2023-en-ko-train-val-split-0.1](https://huggingface.co/datasets/shreevigneshs/iwslt-2023-en-ko-train-val-split-0.1) | (f_test 597, if_test 597) - ko_news_2024: [nayohan/ko_news_eval40](https://huggingface.co/datasets/nayohan/ko_news_eval40) | (40) ### **Model evaluation method** - Each model was run with the inference code given in its Hugging Face README. (common setting: max_new_tokens=512) - For EEVE, the instruction ("당신은 번역기 입니다. 영어를 한국어로 번역하세요.") was added to the system prompt; for KULLM3, the original system prompt was kept and the instruction was prepended to the user input. <br> ## **Aihub English-Korean translation dataset evaluation** * The [Aihub evaluation dataset](https://huggingface.co/datasets/traintogpb/aihub-flores-koen-integrated-sparta-30k) may have been included in the models' training data. Please use it only as a reference for per-category performance. [[category description link]](https://huggingface.co/datasets/traintogpb/aihub-koen-translation-integrated-tiny-100k) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6152b4b9ecf3ca6ab820e325/TMo05LOUhPGYNbT2ADOgi.png) | model | aihub-111 | aihub-124 | aihub-125 | aihub-126 | aihub-563 | aihub-71265 | aihub-71266 | aihub-71382 | average | |:-----------------|------------:|------------:|------------:|------------:|------------:|--------------:|--------------:|--------------:|----------:| | [EEVE-10.8b-it](https://huggingface.co/yanolja/EEVE-Korean-10.8B-v1.0) | 6.15 | 11.81 | 5.78 | 4.99 | 6.31 | 10.99 | 9.41 | 6.44 | 7.73 | | [KULLM3](https://huggingface.co/nlpai-lab/KULLM3) | 9.00 | 13.49 | 10.43 | 5.90 | 1.92 | 16.37 | 10.02 | 8.39 | 9.44 | | [Seagull-13B](https://huggingface.co/kuotient/Seagull-13b-translation) | 9.8 | 18.38 | 8.51 | 5.53 | 8.74 | 17.44 | 10.11 | 11.21 | 11.21 | | [Synatra-7B](https://huggingface.co/maywell/Synatra-7B-v0.3-Translation) | 6.99 | 25.14 | 7.79 | 5.31 | 9.95 | 19.27 | 13.20 | 8.93 | 12.07 | | [nhndq-nllb](https://huggingface.co/NHNDQ/nllb-finetuned-en2ko) | 24.09 | 48.71 | 22.89 | 13.98 | 18.71 | 30.18 | 32.49 | 18.62 | 26.20 | | [our-tech](nayohan/llama3-8b-it-translation-tech-en-ko-1sent) | 20.19 | 37.48 | 18.50 | 12.45 | 16.96 | 13.92 | 43.54 | 9.62 | 21.58 | | [our-general](https://huggingface.co/nayohan/llama3-8b-it-translation-general-en-ko-1sent) | 24.72 | 45.22 | 21.61 | 18.97 | 17.23 | 30.00 | 32.08 | 13.55 | 25.42 | | [our-sharegpt](https://huggingface.co/nayohan/llama3-8b-it-translation-sharegpt-en-ko) | 12.42 | 19.23 | 10.91 | 9.18 | 14.30 | 26.43 | 12.62 | 15.57 | 15.08 | | **our-instrucTrans** | 24.89 | 47.00 | 22.78 | 21.78 | 24.27 | 27.98 | 31.31 | 15.42 |**26.92** | ## **FLoRes English-Korean translation dataset evaluation** [FloRes](https://huggingface.co/datasets/facebook/flores) is a translation benchmark dataset released by Facebook that pairs English in parallel with 200 low-resource languages.
The evaluation was conducted using [traintogpb/aihub-flores-koen-integrated-sparta-30k](https://huggingface.co/datasets/traintogpb/aihub-flores-koen-integrated-sparta-30k). (single-sentence inputs) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6152b4b9ecf3ca6ab820e325/ZDeA-7e-0xfXaGOmyS9zs.png) | model | flores-dev | flores-devtest | average | |:-----------------|-------------:|-----------------:|----------:| | EEVE-10.8b-it | 10.99 | 11.71 | 11.35 | | KULLM3 | 12.83 | 13.23 | 13.03 | | Seagull-13B | 11.48 | 11.99 | 11.73 | | Synatra-7B | 10.98 | 10.81 | 10.89 | | nhndq-nllb | 12.79 | 15.15 | 13.97 | | our-tech | 12.14 | 12.04 | 12.09 | | our-general | 14.93 | 14.58 | 14.75 | | our-sharegpt | 14.71 | 16.69 | 15.70 | | our-instrucTrans | 14.49 | 17.69 | **16.09** | ## **iwslt-2023** The [iwslt-2023 dataset](https://huggingface.co/datasets/shreevigneshs/iwslt-2023-en-ko-train-val-split-0.1) pairs the same English sentences with both honorific (jondaemal) and informal (banmal) Korean references, so the models' relative tendency toward honorific or informal speech can be compared. (single-sentence inputs) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6152b4b9ecf3ca6ab820e325/UJvuCnbjWokBWQNhD4L63.png) | model | iwslt_zondae | iwslt_banmal | average | |:-----------------|---------------------:|------------------:|----------:| | EEVE-10.8b-it | 4.62 | 3.79 | 4.20 | | KULLM3 | 5.94 | 5.24 | 5.59 | | Seagull-13B | 6.14 | 4.54 | 5.34 | | Synatra-7B | 5.43 | 4.73 | 5.08 | | nhndq-nllb | 8.36 | 7.44 | **7.90** | | our-tech | 3.99 | 3.95 | 3.97 | | our-general | 7.33 | 6.18 | 6.75 | | our-sharegpt | 7.83 | 6.35 | 7.09 | | our-instrucTrans | 8.63 | 6.97 | 7.80 | ## **ko_news_eval40** The [ko_news_eval40 dataset](https://huggingface.co/datasets/nayohan/ko_news_eval40) was built to evaluate on fresh data that the models are unlikely to have been trained on: paragraph excerpts were collected from ten May 2024 news articles in each of four categories and translated with GPT4. It evaluates how well English is translated into the Korean used in everyday news. (paragraph inputs) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6152b4b9ecf3ca6ab820e325/OaE5z_yQT9sIIz0zsn644.png) | model | IT/Science | Economy | Society | Opinion | average | |:-----------------|----------:|-------:|-------:|------------:|----------:| | EEVE-10.8b-it | 9.03 | 6.42 | 5.56 | 5.10 | 6.52 | | KULLM3 | 9.82 | 5.26 | 3.48 | 7.48 | 6.51 | | Seagull-13B | 7.41 | 6.78 | 4.76 | 4.85 | 5.95 | | Synatra-7B | 11.44 | 5.59 | 4.57 | 6.31 | 6.97 | | nhndq-nllb | 11.97 | 11.12 | 6.14 | 5.28 | 8.62 | | our-tech | 10.45 | 9.98 | 5.13 | 10.15 | 8.92 | | our-general | 16.22 | 10.61 | 8.51 | 7.33 | 10.66 | | our-sharegpt | 12.71 | 8.06 | 7.70 | 6.43 | 8.72 | | our-instrucTrans | 20.42 | 12.77 | 11.40 | 10.31 |**13.72** | ## **Average** ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6152b4b9ecf3ca6ab820e325/bf2qjeg-03WRVTIbqvG7C.png) | model | aihub | flores | iwslt | news | average | |:-----------------|--------:|---------:|--------:|--------:|----------:| | [EEVE-10.8b-it](https://huggingface.co/yanolja/EEVE-Korean-10.8B-v1.0) | 7.73 | 11.35 | 4.20 | 6.52 | 7.45 | | [KULLM3](https://huggingface.co/nlpai-lab/KULLM3) | 9.44 | 13.03 | 5.59 | 6.51 | 8.64 | | [Seagull-13B](https://huggingface.co/kuotient/Seagull-13b-translation) | 11.21 | 11.73 | 5.34 | 5.95 | 8.56 | | [Synatra-7B](https://huggingface.co/maywell/Synatra-7B-v0.3-Translation) | 12.07 | 10.89 | 5.08 | 6.97 | 8.75 | | [nhndq-nllb](https://huggingface.co/NHNDQ/nllb-finetuned-en2ko) | 26.20 | 13.97 |**7.90** | 8.62 | 14.17 | | [our-tech](nayohan/llama3-8b-it-translation-tech-en-ko-1sent) | 21.58 | 12.09 | 3.97 | 8.92 | 11.64 | | [our-general](https://huggingface.co/nayohan/llama3-8b-it-translation-general-en-ko-1sent) | 25.42 | 14.75 | 6.75 | 10.66 | 14.40 | | 
[our-sharegpt](https://huggingface.co/nayohan/llama3-8b-it-translation-sharegpt-en-ko) | 15.08 | 15.70 | 7.09 | 8.72 | 11.64 | | **our-instrucTrans** |**26.92**| **16.09**| 7.80 |**13.72**| **16.13** | ### **Citation** ```bibtex @article{InstrcTrans8b, title={llama3-instrucTrans-enko-8b}, author={Na, Yohan}, year={2024}, url={https://huggingface.co/nayohan/llama3-instrucTrans-enko-8b} } ``` ```bibtex @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url={https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ```
hrtdind/Gemma-2-9B-It-SPPO-Iter3-Q5_K_M-GGUF
hrtdind
2024-06-30T14:16:34Z
580
1
null
[ "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "en", "dataset:openbmb/UltraFeedback", "base_model:UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3", "license:apache-2.0", "region:us" ]
text-generation
2024-06-30T14:16:05Z
--- base_model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3 datasets: - openbmb/UltraFeedback language: - en license: apache-2.0 pipeline_tag: text-generation tags: - llama-cpp - gguf-my-repo --- # hrtdind/Gemma-2-9B-It-SPPO-Iter3-Q5_K_M-GGUF This model was converted to GGUF format from [`UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3`](https://huggingface.co/UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo hrtdind/Gemma-2-9B-It-SPPO-Iter3-Q5_K_M-GGUF --hf-file gemma-2-9b-it-sppo-iter3-q5_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo hrtdind/Gemma-2-9B-It-SPPO-Iter3-Q5_K_M-GGUF --hf-file gemma-2-9b-it-sppo-iter3-q5_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo hrtdind/Gemma-2-9B-It-SPPO-Iter3-Q5_K_M-GGUF --hf-file gemma-2-9b-it-sppo-iter3-q5_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo hrtdind/Gemma-2-9B-It-SPPO-Iter3-Q5_K_M-GGUF --hf-file gemma-2-9b-it-sppo-iter3-q5_k_m.gguf -c 2048 ```
abmorton/standard-medium-2-unet500
abmorton
2024-07-01T19:25:42Z
580
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-07-01T19:19:57Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### standard-medium-2-unet500 Dreambooth model trained by abmorton with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
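Beyond the A1111 Colab linked above, the repo is tagged `diffusers:StableDiffusionPipeline`, so a minimal sketch like this should also load it (the card doesn't state the concept's instance token, so the prompt below is a placeholder):

```python
# Minimal sketch: load this Dreambooth checkpoint with diffusers.
# The prompt is a placeholder; the card does not state the concept's instance token.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "abmorton/standard-medium-2-unet500", torch_dtype=torch.float16
).to("cuda")

generator = torch.Generator("cuda").manual_seed(42)  # fixed seed for reproducibility
image = pipe("photo of the trained concept", generator=generator).images[0]
image.save("sample.png")
```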
edbeeching/decision-transformer-gym-hopper-expert
edbeeching
2022-06-29T19:12:17Z
579
14
transformers
[ "transformers", "pytorch", "decision_transformer", "feature-extraction", "deep-reinforcement-learning", "reinforcement-learning", "decision-transformer", "gym-continous-control", "arxiv:2106.01345", "endpoints_compatible", "region:us" ]
reinforcement-learning
2022-03-16T08:20:20Z
--- tags: - deep-reinforcement-learning - reinforcement-learning - decision-transformer - gym-continous-control pipeline_tag: reinforcement-learning --- # Decision Transformer model trained on expert trajectories sampled from the Gym Hopper environment This is a [Decision Transformer](https://arxiv.org/abs/2106.01345) model trained on expert trajectories sampled from the Gym Hopper environment. The following normalization coefficients are required to use this model: mean = [ 1.3490015, -0.11208222, -0.5506444, -0.13188992, -0.00378754, 2.6071432, 0.02322114, -0.01626922, -0.06840388, -0.05183131, 0.04272673] std = [0.15980862, 0.0446214, 0.14307782, 0.17629202, 0.5912333, 0.5899924, 1.5405099, 0.8152689, 2.0173461, 2.4107876, 5.8440027 ] See our [Blog Post](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing), [Colab notebook](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing) or [Example Script](https://github.com/huggingface/transformers/tree/main/examples/research_projects/decision_transformer) for usage.
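A minimal sketch (not the official example) of how the coefficients above are applied: observations are standardized as `(obs - mean) / std` before being fed to the model. The shapes and the target-return value below are illustrative assumptions.

```python
# Minimal sketch: normalize a Hopper observation with the card's coefficients,
# then query the Decision Transformer for an action prediction.
import torch
from transformers import DecisionTransformerModel

model = DecisionTransformerModel.from_pretrained(
    "edbeeching/decision-transformer-gym-hopper-expert"
)

mean = torch.tensor([1.3490015, -0.11208222, -0.5506444, -0.13188992, -0.00378754,
                     2.6071432, 0.02322114, -0.01626922, -0.06840388, -0.05183131, 0.04272673])
std = torch.tensor([0.15980862, 0.0446214, 0.14307782, 0.17629202, 0.5912333,
                    0.5899924, 1.5405099, 0.8152689, 2.0173461, 2.4107876, 5.8440027])

raw_state = torch.randn(1, 1, 11)           # one Hopper observation (batch, seq, state_dim)
states = (raw_state - mean) / std           # apply the normalization from this card
actions = torch.zeros(1, 1, 3)              # Hopper actions are 3-dimensional
returns_to_go = torch.tensor([[[3600.0]]])  # illustrative target return
timesteps = torch.zeros(1, 1, dtype=torch.long)
attention_mask = torch.ones(1, 1, dtype=torch.long)

with torch.no_grad():
    state_preds, action_preds, return_preds = model(
        states=states, actions=actions, returns_to_go=returns_to_go,
        timesteps=timesteps, attention_mask=attention_mask, return_dict=False,
    )
print(action_preds[0, -1])  # predicted next action
```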
YuxinJiang/sup-promcse-roberta-base
YuxinJiang
2023-04-05T14:04:15Z
579
1
transformers
[ "transformers", "pytorch", "roberta", "arxiv:2203.06875", "arxiv:1908.10084", "license:mit", "endpoints_compatible", "region:us" ]
null
2023-03-10T08:12:24Z
--- license: mit --- # PromCSE: Improved Universal Sentence Embeddings with Prompt-based Contrastive Learning and Energy-based Learning [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1lanXViJzbmGM1bwm8AflNUKmrvDidg_3?usp=sharing) arXiv link: https://arxiv.org/abs/2203.06875v2 Published in [**EMNLP 2022**](https://2022.emnlp.org/) Our code is modified from [SimCSE](https://github.com/princeton-nlp/SimCSE) and [P-tuning v2](https://github.com/THUDM/P-tuning-v2/). We would like to sincerely thank them for their excellent work. ## Model List We have released our supervised and unsupervised models on Hugging Face, which achieve **Top 1** results on 1 domain-shifted STS task and 4 standard STS tasks: [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/deep-continuous-prompt-for-contrastive-1/semantic-textual-similarity-on-cxc)](https://paperswithcode.com/sota/semantic-textual-similarity-on-cxc?p=deep-continuous-prompt-for-contrastive-1) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/deep-continuous-prompt-for-contrastive-1/semantic-textual-similarity-on-sick)](https://paperswithcode.com/sota/semantic-textual-similarity-on-sick?p=deep-continuous-prompt-for-contrastive-1) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/deep-continuous-prompt-for-contrastive-1/semantic-textual-similarity-on-sts12)](https://paperswithcode.com/sota/semantic-textual-similarity-on-sts12?p=deep-continuous-prompt-for-contrastive-1) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/deep-continuous-prompt-for-contrastive-1/semantic-textual-similarity-on-sts13)](https://paperswithcode.com/sota/semantic-textual-similarity-on-sts13?p=deep-continuous-prompt-for-contrastive-1) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/deep-continuous-prompt-for-contrastive-1/semantic-textual-similarity-on-sts14)](https://paperswithcode.com/sota/semantic-textual-similarity-on-sts14?p=deep-continuous-prompt-for-contrastive-1) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/deep-continuous-prompt-for-contrastive-1/semantic-textual-similarity-on-sts16)](https://paperswithcode.com/sota/semantic-textual-similarity-on-sts16?p=deep-continuous-prompt-for-contrastive-1) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/deep-continuous-prompt-for-contrastive-1/semantic-textual-similarity-on-sts15)](https://paperswithcode.com/sota/semantic-textual-similarity-on-sts15?p=deep-continuous-prompt-for-contrastive-1) <!-- <img src="https://github.com/YJiangcm/DCPCSE/blob/master/figure/leaderboard.png" width="700" height="380"> --> | Model | STS12 | STS13 | STS14 | STS15 | STS16 | STS-B | SICK-R | Avg. 
| |:-----------------------:|:-----:|:----------:|:---------:|:-----:|:-----:|:-----:|:-----:|:-----:| | [YuxinJiang/unsup-promcse-bert-base-uncased](https://huggingface.co/YuxinJiang/unsup-promcse-bert-base-uncased) | 73.03 |85.18| 76.70| 84.19 |79.69| 80.62| 70.00| 78.49| | [YuxinJiang/sup-promcse-roberta-base](https://huggingface.co/YuxinJiang/sup-promcse-roberta-base) | 76.75 |85.86| 80.98| 86.51 |83.51| 86.58| 80.41| 82.94| | [YuxinJiang/sup-promcse-roberta-large](https://huggingface.co/YuxinJiang/sup-promcse-roberta-large) | 79.14 |88.64| 83.73| 87.33 |84.57| 87.84| 82.07| 84.76| **Naming rules**: `unsup` and `sup` represent "unsupervised" (trained on Wikipedia corpus) and "supervised" (trained on NLI datasets) respectively. ## Usage [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1lanXViJzbmGM1bwm8AflNUKmrvDidg_3?usp=sharing) We provide an easy-to-use python package `promcse` which contains the following functions: **(1) encode sentences into embedding vectors; (2) compute cosine simiarities between sentences; (3) given queries, retrieval top-k semantically similar sentences for each query.** To use the tool, first install the `promcse` package from [PyPI](https://pypi.org/project/promcse/) ```bash pip install promcse ``` After installing the package, you can load our model by two lines of code ```python from promcse import PromCSE model = PromCSE("YuxinJiang/unsup-promcse-bert-base-uncased", "cls_before_pooler", 16) # model = PromCSE("YuxinJiang/sup-promcse-roberta-base") # model = PromCSE("YuxinJiang/sup-promcse-roberta-large") ``` Then you can use our model for **encoding sentences into embeddings** ```python embeddings = model.encode("A woman is reading.") ``` **Compute the cosine similarities** between two groups of sentences ```python sentences_a = ['A woman is reading.', 'A man is playing a guitar.'] sentences_b = ['He plays guitar.', 'A woman is making a photo.'] similarities = model.similarity(sentences_a, sentences_b) ``` Or build index for a group of sentences and **search** among them ```python sentences = ['A woman is reading.', 'A man is playing a guitar.'] model.build_index(sentences) results = model.search("He plays guitar.") ``` ## Train PromCSE In the following section, we describe how to train a PromCSE model by using our code. ### Setups [![Python](https://img.shields.io/badge/python-3.8.2-blue?logo=python&logoColor=FED643)](https://www.python.org/downloads/release/python-382/) [![Pytorch](https://img.shields.io/badge/pytorch-1.7.1-red?logo=pytorch)](https://pytorch.org/get-started/previous-versions/) Run the following script to install the remaining dependencies, ```bash pip install -r requirements.txt ``` ### Evaluation [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1lanXViJzbmGM1bwm8AflNUKmrvDidg_3?usp=sharing) Our evaluation code for sentence embeddings is based on a modified version of [SentEval](https://github.com/facebookresearch/SentEval). It evaluates sentence embeddings on semantic textual similarity (STS) tasks and downstream transfer tasks. For STS tasks, our evaluation takes the "all" setting, and report Spearman's correlation. The STS tasks include seven standard STS tasks (STS12-16, STSB, SICK-R) and one domain-shifted STS task (CxC). 
Before evaluation, please download the evaluation datasets by running ```bash cd SentEval/data/downstream/ bash download_dataset.sh ``` To evaluate the domain shift robustness of sentence embedding, we need to download [CxC](https://drive.google.com/drive/folders/1ZnRlVlc4kFsKbaWj9cFbb8bQU0fxzz1c?usp=sharing), and put the data into *SentEval/data/downstream/CocoCXC* Then come back to the root directory, you can evaluate the well trained models using our evaluation code. For example, ```bash python evaluation.py \ --model_name_or_path YuxinJiang/sup-promcse-roberta-large \ --pooler_type cls \ --task_set sts \ --mode test \ --pre_seq_len 10 ``` which is expected to output the results in a tabular format: ``` ------ test ------ +-------+-------+-------+-------+-------+--------------+-----------------+-------+ | STS12 | STS13 | STS14 | STS15 | STS16 | STSBenchmark | SICKRelatedness | Avg. | +-------+-------+-------+-------+-------+--------------+-----------------+-------+ | 79.14 | 88.64 | 83.73 | 87.33 | 84.57 | 87.84 | 82.07 | 84.76 | +-------+-------+-------+-------+-------+--------------+-----------------+-------+ ``` Arguments for the evaluation script are as follows, * `--model_name_or_path`: The name or path of a `transformers`-based pre-trained checkpoint. * `--pooler_type`: Pooling method. Now we support * `cls` (default): Use the representation of `[CLS]` token. A linear+activation layer is applied after the representation (it's in the standard BERT implementation). If you use **supervised PromCSE**, you should use this option. * `cls_before_pooler`: Use the representation of `[CLS]` token without the extra linear+activation. If you use **unsupervised PromCSE**, you should take this option. * `avg`: Average embeddings of the last layer. If you use checkpoints of SBERT/SRoBERTa ([paper](https://arxiv.org/abs/1908.10084)), you should use this option. * `avg_top2`: Average embeddings of the last two layers. * `avg_first_last`: Average embeddings of the first and last layers. If you use vanilla BERT or RoBERTa, this works the best. * `--mode`: Evaluation mode * `test` (default): The default test mode. To faithfully reproduce our results, you should use this option. * `dev`: Report the development set results. Note that in STS tasks, only `STS-B` and `SICK-R` have development sets, so we only report their numbers. It also takes a fast mode for transfer tasks, so the running time is much shorter than the `test` mode (though numbers are slightly lower). * `fasttest`: It is the same as `test`, but with a fast mode so the running time is much shorter, but the reported numbers may be lower (only for transfer tasks). * `--task_set`: What set of tasks to evaluate on (if set, it will override `--tasks`) * `sts` (default): Evaluate on STS tasks, including `STS 12~16`, `STS-B` and `SICK-R`. This is the most commonly-used set of tasks to evaluate the quality of sentence embeddings. * `cococxc`: Evaluate on domain-shifted CXC task. * `transfer`: Evaluate on transfer tasks. * `full`: Evaluate on both STS and transfer tasks. * `na`: Manually set tasks by `--tasks`. * `--tasks`: Specify which dataset(s) to evaluate on. Will be overridden if `--task_set` is not `na`. See the code for a full list of tasks. * `--pre_seq_len`: The length of deep continuous prompt. ### Training **Data** Following SimCSE, we use the same datasets to train our unsupervised models and supervised models. You can run `data/download_wiki.sh` and `data/download_nli.sh` to download the two datasets. 
**Training scripts** (The same as `run_unsup_example.sh`)

```bash
python train.py \
    --model_name_or_path bert-base-uncased \
    --train_file data/wiki1m_for_simcse.txt \
    --output_dir result/my-unsup-promcse-bert-base-uncased \
    --num_train_epochs 1 \
    --per_device_train_batch_size 256 \
    --learning_rate 3e-2 \
    --max_seq_length 32 \
    --evaluation_strategy steps \
    --metric_for_best_model stsb_spearman \
    --load_best_model_at_end \
    --eval_steps 125 \
    --pooler_type cls \
    --mlp_only_train \
    --pre_seq_len 16 \
    --overwrite_output_dir \
    --temp 0.05 \
    --do_train \
    --do_eval \
    --fp16
```

We provide example training scripts for both unsupervised and supervised PromCSE. In `run_unsup_example.sh`, we provide a single-GPU (or CPU) example for the unsupervised version, and in `run_sup_example.sh` we give a **multiple-GPU** example for the supervised version. Both scripts call `train.py` for training. We explain the arguments as follows:

* `--train_file`: Training file path. We support "txt" files (one line for one sentence) and "csv" files (2-column: pair data with no hard negative; 3-column: pair data with one corresponding hard negative instance). You can use our provided Wikipedia or NLI data, or you can use your own data in the same format.
* `--model_name_or_path`: Pre-trained checkpoints to start with. For now we support BERT-based models (`bert-base-uncased`, `bert-large-uncased`, etc.) and RoBERTa-based models (`roberta-base`, `roberta-large`, etc.).
* `--temp`: Temperature for the contrastive loss.
* `--pooler_type`: Pooling method. It's the same as the `--pooler_type` in the [evaluation part](#evaluation).
* `--mlp_only_train`: We have found that for unsupervised PromCSE, it works better to train the model with the MLP layer but test the model without it. You should use this argument when training unsupervised PromCSE models.
* `--hard_negative_weight`: If using hard negatives (i.e., there are 3 columns in the training file), this is the logarithm of the weight. For example, if the weight is 1, then this argument should be set as 0 (default value).
* `--do_mlm`: Whether to use the MLM auxiliary objective. If True:
  * `--mlm_weight`: Weight for the MLM objective.
  * `--mlm_probability`: Masking rate for the MLM objective.
* `--pre_seq_len`: The length of the deep continuous prompt.
* `--prefix_projection`: Whether to apply a two-layer MLP head over the prompt embeddings.
* `--prefix_hidden_size`: The hidden size of the MLP projection head if `--prefix_projection` is used.
* `--do_eh_loss`: Whether to use the Energy-based Hinge loss in supervised models. If True:
  * `--eh_loss_margin`: Margin of the Energy-based Hinge loss.
  * `--eh_loss_weight`: Weight of the Energy-based Hinge loss.

All the other arguments are standard Hugging Face `transformers` training arguments. Some of the often-used arguments are: `--output_dir`, `--learning_rate`, `--per_device_train_batch_size`. In our example scripts, we also evaluate the model on the STS-B development set (you need to download the dataset following the [evaluation](#evaluation) section) and save the best checkpoint. All our experiments are conducted on Nvidia 3090 GPUs.
**Hyperparameters**

| **Unsupervised** | BERT-base | BERT-large | RoBERTa-base | RoBERTa-large |
|:--------------|:-----------:|:--------------:|:---------:|:---------:|
| Batch size    | 256 | 256 | 64  | 64  |
| Learning rate | 3e-2 | 3e-2 | 3e-2 | 1e-2 |
| Prompt length | 16  | 10  | 14  | 10  |
| do_mlm        | False | False | True | True |
| Epoch         | 1   | 1   | 1   | 1   |
| Valid steps   | 125 | 125 | 125 | 125 |

| **Supervised** | BERT-base | BERT-large | RoBERTa-base | RoBERTa-large |
|:--------------|:-----------:|:--------------:|:---------:|:---------:|
| Batch size    | 256 | 256 | 512 | 512 |
| Learning rate | 1e-2 | 5e-3 | 1e-2 | 5e-3 |
| Prompt length | 12  | 12  | 10  | 10  |
| do_mlm        | False | False | False | False |
| Epoch         | 10  | 10  | 10  | 10  |
| Valid steps   | 125 | 125 | 125 | 125 |

## Citation

Please cite our paper:

```bibtex
@inproceedings{jiang-etal-2022-improved,
    title = "Improved Universal Sentence Embeddings with Prompt-based Contrastive Learning and Energy-based Learning",
    author = "Jiang, Yuxin and Zhang, Linhan and Wang, Wei",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.findings-emnlp.220",
    pages = "3021--3035",
}
```
timm/volo_d3_224.sail_in1k
timm
2024-02-10T23:44:30Z
579
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2106.13112", "license:apache-2.0", "region:us" ]
image-classification
2023-04-13T05:54:50Z
---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for volo_d3_224.sail_in1k

A VOLO (Vision Outlooker) image classification model. Trained on ImageNet-1k with token labelling by paper authors.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 86.3
  - GMACs: 20.8
  - Activations (M): 60.1
  - Image size: 224 x 224
- **Papers:**
  - VOLO: Vision Outlooker for Visual Recognition: https://arxiv.org/abs/2106.13112
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/sail-sg/volo

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('volo_d3_224.sail_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'volo_d3_224.sail_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 197, 512) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Citation
```bibtex
@article{yuan2022volo,
  title={Volo: Vision outlooker for visual recognition},
  author={Yuan, Li and Hou, Qibin and Jiang, Zihang and Feng, Jiashi and Yan, Shuicheng},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2022},
  publisher={IEEE}
}
```
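As a further usage note (not part of the original card), the pooled embeddings above can be compared directly. Below is a minimal sketch that scores two images by cosine similarity of their feature vectors; the image URLs are placeholder assumptions.

```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
import torch.nn.functional as F

# Feature-extraction variant of the model (classifier head removed).
model = timm.create_model('volo_d3_224.sail_in1k', pretrained=True, num_classes=0)
model = model.eval()

data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

def embed(url):
    # Fetch an image and return its pooled (batch_size, num_features) embedding.
    img = Image.open(urlopen(url))
    with torch.no_grad():
        return model(transforms(img).unsqueeze(0))

emb_a = embed('https://example.com/image_a.png')  # placeholder URL
emb_b = embed('https://example.com/image_b.png')  # placeholder URL

# Cosine similarity in [-1, 1]; higher means more similar visual features.
print(F.cosine_similarity(emb_a, emb_b).item())
```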
TheBloke/MythoMix-L2-13B-GGUF
TheBloke
2023-09-27T12:47:17Z
579
3
transformers
[ "transformers", "gguf", "llama", "en", "base_model:Gryphe/MythoMix-L2-13b", "license:other", "text-generation-inference", "region:us" ]
null
2023-09-05T03:24:12Z
---
language:
- en
license: other
model_name: MythoMix L2 13B
base_model: Gryphe/MythoMix-L2-13b
inference: false
model_creator: Gryphe
model_type: llama
prompt_template: '```
{system_message}

### Instruction:
{prompt}
(For roleplay purposes, I suggest the following - Write <CHAR NAME>''s next reply in a chat between <YOUR NAME> and <CHAR NAME>. Write a single reply only.)

### Response:
```
'
quantized_by: TheBloke
---

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# MythoMix L2 13B - GGUF
- Model creator: [Gryphe](https://huggingface.co/Gryphe)
- Original model: [MythoMix L2 13B](https://huggingface.co/Gryphe/MythoMix-L2-13b)

<!-- description start -->
## Description

This repo contains GGUF format model files for [Gryphe's MythoMix L2 13B](https://huggingface.co/Gryphe/MythoMix-L2-13b).

<!-- description end -->

<!-- README_GGUF.md-about-gguf start -->
### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata, and is designed to be extensible.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.

<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/MythoMix-L2-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/MythoMix-L2-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/MythoMix-L2-13B-GGUF)
* [Gryphe's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Gryphe/MythoMix-L2-13b)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Custom

```
{system_message}

### Instruction:
{prompt}
(For roleplay purposes, I suggest the following - Write <CHAR NAME>'s next reply in a chat between <YOUR NAME> and <CHAR NAME>. Write a single reply only.)

### Response:
```

<!-- prompt-template end -->

<!-- licensing start -->
## Licensing

The creator of the source model has listed its license as `other`, and this quantization has therefore used that same license.

As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses.

I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.

In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Gryphe's MythoMix L2 13B](https://huggingface.co/Gryphe/MythoMix-L2-13b).
<!-- licensing end -->

<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)

They are also compatible with many third party UIs and libraries - please see the list at the top of this README.

## Explanation of quantisation methods
<details>
  <summary>Click to see details</summary>

The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits.
This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [mythomix-l2-13b.Q2_K.gguf](https://huggingface.co/TheBloke/MythoMix-L2-13B-GGUF/blob/main/mythomix-l2-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [mythomix-l2-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/MythoMix-L2-13B-GGUF/blob/main/mythomix-l2-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [mythomix-l2-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/MythoMix-L2-13B-GGUF/blob/main/mythomix-l2-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [mythomix-l2-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/MythoMix-L2-13B-GGUF/blob/main/mythomix-l2-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [mythomix-l2-13b.Q4_0.gguf](https://huggingface.co/TheBloke/MythoMix-L2-13B-GGUF/blob/main/mythomix-l2-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [mythomix-l2-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/MythoMix-L2-13B-GGUF/blob/main/mythomix-l2-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [mythomix-l2-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/MythoMix-L2-13B-GGUF/blob/main/mythomix-l2-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [mythomix-l2-13b.Q5_0.gguf](https://huggingface.co/TheBloke/MythoMix-L2-13B-GGUF/blob/main/mythomix-l2-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [mythomix-l2-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/MythoMix-L2-13B-GGUF/blob/main/mythomix-l2-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [mythomix-l2-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/MythoMix-L2-13B-GGUF/blob/main/mythomix-l2-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [mythomix-l2-13b.Q6_K.gguf](https://huggingface.co/TheBloke/MythoMix-L2-13B-GGUF/blob/main/mythomix-l2-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [mythomix-l2-13b.Q8_0.gguf](https://huggingface.co/TheBloke/MythoMix-L2-13B-GGUF/blob/main/mythomix-l2-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

- LM Studio
- LoLLMS Web UI
- Faraday.dev

### In `text-generation-webui`

Under Download Model, you can enter the model repo: TheBloke/MythoMix-L2-13B-GGUF and below it, a specific filename to download, such as: mythomix-l2-13b.Q4_K_M.gguf. Then click Download.

### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install 'huggingface-hub>=0.17.1'
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download TheBloke/MythoMix-L2-13B-GGUF mythomix-l2-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download TheBloke/MythoMix-L2-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/MythoMix-L2-13B-GGUF mythomix-l2-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->

<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 32 -m mythomix-l2-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{system_message}\n\n### Instruction:\n{prompt}\n\n### Response:"
```

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers

#### First install the package

```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```

#### Simple example code to load one of these GGUF models

```python
from ctransformers import AutoModelForCausalLM

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/MythoMix-L2-13B-GGUF", model_file="mythomix-l2-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)

print(llm("AI is going to"))
```

## How to use with LangChain

Here are guides on using llama-cpp-python or ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J.
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

<!-- original-model-card start -->
# Original model card: Gryphe's MythoMix L2 13B

**UPDATE:** There's an improved version now! [Check out MythoMax!](https://huggingface.co/Gryphe/MythoMax-L2-13b)

A requested variant of [MythoLogic-L2](https://huggingface.co/Gryphe/MythoLogic-L2-13b) and [Huginn](https://huggingface.co/The-Face-Of-Goonery/Huginn-13b-FP16) using a highly experimental tensor type merge technique. This model is proficient at both roleplaying and storywriting due to its unique nature.

Quantized models are available from TheBloke: [GGML](https://huggingface.co/TheBloke/MythoMix-L2-13B-GGML) - [GPTQ](https://huggingface.co/TheBloke/MythoMix-L2-13B-GPTQ) (You're the best!)

## Model details

The idea behind this merge is that each layer is composed of several tensors, which are in turn responsible for specific functions. Using MythoLogic-L2's robust understanding as its input and Huginn's extensive writing capability as its output seems to have resulted in a model that excels at both, confirming my theory. (More details to be released at a later time)

This type of merge cannot be illustrated, as each of its 360 tensors has a unique ratio applied to it. As with my prior merges, gradients were part of these ratios to further finetune its behaviour.

## Prompt Format

This model primarily uses Alpaca formatting, so for optimal model performance, use:
```
<System prompt/Character Card>

### Instruction:
Your instruction or question here.
For roleplay purposes, I suggest the following - Write <CHAR NAME>'s next reply in a chat between <YOUR NAME> and <CHAR NAME>. Write a single reply only.

### Response:
```

<!-- original-model-card end -->
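To connect the Alpaca-style format above to actual inference, here is a minimal llama-cpp-python sketch. It is not part of the original card; the Q4_K_M file name comes from the provided-files table above, and the instruction text and stop string are illustrative assumptions.

```python
from llama_cpp import Llama

# Download mythomix-l2-13b.Q4_K_M.gguf first (see "How to download GGUF files" above).
llm = Llama(model_path="./mythomix-l2-13b.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=32)

# Alpaca-style prompt, following the format described in this card.
prompt = (
    "You are a helpful writing assistant.\n\n"
    "### Instruction:\n"
    "Write a short scene set in a rainy harbor town.\n\n"
    "### Response:\n"
)

output = llm(prompt, max_tokens=256, stop=["### Instruction:"], echo=False)
print(output["choices"][0]["text"])
```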
Yntec/Abased
Yntec
2024-04-11T10:48:06Z
579
4
diffusers
[ "diffusers", "safetensors", "anime", "style", "BDZ888", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-11-12T23:12:30Z
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- anime
- style
- BDZ888
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---

# Abased

This is the Abased model with the MoistMixV2 VAE baked in.

Original page: https://civitai.com/models/118513?modelVersionId=128569

Sample and prompt by digiplay:

![096903c8-6adc-4334-8b12-2a1007fd3c40.jpeg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/5O2cDFRfucZXOWXqAbsMd.jpeg)

Rabbit and cat,white and pink dress, in cake store.

Sample and prompt:

![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/0tIrARlLqIWQZI81C5cUN.png)

Pretty CUTE LITTLE Girl, DETAILED CHIBI EYES, gorgeous detailed hair, iconic, 1960, idyllic, detailed, INCREDIBLE colors, art station, MARIA 4k, beautiful background, sharp focus. Illustration By KlaysMoji and artgerm and Clay Mann and and leyendecker and kyoani. beautiful girl, Cartoon, sitting on a box of perfect, holding
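Diffusers usage (a minimal sketch, not from the original page; it assumes the repository loads with the standard `StableDiffusionPipeline`, as the repo tags indicate):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the checkpoint; fp16 halves GPU memory use (drop torch_dtype on CPU).
pipe = StableDiffusionPipeline.from_pretrained("Yntec/Abased", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# One of the sample prompts from this card.
prompt = "Rabbit and cat, white and pink dress, in cake store."
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("abased_sample.png")
```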
TheBloke/Mixtral_11Bx2_MoE_19B-GGUF
TheBloke
2024-01-05T14:47:43Z
579
18
transformers
[ "transformers", "gguf", "mixtral", "base_model:cloudyu/Mixtral_11Bx2_MoE_19B", "license:cc-by-nc-4.0", "text-generation-inference", "region:us" ]
null
2024-01-05T11:57:05Z
--- base_model: cloudyu/Mixtral_11Bx2_MoE_19B inference: false license: cc-by-nc-4.0 model_creator: hai model_name: Mixtral 11Bx2 MoE 19B model_type: mixtral prompt_template: '{prompt} ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Mixtral 11Bx2 MoE 19B - GGUF - Model creator: [hai](https://huggingface.co/cloudyu) - Original model: [Mixtral 11Bx2 MoE 19B](https://huggingface.co/cloudyu/Mixtral_11Bx2_MoE_19B) <!-- description start --> ## Description This repo contains GGUF format model files for [hai's Mixtral 11Bx2 MoE 19B](https://huggingface.co/cloudyu/Mixtral_11Bx2_MoE_19B). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. 
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.

<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Mixtral_11Bx2_MoE_19B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Mixtral_11Bx2_MoE_19B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Mixtral_11Bx2_MoE_19B-GGUF)
* [hai's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/cloudyu/Mixtral_11Bx2_MoE_19B)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: None

```
{prompt}
```

<!-- prompt-template end -->

<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)

They are also compatible with many third party UIs and libraries - please see the list at the top of this README.

## Explanation of quantisation methods

<details>
  <summary>Click to see details</summary>

The new methods available are:

* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw

Refer to the Provided Files table below to see what files use which methods, and how.
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [mixtral_11bx2_moe_19b.Q2_K.gguf](https://huggingface.co/TheBloke/Mixtral_11Bx2_MoE_19B-GGUF/blob/main/mixtral_11bx2_moe_19b.Q2_K.gguf) | Q2_K | 2 | 6.47 GB| 8.97 GB | smallest, significant quality loss - not recommended for most purposes | | [mixtral_11bx2_moe_19b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Mixtral_11Bx2_MoE_19B-GGUF/blob/main/mixtral_11bx2_moe_19b.Q3_K_S.gguf) | Q3_K_S | 3 | 8.30 GB| 10.80 GB | very small, high quality loss | | [mixtral_11bx2_moe_19b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Mixtral_11Bx2_MoE_19B-GGUF/blob/main/mixtral_11bx2_moe_19b.Q3_K_M.gguf) | Q3_K_M | 3 | 8.43 GB| 10.93 GB | very small, high quality loss | | [mixtral_11bx2_moe_19b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Mixtral_11Bx2_MoE_19B-GGUF/blob/main/mixtral_11bx2_moe_19b.Q3_K_L.gguf) | Q3_K_L | 3 | 8.56 GB| 11.06 GB | small, substantial quality loss | | [mixtral_11bx2_moe_19b.Q4_0.gguf](https://huggingface.co/TheBloke/Mixtral_11Bx2_MoE_19B-GGUF/blob/main/mixtral_11bx2_moe_19b.Q4_0.gguf) | Q4_0 | 4 | 10.83 GB| 13.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [mixtral_11bx2_moe_19b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Mixtral_11Bx2_MoE_19B-GGUF/blob/main/mixtral_11bx2_moe_19b.Q4_K_S.gguf) | Q4_K_S | 4 | 10.83 GB| 13.33 GB | small, greater quality loss | | [mixtral_11bx2_moe_19b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Mixtral_11Bx2_MoE_19B-GGUF/blob/main/mixtral_11bx2_moe_19b.Q4_K_M.gguf) | Q4_K_M | 4 | 10.86 GB| 13.36 GB | medium, balanced quality - recommended | | [mixtral_11bx2_moe_19b.Q5_0.gguf](https://huggingface.co/TheBloke/Mixtral_11Bx2_MoE_19B-GGUF/blob/main/mixtral_11bx2_moe_19b.Q5_0.gguf) | Q5_0 | 5 | 13.21 GB| 15.71 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [mixtral_11bx2_moe_19b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Mixtral_11Bx2_MoE_19B-GGUF/blob/main/mixtral_11bx2_moe_19b.Q5_K_S.gguf) | Q5_K_S | 5 | 13.21 GB| 15.71 GB | large, low quality loss - recommended | | [mixtral_11bx2_moe_19b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Mixtral_11Bx2_MoE_19B-GGUF/blob/main/mixtral_11bx2_moe_19b.Q5_K_M.gguf) | Q5_K_M | 5 | 13.23 GB| 15.73 GB | large, very low quality loss - recommended | | [mixtral_11bx2_moe_19b.Q6_K.gguf](https://huggingface.co/TheBloke/Mixtral_11Bx2_MoE_19B-GGUF/blob/main/mixtral_11bx2_moe_19b.Q6_K.gguf) | Q6_K | 6 | 15.74 GB| 18.24 GB | very large, extremely low quality loss | | [mixtral_11bx2_moe_19b.Q8_0.gguf](https://huggingface.co/TheBloke/Mixtral_11Bx2_MoE_19B-GGUF/blob/main/mixtral_11bx2_moe_19b.Q8_0.gguf) | Q8_0 | 8 | 20.39 GB| 22.89 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

* LM Studio
* LoLLMS Web UI
* Faraday.dev

### In `text-generation-webui`

Under Download Model, you can enter the model repo: TheBloke/Mixtral_11Bx2_MoE_19B-GGUF and below it, a specific filename to download, such as: mixtral_11bx2_moe_19b.Q4_K_M.gguf. Then click Download.

### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download TheBloke/Mixtral_11Bx2_MoE_19B-GGUF mixtral_11bx2_moe_19b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage (click to read)</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download TheBloke/Mixtral_11Bx2_MoE_19B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Mixtral_11Bx2_MoE_19B-GGUF mixtral_11bx2_moe_19b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->

<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 35 -m mixtral_11bx2_moe_19b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```

Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.

### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).

#### First install the package

Run one of the following commands, according to your system:

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python

# On Windows, to set the variable CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```

#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
  model_path="./mixtral_11bx2_moe_19b.Q4_K_M.gguf",  # Download the model file first
  n_ctx=4096,       # The max sequence length to use - note that longer sequence lengths require much more resources
  n_threads=8,      # The number of CPU threads to use, tailor to your system and the resulting performance
  n_gpu_layers=35   # The number of layers to offload to GPU, if you have GPU acceleration available
)

# Simple inference example
output = llm(
  "{prompt}",       # Prompt
  max_tokens=512,   # Generate up to 512 tokens
  stop=["</s>"],    # Example stop token - not necessarily correct for this specific model! Please check before using.
  echo=True         # Whether to echo the prompt
)

# Chat Completion API

llm = Llama(model_path="./mixtral_11bx2_moe_19b.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. 
<!-- footer end -->

<!-- original-model-card start -->
# Original model card: hai's Mixtral 11Bx2 MoE 19B

# Mixtral MOE 2x10.7B

MoE of the following models:

* [kyujinpy/Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct)
* [jeonsworld/CarbonVillain-en-10.7B-v1](https://huggingface.co/jeonsworld/CarbonVillain-en-10.7B-v1)

* Local Test
* hf (pretrained=cloudyu/Mixtral_11Bx2_MoE_19B), gen_kwargs: (None), limit: None, num_fewshot: 10, batch_size: auto (32)

|  Tasks  |Version|Filter|n-shot| Metric |Value |   |Stderr|
|---------|-------|------|-----:|--------|-----:|---|-----:|
|hellaswag|Yaml   |none  |    10|acc     |0.7142|±  |0.0045|
|         |       |none  |    10|acc_norm|0.8819|±  |0.0032|

GPU code example

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import math

## v2 models
model_path = "cloudyu/Mixtral_11Bx2_MoE_19B"

tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float32, device_map='auto', local_files_only=False, load_in_4bit=True
)
print(model)

prompt = input("please input prompt:")
while len(prompt) > 0:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")

    generation_output = model.generate(
        input_ids=input_ids, max_new_tokens=500, repetition_penalty=1.2
    )
    print(tokenizer.decode(generation_output[0]))
    prompt = input("please input prompt:")
```

CPU code example

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import math

## v2 models
model_path = "cloudyu/Mixtral_11Bx2_MoE_19B"

tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float32, device_map='cpu', local_files_only=False
)
print(model)

prompt = input("please input prompt:")
while len(prompt) > 0:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids

    generation_output = model.generate(
        input_ids=input_ids, max_new_tokens=500, repetition_penalty=1.2
    )
    print(tokenizer.decode(generation_output[0]))
    prompt = input("please input prompt:")
```

<!-- original-model-card end -->
ChrisWilson010101/5HEpyGbR61MoqsRR7ubdsNKm7A25xc9pPDBgnxaPHMJV6Wi2_vgg
ChrisWilson010101
2024-03-03T23:24:53Z
579
0
keras
[ "keras", "region:us" ]
null
2024-03-01T16:18:47Z
Entry not found
sambanovasystems/SambaLingo-Thai-Base-70B
sambanovasystems
2024-04-16T22:43:03Z
579
3
transformers
[ "transformers", "safetensors", "llama", "text-generation", "th", "en", "dataset:uonlp/CulturaX", "arxiv:2404.05829", "arxiv:2311.05741", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-15T20:07:50Z
---
license: llama2
datasets:
- uonlp/CulturaX
language:
- th
- en
metrics:
- chrf
- accuracy
- bleu
---

# SambaLingo-Thai-Base-70B

<img src="SambaLingo_Logo.png" width="340" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

<!-- Provide a quick summary of what the model is/does. -->
SambaLingo-Thai-Base-70B is a pretrained bilingual Thai and English model that adapts [Llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b-hf) to Thai by training on 26 billion tokens from the Thai split of the [Cultura-X](https://huggingface.co/datasets/uonlp/CulturaX) dataset. This model achieves state-of-the-art evaluation results in perplexity and FLORES-200 translation. For the chat version of this model, please see [sambanovasystems/SambaLingo-Thai-Chat-70B](https://huggingface.co/sambanovasystems/SambaLingo-Thai-Chat-70B), or try it out at [SambaLingo-chat-space](https://huggingface.co/spaces/sambanovasystems/SambaLingo-chat-space).

## Model Description
<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [SambaNova Systems](https://sambanova.ai/)
- **Model type:** Language Model
- **Language(s):** Thai, English
- **Finetuned from model:** [Llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b-hf)
- **Paper:** [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
- **Blog Post**: [sambalingo-open-source-language-experts](https://sambanova.ai/blog/sambalingo-open-source-language-experts)

## Getting Started

### Loading Model With Hugging Face
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/SambaLingo-Thai-Base-70B")
model = AutoModelForCausalLM.from_pretrained("sambanovasystems/SambaLingo-Thai-Base-70B", device_map="auto", torch_dtype="auto")
```

### Suggested Inference Parameters
We suggest setting `do_sample=False` as this is a pretrained checkpoint.

### Prompting Guidelines
This model is a pretrained checkpoint, so to use it effectively please use few-shot prompting with exemplars (see the generation sketch at the end of this card). The only other prompt templating required is the standard \<s\> (BOS) token from the Llama tokenizer. If you want to interact with this model with direct questions or queries, please use the chat version of the model that has been aligned with human preferences: [sambanovasystems/SambaLingo-Thai-Chat-70B](https://huggingface.co/sambanovasystems/SambaLingo-Thai-Chat-70B).

## Training Details
All pre-training is done on the [Cultura-X](https://huggingface.co/datasets/uonlp/CulturaX) dataset. We mix the data so that 75% comes from the language we are adapting to and 25% is English, as suggested by [Csaki et al.](https://arxiv.org/abs/2311.05741) We pack the data into sequences of length 4096, and ensure that when learning a token we only attend to previous tokens in the context of the corresponding text document. We train with a global batch size of 1024, a sequence length of 4096, a maximum learning rate of 1e-4 with cosine decay, a warmup ratio of 0.01 and a weight decay of 0.1.

## Tokenizer Details
We extended the vocabulary of the base Llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.

## Evaluation
For evaluation results see our paper: [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)

## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model.
-->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
Use of this model is governed by Meta's [Llama 2 Community License Agreement](https://ai.meta.com/llama/license/). Please review and accept the license before downloading the model weights.

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
SambaLingo should NOT be used for:

- Mission-critical applications
- Applications that involve the safety of others
- Making highly important decisions

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

Like all LLMs, SambaLingo has certain limitations:
- Hallucination: The model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.
- Code switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
- Repetition: The model may produce repetitive phrases or sentences, leading to less engaging and informative responses.
- Coding and math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.

## Acknowledgments
We extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.

We would like to give a special thanks to the following groups:
- Meta for open-sourcing Llama 2 and the FLORES-200 dataset
- Nguyen et al. for open-sourcing the CulturaX dataset
- CohereAI for releasing AYA-101 and open-sourcing a multilingual instruction tuning dataset
- EleutherAI for their open-source evaluation framework
- Hugging Face H4 team for open-sourcing the Zephyr training recipe and alignment handbook repo

## Cite SambaLingo
```
@misc{csaki2024sambalingo,
      title={SambaLingo: Teaching Large Language Models New Languages},
      author={Zoltan Csaki and Bo Li and Jonathan Li and Qiantong Xu and Pian Pawakapan and Leon Zhang and Yun Du and Hengyu Zhao and Changran Hu and Urmish Thakker},
      year={2024},
      eprint={2404.05829},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
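As referenced in the prompting guidelines above, here is a minimal few-shot generation sketch. It is not part of the original card; the exemplar pairs and query are illustrative assumptions, while `do_sample=False` follows the suggested inference parameters.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/SambaLingo-Thai-Base-70B")
model = AutoModelForCausalLM.from_pretrained(
    "sambanovasystems/SambaLingo-Thai-Base-70B", device_map="auto", torch_dtype="auto"
)

# Few-shot prompt with exemplars; the Llama tokenizer prepends the <s> (BOS) token automatically.
prompt = (
    "English: Hello\nThai: สวัสดี\n\n"
    "English: Thank you\nThai: ขอบคุณ\n\n"
    "English: Good morning\nThai:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Greedy decoding (do_sample=False) as suggested for this pretrained checkpoint.
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```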
winglian/Llama-3-8b-64k-PoSE
winglian
2024-04-26T12:38:04Z
579
71
transformers
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-3", "axolotl", "en", "arxiv:2309.10400", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-24T17:48:54Z
---
language:
- en
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- axolotl
---

## Llama 3 8B 64K

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

<img src="https://huggingface.co/winglian/Llama-3-8b-64k-PoSE/resolve/main/output.png" />

This model uses [PoSE](https://huggingface.co/papers/2309.10400) to extend Llama's context length from 8k to 64k @ rope_theta: 500000.0. We used PoSE with continued pretraining on 300M tokens from the RedPajama V1 dataset, using a subset with text between 6k-8k tokens, and trained a rank-stabilized LoRA of rank 256. We have further set rope_theta to 2M after continued pre-training to potentially extend the context past 64k.

[WandB](https://wandb.ai/oaaic/llama-3-64k/runs/tkcyjt37)

## Model Details

Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.

**Model developers** Meta

**Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants.

**Input** Models input text only.

**Output** Models generate text and code only.

**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.

<table>
  <tr>
   <td>
   </td>
   <td><strong>Training Data</strong>
   </td>
   <td><strong>Params</strong>
   </td>
   <td><strong>Context length</strong>
   </td>
   <td><strong>GQA</strong>
   </td>
   <td><strong>Token count</strong>
   </td>
   <td><strong>Knowledge cutoff</strong>
   </td>
  </tr>
  <tr>
   <td rowspan="2" >Llama 3
   </td>
   <td rowspan="2" >A new mix of publicly available online data.
   </td>
   <td>8B
   </td>
   <td>8k
   </td>
   <td>Yes
   </td>
   <td rowspan="2" >15T+
   </td>
   <td>March, 2023
   </td>
  </tr>
  <tr>
   <td>70B
   </td>
   <td>8k
   </td>
   <td>Yes
   </td>
   <td>December, 2023
   </td>
  </tr>
</table>

**Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.

**Model Release Date** April 18, 2024.

**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.

**License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)

**Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).

## Intended Use

**Intended Use Cases** Llama 3 is intended for commercial and research use in English.
Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**. **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. ## How to use This repository contains two versions of Meta-Llama-3-8B, for use with transformers and with the original `llama3` codebase. ### Use with transformers See the snippet below for usage with Transformers: ```python >>> import transformers >>> import torch >>> model_id = "meta-llama/Meta-Llama-3-8B" >>> pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto" ) >>> pipeline("Hey how are you doing today?") ``` ### Use with `llama3` Please follow the instructions in the [repository](https://github.com/meta-llama/llama3). To download the original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Meta-Llama-3-8B --include "original/*" --local-dir Meta-Llama-3-8B ``` For Hugging Face support, we recommend using transformers or TGI, but a similar command works. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program. <table> <tr> <td> </td> <td><strong>Time (GPU hours)</strong> </td> <td><strong>Power Consumption (W)</strong> </td> <td><strong>Carbon Emitted (tCO2eq)</strong> </td> </tr> <tr> <td>Llama 3 8B </td> <td>1.3M </td> <td>700 </td> <td>390 </td> </tr> <tr> <td>Llama 3 70B </td> <td>6.4M </td> <td>700 </td> <td>1900 </td> </tr> <tr> <td>Total </td> <td>7.7M </td> <td> </td> <td>2290 </td> </tr> </table> **CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively. ## Benchmarks In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md). 
### Base pretrained models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama2 7B</strong> </td> <td><strong>Llama2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama2 70B</strong> </td> </tr> <tr> <td rowspan="6" >General </td> <td>MMLU (5-shot) </td> <td>66.6 </td> <td>45.7 </td> <td>53.8 </td> <td>79.5 </td> <td>69.7 </td> </tr> <tr> <td>AGIEval English (3-5 shot) </td> <td>45.9 </td> <td>28.8 </td> <td>38.7 </td> <td>63.0 </td> <td>54.8 </td> </tr> <tr> <td>CommonSenseQA (7-shot) </td> <td>72.6 </td> <td>57.6 </td> <td>67.6 </td> <td>83.8 </td> <td>78.7 </td> </tr> <tr> <td>Winogrande (5-shot) </td> <td>76.1 </td> <td>73.3 </td> <td>75.4 </td> <td>83.1 </td> <td>81.8 </td> </tr> <tr> <td>BIG-Bench Hard (3-shot, CoT) </td> <td>61.1 </td> <td>38.1 </td> <td>47.0 </td> <td>81.3 </td> <td>65.7 </td> </tr> <tr> <td>ARC-Challenge (25-shot) </td> <td>78.6 </td> <td>53.7 </td> <td>67.6 </td> <td>93.0 </td> <td>85.3 </td> </tr> <tr> <td>Knowledge reasoning </td> <td>TriviaQA-Wiki (5-shot) </td> <td>78.5 </td> <td>72.1 </td> <td>79.6 </td> <td>89.7 </td> <td>87.5 </td> </tr> <tr> <td rowspan="4" >Reading comprehension </td> <td>SQuAD (1-shot) </td> <td>76.4 </td> <td>72.2 </td> <td>72.1 </td> <td>85.6 </td> <td>82.6 </td> </tr> <tr> <td>QuAC (1-shot, F1) </td> <td>44.4 </td> <td>39.6 </td> <td>44.9 </td> <td>51.1 </td> <td>49.4 </td> </tr> <tr> <td>BoolQ (0-shot) </td> <td>75.7 </td> <td>65.5 </td> <td>66.9 </td> <td>79.0 </td> <td>73.1 </td> </tr> <tr> <td>DROP (3-shot, F1) </td> <td>58.4 </td> <td>37.9 </td> <td>49.8 </td> <td>79.7 </td> <td>70.2 </td> </tr> </table> ### Instruction tuned models <table> <tr> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama 2 7B</strong> </td> <td><strong>Llama 2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama 2 70B</strong> </td> </tr> <tr> <td>MMLU (5-shot) </td> <td>68.4 </td> <td>34.1 </td> <td>47.8 </td> <td>82.0 </td> <td>52.9 </td> </tr> <tr> <td>GPQA (0-shot) </td> <td>34.2 </td> <td>21.7 </td> <td>22.3 </td> <td>39.5 </td> <td>21.0 </td> </tr> <tr> <td>HumanEval (0-shot) </td> <td>62.2 </td> <td>7.9 </td> <td>14.0 </td> <td>81.7 </td> <td>25.6 </td> </tr> <tr> <td>GSM-8K (8-shot, CoT) </td> <td>79.6 </td> <td>25.7 </td> <td>77.4 </td> <td>93.0 </td> <td>57.5 </td> </tr> <tr> <td>MATH (4-shot, CoT) </td> <td>30.0 </td> <td>3.8 </td> <td>6.7 </td> <td>50.4 </td> <td>11.6 </td> </tr> </table> ### Responsibility & Safety We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience. 
As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model- and system-level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs, and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started. #### Llama 3-Instruct As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case. <span style="text-decoration:underline;">Safety</span> For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigation techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable. <span style="text-decoration:underline;">Refusals</span> In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusal not only degrades the user experience but can even be harmful in certain contexts. We’ve heard the feedback from the developer community and improved our fine-tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals, making Llama 3 our most helpful model to date. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. **Misuse** If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/). #### Critical risks <span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high-yield Explosives) We have conducted a twofold assessment of the safety of the model in this area: * Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks. * Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model). 
### <span style="text-decoration:underline;">Cyber Security</span> We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 performed in the same range as, or safer than, models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval). ### <span style="text-decoration:underline;">Child Safety</span> Child Safety risk assessments were conducted using a team of experts to assess the model’s capability to produce outputs that could result in Child Safety risks and inform any necessary and appropriate risk mitigations via fine-tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market-specific nuances or experiences. ### Community Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI, and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. 
Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety. Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide) ## Citation instructions @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ## Contributors Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; 
Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
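As a practical footnote to the 64k context extension described at the top of this card, here is a minimal, hypothetical loading sketch using standard transformers config overrides. The specific values (64k window, rope_theta of 2M) come from the notes above, and you may need to adjust them to match what the published weights actually expect:

```python
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_id = "winglian/Llama-3-8b-64k-PoSE"

# Override the context-related settings described in the card above.
config = AutoConfig.from_pretrained(model_id)
config.max_position_embeddings = 65536   # the 64k window targeted by PoSE training
config.rope_theta = 2_000_000.0          # the card notes rope_theta was raised to 2M post-training

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    config=config,
    torch_dtype="auto",
    device_map="auto",
)
```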
mzwing/Phi-3-mini-4k-instruct-GGUF
mzwing
2024-04-30T11:33:23Z
579
0
null
[ "gguf", "region:us" ]
null
2024-04-30T11:21:48Z
Entry not found
mradermacher/Chaotic-Soliloquy-4x8B-GGUF
mradermacher
2024-05-05T14:55:53Z
579
8
transformers
[ "transformers", "gguf", "moe", "en", "base_model:xxx777xxxASD/Chaotic-Soliloquy-4x8B", "license:llama3", "endpoints_compatible", "region:us" ]
null
2024-05-01T15:27:25Z
--- base_model: xxx777xxxASD/Chaotic-Soliloquy-4x8B language: - en library_name: transformers license: llama3 quantized_by: mradermacher tags: - moe --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/xxx777xxxASD/Chaotic-Soliloquy-4x8B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Chaotic-Soliloquy-4x8B-GGUF/resolve/main/Chaotic-Soliloquy-4x8B.Q2_K.gguf) | Q2_K | 9.4 | | | [GGUF](https://huggingface.co/mradermacher/Chaotic-Soliloquy-4x8B-GGUF/resolve/main/Chaotic-Soliloquy-4x8B.IQ3_XS.gguf) | IQ3_XS | 10.5 | | | [GGUF](https://huggingface.co/mradermacher/Chaotic-Soliloquy-4x8B-GGUF/resolve/main/Chaotic-Soliloquy-4x8B.Q3_K_S.gguf) | Q3_K_S | 11.0 | | | [GGUF](https://huggingface.co/mradermacher/Chaotic-Soliloquy-4x8B-GGUF/resolve/main/Chaotic-Soliloquy-4x8B.IQ3_S.gguf) | IQ3_S | 11.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Chaotic-Soliloquy-4x8B-GGUF/resolve/main/Chaotic-Soliloquy-4x8B.IQ3_M.gguf) | IQ3_M | 11.2 | | | [GGUF](https://huggingface.co/mradermacher/Chaotic-Soliloquy-4x8B-GGUF/resolve/main/Chaotic-Soliloquy-4x8B.Q3_K_M.gguf) | Q3_K_M | 12.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Chaotic-Soliloquy-4x8B-GGUF/resolve/main/Chaotic-Soliloquy-4x8B.Q3_K_L.gguf) | Q3_K_L | 13.1 | | | [GGUF](https://huggingface.co/mradermacher/Chaotic-Soliloquy-4x8B-GGUF/resolve/main/Chaotic-Soliloquy-4x8B.IQ4_XS.gguf) | IQ4_XS | 13.7 | | | [GGUF](https://huggingface.co/mradermacher/Chaotic-Soliloquy-4x8B-GGUF/resolve/main/Chaotic-Soliloquy-4x8B.Q4_K_S.gguf) | Q4_K_S | 14.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Chaotic-Soliloquy-4x8B-GGUF/resolve/main/Chaotic-Soliloquy-4x8B.Q4_K_M.gguf) | Q4_K_M | 15.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Chaotic-Soliloquy-4x8B-GGUF/resolve/main/Chaotic-Soliloquy-4x8B.Q5_K_S.gguf) | Q5_K_S | 17.3 | | | [GGUF](https://huggingface.co/mradermacher/Chaotic-Soliloquy-4x8B-GGUF/resolve/main/Chaotic-Soliloquy-4x8B.Q5_K_M.gguf) | Q5_K_M | 17.8 | | | [GGUF](https://huggingface.co/mradermacher/Chaotic-Soliloquy-4x8B-GGUF/resolve/main/Chaotic-Soliloquy-4x8B.Q6_K.gguf) | Q6_K | 20.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Chaotic-Soliloquy-4x8B-GGUF/resolve/main/Chaotic-Soliloquy-4x8B.Q8_0.gguf) | Q8_0 | 26.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
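To grab a single quant from the table above, one option is the `huggingface_hub` CLI (the Q4_K_M file is just an example; any of the file names listed above works the same way):

```
pip install -U "huggingface_hub[cli]"
huggingface-cli download mradermacher/Chaotic-Soliloquy-4x8B-GGUF \
  Chaotic-Soliloquy-4x8B.Q4_K_M.gguf --local-dir .
```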
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
NotAiLOL/Athena-TinyLlama-v0.1
NotAiLOL
2024-05-20T08:11:16Z
579
0
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "dataset:NotAiLOL/Athena-v0.1", "base_model:unsloth/tinyllama-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-20T05:59:52Z
--- license: apache-2.0 language: - en tags: - text-generation-inference - transformers - unsloth - llama - trl - sft base_model: unsloth/tinyllama-bnb-4bit datasets: - NotAiLOL/Athena-v0.1 --- # Uploaded model - **Developed by:** NotAiLOL - **License:** apache-2.0 - **Finetuned from model:** unsloth/tinyllama-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) # Model details ## Metrics - Epoch number: 2 - Training steps: 2334 - Train runtime: 5960.197s - Training loss: 0.521200 ## Prompt format The model follows the Alpaca prompt format: ``` Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. ### Instruction: {} ### Input: {} ### Response: {} ```
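A minimal generation sketch using the template above (standard transformers usage; the instruction and input text are illustrative, and nothing here is specific to this checkpoint beyond the prompt format):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NotAiLOL/Athena-TinyLlama-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Fill the Alpaca template with an illustrative instruction and input.
prompt = (
    "Below is an instruction that describes a task, paired with an input that provides further context. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nSummarize the following sentence.\n\n"
    "### Input:\nThe quick brown fox jumps over the lazy dog.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```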
duyntnet/Azzurro-imatrix-GGUF
duyntnet
2024-05-20T14:32:47Z
579
0
transformers
[ "transformers", "gguf", "imatrix", "Azzurro", "text-generation", "en", "license:other", "region:us" ]
text-generation
2024-05-20T12:22:06Z
--- license: other language: - en pipeline_tag: text-generation inference: false tags: - transformers - gguf - imatrix - Azzurro --- Quantizations of https://huggingface.co/MoxoffSpA/Azzurro # From original readme ## Usage Be sure to install these dependencies before running the program ```python !pip install transformers torch sentencepiece ``` ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cpu" # if you want to use the gpu make sure to have cuda toolkit installed and change this to "cuda" model = AutoModelForCausalLM.from_pretrained("MoxoffSpA/Azzurro") tokenizer = AutoTokenizer.from_pretrained("MoxoffSpA/Azzurro") question = """Quanto è alta la torre di Pisa?""" context = """ La Torre di Pisa è un campanile del XII secolo, famoso per la sua inclinazione. Alta circa 56 metri. """ prompt = f"Domanda: {question}, contesto: {context}" messages = [ {"role": "user", "content": prompt} ] encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt") model_inputs = encodeds.to(device) model.to(device) generated_ids = model.generate( model_inputs, # The input to the model max_new_tokens=128, # Limiting the maximum number of new tokens generated do_sample=True, # Enabling sampling to introduce randomness in the generation temperature=0.1, # Setting temperature to control the randomness, lower values make it more deterministic top_p=0.95, # Using nucleus sampling with top-p filtering for more coherent generation eos_token_id=tokenizer.eos_token_id # Specifying the token that indicates the end of a sequence ) decoded_output = tokenizer.decode(generated_ids[0], skip_special_tokens=True) trimmed_output = decoded_output.strip() print(trimmed_output) ```
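Note that the snippet above loads the original `MoxoffSpA/Azzurro` weights; the files in this repository are GGUF quantizations. A minimal sketch for running one of them with `llama-cpp-python` (the file name below is illustrative; substitute one of the quants actually published in this repo):

```python
from llama_cpp import Llama

# Path to a downloaded quant from this repo (illustrative file name).
llm = Llama(model_path="Azzurro.Q4_K_M.gguf", n_ctx=2048)

prompt = "Domanda: Quanto è alta la torre di Pisa? Risposta:"
output = llm(prompt, max_tokens=64, temperature=0.1)
print(output["choices"][0]["text"].strip())
```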
RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf
RichardErkhov
2024-05-23T23:23:24Z
579
0
null
[ "gguf", "arxiv:2304.12244", "arxiv:2306.08568", "arxiv:2308.09583", "region:us" ]
null
2024-05-23T20:53:40Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) microsoft_WizardLM-2-7B - GGUF - Model creator: https://huggingface.co/lucyknada/ - Original model: https://huggingface.co/lucyknada/microsoft_WizardLM-2-7B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [microsoft_WizardLM-2-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.Q2_K.gguf) | Q2_K | 2.53GB | | [microsoft_WizardLM-2-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [microsoft_WizardLM-2-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.IQ3_S.gguf) | IQ3_S | 2.96GB | | [microsoft_WizardLM-2-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [microsoft_WizardLM-2-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.IQ3_M.gguf) | IQ3_M | 3.06GB | | [microsoft_WizardLM-2-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.Q3_K.gguf) | Q3_K | 3.28GB | | [microsoft_WizardLM-2-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [microsoft_WizardLM-2-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [microsoft_WizardLM-2-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [microsoft_WizardLM-2-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.Q4_0.gguf) | Q4_0 | 3.83GB | | [microsoft_WizardLM-2-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [microsoft_WizardLM-2-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [microsoft_WizardLM-2-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.Q4_K.gguf) | Q4_K | 4.07GB | | [microsoft_WizardLM-2-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [microsoft_WizardLM-2-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.Q4_1.gguf) | Q4_1 | 4.24GB | | [microsoft_WizardLM-2-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.Q5_0.gguf) | Q5_0 | 4.65GB | | [microsoft_WizardLM-2-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | 
[microsoft_WizardLM-2-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.Q5_K.gguf) | Q5_K | 4.78GB | | [microsoft_WizardLM-2-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [microsoft_WizardLM-2-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.Q5_1.gguf) | Q5_1 | 5.07GB | | [microsoft_WizardLM-2-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.Q6_K.gguf) | Q6_K | 5.53GB | | [microsoft_WizardLM-2-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.Q8_0.gguf) | Q8_0 | 7.17GB | Original model description: --- license: apache-2.0 --- <p style="font-size:20px;" align="center"> 🏠 <a href="https://wizardlm.github.io/WizardLM2" target="_blank">WizardLM-2 Release Blog</a> </p> <p align="center"> 🤗 <a href="https://huggingface.co/collections/microsoft/wizardlm-2-661d403f71e6c8257dbd598a" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/victorsungo/WizardLM/tree/main/WizardLM-2" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br> </p> <p align="center"> 👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a> </p> ## News 🔥🔥🔥 [2024/04/15] We introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, which have improved performance on complex chat, multilingual, reasoning and agent. New family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B. - WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works and consistently outperforms all the existing state-of-the-art opensource models. - WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size. - WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models. For more details of WizardLM-2 please read our [release blog post](https://wizardlm.github.io/WizardLM2) and upcoming paper. ## Model Details * **Model name**: WizardLM-2 7B * **Developed by**: WizardLM@Microsoft AI * **Base model**: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) * **Parameters**: 7B * **Language(s)**: Multilingual * **Blog**: [Introducing WizardLM-2](https://wizardlm.github.io/WizardLM2) * **Repository**: [https://github.com/nlpxucan/WizardLM](https://github.com/nlpxucan/WizardLM) * **Paper**: WizardLM-2 (Upcoming) * **License**: Apache2.0 ## Model Capacities **MT-Bench** We also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. The WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. Meanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales. 
<p align="center" width="100%"> <a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/mtbench.png" alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> **Human Preferences Evaluation** We carefully collected a complex and challenging evaluation set of real-world instructions covering key human needs such as writing, coding, math, reasoning, agent tasks, and multilingual use. We report the win:loss rate, excluding ties: - WizardLM-2 8x22B falls just slightly behind GPT-4-1106-preview and is significantly stronger than Command R Plus and GPT4-0314. - WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat. - WizardLM-2 7B is comparable with Qwen1.5-32B-Chat and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta. <p align="center" width="100%"> <a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/winall.png" alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> ## Method Overview We built a **fully AI-powered synthetic training system** to train WizardLM-2 models; please refer to our [blog](https://wizardlm.github.io/WizardLM2) for more details of this system. <p align="center" width="100%"> <a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/exp_1.png" alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> ## Usage ❗<b>Note for model system prompts usage:</b> <b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be formatted as follows: ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s> USER: Who are you? ASSISTANT: I am WizardLM.</s>...... ``` <b>Inference WizardLM-2 Demo Script</b> We provide a WizardLM-2 inference demo [code](https://github.com/nlpxucan/WizardLM/tree/main/demo) on our GitHub.
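The GGUF files in this repository are meant for llama.cpp-style runtimes; for the original weights, a minimal, hypothetical transformers sketch applying the Vicuna-style prompt above might look like this (single-turn; the user message is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The unquantized source checkpoint referenced at the top of this card.
model_id = "lucyknada/microsoft_WizardLM-2-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

system = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)
prompt = f"{system} USER: Who are you? ASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```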
RichardErkhov/nicholasKluge_-_TeenyTinyLlama-160m-gguf
RichardErkhov
2024-06-29T15:53:37Z
579
0
null
[ "gguf", "region:us" ]
null
2024-06-29T15:35:43Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) TeenyTinyLlama-160m - GGUF - Model creator: https://huggingface.co/nicholasKluge/ - Original model: https://huggingface.co/nicholasKluge/TeenyTinyLlama-160m/ | Name | Quant method | Size | | ---- | ---- | ---- | | [TeenyTinyLlama-160m.Q2_K.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-160m-gguf/blob/main/TeenyTinyLlama-160m.Q2_K.gguf) | Q2_K | 0.07GB | | [TeenyTinyLlama-160m.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-160m-gguf/blob/main/TeenyTinyLlama-160m.IQ3_XS.gguf) | IQ3_XS | 0.07GB | | [TeenyTinyLlama-160m.IQ3_S.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-160m-gguf/blob/main/TeenyTinyLlama-160m.IQ3_S.gguf) | IQ3_S | 0.07GB | | [TeenyTinyLlama-160m.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-160m-gguf/blob/main/TeenyTinyLlama-160m.Q3_K_S.gguf) | Q3_K_S | 0.07GB | | [TeenyTinyLlama-160m.IQ3_M.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-160m-gguf/blob/main/TeenyTinyLlama-160m.IQ3_M.gguf) | IQ3_M | 0.08GB | | [TeenyTinyLlama-160m.Q3_K.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-160m-gguf/blob/main/TeenyTinyLlama-160m.Q3_K.gguf) | Q3_K | 0.08GB | | [TeenyTinyLlama-160m.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-160m-gguf/blob/main/TeenyTinyLlama-160m.Q3_K_M.gguf) | Q3_K_M | 0.08GB | | [TeenyTinyLlama-160m.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-160m-gguf/blob/main/TeenyTinyLlama-160m.Q3_K_L.gguf) | Q3_K_L | 0.08GB | | [TeenyTinyLlama-160m.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-160m-gguf/blob/main/TeenyTinyLlama-160m.IQ4_XS.gguf) | IQ4_XS | 0.09GB | | [TeenyTinyLlama-160m.Q4_0.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-160m-gguf/blob/main/TeenyTinyLlama-160m.Q4_0.gguf) | Q4_0 | 0.09GB | | [TeenyTinyLlama-160m.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-160m-gguf/blob/main/TeenyTinyLlama-160m.IQ4_NL.gguf) | IQ4_NL | 0.09GB | | [TeenyTinyLlama-160m.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-160m-gguf/blob/main/TeenyTinyLlama-160m.Q4_K_S.gguf) | Q4_K_S | 0.09GB | | [TeenyTinyLlama-160m.Q4_K.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-160m-gguf/blob/main/TeenyTinyLlama-160m.Q4_K.gguf) | Q4_K | 0.1GB | | [TeenyTinyLlama-160m.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-160m-gguf/blob/main/TeenyTinyLlama-160m.Q4_K_M.gguf) | Q4_K_M | 0.1GB | | [TeenyTinyLlama-160m.Q4_1.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-160m-gguf/blob/main/TeenyTinyLlama-160m.Q4_1.gguf) | Q4_1 | 0.1GB | | [TeenyTinyLlama-160m.Q5_0.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-160m-gguf/blob/main/TeenyTinyLlama-160m.Q5_0.gguf) | Q5_0 | 0.11GB | | [TeenyTinyLlama-160m.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-160m-gguf/blob/main/TeenyTinyLlama-160m.Q5_K_S.gguf) | Q5_K_S | 0.11GB | | [TeenyTinyLlama-160m.Q5_K.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-160m-gguf/blob/main/TeenyTinyLlama-160m.Q5_K.gguf) | Q5_K | 0.11GB | | 
[TeenyTinyLlama-160m.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-160m-gguf/blob/main/TeenyTinyLlama-160m.Q5_K_M.gguf) | Q5_K_M | 0.11GB | | [TeenyTinyLlama-160m.Q5_1.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-160m-gguf/blob/main/TeenyTinyLlama-160m.Q5_1.gguf) | Q5_1 | 0.12GB | | [TeenyTinyLlama-160m.Q6_K.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-160m-gguf/blob/main/TeenyTinyLlama-160m.Q6_K.gguf) | Q6_K | 0.12GB | | [TeenyTinyLlama-160m.Q8_0.gguf](https://huggingface.co/RichardErkhov/nicholasKluge_-_TeenyTinyLlama-160m-gguf/blob/main/TeenyTinyLlama-160m.Q8_0.gguf) | Q8_0 | 0.16GB | Original model description: --- language: - pt license: apache-2.0 library_name: transformers tags: - text-generation-inference datasets: - nicholasKluge/Pt-Corpus-Instruct metrics: - perplexity pipeline_tag: text-generation widget: - text: 'A PUCRS é uma universidade ' example_title: Exemplo - text: A muitos anos atrás, em uma galáxia muito distante, vivia uma raça de example_title: Exemplo - text: Em meio a um escândalo, a frente parlamentar pediu ao Senador Silva para example_title: Exemplo inference: parameters: repetition_penalty: 1.2 temperature: 0.2 top_k: 20 top_p: 0.2 max_new_tokens: 150 co2_eq_emissions: emissions: 5600 source: CodeCarbon training_type: pre-training geographical_location: Germany hardware_used: NVIDIA A100-SXM4-40GB model-index: - name: TeenyTinyLlama-160m results: - task: type: text-generation name: Text Generation dataset: name: ENEM Challenge (No Images) type: eduagarcia/enem_challenge split: train args: num_few_shot: 3 metrics: - type: acc value: 19.24 name: accuracy source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-160m name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BLUEX (No Images) type: eduagarcia-temp/BLUEX_without_images split: train args: num_few_shot: 3 metrics: - type: acc value: 23.09 name: accuracy source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-160m name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: OAB Exams type: eduagarcia/oab_exams split: train args: num_few_shot: 3 metrics: - type: acc value: 22.37 name: accuracy source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-160m name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Assin2 RTE type: assin2 split: test args: num_few_shot: 15 metrics: - type: f1_macro value: 53.97 name: f1-macro source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-160m name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Assin2 STS type: eduagarcia/portuguese_benchmark split: test args: num_few_shot: 15 metrics: - type: pearson value: 0.24 name: pearson source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-160m name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: FaQuAD NLI type: ruanchaves/faquad-nli split: test args: num_few_shot: 15 metrics: - type: f1_macro value: 43.97 name: f1-macro source: url: 
https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-160m name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HateBR Binary type: ruanchaves/hatebr split: test args: num_few_shot: 25 metrics: - type: f1_macro value: 36.92 name: f1-macro source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-160m name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: PT Hate Speech Binary type: hate_speech_portuguese split: test args: num_few_shot: 25 metrics: - type: f1_macro value: 42.63 name: f1-macro source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-160m name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: tweetSentBR type: eduagarcia-temp/tweetsentbr split: test args: num_few_shot: 25 metrics: - type: f1_macro value: 11.39 name: f1-macro source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-160m name: Open Portuguese LLM Leaderboard --- # TeenyTinyLlama-160m <img src="./logo.png" alt="A little llama wearing a mushroom hat and a monocle." height="200"> ## Model Summary Large language models (LLMs) have significantly advanced natural language processing, but their progress has yet to be equal across languages. While most LLMs are trained in high-resource languages like English, multilingual models generally underperform monolingual ones. Additionally, aspects of their multilingual foundation sometimes restrict the byproducts they produce, like computational demands and licensing regimes. Hence, we developed the _TeenyTinyLlama_ pair: two compact models for Brazilian Portuguese text generation. Read our preprint on [Article](https://www.sciencedirect.com/science/article/pii/S2666827024000343). ## Details - **Architecture:** a Transformer-based model pre-trained via causal language modeling - **Size:** 162,417,408 parameters - **Context length:** 2048 tokens - **Dataset:** [Pt-Corpus Instruct](https://huggingface.co/datasets/nicholasKluge/Pt-Corpus-Instruct) (6.2B tokens) - **Language:** Portuguese - **Number of steps:** 458,000 - **GPU:** 1 NVIDIA A100-SXM4-40GB - **Training time**: ~ 36 hours - **Emissions:** 5.6 KgCO2 (Germany) - **Total energy consumption:** 15.5 kWh This repository has the [source code](https://github.com/Nkluge-correa/TeenyTinyLlama) used to train this model. The main libraries used are: - [Transformers](https://github.com/huggingface/transformers) - [PyTorch](https://github.com/pytorch/pytorch) - [Datasets](https://github.com/huggingface/datasets) - [Tokenizers](https://github.com/huggingface/tokenizers) - [Sentencepiece](https://github.com/google/sentencepiece) - [Accelerate](https://github.com/huggingface/accelerate) - [FlashAttention](https://github.com/Dao-AILab/flash-attention) - [Codecarbon](https://github.com/mlco2/codecarbon) ## Intended Uses The primary intended use of TeenyTinyLlama is to research the challenges related to developing language models for low-resource languages. Checkpoints saved during training are intended to provide a controlled setting for performing scientific experiments. You may also further fine-tune and adapt TeenyTinyLlama for deployment, as long as your use is following the Apache 2.0 license. 
If you decide to use pre-trained TeenyTinyLlama as a basis for your fine-tuned model, please conduct your own risk and bias assessment. ## Out-of-scope Use TeenyTinyLlama is not intended for deployment. It is not a product and should not be used for human-facing interactions. TeenyTinyLlama models are Brazilian Portuguese-only and are not suitable for translation or generating text in other languages. TeenyTinyLlama has not been fine-tuned for downstream contexts in which language models are commonly deployed. ## Basic usage Using the `pipeline`: ```python from transformers import pipeline generator = pipeline("text-generation", model="nicholasKluge/TeenyTinyLlama-160m") completions = generator("Astronomia é a ciência", num_return_sequences=2, max_new_tokens=100, do_sample=True) for comp in completions: print(f"🤖 {comp['generated_text']}") ``` Using the `AutoTokenizer` and `AutoModelForCausalLM`: ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch # Load model and the tokenizer tokenizer = AutoTokenizer.from_pretrained("nicholasKluge/TeenyTinyLlama-160m", revision='main') model = AutoModelForCausalLM.from_pretrained("nicholasKluge/TeenyTinyLlama-160m", revision='main') # Pass the model to your device device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model.eval() model.to(device) # Tokenize the inputs and pass them to the device inputs = tokenizer("Astronomia é a ciência", return_tensors="pt").to(device) # Generate some text (sampling is required when requesting multiple sequences) completions = model.generate(**inputs, num_return_sequences=2, max_new_tokens=100, do_sample=True) # Print the generated text for i, completion in enumerate(completions): print(f'🤖 {tokenizer.decode(completion)}') ``` ## Limitations Like almost all other language models trained on large text datasets scraped from the web, the TTL pair exhibits behaviors that make it unsuitable as an out-of-the-box solution for many real-world applications, especially those requiring factual, reliable, nontoxic text generation. Our models are all subject to the following: - **Hallucinations:** This model can produce content that can be mistaken for truth but is, in fact, misleading or entirely false, i.e., hallucination. - **Biases and Toxicity:** This model inherits the social and historical stereotypes from the data used to train it. Given these biases, the model can produce toxic content, i.e., harmful, offensive, or detrimental to individuals, groups, or communities. - **Unreliable Code:** The model may produce incorrect code snippets and statements. These code generations should not be treated as suggestions or accurate solutions. - **Language Limitations:** The model is primarily designed to understand standard Brazilian Portuguese. Other languages might challenge its comprehension, leading to potential misinterpretations or errors in response. - **Repetition and Verbosity:** The model may get stuck on repetition loops (especially if the repetition penalty during generation is set to a low value) or produce verbose responses unrelated to the prompt it was given. Hence, even though our models are released with a permissive license, we urge users to perform their own risk analysis on these models if they intend to use them for real-world applications. In applications where models will interact with an audience, outputs should be moderated by humans, and users should always be made aware that they are interacting with a language model. ## Evaluations During our training runs, both models showed consistent convergence. 
At no point did our evaluation curves show signs of overfitting or saturation. In the case of our 460m parameter model, we intentionally trained past the optimal point by approximately 75,000 steps to assess if there were any signs of saturation, but our evaluations consistently gave better results. We hypothesize that our models are under-trained but can improve if further trained to pass the Chinchilla optimal range. | Processed Tokens | Perplexity | Energy Consumption (kWh) | Emissions (KgCO2eq) | |------------------|------------|---------------------------|----------------------| | 8.1M | 20.49 | 9.40 | 3.34 | | 1.6B | 16.90 | 18.82 | 6.70 | | 2.4B | 15.43 | 28.59 | 10.16 | | 3.2B | 14.64 | 38.20 | 13.57 | | 4.0B | 14.08 | 48.04 | 17.07 | | 4.9B | 13.61 | 57.74 | 20.52 | | 5.7B | 13.25 | 67.32 | 23.92 | | 6.5B | 12.87 | 76.84 | 27.30 | | 7.3B | 12.57 | 86.40 | 30.70 | | 8.1B | 12.27 | 96.19 | 34.18 | | 9.0B | 11.96 | 106.06 | 37.70 | | 9.8B | 11.77 | 115.69 | 41.31 | ## Benchmarks Evaluations on benchmarks were performed using the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) (by [EleutherAI](https://www.eleuther.ai/)). [Laiviet](https://github.com/laiviet/lm-evaluation-harness) translated the tasks from the LM-Evaluation-Harness we used. The results of models marked with an "*" were extracted from the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). | | **ARC** | **HellaSwag** | **MMLU** | **TruthfulQA** | **Average** | |------------------|-----------|---------------|-----------|----------------|-------------| | Pythia-410m | 24.83* | 41.29* | 25.99* | 40.95* | 33.26 | | **TTL-460m** | 29.40 | 33.00 | 28.55 | 41.10 | 33.01 | | Bloom-560m | 24.74* | 37.15* | 24.22* | 42.44* | 32.13 | | Xglm-564M | 25.56 | 34.64* | 25.18* | 42.53 | 31.97 | | OPT-350m | 23.55* | 36.73* | 26.02* | 40.83* | 31.78 | | **TTL-160m** | 26.15 | 29.29 | 28.11 | 41.12 | 31.16 | | Pythia-160m | 24.06* | 31.39* | 24.86* | 44.34* | 31.16 | | OPT-125m | 22.87* | 31.47* | 26.02* | 42.87* | 30.80 | | GPorTuguese-2 | 22.48 | 29.62 | 27.36 | 41.44 | 30.22 | | Gpt2-small | 21.48* | 31.60* | 25.79* | 40.65* | 29.97 | | Multilingual GPT | 23.81 | 26.37* | 25.17* | 39.62 | 28.73 | Evaluations on Brazilian Portuguese benchmarks were performed using a [Portuguese implementation of the EleutherAI LM Evaluation Harness](https://github.com/eduagarcia/lm-evaluation-harness-pt) (created by [Eduardo Garcia](https://github.com/eduagarcia/lm-evaluation-harness-pt)). 
| | **ASSIN2 RTE** | **ASSIN2 STS** | **BLUEX** | **ENEM** | **FAQUAD NLI** | **HateBR** | **OAB Exams** | **Average** | |----------------|----------------|----------------|-----------|----------|----------------|------------|---------------|-------------| | Qwen-1.8B | 64.83 | 19.53 | 26.15 | 30.23 | 43.97 | 33.33 | 27.20 | 35.03 | | TinyLlama-1.1B | 58.93 | 13.57 | 22.81 | 22.25 | 43.97 | 36.92 | 23.64 | 31.72 | | **TTL-460m** | 53.93 | 12.66 | 22.81 | 19.87 | 49.01 | 33.59 | 27.06 | 31.27 | | XGLM-564m | 49.61 | 22.91 | 19.61 | 19.38 | 43.97 | 33.99 | 23.42 | 30.41 | | Bloom-1b7 | 53.60 | 4.81 | 21.42 | 18.96 | 43.97 | 34.89 | 23.05 | 28.67 | | **TTL-160m** | 53.36 | 2.58 | 21.84 | 18.75 | 43.97 | 36.88 | 22.60 | 28.56 | | OPT-125m | 39.77 | 2.00 | 21.84 | 17.42 | 43.97 | 47.04 | 22.78 | 27.83 | | Pythia-160 | 33.33 | 12.81 | 16.13 | 16.66 | 50.36 | 41.09 | 22.82 | 27.60 | | OLMo-1b | 34.12 | 9.28 | 18.92 | 20.29 | 43.97 | 41.33 | 22.96 | 27.26 | | Bloom-560m | 33.33 | 8.48 | 18.92 | 19.03 | 43.97 | 37.07 | 23.05 | 26.26 | | Pythia-410m | 33.33 | 4.80 | 19.47 | 19.45 | 43.97 | 33.33 | 23.01 | 25.33 | | OPT-350m | 33.33 | 3.65 | 20.72 | 17.35 | 44.71 | 33.33 | 23.01 | 25.15 | | GPT-2 small | 33.26 | 0.00 | 10.43 | 11.20 | 43.52 | 33.68 | 13.12 | 20.74 | | GPorTuguese | 33.33 | 3.85 | 14.74 | 3.01 | 28.81 | 33.33 | 21.23 | 19.75 | | Samba-1.1B | 33.33 | 1.30 | 8.07 | 10.22 | 17.72 | 35.79 | 15.03 | 17.35 | ## Fine-Tuning Comparisons To further evaluate the downstream capabilities of our models, we decided to employ a basic fine-tuning procedure for our TTL pair on a subset of tasks from the Poeta benchmark. We apply the same procedure for comparison purposes on both [BERTimbau](https://huggingface.co/neuralmind/bert-base-portuguese-cased) models, given that they are also LLM trained from scratch in Brazilian Portuguese and have a similar size range to our models. We used these comparisons to assess if our pre-training runs produced LLM capable of producing good results ("good" here means "close to BERTimbau") when utilized for downstream applications. | Models | IMDB | FaQuAD-NLI | HateBr | Assin2 | AgNews | Average | |-----------------|-----------|------------|-----------|-----------|-----------|---------| | BERTimbau-large | **93.58** | 92.26 | 91.57 | **88.97** | 94.11 | 92.10 | | BERTimbau-small | 92.22 | **93.07** | 91.28 | 87.45 | 94.19 | 91.64 | | **TTL-460m** | 91.64 | 91.18 | **92.28** | 86.43 | **94.42** | 91.19 | | **TTL-160m** | 91.14 | 90.00 | 90.71 | 85.78 | 94.05 | 90.34 | All the shown results are the higher accuracy scores achieved on the respective task test sets after fine-tuning the models on the training sets. All fine-tuning runs used the same hyperparameters, and the code implementation can be found in the [model cards](https://huggingface.co/nicholasKluge/TeenyTinyLlama-460m-HateBR) of our fine-tuned models. 
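To make that procedure concrete, here is a rough, hypothetical sketch of such a sequence-classification fine-tune; the hyperparameters and the `text`/label column names are placeholders, not the exact settings used in the runs above (check the linked model cards for those):

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "nicholasKluge/TeenyTinyLlama-160m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # Llama-style tokenizers often lack a pad token

model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id

# Placeholder dataset and column name; substitute one of the tasks above and its real schema.
dataset = load_dataset("ruanchaves/hatebr")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="ttl-160m-finetuned",
    per_device_train_batch_size=16,
    num_train_epochs=3,
    learning_rate=4e-5,
)
trainer = Trainer(model=model, args=args, train_dataset=tokenized["train"], tokenizer=tokenizer)
trainer.train()
```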
## Cite as 🤗 ```latex @misc{correa24ttllama, title = {TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese}, author = {Corr{\^e}a, Nicholas Kluge and Falk, Sophia and Fatimah, Shiza and Sen, Aniket and De Oliveira, Nythamar}, journal={arXiv preprint arXiv:2401.16640}, year={2024} } @article{correa24ttllama_mlwa, doi = {10.1016/j.mlwa.2024.100558}, url = {https://www.sciencedirect.com/science/article/pii/S2666827024000343}, title = {TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese}, author = {Corr{\^e}a, Nicholas Kluge and Falk, Sophia and Fatimah, Shiza and Sen, Aniket and De Oliveira, Nythamar}, journal={Machine Learning With Applications}, publisher = {Elsevier}, year={2024} } ``` ## Funding This repository was built as part of the RAIES ([Rede de Inteligência Artificial Ética e Segura](https://www.raies.org/)) initiative, a project supported by FAPERGS ([Fundação de Amparo à Pesquisa do Estado do Rio Grande do Sul](https://fapergs.rs.gov.br/inicial)), Brazil. ## License TeenyTinyLlama-160m is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details.
studio-ousia/luke-large-finetuned-conll-2003
studio-ousia
2022-11-11T06:57:23Z
578
2
transformers
[ "transformers", "pytorch", "luke", "arxiv:2010.01057", "arxiv:1906.08237", "arxiv:1903.07785", "arxiv:2002.01808", "arxiv:1910.09700", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
---
license: apache-2.0
---

# Model Card for luke-large-finetuned-conll-2003

# Model Details

## Model Description

LUKE (Language Understanding with Knowledge-based Embeddings) is a new pretrained contextualized representation of words and entities based on the transformer architecture.

- **Developed by:** Studio Ousia
- **Shared by [Optional]:** More information needed
- **Model type:** EntitySpanClassification
- **Language(s) (NLP):** More information needed
- **License:** Apache-2.0
- **Related Models:** [Luke-large](https://huggingface.co/studio-ousia/luke-large?text=Paris+is+the+%3Cmask%3E+of+France.)
- **Parent Model:** Luke
- **Resources for more information:**
  - [GitHub Repo](https://github.com/studio-ousia/luke)
  - [Associated Paper](https://arxiv.org/abs/2010.01057)

# Uses

## Direct Use

More information needed

## Downstream Use [Optional]

This model can also be used for the tasks of named entity recognition, cloze-style question answering, fine-grained entity typing, and extractive question answering.

## Out-of-Scope Use

The model should not be used to intentionally create hostile or alienating environments for people.

# Bias, Risks, and Limitations

Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.

## Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

# Training Details

## Training Data

More information needed

## Training Procedure

### Preprocessing

More information needed

### Speeds, Sizes, Times

More information needed

# Evaluation

## Testing Data, Factors & Metrics

### Testing Data

More information needed

### Factors

More information needed

### Metrics

LUKE achieves state-of-the-art results on five popular NLP benchmarks, including:

* **[SQuAD v1.1](https://rajpurkar.github.io/SQuAD-explorer/)** (extractive question answering),
* **[CoNLL-2003](https://www.clips.uantwerpen.be/conll2003/ner/)** (named entity recognition),
* **[ReCoRD](https://sheng-z.github.io/ReCoRD-explorer/)** (cloze-style question answering),
* **[TACRED](https://nlp.stanford.edu/projects/tacred/)** (relation classification), and
* **[Open Entity](https://www.cs.utexas.edu/~eunsol/html_pages/open_entity.html)** (entity typing).
## Results

The experimental results are provided as follows:

| Task | Dataset | Metric | LUKE-large | luke-base | Previous SOTA |
| ------------------------------ | ---------------------------------------------------------------------------- | ------ | ----------------- | --------- | ------------------------------------------------------------------------- |
| Extractive Question Answering | [SQuAD v1.1](https://rajpurkar.github.io/SQuAD-explorer/) | EM/F1 | **90.2**/**95.4** | 86.1/92.3 | 89.9/95.1 ([Yang et al., 2019](https://arxiv.org/abs/1906.08237)) |
| Named Entity Recognition | [CoNLL-2003](https://www.clips.uantwerpen.be/conll2003/ner/) | F1 | **94.3** | 93.3 | 93.5 ([Baevski et al., 2019](https://arxiv.org/abs/1903.07785)) |
| Cloze-style Question Answering | [ReCoRD](https://sheng-z.github.io/ReCoRD-explorer/) | EM/F1 | **90.6**/**91.2** | - | 83.1/83.7 ([Li et al., 2019](https://www.aclweb.org/anthology/D19-6011/)) |
| Relation Classification | [TACRED](https://nlp.stanford.edu/projects/tacred/) | F1 | **72.7** | - | 72.0 ([Wang et al., 2020](https://arxiv.org/abs/2002.01808)) |
| Fine-grained Entity Typing | [Open Entity](https://www.cs.utexas.edu/~eunsol/html_pages/open_entity.html) | F1 | **78.2** | - | 77.6 ([Wang et al., 2020](https://arxiv.org/abs/2002.01808)) |

Please check the [Github repository](https://github.com/studio-ousia/luke) for more details and updates.

# Model Examination

More information needed

# Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed

# Technical Specifications [optional]

## Model Architecture and Objective

More information needed

## Compute Infrastructure

More information needed

### Hardware

More information needed

### Software

* transformers_version: 4.6.0.dev0

# Citation

**BibTeX:**

```
@inproceedings{yamada2020luke,
  title={LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention},
  author={Ikuya Yamada and Akari Asai and Hiroyuki Shindo and Hideaki Takeda and Yuji Matsumoto},
  booktitle={EMNLP},
  year={2020}
}
```

# Glossary [optional]

More information needed

# More Information [optional]

More information needed

# Model Card Authors [optional]

Studio Ousia in collaboration with Ezi Ozoani and the Hugging Face team

# Model Card Contact

More information needed

# How to Get Started with the Model

Use the code below to get started with the model.

<details>
<summary> Click to expand </summary>

```python
from transformers import AutoTokenizer, LukeForEntitySpanClassification

tokenizer = AutoTokenizer.from_pretrained("studio-ousia/luke-large-finetuned-conll-2003")
model = LukeForEntitySpanClassification.from_pretrained("studio-ousia/luke-large-finetuned-conll-2003")
```
</details>
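A fuller inference example, following the entity-span classification pattern from the 🤗 Transformers documentation (the sentence and candidate spans below are illustrative): all possible word-aligned spans are enumerated and classified, and spans not assigned the null class are printed with their predicted entity type.

```python
from transformers import AutoTokenizer, LukeForEntitySpanClassification

tokenizer = AutoTokenizer.from_pretrained("studio-ousia/luke-large-finetuned-conll-2003")
model = LukeForEntitySpanClassification.from_pretrained("studio-ousia/luke-large-finetuned-conll-2003")

text = "Beyoncé lives in Los Angeles"
# Character-based start/end positions of the words in the text
word_start_positions = [0, 8, 14, 17, 21]
word_end_positions = [7, 13, 16, 20, 28]

# Enumerate every candidate entity span (all word-aligned substrings)
entity_spans = []
for i, start_pos in enumerate(word_start_positions):
    for end_pos in word_end_positions[i:]:
        entity_spans.append((start_pos, end_pos))

inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
outputs = model(**inputs)
predicted_class_indices = outputs.logits.argmax(-1).squeeze().tolist()

# Print spans that are classified as an entity (class 0 is the null class)
for span, predicted_class_idx in zip(entity_spans, predicted_class_indices):
    if predicted_class_idx != 0:
        print(text[span[0]:span[1]], model.config.id2label[predicted_class_idx])
# Expected: "Beyoncé PER" and "Los Angeles LOC"
```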
MMG/roberta-base-ner-english
MMG
2022-10-28T11:16:30Z
578
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-28T11:11:08Z
Entry not found
Aniemore/unispeech-sat-emotion-russian-resd
Aniemore
2023-02-13T17:59:39Z
578
0
transformers
[ "transformers", "pytorch", "unispeech-sat", "audio-classification", "endpoints_compatible", "region:us" ]
audio-classification
2023-02-13T17:58:03Z
Entry not found
Salesforce/codet5p-770m-py
Salesforce
2023-05-16T00:31:41Z
578
18
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "arxiv:2305.07922", "license:bsd-3-clause", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
2023-05-15T09:57:01Z
---
license: bsd-3-clause
---

# CodeT5+ 770M (further tuned on Python)

## Model description

[CodeT5+](https://github.com/salesforce/CodeT5/tree/main/CodeT5+) is a new family of open code large language models with an encoder-decoder architecture that can flexibly operate in different modes (i.e. _encoder-only_, _decoder-only_, and _encoder-decoder_) to support a wide range of code understanding and generation tasks.
It is introduced in the paper:

[CodeT5+: Open Code Large Language Models for Code Understanding and Generation](https://arxiv.org/pdf/2305.07922.pdf) by [Yue Wang](https://yuewang-cuhk.github.io/)\*, [Hung Le](https://sites.google.com/view/henryle2018/home?pli=1)\*, [Akhilesh Deepak Gotmare](https://akhileshgotmare.github.io/), [Nghi D.Q. Bui](https://bdqnghi.github.io/), [Junnan Li](https://sites.google.com/site/junnanlics), [Steven C.H. Hoi](https://sites.google.com/view/stevenhoi/home) (* indicates equal contribution).

Compared to the original CodeT5 family (base: `220M`, large: `770M`), CodeT5+ is pretrained with a diverse set of pretraining tasks including _span denoising_, _causal language modeling_, _contrastive learning_, and _text-code matching_ to learn rich representations from both unimodal code data and bimodal code-text data.
Additionally, it employs a simple yet effective _compute-efficient pretraining_ method to initialize the model components with frozen off-the-shelf LLMs such as [CodeGen](https://github.com/salesforce/CodeGen) to efficiently scale up the model (i.e. `2B`, `6B`, `16B`), and adopts a "shallow encoder and deep decoder" architecture.
Furthermore, it is instruction-tuned to align with natural language instructions (see our InstructCodeT5+ 16B) following [Code Alpaca](https://github.com/sahil280114/codealpaca).

## How to use

This model can be easily loaded using the `T5ForConditionalGeneration` functionality and employs the same tokenizer as the original [CodeT5](https://github.com/salesforce/CodeT5).

```python
from transformers import T5ForConditionalGeneration, AutoTokenizer

checkpoint = "Salesforce/codet5p-770m-py"
device = "cuda" # for GPU usage or "cpu" for CPU usage

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = T5ForConditionalGeneration.from_pretrained(checkpoint).to(device)

inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs, max_length=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# ==> print('Hello World!')
```

## Pretraining data

This checkpoint is trained on the stricter permissive subset of the deduplicated version of the [github-code dataset](https://huggingface.co/datasets/codeparrot/github-code).
The data is preprocessed by retaining only permissively licensed code ("mit", "apache-2", "bsd-3-clause", "bsd-2-clause", "cc0-1.0", "unlicense", "isc").
Supported languages (9 in total) are as follows: `c`, `c++`, `c-sharp`, `go`, `java`, `javascript`, `php`, `python`, `ruby`.

## Training procedure

This checkpoint is first trained on the multilingual unimodal code data during the first-stage pretraining, which includes a diverse set of pretraining tasks including _span denoising_ and two variants of _causal language modeling_.
After that, it is further trained on the Python subset with the causal language modeling objective for another epoch to better adapt it for Python code generation. Please refer to the paper for more details.
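Building on the "How to use" snippet above, the following sketch samples a longer completion. The decoding parameters (`do_sample`, `top_p`, `temperature`) are illustrative choices on our part, not settings from the paper:

```python
from transformers import T5ForConditionalGeneration, AutoTokenizer

checkpoint = "Salesforce/codet5p-770m-py"
device = "cuda"  # or "cpu"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = T5ForConditionalGeneration.from_pretrained(checkpoint).to(device)

# Sample a longer completion instead of the greedy 10-token example above.
inputs = tokenizer.encode("def fibonacci(n):", return_tensors="pt").to(device)
outputs = model.generate(inputs, max_length=128, do_sample=True,
                         top_p=0.95, temperature=0.2)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```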
## Evaluation results

CodeT5+ models have been comprehensively evaluated on a wide range of code understanding and generation tasks in various settings: _zero-shot_, _finetuning_, and _instruction-tuning_.
Specifically, CodeT5+ yields substantial performance gains on many downstream tasks compared to their SoTA baselines, e.g.,
8 text-to-code retrieval tasks (+3.2 avg. MRR), 2 line-level code completion tasks (+2.1 avg. Exact Match), and 2 retrieval-augmented code generation tasks (+5.8 avg. BLEU-4).
In 2 math programming tasks on MathQA-Python and GSM8K-Python, CodeT5+ models of below billion-parameter sizes significantly outperform many LLMs of up to 137B parameters.
Particularly, in the zero-shot text-to-code generation task on the HumanEval benchmark, InstructCodeT5+ 16B sets new SoTA results of 35.0% pass@1 and 54.5% pass@10 against other open code LLMs, even surpassing the closed-source OpenAI code-cushman-001 model.
Please refer to the [paper](https://arxiv.org/pdf/2305.07922.pdf) for more details.

Specifically for this checkpoint, it achieves 15.5% pass@1 on HumanEval in the zero-shot setting, which is comparable to much larger LLMs such as Incoder 6B's 15.2%, GPT-NeoX 20B's 15.4%, and PaLM 62B's 15.9%.

## BibTeX entry and citation info

```bibtex
@article{wang2023codet5plus,
  title={CodeT5+: Open Code Large Language Models for Code Understanding and Generation},
  author={Wang, Yue and Le, Hung and Gotmare, Akhilesh Deepak and Bui, Nghi D.Q. and Li, Junnan and Hoi, Steven C. H.},
  journal={arXiv preprint},
  year={2023}
}
```
lamm-mit/ProteinForceGPT
lamm-mit
2024-01-14T11:37:33Z
578
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-14T10:56:11Z
---
license: apache-2.0
---

# ProteinForceGPT: Generative strategies for modeling, design and analysis of protein mechanics

### Basic information

This protein language model is a 454M parameter autoregressive transformer model in GPT-style, trained to analyze and predict the mechanical properties of a large number of protein sequences. The model has both forward and inverse capabilities. For instance, using generate tasks, the model can design novel proteins that meet one or more mechanical constraints.

This protein language foundation model was based on the NeoGPT-X architecture and uses rotary positional embeddings (RoPE). It has 16 attention heads, 36 hidden layers and a hidden size of 1024, an intermediate size of 4096 and uses a GeLU activation function.

The pretraining task is defined as "Sequence<...>" where ... is an amino acid sequence.

Pretraining dataset: https://huggingface.co/datasets/lamm-mit/GPTProteinPretrained

Pretrained model: https://huggingface.co/lamm-mit/GPTProteinPretrained

In this fine-tuned model, mechanics-related forward and inverse tasks are:

```raw
CalculateForce<GEECDCGSPSNP..>, CalculateEnergy<GEECDCGSPSNP..>
CalculateForceEnergy<GEECDCGSPSNP...>
CalculateForceHistory<GEECDCGSPSNP...>
GenerateForce<0.262>
GenerateForce<0.220>
GenerateForceEnergy<0.262,0.220>
GenerateForceHistory<0.004,0.034,0.125,0.142,0.159,0.102,0.079,0.073,0.131,0.105,0.071,0.058,0.072,0.060,0.049,0.114,0.122,0.108,0.173,0.192,0.208,0.153,0.212,0.222,0.244>
```

### Load model

You can load the model using this code:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"  # define the target device

ForceGPT_model_name = 'lamm-mit/ProteinForceGPT'
tokenizer = AutoTokenizer.from_pretrained(ForceGPT_model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(
    ForceGPT_model_name,
    trust_remote_code=True,
).to(device)
model.config.use_cache = False
```

### Inference

Sample inference using the "Sequence<...>" task, where the model simply autocompletes an amino acid sequence starting with "GEECDC":

```python
prompt = "Sequence<GEECDC"
generated = torch.tensor(tokenizer.encode(prompt, add_special_tokens=False)).unsqueeze(0).to(device)
print(generated.shape, generated)

sample_outputs = model.generate(
    inputs=generated,
    eos_token_id=tokenizer.eos_token_id,
    do_sample=True,
    top_k=500,
    max_length=300,
    top_p=0.9,
    num_return_sequences=1,
    temperature=1,
)

for i, sample_output in enumerate(sample_outputs):
    print("{}: {}\n\n".format(i, tokenizer.decode(sample_output, skip_special_tokens=True)))
```

Sample inference using the "CalculateForce<...>" task, where the model calculates the maximum unfolding force of a given sequence (an inverse-task sketch is included at the end of this card):

```python
prompt = "CalculateForce<GEECDCGSPSNPCCDAATCKLRPGAQCADGLCCDQCRFKKKRTICRIARGDFPDDRCTGQSADCPRWN>"
generated = torch.tensor(tokenizer.encode(prompt, add_special_tokens=False)).unsqueeze(0).to(device)

sample_outputs = model.generate(
    inputs=generated,
    eos_token_id=tokenizer.eos_token_id,
    do_sample=True,
    top_k=500,
    max_length=300,
    top_p=0.9,
    num_return_sequences=3,
    temperature=1,
)

for i, sample_output in enumerate(sample_outputs):
    print("{}: {}\n\n".format(i, tokenizer.decode(sample_output, skip_special_tokens=True)))
```

Output:

```raw
0: CalculateForce<GEECDCGSPSNPCCDAATCKLRPGAQCADGLCCDQCRFKKKRTICRIARGDFPDDRCTGQSADCPRWN> [0.262]
```

## Citations

To cite this work:

```
@article{GhafarollahiBuehler_2024,
    title   = {ProtAgents: Protein discovery via large language model multi-agent
               collaborations combining physics and machine learning},
    author  = {A. Ghafarollahi and M.J. Buehler},
    journal = {},
    year    = {2024},
    volume  = {},
    pages   = {},
    url     = {}
}
```

The dataset used to fine-tune the model is available at:

```
@article{NiKaplanBuehler_2024,
    title   = {ForceGen: End-to-end de novo protein generation based on nonlinear mechanical unfolding responses using a protein language diffusion model},
    author  = {B. Ni and D.L. Kaplan and M.J. Buehler},
    journal = {Science Advances},
    year    = {2024},
    volume  = {},
    pages   = {},
    url     = {}
}
```
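The forward examples above go from sequence to force; for completeness, here is a sketch of an inverse (design) prompt. It reuses `model`, `tokenizer`, and `device` from the loading snippet, and the decoding settings simply mirror the card's forward examples rather than any recommended configuration:

```python
# Inverse (design) task sketch: ask the model for a sequence predicted to
# unfold at a target force. Prompt format is taken from the task list above.
# Assumes `model`, `tokenizer`, and `device` from the loading snippet.
import torch

prompt = "GenerateForce<0.262>"
generated = torch.tensor(tokenizer.encode(prompt, add_special_tokens=False)).unsqueeze(0).to(device)

sample_outputs = model.generate(
    inputs=generated,
    eos_token_id=tokenizer.eos_token_id,
    do_sample=True,
    top_k=500,
    max_length=300,
    top_p=0.9,
    num_return_sequences=1,
    temperature=1.0,
)

for i, sample_output in enumerate(sample_outputs):
    print("{}: {}\n".format(i, tokenizer.decode(sample_output, skip_special_tokens=True)))
```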
hfl/chinese-mixtral-gguf
hfl
2024-03-05T08:23:12Z
578
7
null
[ "gguf", "moe", "zh", "en", "arxiv:2403.01851", "license:apache-2.0", "region:us" ]
null
2024-01-24T02:28:24Z
---
license: apache-2.0
language:
- zh
- en
tags:
- moe
---

# Chinese-Mixtral-GGUF

<p align="center">
    <a href="https://github.com/ymcui/Chinese-Mixtral"><img src="https://ymcui.com/images/chinese-mixtral-banner.png" width="600"/></a>
</p>

**Chinese Mixtral GitHub repository: https://github.com/ymcui/Chinese-Mixtral**

This repository contains the GGUF-v3 models (llama.cpp compatible) for **Chinese-Mixtral** (this is not a chat/instruction model). A minimal Python loading sketch is included at the end of this card.

## Performance

Metric: PPL, lower is better

| Quant | PPL |
| ----- | ---- |
| IQ1_S | 20.7314 +/- 0.22627 |
| IQ2_XXS | 8.5981 +/- 0.09267 |
| IQ2_XS | 6.9784 +/- 0.07476 |
| Q2_K | 5.1846 +/- 0.05533 |
| IQ3_XXS | 4.5990 +/- 0.04969 |
| Q3_K | 4.5545 +/- 0.04893 |
| Q4_0 | 4.4917 +/- 0.04844 |
| Q4_K | 4.4488 +/- 0.04813 |
| Q5_0 | 4.4224 +/- 0.04753 |
| Q5_K | 4.4192 +/- 0.04768 |
| Q6_K | 4.4092 +/- 0.04758 |
| Q8_0 | 4.4076 +/- 0.04746 |
| F16 | x |

Due to file size limitations, the F16 model is split into parts; please use the `cat` command to concatenate all parts into a single file. **You must concatenate these parts in order.**

## Others

- For the Hugging Face version, please see: https://huggingface.co/hfl/chinese-mixtral
- If you have questions/issues regarding this model, please submit an issue through https://github.com/ymcui/Chinese-Mixtral/.

## Citation

Please consider citing our paper if you use the resources of this repository.
Paper link: https://arxiv.org/abs/2403.01851

```
@article{chinese-mixtral,
      title={Rethinking LLM Language Adaptation: A Case Study on Chinese Mixtral},
      author={Cui, Yiming and Yao, Xin},
      journal={arXiv preprint arXiv:2403.01851},
      url={https://arxiv.org/abs/2403.01851},
      year={2024}
}
```
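As a quick usage illustration (our addition, not part of the original card), the quantized files can be loaded from Python with the `llama-cpp-python` bindings. The filename below is an assumption — substitute whichever quant you downloaded from this repo:

```python
from llama_cpp import Llama

# Filename is illustrative; use the actual GGUF file downloaded from this repo.
llm = Llama(model_path="./chinese-mixtral.Q4_K.gguf", n_ctx=4096, n_gpu_layers=-1)

# Base (non-chat) model: give it text to continue, not an instruction.
out = llm("人工智能是", max_tokens=64)
print(out["choices"][0]["text"])
```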
AdityaLavaniya/TinyLlama-Fitness-Instructor
AdityaLavaniya
2024-02-16T09:06:08Z
578
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-16T09:02:58Z
Entry not found
alexandreteles/bonito-v1-awq
alexandreteles
2024-03-16T22:07:09Z
578
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "data generation", "text2text-generation", "en", "dataset:BatsResearch/ctga-v1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text2text-generation
2024-02-27T03:36:23Z
---
datasets:
- BatsResearch/ctga-v1
language:
- en
library_name: transformers
pipeline_tag: text2text-generation
tags:
- data generation
license: apache-2.0
---

# Bonito-v1 AWQ

You can find the original model at [BatsResearch/bonito-v1](https://huggingface.co/BatsResearch/bonito-v1)

## Variations

* GEMM: [model.safetensors](https://huggingface.co/alexandreteles/bonito-v1-awq/blob/main/model.safetensors)
* GEMV: [model_gemv.safetensors](https://huggingface.co/alexandreteles/bonito-v1-awq/blob/main/model_gemv.safetensors)

## Model Card for bonito

<!-- Provide a quick summary of what the model is/does. -->

Bonito is an open-source model for conditional task generation: the task of converting unannotated text into task-specific training datasets for instruction tuning.

![Bonito](https://raw.githubusercontent.com/BatsResearch/bonito/main/assets/workflow.png)

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

Bonito can be used to create synthetic instruction tuning datasets to adapt large language models on users' specialized, private data. In our [paper](https://github.com/BatsResearch/bonito), we show that Bonito can be used to adapt both pretrained and instruction tuned models to tasks without any annotations.

- **Developed by:** Nihal V. Nayak, Yiyang Nan, Avi Trost, and Stephen H. Bach
- **Model type:** MistralForCausalLM
- **Language(s) (NLP):** English
- **License:** TBD
- **Finetuned from model:** `mistralai/Mistral-7B-v0.1`

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** [https://github.com/BatsResearch/bonito](https://github.com/BatsResearch/bonito)
- **Paper:** Arxiv link

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

To easily generate synthetic instruction tuning datasets, we recommend using the [bonito](https://github.com/BatsResearch/bonito) package built on the `transformers` and `vllm` libraries.

```python
from bonito import Bonito, SamplingParams
from datasets import load_dataset

# Initialize the Bonito model
bonito = Bonito()

# load dataset with unannotated text
unannotated_text = load_dataset(
    "BatsResearch/bonito-experiment",
    "unannotated_contract_nli"
)["train"].select(range(10))

# Generate synthetic instruction tuning dataset
sampling_params = SamplingParams(max_tokens=256, top_p=0.95, temperature=0.5, n=1)
synthetic_dataset = bonito.generate_tasks(
    unannotated_text,
    context_col="input",
    task_type="nli",
    sampling_params=sampling_params
)
```

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

Our model is trained to generate the following task types: summarization, sentiment analysis, multiple-choice question answering, extractive question answering, topic classification, natural language inference, question generation, text generation, question answering without choices, paraphrase identification, sentence completion, yes-no question answering, word sense disambiguation, paraphrase generation, textual entailment, and coreference resolution. The model might not produce accurate synthetic tasks beyond these task types.
mradermacher/Mixtral-8x7B-Instruct-v0.1-i1-GGUF
mradermacher
2024-05-07T16:27:28Z
578
3
transformers
[ "transformers", "gguf", "en", "base_model:mistralai/Mixtral-8x7B-Instruct-v0.1", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-02-28T17:50:16Z
--- base_model: mistralai/Mixtral-8x7B-Instruct-v0.1 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About weighted/imatrix quants of https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1 Artefact2 also made some, with a different weight matrix, at https://huggingface.co/Artefact2/Mixtral-8x7B-Instruct-v0.1-GGUF <!-- provided-files --> ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 9.8 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 10.8 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 12.6 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 13.9 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 14.4 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 15.8 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1.i1-Q2_K.gguf) | i1-Q2_K | 17.6 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 18.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1.i1-Q3_K_XS.gguf) | i1-Q3_K_XS | 19.3 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 19.5 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 20.7 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 20.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 21.7 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 22.8 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 24.4 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 25.3 | | | 
[GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 26.8 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1.i1-Q4_0.gguf) | i1-Q4_0 | 26.8 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 27.0 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 28.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 32.5 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 33.5 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1.i1-Q6_K.gguf) | i1-Q6_K | 38.6 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
pt-sk/stable-diffusion-1.5
pt-sk
2024-05-27T04:43:51Z
578
0
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "arxiv:2207.12598", "arxiv:2112.10752", "arxiv:2103.00020", "arxiv:2205.11487", "arxiv:1910.09700", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-03-01T16:13:25Z
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: true
extra_gated_prompt: |-
  This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
  The CreativeML OpenRAIL License specifies:

  1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
  2. CompVis claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
  3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
  Please read the full license carefully here: https://huggingface.co/spaces/CompVis/stable-diffusion-license

extra_gated_heading: Please read the LICENSE to access this model
---

# Stable Diffusion v1-5 Model Card

Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input. For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion blog](https://huggingface.co/blog/stable_diffusion).

The **Stable-Diffusion-v1-5** checkpoint was initialized with the weights of the [Stable-Diffusion-v1-2](https://huggingface.co/CompVis/stable-diffusion-v1-2) checkpoint and subsequently fine-tuned on 595k steps at resolution 512x512 on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).

You can use this both with the [🧨Diffusers library](https://github.com/huggingface/diffusers) and the [RunwayML GitHub repository](https://github.com/runwayml/stable-diffusion).

### Diffusers

```py
from diffusers import StableDiffusionPipeline
import torch

model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]

image.save("astronaut_rides_horse.png")
```

For more detailed instructions, use-cases and examples in JAX, follow the instructions [here](https://github.com/huggingface/diffusers#text-to-image-generation-with-stable-diffusion)

### Original GitHub Repository

1. Download the weights
   - [v1-5-pruned-emaonly.ckpt](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt) - 4.27GB, ema-only weight. uses less VRAM - suitable for inference
   - [v1-5-pruned.ckpt](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt) - 7.7GB, ema+non-ema weights. uses more VRAM - suitable for fine-tuning

2. Follow instructions [here](https://github.com/runwayml/stable-diffusion).
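Because the checkpoint was fine-tuned with 10% text-conditioning dropout precisely to enable classifier-free guidance, the guidance strength can be varied at inference time via the pipeline's `guidance_scale` argument. A small sketch building on the Diffusers example above (the scale values are illustrative):

```py
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5",
                                               torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"

# guidance_scale ~1.0 effectively disables classifier-free guidance;
# higher values trade sample diversity for prompt adherence.
for scale in (1.0, 7.5, 12.0):
    image = pipe(prompt, guidance_scale=scale).images[0]
    image.save(f"astronaut_cfg_{scale}.png")
```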
## Model Details - **Developed by:** Robin Rombach, Patrick Esser - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based. - **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487). - **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752). - **Cite as:** @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } # Uses ## Direct Use The model is intended for research purposes only. Possible research areas and tasks include - Safe deployment of models which have the potential to generate harmful content. - Probing and understanding the limitations and biases of generative models. - Generation of artworks and use in design and other artistic processes. - Applications in educational or creative tools. - Research on generative models. Excluded uses are described below. ### Misuse, Malicious Use, and Out-of-Scope Use _Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_. The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. #### Out-of-Scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. #### Misuse and Malicious Use Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to: - Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc. - Intentionally promoting or propagating discriminatory content or harmful stereotypes. - Impersonating individuals without their consent. - Sexual content without consent of the people who might see it. - Mis- and disinformation - Representations of egregious violence and gore - Sharing of copyrighted or licensed material in violation of its terms of use. 
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use. ## Limitations and Bias ### Limitations - The model does not achieve perfect photorealism - The model cannot render legible text - The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere” - Faces and people in general may not be generated properly. - The model was trained mainly with English captions and will not work as well in other languages. - The autoencoding part of the model is lossy - The model was trained on a large-scale dataset [LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material and is not fit for product use without additional safety mechanisms and considerations. - No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data. The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images. ### Bias While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/), which consists of images that are primarily limited to English descriptions. Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for. This affects the overall output of the model, as white and western cultures are often set as the default. Further, the ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts. ### Safety Module The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers. This checker works by checking model outputs against known hard-coded NSFW concepts. The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter. Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images. The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept. ## Training **Training Data** The model developers used the following dataset for training the model: - LAION-2B (en) and subsets thereof (see next section) **Training Procedure** Stable Diffusion v1-5 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training, - Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4 - Text prompts are encoded through a ViT-L/14 text-encoder. - The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention. - The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet. Currently six Stable Diffusion checkpoints are provided, which were trained as follows. 
- [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1): 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en). 194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`). - [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2): Resumed from `stable-diffusion-v1-1`. 515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en, filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)). - [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2` - 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). - [`stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4) Resumed from `stable-diffusion-v1-2` - 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). - [`stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) Resumed from `stable-diffusion-v1-2` - 595,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). - [`stable-diffusion-inpainting`](https://huggingface.co/runwayml/stable-diffusion-inpainting) Resumed from `stable-diffusion-v1-5` - then 440,000 steps of inpainting training at resolution 512x512 on “laion-aesthetics v2 5+” and 10% dropping of the text-conditioning. For inpainting, the UNet has 5 additional input channels (4 for the encoded masked-image and 1 for the mask itself) whose weights were zero-initialized after restoring the non-inpainting checkpoint. During training, we generate synthetic masks and in 25% mask everything. - **Hardware:** 32 x 8 x A100 GPUs - **Optimizer:** AdamW - **Gradient Accumulations**: 2 - **Batch:** 32 x 8 x 2 x 4 = 2048 - **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant ## Evaluation Results Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0) and 50 PNDM/PLMS sampling steps show the relative improvements of the checkpoints: ![pareto](https://huggingface.co/CompVis/stable-diffusion/resolve/main/v1-1-to-v1-5.png) Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores. ## Environmental Impact **Stable Diffusion v1** **Estimated Emissions** Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact. 
- **Hardware Type:** A100 PCIe 40GB - **Hours used:** 150000 - **Cloud Provider:** AWS - **Compute Region:** US-east - **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq. ## Citation ```bibtex @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } ``` *This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
misri/cyberrealisticXL_v11VAE
misri
2024-03-17T09:21:05Z
578
0
diffusers
[ "diffusers", "safetensors", "license:unknown", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-03-17T08:12:23Z
--- license: unknown ---
mradermacher/StrangeBru-7B-GGUF
mradermacher
2024-05-06T06:01:11Z
578
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:nbeerbower/StrangeBru-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-24T06:58:36Z
--- base_model: nbeerbower/StrangeBru-7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - mergekit - merge --- ## About static quants of https://huggingface.co/nbeerbower/StrangeBru-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/StrangeBru-7B-GGUF/resolve/main/StrangeBru-7B.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/StrangeBru-7B-GGUF/resolve/main/StrangeBru-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/StrangeBru-7B-GGUF/resolve/main/StrangeBru-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/StrangeBru-7B-GGUF/resolve/main/StrangeBru-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/StrangeBru-7B-GGUF/resolve/main/StrangeBru-7B.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/StrangeBru-7B-GGUF/resolve/main/StrangeBru-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/StrangeBru-7B-GGUF/resolve/main/StrangeBru-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/StrangeBru-7B-GGUF/resolve/main/StrangeBru-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/StrangeBru-7B-GGUF/resolve/main/StrangeBru-7B.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/StrangeBru-7B-GGUF/resolve/main/StrangeBru-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/StrangeBru-7B-GGUF/resolve/main/StrangeBru-7B.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/StrangeBru-7B-GGUF/resolve/main/StrangeBru-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/StrangeBru-7B-GGUF/resolve/main/StrangeBru-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/StrangeBru-7B-GGUF/resolve/main/StrangeBru-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/StrangeBru-7B-GGUF/resolve/main/StrangeBru-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/StrangeBru-7B-GGUF/resolve/main/StrangeBru-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
PrunaAI/gemma-2b-GGUF-smashed
PrunaAI
2024-04-18T17:05:00Z
578
2
null
[ "gguf", "pruna-ai", "region:us" ]
null
2024-04-18T16:48:01Z
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with GGUF. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***What is the model format?*** We use GGUF format. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). # Downloading and running the models You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info checkout [this chart](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [this guide](https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overview_of_gguf_quantization_methods/): | Quant type | Description | |------------|--------------------------------------------------------------------------------------------| | Q5_K_M | High quality, recommended. | | Q5_K_S | High quality, recommended. | | Q4_K_M | Good quality, uses about 4.83 bits per weight, recommended. | | Q4_K_S | Slightly lower quality with more space savings, recommended. | | IQ4_NL | Decent quality, slightly smaller than Q4_K_S with similar performance, recommended. | | IQ4_XS | Decent quality, smaller than Q4_K_S with similar performance, recommended. | | Q3_K_L | Lower quality but usable, good for low RAM availability. | | Q3_K_M | Even lower quality. | | IQ3_M | Medium-low quality, new method with decent performance comparable to Q3_K_M. 
|
| IQ3_S | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| Q3_K_S | Low quality, not recommended. |
| IQ3_XS | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| Q2_K | Very low quality but surprisingly usable. |

## How to download GGUF files?

**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.

The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

* LM Studio
* LoLLMS Web UI
* Faraday.dev

- **Option A** - Downloading in `text-generation-webui`:
  - **Step 1**: Under Download Model, you can enter the model repo: PrunaAI/gemma-2b-GGUF-smashed and below it, a specific filename to download, such as: gemma-2b.IQ3_M.gguf.
  - **Step 2**: Then click Download.

- **Option B** - Downloading on the command line (including multiple files at once):
  - **Step 1**: We recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

  - **Step 2**: Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download PrunaAI/gemma-2b-GGUF-smashed gemma-2b.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```

<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>

Alternatively, you can also download multiple files at once with a pattern:

```shell
huggingface-cli download PrunaAI/gemma-2b-GGUF-smashed --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download PrunaAI/gemma-2b-GGUF-smashed gemma-2b.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->

## How to run model in GGUF format?

- **Option A** - Introductory example with `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 35 -m gemma-2b.IQ3_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt} [/INST]"
```

Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

- **Option B** - Running in `text-generation-webui`

Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).

- **Option C** - Running from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.

### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).

#### First install the package

Run one of the following commands, according to your system:

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python

# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python

# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python

# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python

# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python

# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python

# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```

#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
  model_path="./gemma-2b.IQ3_M.gguf",  # Download the model file first
  n_ctx=32768,      # The max sequence length to use - note that longer sequence lengths require much more resources
  n_threads=8,      # The number of CPU threads to use, tailor to your system and the resulting performance
  n_gpu_layers=35   # The number of layers to offload to GPU, if you have GPU acceleration available
)

# Simple inference example
output = llm(
  "<s>[INST] {prompt} [/INST]",  # Prompt
  max_tokens=512,   # Generate up to 512 tokens
  stop=["</s>"],    # Example stop token - not necessarily correct for this specific model! Please check before using.
  echo=True         # Whether to echo the prompt
)

# Chat Completion API
llm = Llama(model_path="./gemma-2b.IQ3_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```

- **Option D** - Running with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

## Configurations

The configuration info is in `smash_config.json`.

## Credits & License

The license of the smashed model follows the license of the original model: please check the license of the original (base) model before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
mmnga/aya-23-35B-gguf
mmnga
2024-05-27T00:47:56Z
578
1
null
[ "gguf", "en", "ja", "dataset:TFMC/imatrix-dataset-for-japanese-llm", "license:cc-by-nc-4.0", "region:us" ]
null
2024-05-26T16:32:27Z
---
license: cc-by-nc-4.0
language:
- en
- ja
datasets:
- TFMC/imatrix-dataset-for-japanese-llm
---

# aya-23-35B-gguf

This is a GGUF-format conversion of [aya-23-35B released by CohereForAI](https://huggingface.co/CohereForAI/aya-23-35B).

The imatrix data was created using [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm).

## Usage

```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
./main -m 'aya-23-35B-Q4_0.gguf' -n 128 -p 'こんにちわ'
```
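For reference, an imatrix-based quantization workflow with llama.cpp's bundled tools looks roughly like the sketch below. This is a hedged illustration, not the exact commands used for this repo: the file names are placeholders, and the binary names/flags vary by llama.cpp version (newer builds rename the tools to `llama-imatrix` / `llama-quantize`).

```
# compute an importance matrix from a calibration text file
./imatrix -m aya-23-35B-f16.gguf -f calibration.txt -o imatrix.dat
# quantize, weighting important tensors using the imatrix
./quantize --imatrix imatrix.dat aya-23-35B-f16.gguf aya-23-35B-Q4_0.gguf Q4_0
```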
SilverFan/L3-8B-sunfall-v0.4-stheno-v3.2-Q5_K_M-GGUF
SilverFan
2024-06-21T02:24:22Z
578
0
transformers
[ "transformers", "gguf", "not-for-all-audiences", "llama-cpp", "gguf-my-repo", "base_model:crestf411/L3-8B-sunfall-v0.4-stheno-v3.2", "license:llama3", "endpoints_compatible", "region:us" ]
null
2024-06-21T02:23:55Z
---
base_model: crestf411/L3-8B-sunfall-v0.4-stheno-v3.2
library_name: transformers
license: llama3
license_name: llama3
license_link: LICENSE
tags:
- not-for-all-audiences
- llama-cpp
- gguf-my-repo
---

# SilverFan/L3-8B-sunfall-v0.4-stheno-v3.2-Q5_K_M-GGUF
This model was converted to GGUF format from [`crestf411/L3-8B-sunfall-v0.4-stheno-v3.2`](https://huggingface.co/crestf411/L3-8B-sunfall-v0.4-stheno-v3.2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/crestf411/L3-8B-sunfall-v0.4-stheno-v3.2) for more details on the model.

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo SilverFan/L3-8B-sunfall-v0.4-stheno-v3.2-Q5_K_M-GGUF --hf-file l3-8b-sunfall-v0.4-stheno-v3.2-q5_k_m-imat.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo SilverFan/L3-8B-sunfall-v0.4-stheno-v3.2-Q5_K_M-GGUF --hf-file l3-8b-sunfall-v0.4-stheno-v3.2-q5_k_m-imat.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo SilverFan/L3-8B-sunfall-v0.4-stheno-v3.2-Q5_K_M-GGUF --hf-file l3-8b-sunfall-v0.4-stheno-v3.2-q5_k_m-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo SilverFan/L3-8B-sunfall-v0.4-stheno-v3.2-Q5_K_M-GGUF --hf-file l3-8b-sunfall-v0.4-stheno-v3.2-q5_k_m-imat.gguf -c 2048
```
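Once `llama-server` is up, it exposes an OpenAI-compatible HTTP API. A minimal query sketch (assuming the server's default port 8080; the prompt is just an example):

```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Write a haiku about llamas."}], "max_tokens": 64}'
```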
Tencent-Hunyuan/HunyuanDiT-v1.1-ControlNet-Diffusers-Pose
Tencent-Hunyuan
2024-06-27T02:00:39Z
578
1
diffusers
[ "diffusers", "safetensors", "license:other", "region:us" ]
null
2024-06-26T02:34:37Z
---
license: other
license_name: tencent-hunyuan-community
license_link: https://huggingface.co/Tencent-Hunyuan/HunyuanDiT/blob/main/LICENSE.txt
---

```py
import torch
from diffusers import HunyuanDiT2DControlNetModel, HunyuanDiTControlNetPipeline
from diffusers.utils import load_image

controlnet = HunyuanDiT2DControlNetModel.from_pretrained("Tencent-Hunyuan/HunyuanDiT-v1.1-ControlNet-Diffusers-Pose", torch_dtype=torch.float16)
pipe = HunyuanDiTControlNetPipeline.from_pretrained("Tencent-Hunyuan/HunyuanDiT-v1.1-Diffusers", controlnet=controlnet, torch_dtype=torch.float16)
pipe.to("cuda")

cond_image = load_image('https://huggingface.co/Tencent-Hunyuan/HunyuanDiT-v1.1-ControlNet-Diffusers-Pose/resolve/main/pose.jpg?download=true')

## You may also use an English prompt, as HunyuanDiT supports both English and Chinese
prompt = "一位亚洲女性,身穿绿色上衣,戴着紫色头巾和紫色围巾,站在黑板前。背景是黑板。照片采用近景、平视和居中构图的方式呈现真实摄影风格"
# prompt = "An Asian woman, dressed in a green top, wearing a purple headscarf and a purple scarf, stands in front of a blackboard. The background is the blackboard. The photo is presented in a close-up, eye-level, and centered composition, adopting a realistic photographic style"

image = pipe(
    prompt,
    height=1024,
    width=1024,
    control_image=cond_image,
    num_inference_steps=50,
).images[0]
image.save("pose_controlnet_out.png")  # save the generated image
```
GKLMIP/bert-khmer-small-uncased-tokenized
GKLMIP
2021-07-31T04:53:16Z
577
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
https://github.com/GKLMIP/Pretrained-Models-For-Khmer

If you use our model, please consider citing our paper:
```
@article{jiang2021khmer,
    author="Jiang, Shengyi and Fu, Sihui and Lin, Nankai and Fu, Yingwen",
    title="Pre-trained Models and Evaluation Data for the Khmer Language",
    year="2021",
    publisher="Tsinghua Science and Technology",
}
```
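The tags indicate a BERT masked-language model. A minimal loading sketch (assuming the standard transformers masked-LM API; this is not an official example from the authors):

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "GKLMIP/bert-khmer-small-uncased-tokenized"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)
```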
anechaev/ru_med_gpt3sm_based_on_gpt2
anechaev
2023-04-05T14:30:43Z
577
1
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "PyTorch", "Transformers", "ru", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2022-03-02T23:29:05Z
---
language:
- ru
tags:
- PyTorch
- Transformers
license: mit
---

# Medical History Model based on ruGPT2 by @sberbank-ai

A simple model that helps medical staff complete patients' medical histories. The model is based on the pretrained [sberbank-ai/rugpt3small_based_on_gpt2](https://huggingface.co/sberbank-ai/rugpt3small_based_on_gpt2).
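A minimal generation sketch (assuming the standard transformers causal-LM API; the prompt below is an invented example, not from the authors):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "anechaev/ru_med_gpt3sm_based_on_gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Continue a hypothetical medical-history fragment
prompt = "Жалобы: головная боль, слабость."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```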
facebook/muppet-roberta-base
facebook
2021-06-28T21:44:23Z
577
7
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "exbert", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2101.11038", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
---
language: en
tags:
- exbert
license: mit
datasets:
- bookcorpus
- wikipedia
---

# Muppet: Massive Multi-task Representations with Pre-Finetuning

# RoBERTa base model

This is a Massive Multi-task Pre-finetuned version of Roberta base. It was introduced in [this paper](https://arxiv.org/abs/2101.11038). The model improves over roberta-base on a wide range of GLUE and QA tasks (details can be found in the paper). The gains on smaller datasets are significant.

Note: This checkpoint does not contain the classification/MRC heads used during pre-finetuning due to compatibility issues, and hence you might get slightly lower performance than that reported in the paper on some datasets.

## Model description

RoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.

More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the RoBERTa model as inputs.

## Intended uses & limitations

You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=roberta) to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2.

## Evaluation results

When fine-tuned on downstream tasks, this model achieves the following results:

Glue test results:

| Model | MNLI | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | SQuAD|
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:----:|
| Roberta-base | 87.6 | 91.9 | 92.8 | 94.8 | 63.6 | 91.2 | 90.2 | 78.7 | 82.6|
| MUPPET Roberta-base | 88.1 | 91.9 | 93.3 | 96.7 | - | - | 91.7 | 87.8 | 86.6|

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2101-11038,
  author    = {Armen Aghajanyan and Anchit Gupta and Akshat Shrivastava and Xilun Chen and Luke Zettlemoyer and Sonal Gupta},
  title     = {Muppet: Massive Multi-task Representations with Pre-Finetuning},
  journal   = {CoRR},
  volume    = {abs/2101.11038},
  year      = {2021},
  url       = {https://arxiv.org/abs/2101.11038},
  archivePrefix = {arXiv},
  eprint    = {2101.11038},
  timestamp = {Sun, 31 Jan 2021 17:23:50 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2101-11038.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
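For a quick check of the masked-language-modeling behaviour described above, a minimal fill-mask sketch (standard transformers pipeline API; the example sentence is ours, not from the paper):

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="facebook/muppet-roberta-base")
print(unmasker("The capital of France is <mask>."))  # RoBERTa uses <mask> as its mask token
```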
timm/vit_large_r50_s32_384.augreg_in21k_ft_in1k
timm
2023-05-06T00:51:38Z
577
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:imagenet-21k", "arxiv:2106.10270", "arxiv:2010.11929", "license:apache-2.0", "region:us" ]
image-classification
2022-12-23T00:31:34Z
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- imagenet-21k
---

# Model card for vit_large_r50_s32_384.augreg_in21k_ft_in1k

A ResNet - Vision Transformer (ViT) hybrid image classification model. Trained on ImageNet-21k and fine-tuned on ImageNet-1k (with additional augmentation and regularization) in JAX by paper authors, ported to PyTorch by Ross Wightman.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 329.1
  - GMACs: 56.4
  - Activations (M): 64.9
  - Image size: 384 x 384
- **Papers:**
  - How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers: https://arxiv.org/abs/2106.10270
  - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-21k
- **Original:** https://github.com/google-research/vision_transformer

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import torch  # needed for torch.topk below
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('vit_large_r50_s32_384.augreg_in21k_ft_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'vit_large_r50_s32_384.augreg_in21k_ft_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 145, 1024) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).

## Citation
```bibtex
@article{steiner2021augreg,
  title={How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers},
  author={Steiner, Andreas and Kolesnikov, Alexander and Zhai, Xiaohua and Wightman, Ross and Uszkoreit, Jakob and Beyer, Lucas},
  journal={arXiv preprint arXiv:2106.10270},
  year={2021}
}
```
```bibtex
@article{dosovitskiy2020vit,
  title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
  author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
  journal={ICLR},
  year={2021}
}
```
```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
Mit1208/detr-resnet-50_finetuned_doclaynet
Mit1208
2023-04-22T17:32:22Z
577
0
transformers
[ "transformers", "pytorch", "tensorboard", "detr", "object-detection", "endpoints_compatible", "region:us" ]
object-detection
2023-04-22T17:20:42Z
Entry not found
timm/coat_lite_medium.in1k
timm
2023-04-24T03:42:42Z
577
0
timm
[ "timm", "pytorch", "image-classification", "dataset:imagenet-1k", "arxiv:2104.06399", "license:apache-2.0", "region:us" ]
image-classification
2023-04-24T03:39:58Z
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---

# Model card for coat_lite_medium.in1k

A CoaT (Co-Scale Conv-Attentional Transformer) image classification model. Trained on ImageNet-1k by paper authors.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 44.6
  - GMACs: 9.8
  - Activations (M): 40.1
  - Image size: 224 x 224
- **Papers:**
  - Co-Scale Conv-Attentional Image Transformers: https://arxiv.org/abs/2104.06399
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/mlpc-ucsd/CoaT

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import torch  # needed for torch.topk below
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('coat_lite_medium.in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'coat_lite_medium.in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 50, 512) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).

## Citation
```bibtex
@InProceedings{Xu_2021_ICCV,
    author    = {Xu, Weijian and Xu, Yifan and Chang, Tyler and Tu, Zhuowen},
    title     = {Co-Scale Conv-Attentional Image Transformers},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {9981-9990}
}
```
stablediffusionapi/epicrealism
stablediffusionapi
2023-07-12T11:56:42Z
577
3
diffusers
[ "diffusers", "stablediffusionapi.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-05-31T13:46:10Z
---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---

# epiCRealism V1 API Inference

![generated from stablediffusionapi.com](https://cdn.stablediffusionapi.com/generations/3513027121685540721.png)

## Get API Key

Get API key from [Stable Diffusion API](http://stablediffusionapi.com/), No Payment needed.

Replace Key in below code, change **model_id** to "epicrealism"

Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)

Try model for free: [Generate Images](https://stablediffusionapi.com/models/epicrealism)

Model link: [View model](https://stablediffusionapi.com/models/epicrealism)

Credits: [View credits](https://civitai.com/?query=epiCRealism)

View all models: [View Models](https://stablediffusionapi.com/models)

```python
import requests
import json

url = "https://stablediffusionapi.com/api/v3/dreambooth"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "epicrealism",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```

> Use this coupon code to get 25% off **DMGG0RBN**
Yntec/CuteYuki2
Yntec
2023-08-17T09:06:47Z
577
4
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "newlifezfztty761", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-26T19:22:18Z
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- newlifezfztty761
---

# CuteYuki2

The cutest version of CuteYuki! Later versions sacrificed adorableness for coherence and better anatomy, but if cuteness is what you're after, you'd better use this one!

Original page: https://civitai.com/models/28169?modelVersionId=47883
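A minimal text-to-image sketch (assuming the standard diffusers StableDiffusionPipeline API; the prompt is an invented example):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Yntec/CuteYuki2", torch_dtype=torch.float16)
pipe.to("cuda")

image = pipe("a cute girl holding a tiny umbrella, chibi, detailed eyes").images[0]
image.save("cuteyuki2.png")
```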
TheBloke/Camel-Platypus2-13B-GGUF
TheBloke
2023-09-27T12:47:21Z
577
1
transformers
[ "transformers", "gguf", "llama", "en", "dataset:garage-bAInd/Open-Platypus", "arxiv:2308.07317", "arxiv:2307.09288", "base_model:garage-bAInd/Camel-Platypus2-13B", "license:llama2", "text-generation-inference", "region:us" ]
null
2023-09-05T04:39:22Z
---
language:
- en
license: llama2
datasets:
- garage-bAInd/Open-Platypus
model_name: Camel-Platypus2 13B
base_model: garage-bAInd/Camel-Platypus2-13B
inference: false
model_creator: garage-bAInd
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
  that appropriately completes the request.

  ### Instruction:

  {prompt}

  ### Response:

  '
quantized_by: TheBloke
---

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Camel-Platypus2 13B - GGUF
- Model creator: [garage-bAInd](https://huggingface.co/garage-bAInd)
- Original model: [Camel-Platypus2 13B](https://huggingface.co/garage-bAInd/Camel-Platypus2-13B)

<!-- description start -->
## Description

This repo contains GGUF format model files for [garage-bAInd's Camel-Platypus2 13B](https://huggingface.co/garage-bAInd/Camel-Platypus2-13B).

<!-- description end -->

<!-- README_GGUF.md-about-gguf start -->
### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.

<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Camel-Platypus2-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Camel-Platypus2-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Camel-Platypus2-13B-GGUF)
* [garage-bAInd's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/garage-bAInd/Camel-Platypus2-13B)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:

```

<!-- prompt-template end -->

<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)

They are also compatible with many third party UIs and libraries - please see the list at the top of this README.

## Explanation of quantisation methods
<details>
  <summary>Click to see details</summary>

The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw

Refer to the Provided Files table below to see what files use which methods, and how.
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [camel-platypus2-13b.Q2_K.gguf](https://huggingface.co/TheBloke/Camel-Platypus2-13B-GGUF/blob/main/camel-platypus2-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [camel-platypus2-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Camel-Platypus2-13B-GGUF/blob/main/camel-platypus2-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [camel-platypus2-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Camel-Platypus2-13B-GGUF/blob/main/camel-platypus2-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [camel-platypus2-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Camel-Platypus2-13B-GGUF/blob/main/camel-platypus2-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [camel-platypus2-13b.Q4_0.gguf](https://huggingface.co/TheBloke/Camel-Platypus2-13B-GGUF/blob/main/camel-platypus2-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [camel-platypus2-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Camel-Platypus2-13B-GGUF/blob/main/camel-platypus2-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [camel-platypus2-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Camel-Platypus2-13B-GGUF/blob/main/camel-platypus2-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [camel-platypus2-13b.Q5_0.gguf](https://huggingface.co/TheBloke/Camel-Platypus2-13B-GGUF/blob/main/camel-platypus2-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [camel-platypus2-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Camel-Platypus2-13B-GGUF/blob/main/camel-platypus2-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [camel-platypus2-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Camel-Platypus2-13B-GGUF/blob/main/camel-platypus2-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [camel-platypus2-13b.Q6_K.gguf](https://huggingface.co/TheBloke/Camel-Platypus2-13B-GGUF/blob/main/camel-platypus2-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [camel-platypus2-13b.Q8_0.gguf](https://huggingface.co/TheBloke/Camel-Platypus2-13B-GGUF/blob/main/camel-platypus2-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev

### In `text-generation-webui`

Under Download Model, you can enter the model repo: TheBloke/Camel-Platypus2-13B-GGUF and below it, a specific filename to download, such as: camel-platypus2-13b.Q4_K_M.gguf. Then click Download.

### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install 'huggingface-hub>=0.17.1'
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download TheBloke/Camel-Platypus2-13B-GGUF camel-platypus2-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download TheBloke/Camel-Platypus2-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Camel-Platypus2-13B-GGUF camel-platypus2-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->

<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 32 -m camel-platypus2-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers

#### First install the package

```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```

#### Simple example code to load one of these GGUF models

```python
from ctransformers import AutoModelForCausalLM

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Camel-Platypus2-13B-GGUF", model_file="camel-platypus2-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)

print(llm("AI is going to"))
```

## How to use with LangChain

Here are guides on using llama-cpp-python or ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

<!-- original-model-card start -->
# Original model card: garage-bAInd's Camel-Platypus2 13B

# Camel-Platypus2-13B

Camel-Platypus2-13B is a merge of [`garage-bAInd/Platypus2-13B`](https://huggingface.co/garage-bAInd/Platypus2-13B) and [`augtoma/qCammel-13`](https://huggingface.co/augtoma/qCammel-13).

![Platty](./Best_Platty_small.jpeg)

### Benchmark Metrics

| Metric                | Value |
|-----------------------|-------|
| MMLU (5-shot)         | 56.51 |
| ARC (25-shot)         | 60.75 |
| HellaSwag (10-shot)   | 83.61 |
| TruthfulQA (0-shot)   | 49.60 |
| Avg.                  | 62.62 |

We use state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard. Please see below for detailed instructions on reproducing benchmark results.

### Model Details

* **Trained by**: **Platypus2-13B** trained by Cole Hunter & Ariel Lee; **CAMEL-13B-Combined-Data** trained by StabilityAI
* **Model type:** **Camel-Platypus2-13B** is an auto-regressive language model based on the LLaMA 2 transformer architecture.
* **Language(s)**: English
* **License for base weights**: Non-Commercial Creative Commons license ([CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/))

### Prompt Template

```
### Instruction:

<prompt> (without the <>)

### Response:
```

### Training Dataset

`garage-bAInd/Platypus2-70B` trained using STEM and logic based dataset [`garage-bAInd/Open-Platypus`](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).

Please see our [paper](https://arxiv.org/abs/2308.07317) and [project webpage](https://platypus-llm.github.io) for additional information.

### Training Procedure

`garage-bAInd/Camel-Platypus2-13B` was instruction fine-tuned using LoRA on 1 A100 80GB. For training details and inference instructions please see the [Platypus](https://github.com/arielnlee/Platypus) GitHub repo.

### Reproducing Evaluation Results

Install LM Evaluation Harness:

```
# clone repository
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
# change to repo directory
cd lm-evaluation-harness
# check out the correct commit
git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463
# install
pip install -e .
```

Each task was evaluated on a single A100 80GB GPU.
ARC:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Camel-Platypus2-13B --tasks arc_challenge --batch_size 1 --no_cache --write_out --output_path results/Camel-Platypus2-13B/arc_challenge_25shot.json --device cuda --num_fewshot 25
```

HellaSwag:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Camel-Platypus2-13B --tasks hellaswag --batch_size 1 --no_cache --write_out --output_path results/Camel-Platypus2-13B/hellaswag_10shot.json --device cuda --num_fewshot 10
```

MMLU:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Camel-Platypus2-13B --tasks hendrycksTest-* --batch_size 1 --no_cache --write_out --output_path results/Camel-Platypus2-13B/mmlu_5shot.json --device cuda --num_fewshot 5
```

TruthfulQA:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Camel-Platypus2-13B --tasks truthfulqa_mc --batch_size 1 --no_cache --write_out --output_path results/Camel-Platypus2-13B/truthfulqa_0shot.json --device cuda
```

### Limitations and bias

Llama 2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2 and any fine-tuned variant's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.

Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/

### Citations

```bibtex
@article{platypus2023,
  title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs},
  author={Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz},
  booktitle={arXiv preprint arxiv:2308.07317},
  year={2023}
}
```
```bibtex
@misc{touvron2023llama,
    title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
    author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and others},
    year={2023},
    eprint={2307.09288},
    archivePrefix={arXiv},
}
```
```bibtex
@inproceedings{
  hu2022lora,
  title={Lo{RA}: Low-Rank Adaptation of Large Language Models},
  author={Edward J Hu and Yelong Shen and Phillip Wallis and Zeyuan Allen-Zhu and Yuanzhi Li and Shean Wang and Lu Wang and Weizhu Chen},
  booktitle={International Conference on Learning Representations},
  year={2022},
  url={https://openreview.net/forum?id=nZeVKeeFYf9}
}
```

<!-- original-model-card end -->
FFusion/FFXL400
FFusion
2023-09-13T21:57:32Z
577
12
diffusers
[ "diffusers", "safetensors", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "stable-diffusion", "text-to-image", "ffai", "en", "base_model:FFusion/FFusionXL-BASE", "doi:10.57967/hf/1095", "license:openrail++", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2023-09-08T14:22:11Z
---
license: openrail++
base_model: FFusion/FFusionXL-BASE
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- stable-diffusion
- text-to-image
- diffusers
- ffai
inference: true
widget:
- text: >-
    a dog in colorful exploding clouds, dreamlike surrealism colorful smoke and
    fire coming out of it, explosion of data fragments, exploding background,
    realistic explosion, 3d digital art
  example_title: Dogo FFusion
- text: >-
    a sprinkled donut sitting on top of a table, colorful hyperrealism,
    everything is made of candy, hyperrealistic digital painting, covered in
    sprinkles and crumbs, vibrant colors hyper realism, colorful smoke
    explosion background
  example_title: Donut FFusion
- text: >-
    a cup of coffee with a tree in it, surreal art, awesome great composition,
    surrealism, ice cubes in tree, colorful clouds, perfectly realistic yet
    surreal
  example_title: CoFFee FFusion
- text: >-
    brightly colored headphones with a splash of colorful paint splash, vibing
    to music, stunning artwork, music is life, beautiful digital artwork,
    concept art, cinematic, dramatic, intricate details, dark lighting
  example_title: Headset FFusion
- text: >-
    high-quality game character digital design, Unreal Engine, Water color
    painting, Mecha- Monstrous high quality game fantasy rpg character design,
    dark rainbow Fur Scarf, inside of a Superficial Outhouse, at Twilight,
    Overdetailed art
  example_title: Digital Fusion
language:
- en
thumbnail: >-
  https://huggingface.co/FFusion/400GB-LoraXL/resolve/main/images/image7sm.jpg
---

# FFXL400 Combined LoRA Model 🚀

Welcome to the FFXL400 combined LoRA model repository on Hugging Face! This model is a culmination of extensive research, bringing together the finest LoRAs from the [400GB-LoraXL repository](https://huggingface.co/FFusion/400GB-LoraXL). Our vision was to harness the power of multiple LoRAs, meticulously analyzing and integrating a select fraction of the blocks from each.

## 📦 Model Highlights

- **Innovative Combination**: This model is a strategic integration of LoRAs, maximizing the potential of each while creating a unified powerhouse.
- **Versatility**: The model is available in various formats including diffusers, safetensors (both fp 16 and 32), and an optimized ONNX FP16 version for DirectML, ensuring compatibility across AMD, Intel, Nvidia, and more.
- **Advanced Research**: Leveraging the latest in machine learning research, the model represents a state-of-the-art amalgamation of LoRAs, optimized for performance and accuracy.

## 🔍 Technical Insights

This model is a testament to the advancements in the field of AI and machine learning. It was crafted with precision, ensuring that:

- Only a small percentage of the blocks from the original LoRAs (UNet and text encoders) were utilized.
- The model is primed not just for inference but also for further training and refinement.
- It serves as a benchmark for testing and understanding the cumulative impact of multiple LoRAs when used in concert.

## 🎨 Usage

The FFXL400 model is designed for a multitude of applications. Whether you're delving into research, embarking on a new project, or simply experimenting, this model serves as a robust foundation. Use it to:

- Investigate the cumulative effects of merging multiple LoRAs.
- Dive deep into weighting experiments with multiple LoRAs.
- Explore the nuances and intricacies of integrated LoRAs.
## ⚠️ License & Usage Disclaimers

**Please review the full [license agreement](https://huggingface.co/FFusion/FFXL400/blob/main/LICENSE.md) before accessing or using the models.**

🔴 The models and weights available in this repository are **strictly for research and testing purposes**, with exceptions noted below. They are **not** generally intended for commercial use and are dependent on each individual LORA.

🔵 **Exception for Commercial Use:** The [FFusionXL-BASE](https://huggingface.co/FFusion/FFusionXL-BASE), [FFusion-BaSE](https://huggingface.co/FFusion/FFusion-BaSE), [di.FFUSION.ai-v2.1-768-BaSE-alpha](https://huggingface.co/FFusion/di.FFUSION.ai-v2.1-768-BaSE-alpha), and [di.ffusion.ai.Beta512](https://huggingface.co/FFusion/di.ffusion.ai.Beta512) models are trained by FFusion AI using images for which we hold licenses. Users are advised to primarily use these models for a safer experience. These particular models are allowed for commercial use.

🔴 **Disclaimer:** FFusion AI, in conjunction with Source Code Bulgaria Ltd and BlackswanTechnologies, **does not endorse or guarantee the content produced by the weights in each LORA**. There's potential for generating NSFW or offensive content. Collectively, we expressly disclaim responsibility for the outcomes and content produced by these weights.

🔴 **Acknowledgement:** The [FFusionXL-BASE](https://huggingface.co/FFusion/FFusionXL-BASE) model is a uniquely developed version by FFusion AI. Rights to this and associated modifications belong to FFusion AI and Source Code Bulgaria Ltd. Ensure adherence to both this license and any conditions set by Stability AI Ltd for referenced models.

## 📈 How to Use

The model can be easily integrated into your projects. Here's a quick guide on how to use the FFXL400 model:

1. **Loading the Model**:
```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("FFusion/FFXL400")
model = AutoModel.from_pretrained("FFusion/FFXL400")
```

2. **Performing Inference**:
```python
import torch  # needed for torch.no_grad below

input_text = "Your input here"
inputs = tokenizer(input_text, return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)
```

## Further Training

You can also use the FFXL400 as a starting point for further training. Simply load it into your training pipeline and proceed as you would with any other model. [Autotrain Advanced](https://github.com/huggingface/autotrain-advanced), [Kohya + Stable Diffusion XL](https://huggingface.co/docs/diffusers/main/en/training/lora#stable-diffusion-xl),

## 📚 Background

The FFXL400 is built upon the insights and data from the [400GB-LoraXL repository](https://huggingface.co/FFusion/400GB-LoraXL). Each LoRA in that collection was extracted using the Low-Rank Adaptation (LoRA) technique, providing a rich dataset for research and exploration. The FFXL400 is the pinnacle of that research, representing a harmonious blend of the best LoRAs.

## Library of Available LoRA Models 📚

![loraXL FFUsion](https://cdn-uploads.huggingface.co/production/uploads/6380cf05f496d57325c12194/XQlnis5W2-fgDnGZ60EK9.jpeg)

You can choose any of the models from our repository on Hugging Face or the upcoming repository on CivitAI.
Here's a list of available models with `lora_model_id = "FFusion/400GB-LoraXL"`: ``` lora_filename = - FFai.0001.4Guofeng4xl_V1125d.lora_Dim64.safetensors - FFai.0002.4Guofeng4xl_V1125d.lora_Dim8.safetensors - FFai.0003.4Guofeng4xl_V1125d.loraa.safetensors - FFai.0004.Ambiencesdxl_A1.lora.safetensors - FFai.0005.Ambiencesdxl_A1.lora_8.safetensors - FFai.0006.Angrasdxl10_V22.lora.safetensors - FFai.0007.Animaginexl_V10.lora.safetensors - FFai.0008.Animeartdiffusionxl_Alpha3.lora.safetensors - FFai.0009.Astreapixiexlanime_V16.lora.safetensors - FFai.0010.Bluepencilxl_V010.lora.safetensors - FFai.0011.Bluepencilxl_V021.lora.safetensors - FFai.0012.Breakdomainxl_V03d.lora.safetensors - FFai.0013.Canvasxl_Bfloat16v002.lora.safetensors - FFai.0014.Cherrypickerxl_V20.lora.safetensors - FFai.0015.Copaxtimelessxlsdxl1_V44.lora.safetensors - FFai.0016.Counterfeitxl-Ffusionai-Alpha-Vae.lora.safetensors - FFai.0017.Counterfeitxl_V10.lora.safetensors - FFai.0018.Crystalclearxl_Ccxl.lora.safetensors - FFai.0019.Deepbluexl_V006.lora.safetensors - FFai.0020.Dream-Ffusion-Shaper.lora.safetensors - FFai.0021.Dreamshaperxl10_Alpha2xl10.lora.safetensors - FFai.0022.Duchaitenaiartsdxl_V10.lora.safetensors - FFai.0023.Dynavisionxlallinonestylized_Beta0371bakedvae.lora.safetensors - FFai.0024.Dynavisionxlallinonestylized_Beta0411bakedvae.lora.safetensors - FFai.0025.Fantasticcharacters_V55.lora.safetensors - FFai.0026.Fenrisxl_V55.lora.safetensors - FFai.0027.Fudukimix_V10.lora.safetensors - FFai.0028.Infinianimexl_V16.lora.safetensors - FFai.0029.Juggernautxl_Version1.lora_1.safetensors - FFai.0030.Lahmysterioussdxl_V330.lora.safetensors - FFai.0031.Mbbxlultimate_V10rc.lora.safetensors - FFai.0032.Miamodelsfwnsfwsdxl_V30.lora.safetensors - FFai.0033.Morphxl_V10.lora.safetensors - FFai.0034.Nightvisionxlphotorealisticportrait_Beta0681bakedvae.lora_1.safetensors - FFai.0035.Osorubeshialphaxl_Z.lora.safetensors - FFai.0036.Physiogenxl_V04.lora.safetensors - FFai.0037.Protovisionxlhighfidelity3d_Beta0520bakedvae.lora.safetensors - FFai.0038.Realitycheckxl_Alpha11.lora.safetensors - FFai.0039.Realmixxl_V10.lora.safetensors - FFai.0040.Reproductionsdxl_V31.lora.safetensors - FFai.0041.Rundiffusionxl_Beta.lora.safetensors - FFai.0042.Samaritan3dcartoon_V40sdxl.lora.safetensors - FFai.0043.Sdvn6realxl_Detailface.lora.safetensors - FFai.0044.Sdvn7realartxl_Beta2.lora.safetensors - FFai.0045.Sdxl10arienmixxlasian_V10.lora.safetensors - FFai.0046.Sdxlbasensfwfaces_Sdxlnsfwfaces03.lora.safetensors - FFai.0047.Sdxlfaetastic_V10.lora.safetensors - FFai.0048.Sdxlfixedvaefp16remove_Basefxiedvaev2fp16.lora.safetensors - FFai.0049.Sdxlnijiv4_Sdxlnijiv4.lora.safetensors - FFai.0050.Sdxlronghua_V11.lora.safetensors - FFai.0051.Sdxlunstablediffusers_V5unchainedslayer.lora.safetensors - FFai.0052.Sdxlyamersanimeultra_Yamersanimev2.lora.safetensors - FFai.0053.Shikianimexl_V10.lora.safetensors - FFai.0054.Spectrumblendx_V10.lora.safetensors - FFai.0055.Stablediffusionxl_V30.lora.safetensors - FFai.0056.Talmendoxlsdxl_V11beta.lora.safetensors - FFai.0057.Wizard_V10.lora.safetensors - FFai.0058.Wyvernmix15xl_Xlv11.lora.safetensors - FFai.0059.Xl13asmodeussfwnsfw_V17bakedvae.lora.safetensors - FFai.0060.Xl3experimentalsd10xl_V10.lora.safetensors - FFai.0061.Xl6hephaistossd10xlsfw_V21bakedvaefp16fix.lora.safetensors - FFai.0062.Xlperfectdesign_V2ultimateartwork.lora.safetensors - FFai.0063.Xlyamersrealistic_V3.lora.safetensors - FFai.0064.Xxmix9realisticsdxl_Testv20.lora.safetensors - FFai.0065.Zavychromaxl_B2.lora.safetensors ``` ## 🎉 
Acknowledgements & Citations A huge shoutout to the community for their continued support and feedback. Together, we are pushing the boundaries of what's possible with machine learning! We would also like to acknowledge and give credit to the following projects and authors: - **ComfyUI**: We've used and modified portions of [ComfyUI](https://github.com/comfyanonymous/ComfyUI) for our work. - **kohya-ss/sd-scripts and bmaltais**: Our work also incorporates modifications from [kohya-ss/sd-scripts](https://github.com/kohya-ss/sd-scripts). - **lora-inspector**: We've benefited from the [lora-inspector](https://github.com/rockerBOO/lora-inspector) project. - **KohakuBlueleaf**: Special mention to KohakuBlueleaf for their invaluable contributions. [![400GB FFusion Lora XL 5](https://huggingface.co/FFusion/400GB-LoraXL/resolve/main/images/image5sm.jpg)](https://huggingface.co/FFusion/400GB-LoraXL/resolve/main/images/image5.jpg) [![400GB FFusion Lora XL 6](https://huggingface.co/FFusion/400GB-LoraXL/resolve/main/images/image6sm.jpg)](https://huggingface.co/FFusion/400GB-LoraXL/resolve/main/images/image6.jpg) [![400GB FFusion Lora XL 7](https://huggingface.co/FFusion/400GB-LoraXL/resolve/main/images/image7sm.jpg)](https://huggingface.co/FFusion/400GB-LoraXL/resolve/main/images/image7.jpg) [![400GB FFusion Lora XL 9](https://huggingface.co/FFusion/400GB-LoraXL/resolve/main/images/image9.jpg)](https://huggingface.co/FFusion/400GB-LoraXL/tree/main) ### HowMuch ??? ![60% Works](https://img.shields.io/badge/60%25%20of%20the%20Time-It%20Works%20Every%20Time-green) **Have you ever asked yourself, "How much space have I wasted on `*.ckpt` and `*.safetensors` checkpoints?"** 🤔 Say hello to HowMuch: Checking checkpoint wasted space since... well, now! 😄 Enjoy this somewhat unnecessary, yet **"fun-for-the-whole-family"** DiskSpaceAnalyzer tool. 😄 ## Overview `HowMuch` is a Python tool designed to scan your drives (or a specified directory) and report on the total space used by files with specific extensions, mainly `.ckpt` and `.safetensors`. It outputs: - The total storage capacity of each scanned drive or directory. - The space occupied by `.ckpt` and `.safetensors` files. - The free space available. - A neat bar chart visualizing the above data. ## Installation [GitHub](https://github.com/1e-2/HowMuch) ### From PyPI You can easily install `HowMuch` via pip: ```bash pip install howmuch ``` ### From Source 1. Clone the repository: ```bash git clone https://github.com/1e-2/HowMuch.git ``` 2. Navigate to the cloned directory and install: ```bash cd HowMuch pip install . ``` ## Usage Run the tool without any arguments to scan all drives: ```bash howmuch ``` Or, specify a particular directory or drive to scan: ```bash howmuch --scan C: ``` ### 🌐 **Contact Information** The **FFusion.ai** project is proudly maintained by **Source Code Bulgaria Ltd** & **Black Swan Technologies**. 📧 Reach us at [[email protected]](mailto:[email protected]) for any inquiries or support. #### 🌌 **Find us on:** - 🐙 [GitHub](https://github.com/1e-2) - 😊 [Hugging Face](https://huggingface.co/FFusion/) - 💡 [Civitai](https://civitai.com/user/idle/models) 🔐 **Security powered by** [Comodo.BG](http://Comodo.BG) & [Preasidium.CX](http://Preasidium.CX) 🚀 Marketing by [Гугъл.com](http://Гугъл.com) 📩 [![Email](https://img.shields.io/badge/Email-enquiries%40ffusion.ai-blue?style=for-the-badge&logo=gmail)](mailto:[email protected]) 🌍 Sofia Istanbul London --- We hope the FFXL400 serves as a valuable asset in your AI journey. 
We encourage feedback, contributions, and insights from the community to further refine and enhance this model. Together, let's push the boundaries of what's possible! ![ffusionai-logo.png](https://cdn-uploads.huggingface.co/production/uploads/6380cf05f496d57325c12194/EjDa_uGcOoH2cXM2K-NYn.png)
shaowenchen/baichuan2-7b-chat-gguf
shaowenchen
2023-09-12T06:42:19Z
577
13
null
[ "gguf", "baichuan", "chinese", "text-generation", "zh", "en", "license:other", "region:us" ]
text-generation
2023-09-11T10:13:53Z
--- inference: false language: - zh - en license: other model_creator: baichuan-inc model_link: https://www.modelscope.cn/models/baichuan-inc/Baichuan2-7B-Chat model_name: Baichuan2-7B-Chat model_type: baichuan pipeline_tag: text-generation quantized_by: shaowenchen tasks: - text2text-generation tags: - gguf - baichuan - chinese --- ## Provided files | Name | Quant method | Size | | ----------------------------- | ------------ | ------ | | baichuan2-7b-chat.Q2_K.gguf | Q2_K | 3.0 GB | | baichuan2-7b-chat.Q3_K.gguf | Q3_K | 3.5 GB | | baichuan2-7b-chat.Q3_K_L.gguf | Q3_K_L | 3.8 GB | | baichuan2-7b-chat.Q3_K_S.gguf | Q3_K_S | 3.2 GB | | baichuan2-7b-chat.Q4_0.gguf | Q4_0 | 4.1 GB | | baichuan2-7b-chat.Q4_1.gguf | Q4_1 | 4.5 GB | | baichuan2-7b-chat.Q4_K.gguf | Q4_K | 4.3 GB | | baichuan2-7b-chat.Q4_K_S.gguf | Q4_K_S | 4.1 GB | | baichuan2-7b-chat.Q5_0.gguf | Q5_0 | 4.9 GB | | baichuan2-7b-chat.Q5_1.gguf | Q5_1 | 5.3 GB | | baichuan2-7b-chat.Q5_K.gguf | Q5_K | 5.0 GB | | baichuan2-7b-chat.Q5_K_S.gguf | Q5_K_S | 4.9 GB | | baichuan2-7b-chat.Q6_K.gguf | Q6_K | 5.7 GB | | baichuan2-7b-chat.Q8_0.gguf | Q8_0 | 7.4 GB | | baichuan2-7b-chat.gguf | full | 14 GB | Usage: ``` docker run --rm -it -p 8000:8000 -v /path/to/models:/models -e MODEL=/models/gguf-model-name.gguf hubimage/llama-cpp-python:latest ``` and you can view http://localhost:8000/docs to see the swagger UI. ## Provided images | Name | Quant method | Size | | ------------------------------------------- | ------------ | ------- | | `shaowenchen/baichuan2-7b-chat-gguf:Q2_K` | Q2_K | 7.59 GB | | `shaowenchen/baichuan2-7b-chat-gguf:Q3_K` | Q3_K | 8.61 GB | | `shaowenchen/baichuan2-7b-chat-gguf:Q3_K_L` | Q3_K_L | 9.23 GB | | `shaowenchen/baichuan2-7b-chat-gguf:Q3_K_S` | Q3_K_S | 7.93 GB | | `shaowenchen/baichuan2-7b-chat-gguf:Q4_0` | Q4_0 | 9.6 GB | Usage: ``` docker run --rm -p 8000:8000 shaowenchen/baichuan2-7b-chat-gguf:Q2_K ``` and you can view http://localhost:8000/docs to see the swagger UI.
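The container serves an OpenAI-compatible REST API via llama-cpp-python. A minimal query sketch against a running container (the prompt is just an example; see the /docs page for the full schema):

```
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"prompt": "你好", "max_tokens": 64}'
```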
dima806/closed_eyes_image_detection
dima806
2024-02-24T19:35:29Z
577
2
transformers
[ "transformers", "pytorch", "safetensors", "vit", "image-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-10-22T18:39:37Z
---
license: apache-2.0
metrics:
- accuracy
- f1
---

Returns whether an eye is open or closed, given an image of the eye and its surrounding area. See https://www.kaggle.com/code/dima806/closed-eye-image-detection-vit for more details.

```
Classification report:

              precision    recall  f1-score   support

    closeEye     0.9921    0.9888    0.9904      4296
     openEye     0.9889    0.9921    0.9905      4295

    accuracy                         0.9905      8591
   macro avg     0.9905    0.9905    0.9905      8591
weighted avg     0.9905    0.9905    0.9905      8591
```
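A minimal usage sketch (assuming the standard transformers image-classification pipeline; the image path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="dima806/closed_eyes_image_detection")
print(classifier("eye_crop.jpg"))  # e.g. [{'label': 'closeEye', 'score': ...}, {'label': 'openEye', 'score': ...}]
```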