| column | type | range / values |
| --- | --- | --- |
| modelId | string | lengths 5 to 122 |
| author | string | lengths 2 to 42 |
| last_modified | unknown | |
| downloads | int64 | 0 to 738M |
| likes | int64 | 0 to 11k |
| library_name | string | 245 distinct values |
| tags | sequence | lengths 1 to 4.05k |
| pipeline_tag | string | 48 distinct values |
| createdAt | unknown | |
| card | string | lengths 1 to 901k |
m-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alb3rt0
m-polignano-uniba
"2021-05-19T22:20:54Z"
1,039
4
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-03-02T23:29:05Z"
Entry not found
susnato/clvp_dev
susnato
"2023-10-29T09:16:52Z"
1,039
0
transformers
[ "transformers", "pytorch", "clvp", "feature-extraction", "endpoints_compatible", "region:us" ]
feature-extraction
"2023-07-11T07:14:07Z"
**DISCLAIMER**: I do not own any of the weights in this repository. All weights belong to James Betker, the author of the paper "Better speech synthesis through scaling". I am storing the weights (temporarily) for the `tortoise-tts` integration into Hugging Face. Please refer to this [PR](https://github.com/huggingface/transformers/pull/24745) to learn more. <h3><u>About</u></h3> The CLVP model is an integral part of `tortoise-tts`, presented in the paper "Better speech synthesis through scaling" by James Betker. CLVP uses an architecture similar to the CLIP text encoder, except it uses two of them: one for text tokens and the other for MEL tokens.
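A minimal usage sketch for pulling text embeddings out of this checkpoint. The class names and the `get_text_features` call below are assumptions based on the transformers CLVP integration referenced in the PR above, not something this card guarantees:

```python
# Hypothetical sketch: assumes the CLVP classes from the transformers
# integration referenced in the PR above exist in your installed version.
import torch
from transformers import AutoTokenizer, ClvpModelForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("susnato/clvp_dev")
model = ClvpModelForConditionalGeneration.from_pretrained("susnato/clvp_dev")

inputs = tokenizer("Better speech synthesis through scaling", return_tensors="pt")
with torch.no_grad():
    # One of the two CLIP-style encoders: text tokens -> embedding space.
    text_embeds = model.get_text_features(input_ids=inputs["input_ids"])
print(text_embeds.shape)
```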
TheBloke/Llama2-70B-OASST-SFT-v10-GGUF
TheBloke
"2023-09-27T12:46:16Z"
1,039
10
transformers
[ "transformers", "gguf", "llama", "sft", "text-generation", "en", "dataset:rombodawg/LosslessMegaCodeTrainingV2_1m_Evol_Uncensored", "dataset:OpenAssistant/oasst1", "dataset:shahules786/orca-best", "dataset:argilla/databricks-dolly-15k-curated-multilingual", "base_model:OpenAssistant/llama2-70b-oasst-sft-v10", "license:llama2", "text-generation-inference", "region:us" ]
text-generation
"2023-08-25T22:48:44Z"
--- language: - en license: llama2 library_name: transformers tags: - sft datasets: - rombodawg/LosslessMegaCodeTrainingV2_1m_Evol_Uncensored - OpenAssistant/oasst1 - shahules786/orca-best - argilla/databricks-dolly-15k-curated-multilingual model_name: Llama2 70B SFT v10 base_model: OpenAssistant/llama2-70b-oasst-sft-v10 inference: false model_creator: OpenAssistant model_type: llama pipeline_tag: text-generation prompt_template: '<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Llama2 70B SFT v10 - GGUF - Model creator: [OpenAssistant](https://huggingface.co/OpenAssistant) - Original model: [Llama2 70B SFT v10](https://huggingface.co/OpenAssistant/llama2-70b-oasst-sft-v10) <!-- description start --> ## Description This repo contains GGUF format model files for [OpenAssistant's Llama2 70B SFT v10](https://huggingface.co/OpenAssistant/llama2-70b-oasst-sft-v10). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata, and is designed to be extensible. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Llama2-70B-OASST-SFT-v10-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama2-70B-OASST-SFT-v10-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama2-70B-OASST-SFT-v10-GGUF) * [OpenAssistant's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/OpenAssistant/llama2-70b-oasst-sft-v10) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: ChatML ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how.
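As a quick sanity check on these figures, here is the arithmetic for GGML_TYPE_Q4_K, assuming the usual super-block layout of one fp16 super-block scale plus one fp16 super-block min alongside the 6-bit per-block scales and mins (the layout detail is an assumption, not stated above):

```
8 blocks x 32 weights                  = 256 weights per super-block
quantized weights: 256 x 4 bits        = 1024 bits
per-block scales + mins: 8 x 2 x 6     =   96 bits
super-block fp16 scale + min: 2 x 16   =   32 bits
total: 1152 bits / 256 weights         = 4.5 bpw (matching the figure above)
```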
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [llama2-70b-oasst-sft-v10.Q2_K.gguf](https://huggingface.co/TheBloke/Llama2-70B-OASST-SFT-v10-GGUF/blob/main/llama2-70b-oasst-sft-v10.Q2_K.gguf) | Q2_K | 2 | 29.28 GB| 31.78 GB | smallest, significant quality loss - not recommended for most purposes | | [llama2-70b-oasst-sft-v10.Q3_K_S.gguf](https://huggingface.co/TheBloke/Llama2-70B-OASST-SFT-v10-GGUF/blob/main/llama2-70b-oasst-sft-v10.Q3_K_S.gguf) | Q3_K_S | 3 | 29.92 GB| 32.42 GB | very small, high quality loss | | [llama2-70b-oasst-sft-v10.Q3_K_M.gguf](https://huggingface.co/TheBloke/Llama2-70B-OASST-SFT-v10-GGUF/blob/main/llama2-70b-oasst-sft-v10.Q3_K_M.gguf) | Q3_K_M | 3 | 33.19 GB| 35.69 GB | very small, high quality loss | | [llama2-70b-oasst-sft-v10.Q3_K_L.gguf](https://huggingface.co/TheBloke/Llama2-70B-OASST-SFT-v10-GGUF/blob/main/llama2-70b-oasst-sft-v10.Q3_K_L.gguf) | Q3_K_L | 3 | 36.15 GB| 38.65 GB | small, substantial quality loss | | [llama2-70b-oasst-sft-v10.Q4_0.gguf](https://huggingface.co/TheBloke/Llama2-70B-OASST-SFT-v10-GGUF/blob/main/llama2-70b-oasst-sft-v10.Q4_0.gguf) | Q4_0 | 4 | 38.87 GB| 41.37 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [llama2-70b-oasst-sft-v10.Q4_K_S.gguf](https://huggingface.co/TheBloke/Llama2-70B-OASST-SFT-v10-GGUF/blob/main/llama2-70b-oasst-sft-v10.Q4_K_S.gguf) | Q4_K_S | 4 | 39.08 GB| 41.58 GB | small, greater quality loss | | [llama2-70b-oasst-sft-v10.Q4_K_M.gguf](https://huggingface.co/TheBloke/Llama2-70B-OASST-SFT-v10-GGUF/blob/main/llama2-70b-oasst-sft-v10.Q4_K_M.gguf) | Q4_K_M | 4 | 41.42 GB| 43.92 GB | medium, balanced quality - recommended | | [llama2-70b-oasst-sft-v10.Q5_0.gguf](https://huggingface.co/TheBloke/Llama2-70B-OASST-SFT-v10-GGUF/blob/main/llama2-70b-oasst-sft-v10.Q5_0.gguf) | Q5_0 | 5 | 47.46 GB| 49.96 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [llama2-70b-oasst-sft-v10.Q5_K_S.gguf](https://huggingface.co/TheBloke/Llama2-70B-OASST-SFT-v10-GGUF/blob/main/llama2-70b-oasst-sft-v10.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB| 49.96 GB | large, low quality loss - recommended | | [llama2-70b-oasst-sft-v10.Q5_K_M.gguf](https://huggingface.co/TheBloke/Llama2-70B-OASST-SFT-v10-GGUF/blob/main/llama2-70b-oasst-sft-v10.Q5_K_M.gguf) | Q5_K_M | 5 | 48.76 GB| 51.26 GB | large, very low quality loss - recommended | | llama2-70b-oasst-sft-v10.Q6_K.gguf | Q6_K | 6 | 56.59 GB| 59.09 GB | very large, extremely low quality loss | | llama2-70b-oasst-sft-v10.Q8_0.gguf | Q8_0 | 8 | 73.29 GB| 75.79 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. ### Q6_K and Q8_0 files are split and require joining **Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files. 
<details> <summary>Click for instructions regarding Q6_K and Q8_0 files</summary> ### q6_K Please download: * `llama2-70b-oasst-sft-v10.Q6_K.gguf-split-a` * `llama2-70b-oasst-sft-v10.Q6_K.gguf-split-b` ### q8_0 Please download: * `llama2-70b-oasst-sft-v10.Q8_0.gguf-split-a` * `llama2-70b-oasst-sft-v10.Q8_0.gguf-split-b` To join the files, do the following: Linux and macOS: ``` cat llama2-70b-oasst-sft-v10.Q6_K.gguf-split-* > llama2-70b-oasst-sft-v10.Q6_K.gguf && rm llama2-70b-oasst-sft-v10.Q6_K.gguf-split-* cat llama2-70b-oasst-sft-v10.Q8_0.gguf-split-* > llama2-70b-oasst-sft-v10.Q8_0.gguf && rm llama2-70b-oasst-sft-v10.Q8_0.gguf-split-* ``` Windows command line: ``` COPY /B llama2-70b-oasst-sft-v10.Q6_K.gguf-split-a + llama2-70b-oasst-sft-v10.Q6_K.gguf-split-b llama2-70b-oasst-sft-v10.Q6_K.gguf del llama2-70b-oasst-sft-v10.Q6_K.gguf-split-a llama2-70b-oasst-sft-v10.Q6_K.gguf-split-b COPY /B llama2-70b-oasst-sft-v10.Q8_0.gguf-split-a + llama2-70b-oasst-sft-v10.Q8_0.gguf-split-b llama2-70b-oasst-sft-v10.Q8_0.gguf del llama2-70b-oasst-sft-v10.Q8_0.gguf-split-a llama2-70b-oasst-sft-v10.Q8_0.gguf-split-b ``` </details> <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Llama2-70B-OASST-SFT-v10-GGUF and below it, a specific filename to download, such as: llama2-70b-oasst-sft-v10.q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub>=0.17.1 ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Llama2-70B-OASST-SFT-v10-GGUF llama2-70b-oasst-sft-v10.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Llama2-70B-OASST-SFT-v10-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Llama2-70B-OASST-SFT-v10-GGUF llama2-70b-oasst-sft-v10.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command. 
</details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m llama2-70b-oasst-sft-v10.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`. For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md). ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model from Python using ctransformers #### First install the package ```bash # Base ctransformers with no GPU acceleration pip install ctransformers>=0.2.24 # Or with CUDA GPU acceleration pip install ctransformers[cuda]>=0.2.24 # Or with ROCm GPU acceleration CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers ``` #### Simple example code to load one of these GGUF models ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Llama2-70B-OASST-SFT-v10-GGUF", model_file="llama2-70b-oasst-sft-v10.q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python or ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: OpenAssistant's Llama2 70B SFT v10 # Open-Assistant Llama2 70B SFT v10 This model is an Open-Assistant fine-tuning of Meta's [Llama2 70B](https://huggingface.co/meta-llama/Llama-2-70b) LLM. It was fine-tuned in two stages, first on a mix of synthetic instrunctions and coding tasks and then in a "polishing" stage on the best human demonstrations collected at [open-assistant.io](https://open-assistant.io/) up to July 23, 2023 (see [Configuration Details](#configuration-details) below). 
## Model Details - **Finetuned from:** [meta-llama/Llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b) via [epfLLM/Megatron-LLM](https://github.com/epfLLM/Megatron-LLM) - **Model type:** Causal decoder-only transformer language model - **Language:** English (and limited capabilities in German, Spanish, French, Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish) - **Weights & Biases training logs:** [Stage 1](https://wandb.ai/open-assistant/public-sft/runs/run45_oasst_pre10_llama2_70b) (1 epoch pretrain-mix, 12k steps), [Stage 2](https://wandb.ai/open-assistant/public-sft/runs/run46_oasst_sft10_llama2_70b) (3 epochs oasst top-1, 519 steps) - **Demo:** [Continuations for 250 random prompts (TGI, 4bit nf4 quantization)](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-sft%2F2023-08-22_OpenAssistant_llama2-70b-oasst-sft-v10_sampling_noprefix2_nf4.json%0A) - **Evaluation** [FastEval-OpenAssistant Overview](https://tju01.github.io/FastEval-OpenAssistant/) (using [FastEval](https://github.com/FastEval/FastEval) & [vLLM](https://github.com/vllm-project/vllm)) - **License:** [LLAMA 2 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt) - **Contact:** [Open-Assistant Discord](https://ykilcher.com/open-assistant-discord) ## Prompting / Prompt Template Due to public demand (see [survey](https://twitter.com/erhartford/status/1682403597525430272)) we changed the prompt-template for this model from custom prompter/assistant tokens to OpenAI's [chatml](https://github.com/openai/openai-python/blob/main/chatml.md) standard prompt format. We hope that this leads to greater compatibility with chat inference/frontend applications. Prompt dialogue template: ``` """ <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant """ ``` The model input can contain multiple conversation turns between user and assistant, e.g. ``` <|im_start|>user {prompt 1}<|im_end|> <|im_start|>assistant {reply 1}<|im_end|> <|im_start|>user {prompt 2}<|im_end|> <|im_start|>assistant (...) ``` The model was partly trained with orca system messages. For inference we recommend to use the official [Llama2 system message](https://github.com/facebookresearch/llama/blob/ea9f33d6d3ea8ed7d560d270986407fd6c2e52b7/example_chat_completion.py#L57-L61): ``` <|im_start|>system You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <|im_end|> ``` ### Credits & Special Thanks - Thanks to [Meta AI](https://ai.meta.com/) for training and releasing the Llama2 model. - Distributed training support was provided by EPFL's [Machine Learning and Optimization Laboratory](https://www.epfl.ch/labs/mlo/), and [Natural Language Processing Lab](https://nlp.epfl.ch/). - The open-source [epfLLM/Megatron-LLM](https://github.com/epfLLM/Megatron-LLM) trainer was used for fine-tuning. 
- [rombodawg](https://huggingface.co/rombodawg) curated the [LosslessMegaCodeTrainingV2_1m_Evol_Uncensored](https://huggingface.co/datasets/rombodawg/LosslessMegaCodeTrainingV2_1m_Evol_Uncensored) dataset. - [ehartford](https://huggingface.co/ehartford) generated and published the [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin) and the [ehartford/oa_leet10k](https://huggingface.co/datasets/ehartford/oa_leet10k) datasets. - [Argilla](https://huggingface.co/argilla) curated and published the [argilla/databricks-dolly-15k-curated-multilingual](https://huggingface.co/datasets/argilla/databricks-dolly-15k-curated-multilingual) dataset. - [shahules786](https://github.com/shahules786) de-duped and filtered the Dolphin dataset with a cluster-center approach and generated the orca-best (orca-chat) dataset. - [andreaskoepf](https://github.com/andreaskoepf/) prepared & orchestrated the training. We want to especially thank everyone who contributed to the crowd-sourced Open-Assistant dataset creation on https://open-assistant.io/ - without you this project would not have been possible. ## Ethical Considerations and Limitations Testing conducted to date has been in English and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, the potential outputs of llama2-70b-oasst-sft-v10 cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of llama2-70b-oasst-sft-v10, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see Meta's [Responsible Use Guide](https://ai.meta.com/llama/responsible-use-guide/). ## Note regarding inference with TGI During evaluation we noticed that this 70B model produced extremely poor outputs when it was loaded in 16-bit precision and sharded in [TGI](https://github.com/huggingface/text-generation-inference). In contrast, the model could be evaluated without problems using [vLLM](https://github.com/vllm-project/vllm). The model also worked decently well when loaded with TGI on a single GPU, nf4-quantized via [TimDettmers/bitsandbytes](https://github.com/TimDettmers/bitsandbytes). We will get in touch with the TGI authors to find out why sharded 16-bit inference doesn't work as expected. ## Configuration Details The "pretokenizer" utility used to tokenize the datamix is part of the Open-Assistant GitHub repository and can be found here: [model/pretokenizer](https://github.com/LAION-AI/Open-Assistant/tree/main/model/pretokenizer). ### Stage 1 Pretokenizer Configuration Entries of the dataset with assistant replies shorter than 25 tokens were excluded from training.
``` oasst_pre10_min25: datasets: - megacode2: fraction: 0.5 val_split: 0.01 max_val_set: 1000 - orca-chat: val_split: 0.01 max_val_set: 1000 - dolly15k_multilingual: val_split: 0.05 max_val_set: 300 - oa_leet10k: val_split: 0.05 max_val_set: 250 output_dir: "output/oasst_pre10_min25" filename_prefix: "oasst_pre10" min_assistant_tokens: 25 ``` Stage 1 dataset statistics: ``` # Stats for output/oasst_pre10_min25_llama2 ## Stats for 'Subset of InstructionDataset (megacode2)' (466364 samples (50.0%)) ----------------- Accepted: 398223/466364 (85.4%) Accepted tokens: 167676873 Skipped: 68141 (14.6%) Min tokens per sample: 36 Max tokens per sample: 11810 Avg tokens per sample: 421.063 ----------------- ## Stats for 'Subset of OrcaChat (orca-chat)' (325616 samples (100.0%)) ----------------- Accepted: 325616/325616 (100.0%) Accepted tokens: 178307574 Skipped: 0 (0.0%) Min tokens per sample: 105 Max tokens per sample: 10408 Avg tokens per sample: 547.601 ----------------- ## Stats for 'Subset of Dolly15kMultilingual' (57020 samples (100.0%)) ----------------- Accepted: 47494/57020 (83.3%) Accepted tokens: 13883177 Skipped: 9526 (16.7%) Min tokens per sample: 34 Max tokens per sample: 9172 Avg tokens per sample: 292.314 ----------------- ## Stats for 'Subset of InstructionDataset (oa_leet10k)' (22236 samples (100.0%)) ----------------- Accepted: 22236/22236 (100.0%) Accepted tokens: 15905296 Skipped: 0 (0.0%) Min tokens per sample: 168 Max tokens per sample: 10588 Avg tokens per sample: 715.295 ----------------- ## Stats for 'total' (871236 samples (100.0%)) ----------------- Accepted: 793569/871236 (91.1%) Accepted tokens: 375772920 Skipped: 77667 (8.9%) Min tokens per sample: 34 Max tokens per sample: 11810 Avg tokens per sample: 473.523 ----------------- ``` ### Stage 2 Pretokenizer Configuration ``` oasst_top1: datasets: - oasst_export: lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk" input_file_path: 2023-07-23_oasst_ready.tar.gz top_k: 1 val_split: 0.05 output_dir: "output/oasst_top1_2023-07-23" filename_prefix: "oasst_top1" ``` Stage 2 dataset statistics: ``` # Stats for output/oasst_top1_2023-07-23_llama2 ## Stats for 'ListDataset' (11441 samples (100.0%)) ----------------- Accepted: 11441/11441 (100.0%) Accepted tokens: 5315368 Skipped: 0 (0.0%) Min tokens per sample: 20 Max tokens per sample: 5407 Avg tokens per sample: 464.58945896337735 ----------------- ## Stats for 'total' (11441 samples (100.0%)) ----------------- Accepted: 11441/11441 (100.0%) Accepted tokens: 5315368 Skipped: 0 (0.0%) Min tokens per sample: 20 Max tokens per sample: 5407 Avg tokens per sample: 464.58945896337735 ----------------- ``` ### Megatron Fine-Tuning Arguments for Stage 1 (Instruction Tuning): ``` --tensor_model_parallel_size 8 --pipeline_model_parallel_size 4 --load ./checkpoints/llama2-70b-tp8-pp4 --save ./checkpoints/llama2-70b-tp8-pp4-oasst_pre10 --tensorboard_dir ./checkpoints/llama2-70b-tp8-pp4-oasst_pre10/logging --data_path ./data/oasst_pre10_min25_llama2/oasst_sft10-train --model_name llama2 --tokenizer_type SentencePieceTokenizer --bf16 --global_batch_size 64 --micro_batch_size 2 --vocab_file=./llama2/Llama-2-7b/tokenizer.model --use_rms_norm --glu_activation swiglu --no_tie_embed_logits --vocab_extra_ids_list "\"<|im_start|>,<|im_end|>\"" --layernorm_epsilon 1e-5 --use_flash_attn --no_bias_gelu_fusion --seq_length 4096 --max_position_embeddings 4096 --log_interval 1 --save_interval 500 --eval_interval 50 --eval_iters 10 --hidden_dropout 0.0 --position_embedding_type rotary 
--no_bias_dropout_fusion --use_checkpoint_args --train_iters 12000 --attention_dropout 0.0 --adam_beta1 0.9 --adam_beta2 0.95 --adam_eps 1e-12 --lr_decay_style cosine --lr_warmup_iters 100 --lr 1e-5 --min_lr 1e-6 --weight_decay 0.000001 --sequence_parallel --recompute_granularity selective --log_timers_to_tensorboard --rope_scaling_factor 1.0 --wandb_logger ``` ### Megatron Fine-Tuning Arguments for Stage 2 (OASST Polishing, LIMA Dropout): ``` --tensor_model_parallel_size 8 --pipeline_model_parallel_size 4 --load ./checkpoints/llama2-70b-tp8-pp4-oasst_pre10 --save ./checkpoints/llama2-70b-tp8-pp4-oasst_sft10 --tensorboard_dir ./checkpoints/llama2-70b-tp8-pp4-oasst_sft10/logging --data_path ./data/oasst_top1_2023-07-23_llama2/oasst_top1-train --model_name llama2 --tokenizer_type SentencePieceTokenizer --bf16 --global_batch_size 64 --micro_batch_size 2 --vocab_file=./llama2/Llama-2-7b/tokenizer.model --use_rms_norm --glu_activation swiglu --no_tie_embed_logits --vocab_extra_ids_list "\"<|im_start|>,<|im_end|>\"" --layernorm_epsilon 1e-5 --use_flash_attn --no_bias_gelu_fusion --seq_length 4096 --max_position_embeddings 4096 --log_interval 1 --save_interval 346 --eval_interval 50 --eval_iters 10 --hidden_dropout 0.25 --lima_dropout --position_embedding_type rotary --no_bias_dropout_fusion --use_checkpoint_args --train_iters 519 --attention_dropout 0.0 --adam_beta1 0.9 --adam_beta2 0.95 --adam_eps 1e-12 --lr_decay_style cosine --lr_warmup_iters 100 --lr 1e-5 --min_lr 1e-6 --weight_decay 0.000001 --sequence_parallel --recompute_granularity selective --log_timers_to_tensorboard --rope_scaling_factor 1.0 --finetune --wandb_logger ``` <!-- original-model-card end -->
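To make the ChatML template described earlier in this card concrete, here is a small illustrative helper (not part of the original card) that assembles a multi-turn conversation into the prompt string, e.g. for passing to `llama-cpp-python` or `llama.cpp -p`:

```python
# Illustrative helper (an assumption, not from the original card): build the
# ChatML prompt string this model expects from a list of chat messages.
def build_chatml_prompt(messages, add_generation_prompt=True):
    """messages: list of {"role": "system"|"user"|"assistant", "content": str}"""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    if add_generation_prompt:
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful, respectful and honest assistant."},
    {"role": "user", "content": "Summarise the GGUF format in one sentence."},
])
```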
Amu/tao-8k
Amu
"2023-12-03T11:46:16Z"
1,039
35
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "mteb", "zh", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
"2023-11-04T14:14:00Z"
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb model-index: - name: tao-8k-origin results: - task: type: STS dataset: type: C-MTEB/AFQMC name: MTEB AFQMC config: default split: validation revision: None metrics: - type: cos_sim_pearson value: 47.33644889578121 - type: cos_sim_spearman value: 49.93968642502866 - type: euclidean_pearson value: 48.12029792973887 - type: euclidean_spearman value: 49.939666315145494 - type: manhattan_pearson value: 48.07449594650583 - type: manhattan_spearman value: 49.892461433911166 - task: type: STS dataset: type: C-MTEB/ATEC name: MTEB ATEC config: default split: test revision: None metrics: - type: cos_sim_pearson value: 50.976148098905746 - type: cos_sim_spearman value: 53.11230114448237 - type: euclidean_pearson value: 55.119977161851054 - type: euclidean_spearman value: 53.11229776647941 - type: manhattan_pearson value: 55.096968162828034 - type: manhattan_spearman value: 53.107481302419465 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (zh) config: zh split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 40.804 - type: f1 value: 39.01066543513968 - task: type: STS dataset: type: C-MTEB/BQ name: MTEB BQ config: default split: test revision: None metrics: - type: cos_sim_pearson value: 62.843816050026824 - type: cos_sim_spearman value: 65.54142642656706 - type: euclidean_pearson value: 64.08809634876388 - type: euclidean_spearman value: 65.54142642558392 - type: manhattan_pearson value: 64.09391522108272 - type: manhattan_spearman value: 65.55445491162718 - task: type: Clustering dataset: type: C-MTEB/CLSClusteringP2P name: MTEB CLSClusteringP2P config: default split: test revision: None metrics: - type: v_measure value: 40.028061591547804 - task: type: Clustering dataset: type: C-MTEB/CLSClusteringS2S name: MTEB CLSClusteringS2S config: default split: test revision: None metrics: - type: v_measure value: 38.1897102944254 - task: type: Reranking dataset: type: C-MTEB/CMedQAv1-reranking name: MTEB CMedQAv1 config: default split: test revision: None metrics: - type: map value: 85.34294439514511 - type: mrr value: 88.03849206349206 - task: type: Reranking dataset: type: C-MTEB/CMedQAv2-reranking name: MTEB CMedQAv2 config: default split: test revision: None metrics: - type: map value: 85.81294364673899 - type: mrr value: 88.52146825396825 - task: type: Retrieval dataset: type: C-MTEB/CmedqaRetrieval name: MTEB CmedqaRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 23.982 - type: map_at_10 value: 36.21 - type: map_at_100 value: 38.072 - type: map_at_1000 value: 38.194 - type: map_at_3 value: 32.239000000000004 - type: map_at_5 value: 34.377 - type: mrr_at_1 value: 36.858999999999995 - type: mrr_at_10 value: 45.084999999999994 - type: mrr_at_100 value: 46.104 - type: mrr_at_1000 value: 46.154 - type: mrr_at_3 value: 42.623 - type: mrr_at_5 value: 43.995 - type: ndcg_at_1 value: 36.858999999999995 - type: ndcg_at_10 value: 42.735 - type: ndcg_at_100 value: 50.181 - type: ndcg_at_1000 value: 52.309000000000005 - type: ndcg_at_3 value: 37.728 - type: ndcg_at_5 value: 39.664 - type: precision_at_1 value: 36.858999999999995 - type: precision_at_10 value: 9.615 - type: precision_at_100 value: 1.564 - type: precision_at_1000 value: 0.183 - type: precision_at_3 value: 21.514 - type: precision_at_5 value: 15.568999999999999 - type: recall_at_1 value: 23.982 - 
type: recall_at_10 value: 53.04600000000001 - type: recall_at_100 value: 84.113 - type: recall_at_1000 value: 98.37 - type: recall_at_3 value: 37.824999999999996 - type: recall_at_5 value: 44.023 - task: type: PairClassification dataset: type: C-MTEB/CMNLI name: MTEB Cmnli config: default split: validation revision: None metrics: - type: cos_sim_accuracy value: 77.71497294046902 - type: cos_sim_ap value: 86.84526989595028 - type: cos_sim_f1 value: 79.31987247608926 - type: cos_sim_precision value: 72.70601987142022 - type: cos_sim_recall value: 87.2574234276362 - type: dot_accuracy value: 77.71497294046902 - type: dot_ap value: 86.83880734247957 - type: dot_f1 value: 79.31987247608926 - type: dot_precision value: 72.70601987142022 - type: dot_recall value: 87.2574234276362 - type: euclidean_accuracy value: 77.71497294046902 - type: euclidean_ap value: 86.84526869685902 - type: euclidean_f1 value: 79.31987247608926 - type: euclidean_precision value: 72.70601987142022 - type: euclidean_recall value: 87.2574234276362 - type: manhattan_accuracy value: 77.8111846061335 - type: manhattan_ap value: 86.81142881585656 - type: manhattan_f1 value: 79.4201671780764 - type: manhattan_precision value: 72.53575570158485 - type: manhattan_recall value: 87.74842179097499 - type: max_accuracy value: 77.8111846061335 - type: max_ap value: 86.84526989595028 - type: max_f1 value: 79.4201671780764 - task: type: Retrieval dataset: type: C-MTEB/CovidRetrieval name: MTEB CovidRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 70.706 - type: map_at_10 value: 78.619 - type: map_at_100 value: 78.915 - type: map_at_1000 value: 78.918 - type: map_at_3 value: 76.967 - type: map_at_5 value: 77.922 - type: mrr_at_1 value: 70.917 - type: mrr_at_10 value: 78.64 - type: mrr_at_100 value: 78.935 - type: mrr_at_1000 value: 78.938 - type: mrr_at_3 value: 77.081 - type: mrr_at_5 value: 77.972 - type: ndcg_at_1 value: 70.917 - type: ndcg_at_10 value: 82.186 - type: ndcg_at_100 value: 83.487 - type: ndcg_at_1000 value: 83.589 - type: ndcg_at_3 value: 78.874 - type: ndcg_at_5 value: 80.548 - type: precision_at_1 value: 70.917 - type: precision_at_10 value: 9.431000000000001 - type: precision_at_100 value: 1.001 - type: precision_at_1000 value: 0.101 - type: precision_at_3 value: 28.275 - type: precision_at_5 value: 17.829 - type: recall_at_1 value: 70.706 - type: recall_at_10 value: 93.256 - type: recall_at_100 value: 99.05199999999999 - type: recall_at_1000 value: 99.895 - type: recall_at_3 value: 84.247 - type: recall_at_5 value: 88.251 - task: type: Retrieval dataset: type: C-MTEB/DuRetrieval name: MTEB DuRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 25.989 - type: map_at_10 value: 80.882 - type: map_at_100 value: 83.63199999999999 - type: map_at_1000 value: 83.663 - type: map_at_3 value: 55.772 - type: map_at_5 value: 70.598 - type: mrr_at_1 value: 90.14999999999999 - type: mrr_at_10 value: 93.30000000000001 - type: mrr_at_100 value: 93.363 - type: mrr_at_1000 value: 93.366 - type: mrr_at_3 value: 93.083 - type: mrr_at_5 value: 93.206 - type: ndcg_at_1 value: 90.14999999999999 - type: ndcg_at_10 value: 88.016 - type: ndcg_at_100 value: 90.52900000000001 - type: ndcg_at_1000 value: 90.84400000000001 - type: ndcg_at_3 value: 86.529 - type: ndcg_at_5 value: 85.65899999999999 - type: precision_at_1 value: 90.14999999999999 - type: precision_at_10 value: 42.295 - type: precision_at_100 value: 4.826 - type: precision_at_1000 value: 0.48900000000000005 - type: 
precision_at_3 value: 77.717 - type: precision_at_5 value: 65.81 - type: recall_at_1 value: 25.989 - type: recall_at_10 value: 89.446 - type: recall_at_100 value: 97.832 - type: recall_at_1000 value: 99.568 - type: recall_at_3 value: 58.223 - type: recall_at_5 value: 75.411 - task: type: Retrieval dataset: type: C-MTEB/EcomRetrieval name: MTEB EcomRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 49.6 - type: map_at_10 value: 59.512 - type: map_at_100 value: 60.059 - type: map_at_1000 value: 60.077999999999996 - type: map_at_3 value: 56.882999999999996 - type: map_at_5 value: 58.298 - type: mrr_at_1 value: 49.6 - type: mrr_at_10 value: 59.512 - type: mrr_at_100 value: 60.059 - type: mrr_at_1000 value: 60.077999999999996 - type: mrr_at_3 value: 56.882999999999996 - type: mrr_at_5 value: 58.298 - type: ndcg_at_1 value: 49.6 - type: ndcg_at_10 value: 64.71000000000001 - type: ndcg_at_100 value: 67.238 - type: ndcg_at_1000 value: 67.74 - type: ndcg_at_3 value: 59.275 - type: ndcg_at_5 value: 61.805 - type: precision_at_1 value: 49.6 - type: precision_at_10 value: 8.12 - type: precision_at_100 value: 0.927 - type: precision_at_1000 value: 0.097 - type: precision_at_3 value: 22.067 - type: precision_at_5 value: 14.46 - type: recall_at_1 value: 49.6 - type: recall_at_10 value: 81.2 - type: recall_at_100 value: 92.7 - type: recall_at_1000 value: 96.6 - type: recall_at_3 value: 66.2 - type: recall_at_5 value: 72.3 - task: type: Classification dataset: type: C-MTEB/IFlyTek-classification name: MTEB IFlyTek config: default split: validation revision: None metrics: - type: accuracy value: 47.98768757214313 - type: f1 value: 35.24243089488371 - task: type: Classification dataset: type: C-MTEB/JDReview-classification name: MTEB JDReview config: default split: test revision: None metrics: - type: accuracy value: 87.01688555347093 - type: ap value: 56.39167630414159 - type: f1 value: 81.91756262306008 - task: type: STS dataset: type: C-MTEB/LCQMC name: MTEB LCQMC config: default split: test revision: None metrics: - type: cos_sim_pearson value: 71.17874301231225 - type: cos_sim_spearman value: 77.47936067899236 - type: euclidean_pearson value: 76.3241109984839 - type: euclidean_spearman value: 77.47936511149533 - type: manhattan_pearson value: 76.3334642249198 - type: manhattan_spearman value: 77.48889610190774 - task: type: Reranking dataset: type: C-MTEB/Mmarco-reranking name: MTEB MMarcoReranking config: default split: dev revision: None metrics: - type: map value: 27.96872431410137 - type: mrr value: 26.92023809523809 - task: type: Retrieval dataset: type: C-MTEB/MMarcoRetrieval name: MTEB MMarcoRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 66.83099999999999 - type: map_at_10 value: 75.945 - type: map_at_100 value: 76.259 - type: map_at_1000 value: 76.27000000000001 - type: map_at_3 value: 74.22999999999999 - type: map_at_5 value: 75.318 - type: mrr_at_1 value: 69.069 - type: mrr_at_10 value: 76.491 - type: mrr_at_100 value: 76.764 - type: mrr_at_1000 value: 76.775 - type: mrr_at_3 value: 75.01 - type: mrr_at_5 value: 75.934 - type: ndcg_at_1 value: 69.069 - type: ndcg_at_10 value: 79.557 - type: ndcg_at_100 value: 80.946 - type: ndcg_at_1000 value: 81.23700000000001 - type: ndcg_at_3 value: 76.31099999999999 - type: ndcg_at_5 value: 78.121 - type: precision_at_1 value: 69.069 - type: precision_at_10 value: 9.58 - type: precision_at_100 value: 1.027 - type: precision_at_1000 value: 0.105 - type: precision_at_3 value: 28.73 - 
type: precision_at_5 value: 18.201 - type: recall_at_1 value: 66.83099999999999 - type: recall_at_10 value: 90.118 - type: recall_at_100 value: 96.377 - type: recall_at_1000 value: 98.656 - type: recall_at_3 value: 81.516 - type: recall_at_5 value: 85.798 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (zh-CN) config: zh-CN split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 68.2649630127774 - type: f1 value: 65.96868218344183 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (zh-CN) config: zh-CN split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.13382649630127 - type: f1 value: 72.69980239148315 - task: type: Retrieval dataset: type: C-MTEB/MedicalRetrieval name: MTEB MedicalRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 51.2 - type: map_at_10 value: 57.715 - type: map_at_100 value: 58.233999999999995 - type: map_at_1000 value: 58.289 - type: map_at_3 value: 56.483000000000004 - type: map_at_5 value: 57.193000000000005 - type: mrr_at_1 value: 51.2 - type: mrr_at_10 value: 57.714 - type: mrr_at_100 value: 58.233000000000004 - type: mrr_at_1000 value: 58.288 - type: mrr_at_3 value: 56.483000000000004 - type: mrr_at_5 value: 57.193000000000005 - type: ndcg_at_1 value: 51.2 - type: ndcg_at_10 value: 60.63499999999999 - type: ndcg_at_100 value: 63.458000000000006 - type: ndcg_at_1000 value: 64.992 - type: ndcg_at_3 value: 58.11300000000001 - type: ndcg_at_5 value: 59.391000000000005 - type: precision_at_1 value: 51.2 - type: precision_at_10 value: 6.97 - type: precision_at_100 value: 0.836 - type: precision_at_1000 value: 0.096 - type: precision_at_3 value: 20.933 - type: precision_at_5 value: 13.18 - type: recall_at_1 value: 51.2 - type: recall_at_10 value: 69.69999999999999 - type: recall_at_100 value: 83.6 - type: recall_at_1000 value: 95.8 - type: recall_at_3 value: 62.8 - type: recall_at_5 value: 65.9 - task: type: Classification dataset: type: C-MTEB/MultilingualSentiment-classification name: MTEB MultilingualSentiment config: default split: validation revision: None metrics: - type: accuracy value: 73.39 - type: f1 value: 72.85739851837214 - task: type: PairClassification dataset: type: C-MTEB/OCNLI name: MTEB Ocnli config: default split: validation revision: None metrics: - type: cos_sim_accuracy value: 73.36220898754738 - type: cos_sim_ap value: 78.50045169678386 - type: cos_sim_f1 value: 75.3875968992248 - type: cos_sim_precision value: 69.65085049239033 - type: cos_sim_recall value: 82.15417106652588 - type: dot_accuracy value: 73.36220898754738 - type: dot_ap value: 78.50039148302838 - type: dot_f1 value: 75.3875968992248 - type: dot_precision value: 69.65085049239033 - type: dot_recall value: 82.15417106652588 - type: euclidean_accuracy value: 73.36220898754738 - type: euclidean_ap value: 78.50045169678386 - type: euclidean_f1 value: 75.3875968992248 - type: euclidean_precision value: 69.65085049239033 - type: euclidean_recall value: 82.15417106652588 - type: manhattan_accuracy value: 73.09149972929075 - type: manhattan_ap value: 78.40911589236852 - type: manhattan_f1 value: 75.3623188405797 - type: manhattan_precision value: 69.45681211041853 - type: manhattan_recall value: 82.36536430834214 - type: max_accuracy value: 73.36220898754738 - type: max_ap value: 78.50045169678386 - type: max_f1 value: 75.3875968992248 - task: type: 
Classification dataset: type: C-MTEB/OnlineShopping-classification name: MTEB OnlineShopping config: default split: test revision: None metrics: - type: accuracy value: 91.81000000000002 - type: ap value: 89.35809579688139 - type: f1 value: 91.79220350456818 - task: type: STS dataset: type: C-MTEB/PAWSX name: MTEB PAWSX config: default split: test revision: None metrics: - type: cos_sim_pearson value: 30.06960208048424 - type: cos_sim_spearman value: 36.21568893707218 - type: euclidean_pearson value: 36.3789158810154 - type: euclidean_spearman value: 36.21568740241203 - type: manhattan_pearson value: 36.318190228955935 - type: manhattan_spearman value: 36.16813420759451 - task: type: STS dataset: type: C-MTEB/QBQTC name: MTEB QBQTC config: default split: test revision: None metrics: - type: cos_sim_pearson value: 36.779942621488736 - type: cos_sim_spearman value: 38.73716529566492 - type: euclidean_pearson value: 37.134107612179605 - type: euclidean_spearman value: 38.737099842399545 - type: manhattan_pearson value: 37.17579625045808 - type: manhattan_spearman value: 38.746051563332315 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (zh) config: zh split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 65.97416499132073 - type: cos_sim_spearman value: 68.87894646940939 - type: euclidean_pearson value: 67.2366929400408 - type: euclidean_spearman value: 68.87894646940939 - type: manhattan_pearson value: 67.30590304353478 - type: manhattan_spearman value: 68.90546655032796 - task: type: STS dataset: type: C-MTEB/STSB name: MTEB STSB config: default split: test revision: None metrics: - type: cos_sim_pearson value: 78.99420906581649 - type: cos_sim_spearman value: 79.36553449000968 - type: euclidean_pearson value: 78.77734144763518 - type: euclidean_spearman value: 79.36545230850567 - type: manhattan_pearson value: 78.82512507141092 - type: manhattan_spearman value: 79.43977311125059 - task: type: Reranking dataset: type: C-MTEB/T2Reranking name: MTEB T2Reranking config: default split: dev revision: None metrics: - type: map value: 66.38018284846501 - type: mrr value: 76.11180965277104 - task: type: Retrieval dataset: type: C-MTEB/T2Retrieval name: MTEB T2Retrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 27.423 - type: map_at_10 value: 77.206 - type: map_at_100 value: 80.83500000000001 - type: map_at_1000 value: 80.9 - type: map_at_3 value: 54.190000000000005 - type: map_at_5 value: 66.662 - type: mrr_at_1 value: 90.049 - type: mrr_at_10 value: 92.48100000000001 - type: mrr_at_100 value: 92.567 - type: mrr_at_1000 value: 92.571 - type: mrr_at_3 value: 92.07 - type: mrr_at_5 value: 92.32900000000001 - type: ndcg_at_1 value: 90.049 - type: ndcg_at_10 value: 84.69 - type: ndcg_at_100 value: 88.254 - type: ndcg_at_1000 value: 88.89399999999999 - type: ndcg_at_3 value: 86.091 - type: ndcg_at_5 value: 84.685 - type: precision_at_1 value: 90.049 - type: precision_at_10 value: 42.141 - type: precision_at_100 value: 5.016 - type: precision_at_1000 value: 0.516 - type: precision_at_3 value: 75.352 - type: precision_at_5 value: 63.176 - type: recall_at_1 value: 27.423 - type: recall_at_10 value: 83.595 - type: recall_at_100 value: 95.21 - type: recall_at_1000 value: 98.503 - type: recall_at_3 value: 55.84400000000001 - type: recall_at_5 value: 69.987 - task: type: Classification dataset: type: C-MTEB/TNews-classification name: MTEB TNews config: default split: validation revision: None 
metrics: - type: accuracy value: 51.927 - type: f1 value: 50.16838216110367 - task: type: Clustering dataset: type: C-MTEB/ThuNewsClusteringP2P name: MTEB ThuNewsClusteringP2P config: default split: test revision: None metrics: - type: v_measure value: 60.85131720842154 - task: type: Clustering dataset: type: C-MTEB/ThuNewsClusteringS2S name: MTEB ThuNewsClusteringS2S config: default split: test revision: None metrics: - type: v_measure value: 57.0921610946628 - task: type: Retrieval dataset: type: C-MTEB/VideoRetrieval name: MTEB VideoRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 56.99999999999999 - type: map_at_10 value: 67.611 - type: map_at_100 value: 68.095 - type: map_at_1000 value: 68.10300000000001 - type: map_at_3 value: 65.75 - type: map_at_5 value: 66.93 - type: mrr_at_1 value: 56.89999999999999 - type: mrr_at_10 value: 67.561 - type: mrr_at_100 value: 68.045 - type: mrr_at_1000 value: 68.053 - type: mrr_at_3 value: 65.7 - type: mrr_at_5 value: 66.88 - type: ndcg_at_1 value: 56.99999999999999 - type: ndcg_at_10 value: 72.25200000000001 - type: ndcg_at_100 value: 74.542 - type: ndcg_at_1000 value: 74.725 - type: ndcg_at_3 value: 68.47 - type: ndcg_at_5 value: 70.583 - type: precision_at_1 value: 56.99999999999999 - type: precision_at_10 value: 8.66 - type: precision_at_100 value: 0.972 - type: precision_at_1000 value: 0.099 - type: precision_at_3 value: 25.433 - type: precision_at_5 value: 16.28 - type: recall_at_1 value: 56.99999999999999 - type: recall_at_10 value: 86.6 - type: recall_at_100 value: 97.2 - type: recall_at_1000 value: 98.6 - type: recall_at_3 value: 76.3 - type: recall_at_5 value: 81.39999999999999 - task: type: Classification dataset: type: C-MTEB/waimai-classification name: MTEB Waimai config: default split: test revision: None metrics: - type: accuracy value: 87.10000000000001 - type: ap value: 70.81766065881429 - type: f1 value: 85.5323306120456 license: apache-2.0 language: - zh --- A try for emebdding model: The method is the same as the stella-v2, I just extend the length of the context on tao.(I found if you want to use the fully-8k context, you maybe need to convert the model to float32). Now I'm working on the tao-v2, It will have a different sturcture. I will release tao-v2 as fast as I can. Thank you to the open source community.
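A minimal sentence-similarity sketch with the `sentence-transformers` library, assuming the repository ships a sentence-transformers configuration as the tags suggest (and, per the author's note above, casting to float32 if you need the full 8k context):

```python
# Minimal sketch; the library usage is standard sentence-transformers, the model
# id comes from this card, and the example sentences are illustrative.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Amu/tao-8k")
embeddings = model.encode(["今天天气真好", "今天的天气不错"], normalize_embeddings=True)
print(util.cos_sim(embeddings[0], embeddings[1]))
```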
bartowski/35b-beta-long-GGUF
bartowski
"2024-05-06T02:23:56Z"
1,039
7
null
[ "gguf", "text-generation", "en", "zh", "ja", "de", "dataset:JosephusCheung/GuanacoDataset", "dataset:meta-math/MetaMathQA", "dataset:jondurbin/airoboros-3.1", "dataset:WizardLM/WizardLM_evol_instruct_V2_196k", "dataset:RyokoAI/ShareGPT52K", "dataset:RyokoAI/Fandom23K", "dataset:milashkaarshif/MoeGirlPedia_wikitext_raw_archive", "dataset:wikipedia", "dataset:wiki_lingua", "dataset:garage-bAInd/Open-Platypus", "dataset:LDJnr/Puffin", "dataset:BAAI/COIG", "dataset:TigerResearch/tigerbot-zhihu-zh-10k", "dataset:liwu/MNBVC", "dataset:teknium/openhermes", "dataset:CausalLM/Refined-Anime-Text", "dataset:microsoft/orca-math-word-problems-200k", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "license:gpl-3.0", "region:us" ]
text-generation
"2024-05-06T01:10:30Z"
--- license: gpl-3.0 language: - en - zh - ja - de datasets: - JosephusCheung/GuanacoDataset - meta-math/MetaMathQA - jondurbin/airoboros-3.1 - WizardLM/WizardLM_evol_instruct_V2_196k - RyokoAI/ShareGPT52K - RyokoAI/Fandom23K - milashkaarshif/MoeGirlPedia_wikitext_raw_archive - wikipedia - wiki_lingua - garage-bAInd/Open-Platypus - LDJnr/Puffin - BAAI/COIG - TigerResearch/tigerbot-zhihu-zh-10k - liwu/MNBVC - teknium/openhermes - CausalLM/Refined-Anime-Text - microsoft/orca-math-word-problems-200k - m-a-p/CodeFeedback-Filtered-Instruction quantized_by: bartowski pipeline_tag: text-generation --- ## Llamacpp imatrix Quantizations of 35b-beta-long Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2794">b2794</a> for quantization. Original model: https://huggingface.co/CausalLM/35b-beta-long All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) ## Prompt format ``` <|im_start|>system {system_prompt}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant <|im_end|> ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [35b-beta-long-Q8_0.gguf](https://huggingface.co/bartowski/35b-beta-long-GGUF/blob/main/35b-beta-long-Q8_0.gguf) | Q8_0 | 37.17GB | Extremely high quality, generally unneeded but max available quant. | | [35b-beta-long-Q6_K.gguf](https://huggingface.co/bartowski/35b-beta-long-GGUF/blob/main/35b-beta-long-Q6_K.gguf) | Q6_K | 28.70GB | Very high quality, near perfect, *recommended*. | | [35b-beta-long-Q5_K_M.gguf](https://huggingface.co/bartowski/35b-beta-long-GGUF/blob/main/35b-beta-long-Q5_K_M.gguf) | Q5_K_M | 25.00GB | High quality, *recommended*. | | [35b-beta-long-Q5_K_S.gguf](https://huggingface.co/bartowski/35b-beta-long-GGUF/blob/main/35b-beta-long-Q5_K_S.gguf) | Q5_K_S | 24.33GB | High quality, *recommended*. | | [35b-beta-long-Q4_K_M.gguf](https://huggingface.co/bartowski/35b-beta-long-GGUF/blob/main/35b-beta-long-Q4_K_M.gguf) | Q4_K_M | 21.52GB | Good quality, uses about 4.83 bits per weight, *recommended*. | | [35b-beta-long-Q4_K_S.gguf](https://huggingface.co/bartowski/35b-beta-long-GGUF/blob/main/35b-beta-long-Q4_K_S.gguf) | Q4_K_S | 20.37GB | Slightly lower quality with more space savings, *recommended*. | | [35b-beta-long-IQ4_NL.gguf](https://huggingface.co/bartowski/35b-beta-long-GGUF/blob/main/35b-beta-long-IQ4_NL.gguf) | IQ4_NL | 20.22GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. | | [35b-beta-long-IQ4_XS.gguf](https://huggingface.co/bartowski/35b-beta-long-GGUF/blob/main/35b-beta-long-IQ4_XS.gguf) | IQ4_XS | 19.20GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [35b-beta-long-Q3_K_L.gguf](https://huggingface.co/bartowski/35b-beta-long-GGUF/blob/main/35b-beta-long-Q3_K_L.gguf) | Q3_K_L | 19.14GB | Lower quality but usable, good for low RAM availability. | | [35b-beta-long-Q3_K_M.gguf](https://huggingface.co/bartowski/35b-beta-long-GGUF/blob/main/35b-beta-long-Q3_K_M.gguf) | Q3_K_M | 17.61GB | Even lower quality. | | [35b-beta-long-IQ3_M.gguf](https://huggingface.co/bartowski/35b-beta-long-GGUF/blob/main/35b-beta-long-IQ3_M.gguf) | IQ3_M | 16.69GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. 
| | [35b-beta-long-IQ3_S.gguf](https://huggingface.co/bartowski/35b-beta-long-GGUF/blob/main/35b-beta-long-IQ3_S.gguf) | IQ3_S | 15.86GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. | | [35b-beta-long-Q3_K_S.gguf](https://huggingface.co/bartowski/35b-beta-long-GGUF/blob/main/35b-beta-long-Q3_K_S.gguf) | Q3_K_S | 15.86GB | Low quality, not recommended. | | [35b-beta-long-IQ3_XS.gguf](https://huggingface.co/bartowski/35b-beta-long-GGUF/blob/main/35b-beta-long-IQ3_XS.gguf) | IQ3_XS | 15.09GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [35b-beta-long-IQ3_XXS.gguf](https://huggingface.co/bartowski/35b-beta-long-GGUF/blob/main/35b-beta-long-IQ3_XXS.gguf) | IQ3_XXS | 13.83GB | Lower quality, new method with decent performance, comparable to Q3 quants. | | [35b-beta-long-Q2_K.gguf](https://huggingface.co/bartowski/35b-beta-long-GGUF/blob/main/35b-beta-long-Q2_K.gguf) | Q2_K | 13.81GB | Very low quality but surprisingly usable. | | [35b-beta-long-IQ2_M.gguf](https://huggingface.co/bartowski/35b-beta-long-GGUF/blob/main/35b-beta-long-IQ2_M.gguf) | IQ2_M | 12.67GB | Very low quality, uses SOTA techniques to also be surprisingly usable. | | [35b-beta-long-IQ2_S.gguf](https://huggingface.co/bartowski/35b-beta-long-GGUF/blob/main/35b-beta-long-IQ2_S.gguf) | IQ2_S | 11.84GB | Very low quality, uses SOTA techniques to be usable. | | [35b-beta-long-IQ2_XS.gguf](https://huggingface.co/bartowski/35b-beta-long-GGUF/blob/main/35b-beta-long-IQ2_XS.gguf) | IQ2_XS | 11.10GB | Very low quality, uses SOTA techniques to be usable. | | [35b-beta-long-IQ2_XXS.gguf](https://huggingface.co/bartowski/35b-beta-long-GGUF/blob/main/35b-beta-long-IQ2_XXS.gguf) | IQ2_XXS | 10.18GB | Lower quality, uses SOTA techniques to be usable. | | [35b-beta-long-IQ1_M.gguf](https://huggingface.co/bartowski/35b-beta-long-GGUF/blob/main/35b-beta-long-IQ1_M.gguf) | IQ1_M | 9.14GB | Extremely low quality, *not* recommended. | | [35b-beta-long-IQ1_S.gguf](https://huggingface.co/bartowski/35b-beta-long-GGUF/blob/main/35b-beta-long-IQ1_S.gguf) | IQ1_S | 8.52GB | Extremely low quality, *not* recommended. | ## Downloading using huggingface-cli First, make sure you have hugginface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want: ``` huggingface-cli download bartowski/35b-beta-long-GGUF --include "35b-beta-long-Q4_K_M.gguf" --local-dir ./ --local-dir-use-symlinks False ``` If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download bartowski/35b-beta-long-GGUF --include "35b-beta-long-Q8_0.gguf/*" --local-dir 35b-beta-long-Q8_0 --local-dir-use-symlinks False ``` You can either specify a new local-dir (35b-beta-long-Q8_0) or download them all in place (./) ## Which file should I choose? A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. 
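For example, using the table above: on a 24 GB GPU, aiming 1-2 GB below the total points at the Q4_K_M file (21.52 GB), which fits entirely in VRAM, while a 16 GB card would land on IQ3_XS or IQ3_XXS (15.09 GB and 13.83 GB).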
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalents, so speed vs performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulkan, which also runs on AMD, so if you have an AMD card double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
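The VRAM guidance above can be turned into a tiny helper. The following sketch is not part of the original card; it hard-codes the file sizes from the table above and applies the "leave 1-2GB of headroom" rule to suggest a quant for a given amount of VRAM.

```python
# Minimal sketch: pick the largest 35b-beta-long quant that fits a VRAM budget,
# leaving headroom for context and runtime overhead (sizes taken from the table above).
QUANT_SIZES_GB = {
    "Q8_0": 37.17, "Q6_K": 28.70, "Q5_K_M": 25.00, "Q5_K_S": 24.33,
    "Q4_K_M": 21.52, "Q4_K_S": 20.37, "IQ4_NL": 20.22, "IQ4_XS": 19.20,
    "Q3_K_L": 19.14, "Q3_K_M": 17.61, "IQ3_M": 16.69, "IQ3_S": 15.86,
    "Q3_K_S": 15.86, "IQ3_XS": 15.09, "IQ3_XXS": 13.83, "Q2_K": 13.81,
    "IQ2_M": 12.67, "IQ2_S": 11.84, "IQ2_XS": 11.10, "IQ2_XXS": 10.18,
    "IQ1_M": 9.14, "IQ1_S": 8.52,
}

def pick_quant(vram_gb: float, headroom_gb: float = 2.0):
    """Return the largest quant whose file fits in vram_gb minus headroom, or None."""
    budget = vram_gb - headroom_gb
    fitting = {name: size for name, size in QUANT_SIZES_GB.items() if size <= budget}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(24.0))  # a 24 GB card -> "Q4_K_M"
```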
cognitivecomputations/Llama-3-8B-Instruct-abliterated-v2-gguf
cognitivecomputations
"2024-05-09T02:31:41Z"
1,039
18
transformers
[ "transformers", "gguf", "license:llama3", "endpoints_compatible", "region:us" ]
null
"2024-05-09T02:03:38Z"
--- library_name: transformers license: llama3 --- # Model Card for Llama-3-8B-Instruct-abliterated-v2 ## Overview This model card describes the Llama-3-8B-Instruct-abliterated-v2 model, which is an orthogonalized version of the meta-llama/Llama-3-8B-Instruct model, and an improvement upon the previous generation Llama-3-8B-Instruct-abliterated. This variant has had certain weights manipulated to inhibit the model's ability to express refusal. [Join the Cognitive Computations Discord!](https://discord.gg/cognitivecomputations) ## Details * The model was trained with more data to better pinpoint the "refusal direction". * This model is MUCH better at directly and succinctly answering requests without producing even so much as disclaimers. ## Methodology The methodology used to generate this model is described in the preview paper/blog post: '[Refusal in LLMs is mediated by a single direction](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction)' ## Quirks and Side Effects This model may come with interesting quirks, as the methodology is still new and untested. The code used to generate the model is available in the Python notebook [ortho_cookbook.ipynb](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb). Please note that the model may still refuse to answer certain requests, even after the weights have been manipulated to inhibit refusal.
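The card above describes orthogonalizing weights against a single "refusal direction". The snippet below is a conceptual sketch of that idea only, not the author's code (the actual procedure is in the linked ortho_cookbook.ipynb); the tensor sizes and the random direction are purely illustrative.

```python
import torch

def ablate_direction(W: torch.Tensor, d: torch.Tensor) -> torch.Tensor:
    """Remove the component of W's output that lies along direction d.

    W: (d_model, d_in) weight matrix that writes into the residual stream.
    d: (d_model,) vector for the hypothesised refusal direction.
    """
    d = d / d.norm()                      # normalise the direction
    projection = torch.outer(d, d) @ W    # component of W's output along d
    return W - projection                 # orthogonalized weights

# Toy usage with random tensors (illustrative only):
W = torch.randn(4096, 4096)
refusal_dir = torch.randn(4096)
W_ablated = ablate_direction(W, refusal_dir)
print((refusal_dir @ W_ablated).abs().max())  # near zero: output along the direction is suppressed
```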
second-state/Qwen2-1.5B-Instruct-GGUF
second-state
"2024-06-07T04:55:18Z"
1,039
0
transformers
[ "transformers", "gguf", "qwen2", "text-generation", "chat", "en", "base_model:Qwen/Qwen2-1.5B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-07T04:44:37Z"
--- base_model: Qwen/Qwen2-1.5B-Instruct license: apache-2.0 model_creator: Qwen model_name: Qwen2-1.5B-Instruct quantized_by: Second State Inc. language: - en pipeline_tag: text-generation tags: - chat --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Qwen2-1.5B-Instruct-GGUF ## Original Model [Qwen/Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct) ## Run with LlamaEdge - LlamaEdge version: [v0.2.15](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.2.15) and above - Prompt template - Prompt type: `chatml` - Prompt string ```text <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` - Context size: `32000` - Run as LlamaEdge service ```bash wasmedge --dir .:. --nn-preload default:GGML:AUTO:Qwen2-1.5B-Instruct-Q5_K_M.gguf \ llama-api-server.wasm \ --model-name Qwen2-1.5B-Instruct \ --prompt-template chatml \ --ctx-size 32000 ``` - Run as LlamaEdge command app ```bash wasmedge --dir .:. --nn-preload default:GGML:AUTO:Qwen2-1.5B-Instruct-Q5_K_M.gguf \ llama-chat.wasm \ --prompt-template chatml \ --ctx-size 32000 ``` ## Quantized GGUF Models | Name | Quant method | Bits | Size | Use case | | ---- | ---- | ---- | ---- | ----- | | [Qwen2-1.5B-Instruct-Q2_K.gguf](https://huggingface.co/second-state/Qwen2-1.5B-Instruct-GGUF/blob/main/Qwen2-1.5B-Instruct-Q2_K.gguf) | Q2_K | 2 | 676 MB| smallest, significant quality loss - not recommended for most purposes | | [Qwen2-1.5B-Instruct-Q3_K_L.gguf](https://huggingface.co/second-state/Qwen2-1.5B-Instruct-GGUF/blob/main/Qwen2-1.5B-Instruct-Q3_K_L.gguf) | Q3_K_L | 3 | 880 MB| small, substantial quality loss | | [Qwen2-1.5B-Instruct-Q3_K_M.gguf](https://huggingface.co/second-state/Qwen2-1.5B-Instruct-GGUF/blob/main/Qwen2-1.5B-Instruct-Q3_K_M.gguf) | Q3_K_M | 3 | 824 MB| very small, high quality loss | | [Qwen2-1.5B-Instruct-Q3_K_S.gguf](https://huggingface.co/second-state/Qwen2-1.5B-Instruct-GGUF/blob/main/Qwen2-1.5B-Instruct-Q3_K_S.gguf) | Q3_K_S | 3 | 761 MB| very small, high quality loss | | [Qwen2-1.5B-Instruct-Q4_0.gguf](https://huggingface.co/second-state/Qwen2-1.5B-Instruct-GGUF/blob/main/Qwen2-1.5B-Instruct-Q4_0.gguf) | Q4_0 | 4 | 935 MB| legacy; small, very high quality loss - prefer using Q3_K_M | | [Qwen2-1.5B-Instruct-Q4_K_M.gguf](https://huggingface.co/second-state/Qwen2-1.5B-Instruct-GGUF/blob/main/Qwen2-1.5B-Instruct-Q4_K_M.gguf) | Q4_K_M | 4 | 986 MB| medium, balanced quality - recommended | | [Qwen2-1.5B-Instruct-Q4_K_S.gguf](https://huggingface.co/second-state/Qwen2-1.5B-Instruct-GGUF/blob/main/Qwen2-1.5B-Instruct-Q4_K_S.gguf) | Q4_K_S | 4 | 940 MB| small, greater quality loss | | [Qwen2-1.5B-Instruct-Q5_0.gguf](https://huggingface.co/second-state/Qwen2-1.5B-Instruct-GGUF/blob/main/Qwen2-1.5B-Instruct-Q5_0.gguf) | Q5_0 | 5 | 1.1 GB| legacy; medium, balanced quality - prefer using Q4_K_M | | [Qwen2-1.5B-Instruct-Q5_K_M.gguf](https://huggingface.co/second-state/Qwen2-1.5B-Instruct-GGUF/blob/main/Qwen2-1.5B-Instruct-Q5_K_M.gguf) | Q5_K_M | 5 | 1.13 GB| large, very low quality loss - recommended | | [Qwen2-1.5B-Instruct-Q5_K_S.gguf](https://huggingface.co/second-state/Qwen2-1.5B-Instruct-GGUF/blob/main/Qwen2-1.5B-Instruct-Q5_K_S.gguf) | Q5_K_S | 5 | 1.1 GB| large, low quality loss - 
recommended | | [Qwen2-1.5B-Instruct-Q6_K.gguf](https://huggingface.co/second-state/Qwen2-1.5B-Instruct-GGUF/blob/main/Qwen2-1.5B-Instruct-Q6_K.gguf) | Q6_K | 6 | 1.27 GB| very large, extremely low quality loss | | [Qwen2-1.5B-Instruct-Q8_0.gguf](https://huggingface.co/second-state/Qwen2-1.5B-Instruct-GGUF/blob/main/Qwen2-1.5B-Instruct-Q8_0.gguf) | Q8_0 | 8 | 1.65 GB| very large, extremely low quality loss - not recommended | | [Qwen2-1.5B-Instruct-f16.gguf](https://huggingface.co/second-state/Qwen2-1.5B-Instruct-GGUF/blob/main/Qwen2-1.5B-Instruct-f16.gguf) | f16 | 16 | 3.09 GB| | *Quantized with llama.cpp b3705*
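As a small illustration (not part of the original card), the `chatml` template shown above can be assembled in plain Python before handing the string to any llama.cpp-compatible runtime; the system and user messages here are placeholders.

```python
# Sketch: build the chatml prompt string expected by this model.
def build_chatml_prompt(system_message: str, user_prompt: str) -> str:
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{user_prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a helpful assistant.",
    "Give me a one-sentence summary of the GGUF file format.",
)
print(prompt)
```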
sentence-transformers/stsb-distilroberta-base-v2
sentence-transformers
"2024-03-27T12:56:24Z"
1,038
1
sentence-transformers
[ "sentence-transformers", "pytorch", "tf", "jax", "safetensors", "roberta", "feature-extraction", "sentence-similarity", "transformers", "arxiv:1908.10084", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
"2022-03-02T23:29:05Z"
--- license: apache-2.0 library_name: sentence-transformers tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers pipeline_tag: sentence-similarity --- # sentence-transformers/stsb-distilroberta-base-v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/stsb-distilroberta-base-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch # Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] # First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/stsb-distilroberta-base-v2') model = AutoModel.from_pretrained('sentence-transformers/stsb-distilroberta-base-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/stsb-distilroberta-base-v2) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
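A short follow-up sketch (not part of the original card) showing one common use of the embeddings produced above: scoring semantic similarity between two sentences with `sentence_transformers.util`.

```python
from sentence_transformers import SentenceTransformer, util

# Encode two sentences and compare them with cosine similarity.
model = SentenceTransformer("sentence-transformers/stsb-distilroberta-base-v2")
sentences = ["This is an example sentence", "Each sentence is converted"]
embeddings = model.encode(sentences, convert_to_tensor=True)

# The model was tuned on STS data, so higher scores mean more semantically similar sentences.
score = util.cos_sim(embeddings[0], embeddings[1])
print(f"cosine similarity: {score.item():.3f}")
```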
Shitao/RetroMAE
Shitao
"2022-10-18T03:35:24Z"
1,038
3
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-10-18T03:33:42Z"
Entry not found
TheBloke/WizardMath-13B-V1.0-GGUF
TheBloke
"2023-09-27T12:47:30Z"
1,038
4
transformers
[ "transformers", "gguf", "llama", "arxiv:2304.12244", "arxiv:2306.08568", "arxiv:2308.09583", "base_model:WizardLM/WizardMath-13B-V1.0", "license:llama2", "text-generation-inference", "region:us" ]
null
"2023-09-05T10:07:57Z"
--- license: llama2 model_name: WizardMath 13B V1.0 base_model: WizardLM/WizardMath-13B-V1.0 inference: false model_creator: WizardLM model_type: llama prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: Let''s think step by step. ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # WizardMath 13B V1.0 - GGUF - Model creator: [WizardLM](https://huggingface.co/WizardLM) - Original model: [WizardMath 13B V1.0](https://huggingface.co/WizardLM/WizardMath-13B-V1.0) <!-- description start --> ## Description This repo contains GGUF format model files for [WizardLM's WizardMath 13B V1.0](https://huggingface.co/WizardLM/WizardMath-13B-V1.0). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/WizardMath-13B-V1.0-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/WizardMath-13B-V1.0-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/WizardMath-13B-V1.0-GGUF) * [WizardLM's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/WizardLM/WizardMath-13B-V1.0) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca-CoT ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: Let's think step by step. ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third-party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how.
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [wizardmath-13b-v1.0.Q2_K.gguf](https://huggingface.co/TheBloke/WizardMath-13B-V1.0-GGUF/blob/main/wizardmath-13b-v1.0.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [wizardmath-13b-v1.0.Q3_K_S.gguf](https://huggingface.co/TheBloke/WizardMath-13B-V1.0-GGUF/blob/main/wizardmath-13b-v1.0.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [wizardmath-13b-v1.0.Q3_K_M.gguf](https://huggingface.co/TheBloke/WizardMath-13B-V1.0-GGUF/blob/main/wizardmath-13b-v1.0.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [wizardmath-13b-v1.0.Q3_K_L.gguf](https://huggingface.co/TheBloke/WizardMath-13B-V1.0-GGUF/blob/main/wizardmath-13b-v1.0.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [wizardmath-13b-v1.0.Q4_0.gguf](https://huggingface.co/TheBloke/WizardMath-13B-V1.0-GGUF/blob/main/wizardmath-13b-v1.0.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [wizardmath-13b-v1.0.Q4_K_S.gguf](https://huggingface.co/TheBloke/WizardMath-13B-V1.0-GGUF/blob/main/wizardmath-13b-v1.0.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [wizardmath-13b-v1.0.Q4_K_M.gguf](https://huggingface.co/TheBloke/WizardMath-13B-V1.0-GGUF/blob/main/wizardmath-13b-v1.0.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [wizardmath-13b-v1.0.Q5_0.gguf](https://huggingface.co/TheBloke/WizardMath-13B-V1.0-GGUF/blob/main/wizardmath-13b-v1.0.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [wizardmath-13b-v1.0.Q5_K_S.gguf](https://huggingface.co/TheBloke/WizardMath-13B-V1.0-GGUF/blob/main/wizardmath-13b-v1.0.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [wizardmath-13b-v1.0.Q5_K_M.gguf](https://huggingface.co/TheBloke/WizardMath-13B-V1.0-GGUF/blob/main/wizardmath-13b-v1.0.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [wizardmath-13b-v1.0.Q6_K.gguf](https://huggingface.co/TheBloke/WizardMath-13B-V1.0-GGUF/blob/main/wizardmath-13b-v1.0.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [wizardmath-13b-v1.0.Q8_0.gguf](https://huggingface.co/TheBloke/WizardMath-13B-V1.0-GGUF/blob/main/wizardmath-13b-v1.0.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/WizardMath-13B-V1.0-GGUF and below it, a specific filename to download, such as: wizardmath-13b-v1.0.q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub>=0.17.1 ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/WizardMath-13B-V1.0-GGUF wizardmath-13b-v1.0.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/WizardMath-13B-V1.0-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/WizardMath-13B-V1.0-GGUF wizardmath-13b-v1.0.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m wizardmath-13b-v1.0.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n\n### Instruction:\n{prompt}\n\n\n### Response: Let's think step by step." ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
### How to load this model from Python using ctransformers #### First install the package ```bash # Base ctransformers with no GPU acceleration pip install ctransformers>=0.2.24 # Or with CUDA GPU acceleration pip install ctransformers[cuda]>=0.2.24 # Or with ROCm GPU acceleration CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers ``` #### Simple example code to load one of these GGUF models ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/WizardMath-13B-V1.0-GGUF", model_file="wizardmath-13b-v1.0.q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here's guides on using llama-cpp-python or ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. 
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: WizardLM's WizardMath 13B V1.0 ## WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct (RLEIF) <p align="center"> 🤗 <a href="https://huggingface.co/WizardLM" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/nlpxucan/WizardLM" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br> </p> <p align="center"> 👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a> </p> | Model | Checkpoint | Paper | HumanEval | MBPP | Demo | License | | ----- |------| ---- |------|-------| ----- | ----- | | WizardCoder-Python-34B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-34B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 73.2 | 61.2 | [Demo](http://47.103.63.15:50085/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> | | WizardCoder-15B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-15B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 59.8 |50.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> | | WizardCoder-Python-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 64.0 | 55.6 | -- | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> | | WizardCoder-Python-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 55.5 | 51.6 | [Demo](http://47.103.63.15:50088/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> | | WizardCoder-3B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-3B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 34.8 |37.4 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> | | WizardCoder-1B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-1B-V1.0" 
target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 23.8 |28.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> | | Model | Checkpoint | Paper | GSM8k | MATH |Online Demo| License| | ----- |------| ---- |------|-------| ----- | ----- | | WizardMath-70B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-70B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **81.6** | **22.7** |[Demo](http://47.103.63.15:50083/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a> | | WizardMath-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **63.9** | **14.0** |[Demo](http://47.103.63.15:50082/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a> | | WizardMath-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **54.9** | **10.7** | [Demo](http://47.103.63.15:50080/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a>| <font size=4> | <sup>Model</sup> | <sup>Checkpoint</sup> | <sup>Paper</sup> |<sup>MT-Bench</sup> | <sup>AlpacaEval</sup> | <sup>GSM8k</sup> | <sup>HumanEval</sup> | <sup>License</sup>| | ----- |------| ---- |------|-------| ----- | ----- | ----- | | <sup>**WizardLM-70B-V1.0**</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-70B-V1.0" target="_blank">HF Link</a> </sup>|<sup>📃**Coming Soon**</sup>| <sup>**7.78**</sup> | <sup>**92.91%**</sup> |<sup>**77.6%**</sup> | <sup> **50.6 pass@1**</sup>|<sup> <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License </a></sup> | | <sup>WizardLM-13B-V1.2</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.2" target="_blank">HF Link</a> </sup>| | <sup>7.06</sup> | <sup>89.17%</sup> |<sup>55.3%</sup> | <sup>36.6 pass@1</sup>|<sup> <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License </a></sup> | | <sup>WizardLM-13B-V1.1</sup> |<sup> 🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.1" target="_blank">HF Link</a> </sup> | | <sup>6.76</sup> |<sup>86.32%</sup> | | <sup>25.0 pass@1</sup>| <sup>Non-commercial</sup>| | <sup>WizardLM-30B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-30B-V1.0" target="_blank">HF Link</a></sup> | | <sup>7.01</sup> | | | <sup>37.8 pass@1</sup>| <sup>Non-commercial</sup> | | <sup>WizardLM-13B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.0" target="_blank">HF Link</a> </sup> | | <sup>6.35</sup> | <sup>75.31%</sup> | | <sup> 24.0 pass@1 </sup> | <sup>Non-commercial</sup>| | <sup>WizardLM-7B-V1.0 </sup>| <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-7B-V1.0" target="_blank">HF Link</a> </sup> |<sup> 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> </sup>| | | |<sup>19.1 pass@1 </sup>|<sup> Non-commercial</sup>| </font> **Github Repo**: https://github.com/nlpxucan/WizardLM/tree/main/WizardMath **Twitter**: https://twitter.com/WizardLM_AI/status/1689998428200112128 
**Discord**: https://discord.gg/VZjjHtWrKs ## Comparing WizardMath-V1.0 with Other LLMs. 🔥 The following figure shows that our **WizardMath-70B-V1.0 attains the fifth position in this benchmark**, surpassing ChatGPT (81.6 vs. 80.8), Claude Instant (81.6 vs. 80.9), PaLM 2 540B (81.6 vs. 80.7). <p align="center" width="100%"> <a ><img src="https://raw.githubusercontent.com/nlpxucan/WizardLM/main/WizardMath/images/wizardmath_gsm8k.png" alt="WizardMath" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> ❗<b>Note on system prompt usage:</b> Please use **the same system prompts strictly** as we do, and note that we do not guarantee the accuracy of the **quantized versions**. **Default version:** ``` "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:" ``` **CoT Version:** (❗For **simple** math questions, we do NOT recommend using the CoT prompt.) ``` "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response: Let's think step by step." ``` ## Inference WizardMath Demo Script We provide the WizardMath inference demo code [here](https://github.com/nlpxucan/WizardLM/tree/main/demo). ❗<b>On the common concern about the dataset:</b> Recently, there have been clear changes in the open-source policy and regulations of our overall organization's code, data, and models. Despite this, we have still worked hard to open the model weights first, but the data requires stricter auditing and is under review with our legal team. Our researchers have no authority to publicly release it without authorization. Thank you for your understanding. ## Citation Please cite the repo if you use the data, method or code in this repo. ``` @article{luo2023wizardmath, title={WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct}, author={Luo, Haipeng and Sun, Qingfeng and Xu, Can and Zhao, Pu and Lou, Jianguang and Tao, Chongyang and Geng, Xiubo and Lin, Qingwei and Chen, Shifeng and Zhang, Dongmei}, journal={arXiv preprint arXiv:2308.09583}, year={2023} } ``` <!-- original-model-card end -->
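The GGUF card above lists llama-cpp-python as a compatible library but only shows a ctransformers example. Below is a hedged sketch of the equivalent llama-cpp-python call using the Alpaca-CoT prompt template from the card; the local file path, layer offload count, and sampling settings are assumptions, not values from the original card.

```python
from llama_cpp import Llama

# Load a locally downloaded GGUF quant (path and n_gpu_layers are assumptions).
llm = Llama(model_path="wizardmath-13b-v1.0.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=32)

instruction = "A bakery sells 14 trays of 27 cookies each. How many cookies is that in total?"
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response: Let's think step by step."
)

output = llm(prompt, max_tokens=256, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```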
AdaptLLM/finance-LLM
AdaptLLM
"2024-06-25T03:03:21Z"
1,038
91
transformers
[ "transformers", "pytorch", "llama", "text-generation", "finance", "en", "dataset:Open-Orca/OpenOrca", "dataset:GAIR/lima", "dataset:WizardLM/WizardLM_evol_instruct_V2_196k", "arxiv:2309.09530", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-18T13:45:13Z"
--- language: - en datasets: - Open-Orca/OpenOrca - GAIR/lima - WizardLM/WizardLM_evol_instruct_V2_196k metrics: - accuracy pipeline_tag: text-generation tags: - finance --- # Domain Adaptation of Large Language Models This repo contains the domain-specific base model developed from **LLaMA-1-7B**, using the method in our **ICLR 2024** paper [Adapting Large Language Models via Reading Comprehension](https://huggingface.co/papers/2309.09530). We explore **continued pre-training on domain-specific corpora** for large language models. While this approach enriches LLMs with domain knowledge, it significantly hurts their prompting ability for question answering. Inspired by human learning via reading comprehension, we propose a simple method to **transform large-scale pre-training corpora into reading comprehension texts**, consistently improving prompting performance across tasks in biomedicine, finance, and law domains. **Our 7B model competes with much larger domain-specific models like BloombergGPT-50B**. ### 🤗 [2024/6/21] We release the 2nd version of AdaptLLM at [Instruction-Pretrain](https://huggingface.co/instruction-pretrain), effective for both general pre-training from scratch and domain-adaptive continual pre-training!!! 🤗 **************************** **Updates** **************************** * 2024/6/22: Released the [benchmarking code](https://github.com/microsoft/LMOps/tree/main/adaptllm). * 2024/6/21: 👏🏻 Released the 2nd version of AdaptLLM at [Instruction-Pretrain](https://huggingface.co/instruction-pretrain) 👏🏻 * 2024/1/16: 🎉 Our [research paper](https://huggingface.co/papers/2309.09530) has been accepted by ICLR 2024!!!🎉 * 2023/12/19: Released our [13B base models](https://huggingface.co/AdaptLLM/law-LLM-13B) developed from LLaMA-1-13B. * 2023/12/8: Released our [chat models](https://huggingface.co/AdaptLLM/law-chat) developed from LLaMA-2-Chat-7B. * 2023/9/18: Released our [paper](https://huggingface.co/papers/2309.09530), [code](https://github.com/microsoft/LMOps), [data](https://huggingface.co/datasets/AdaptLLM/law-tasks), and [base models](https://huggingface.co/AdaptLLM/law-LLM) developed from LLaMA-1-7B. ## Domain-Specific LLaMA-1 ### LLaMA-1-7B In our paper, we develop three domain-specific models from LLaMA-1-7B, which are also available in Huggingface: [Biomedicine-LLM](https://huggingface.co/AdaptLLM/medicine-LLM), [Finance-LLM](https://huggingface.co/AdaptLLM/finance-LLM) and [Law-LLM](https://huggingface.co/AdaptLLM/law-LLM), the performances of our AdaptLLM compared to other domain-specific LLMs are: <p align='center'> <img src="https://cdn-uploads.huggingface.co/production/uploads/650801ced5578ef7e20b33d4/6efPwitFgy-pLTzvccdcP.png" width="700"> </p> ### LLaMA-1-13B Moreover, we scale up our base model to LLaMA-1-13B to see if **our method is similarly effective for larger-scale models**, and the results are consistently positive too: [Biomedicine-LLM-13B](https://huggingface.co/AdaptLLM/medicine-LLM-13B), [Finance-LLM-13B](https://huggingface.co/AdaptLLM/finance-LLM-13B) and [Law-LLM-13B](https://huggingface.co/AdaptLLM/law-LLM-13B). ## Domain-Specific LLaMA-2-Chat Our method is also effective for aligned models! LLaMA-2-Chat requires a [specific data format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2), and our **reading comprehension can perfectly fit the data format** by transforming the reading comprehension into a multi-turn conversation. 
We have also open-sourced chat models in different domains: [Biomedicine-Chat](https://huggingface.co/AdaptLLM/medicine-chat), [Finance-Chat](https://huggingface.co/AdaptLLM/finance-chat) and [Law-Chat](https://huggingface.co/AdaptLLM/law-chat) For example, to chat with the finance base model (**🤗we highly recommend switching to the [chat model](https://huggingface.co/AdaptLLM/finance-chat) for better response quality!**): ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("AdaptLLM/finance-LLM") tokenizer = AutoTokenizer.from_pretrained("AdaptLLM/finance-LLM", use_fast=False) # Put your input here: user_input = '''Use this fact to answer the question: Title of each class Trading Symbol(s) Name of each exchange on which registered Common Stock, Par Value $.01 Per Share MMM New York Stock Exchange MMM Chicago Stock Exchange, Inc. 1.500% Notes due 2026 MMM26 New York Stock Exchange 1.750% Notes due 2030 MMM30 New York Stock Exchange 1.500% Notes due 2031 MMM31 New York Stock Exchange Which debt securities are registered to trade on a national securities exchange under 3M's name as of Q2 of 2023?''' # Simply use your input as the prompt for base models prompt = user_input inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).input_ids.to(model.device) outputs = model.generate(input_ids=inputs, max_length=2048)[0] answer_start = int(inputs.shape[-1]) pred = tokenizer.decode(outputs[answer_start:], skip_special_tokens=True) print(f'### User Input:\n{user_input}\n\n### Assistant Output:\n{pred}') ``` ## Domain-Specific Tasks To easily reproduce our results, we have uploaded the filled-in zero/few-shot input instructions and output completions of each domain-specific task: [biomedicine-tasks](https://huggingface.co/datasets/AdaptLLM/medicine-tasks), [finance-tasks](https://huggingface.co/datasets/AdaptLLM/finance-tasks), and [law-tasks](https://huggingface.co/datasets/AdaptLLM/law-tasks). **Note:** those filled-in instructions are specifically tailored for models before alignment and do NOT fit for the specific data format required for chat models. ## Citation If you find our work helpful, please cite us: ```bibtex @inproceedings{ cheng2024adapting, title={Adapting Large Language Models via Reading Comprehension}, author={Daixuan Cheng and Shaohan Huang and Furu Wei}, booktitle={The Twelfth International Conference on Learning Representations}, year={2024}, url={https://openreview.net/forum?id=y886UXPEZ0} } ```
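The chat variants referenced above follow the Llama-2-chat prompt format linked in the card. As a rough, assumption-labelled sketch (the individual chat model cards document the exact template), a single-turn prompt could be assembled like this before tokenization; the system prompt text is an illustrative placeholder.

```python
# Sketch of the standard Llama-2-chat single-turn template (an assumption here,
# not copied from the AdaptLLM chat model cards).
def build_llama2_chat_prompt(user_message: str,
                             system_prompt: str = "You are a helpful financial assistant.") -> str:
    return f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user_message} [/INST]"

prompt = build_llama2_chat_prompt("What does an inverted yield curve usually signal?")
print(prompt)
```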
01-ai/Yi-VL-6B
01-ai
"2024-06-26T10:45:12Z"
1,038
112
pytorch
[ "pytorch", "llava", "image-text-to-text", "arxiv:2403.04652", "license:apache-2.0", "region:us" ]
image-text-to-text
"2023-12-25T04:48:32Z"
--- license: apache-2.0 library_name: pytorch tags: - llava license_name: yi-license license_link: LICENSE pipeline_tag: image-text-to-text --- <div align="center"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_dark.svg" width="200px"> <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="200px"> <img alt="specify theme context for images" src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="200px"> </picture> </div> <div align="center"> <h1 align="center">Yi Vision Language Model</h1> </div> <div align="center"> <h3 align="center">Better Bilingual Multimodal Model</h3> </div> <p align="center"> 🤗 <a href="https://huggingface.co/01-ai" target="_blank">Hugging Face</a> • 🤖 <a href="https://www.modelscope.cn/organization/01ai/" target="_blank">ModelScope</a> • 🟣 <a href="https://wisemodel.cn/organization/01.AI" target="_blank">wisemodel</a> </p> <p align="center"> 👩‍🚀 Ask questions or discuss ideas on <a href="https://github.com/01-ai/Yi/discussions" target="_blank"> GitHub </a>! </p> <p align="center"> 👋 Join us 💬 <a href="https://github.com/01-ai/Yi/issues/43#issuecomment-1827285245" target="_blank"> WeChat (Chinese) </a>! </p> <p align="center"> 📚 Grow at <a href="https://github.com/01-ai/Yi/blob/main/docs/learning_hub.md"> Yi Learning Hub </a>! </p> <hr> <!-- DO NOT REMOVE ME --> <details open> <summary></b>📕 Table of Contents</b></summary> - [What is Yi-VL?](#what-is-yi-vl) - [Overview](#overview) - [Models](#models) - [Features](#features) - [Architecture](#architecture) - [Training](#training) - [Limitations](#limitations) - [Why Yi-VL?](#why-yi-vl) - [Tech report](#tech-report) - [Benchmarks](#benchmarks) - [Showcases](#showcases) - [How to use Yi-VL?](#how-to-use-yi-vl) - [Quick start](#quick-start) - [Hardware requirements](#hardware-requirements) - [Misc.](#misc) - [Acknowledgements and attributions](#acknowledgements-and-attributions) - [List of used open-source projects](#list-of-used-open-source-projects) - [License](#license) </details> <hr> # What is Yi-VL? ## Overview - **Yi Vision Language (Yi-VL)** model is the open-source, multimodal version of the Yi **Large Language Model (LLM)** series, enabling content comprehension, recognition, and multi-round conversations about images. - Yi-VL demonstrates exceptional performance, **ranking first** among all existing open-source models in the latest benchmarks including [MMMU](https://mmmu-benchmark.github.io/#leaderboard) in English and [CMMMU](https://mmmu-benchmark.github.io/#leaderboard) in Chinese (based on data available up to January 2024). - Yi-VL-34B is the **first** open-source 34B vision language model worldwide. ## Models Yi-VL has released the following versions. Model | Download |---|--- Yi-VL-34B |• [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-VL-34B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-VL-34B/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-VL-34B) Yi-VL-6B | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-VL-6B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-VL-6B/summary) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-VL-6B) ## Features Yi-VL offers the following features: - Multi-round text-image conversations: Yi-VL can take both text and images as inputs and produce text outputs. 
Currently, it supports multi-round visual question answering with one image. - Bilingual text support: Yi-VL supports conversations in both English and Chinese, including text recognition in images. - Strong image comprehension: Yi-VL is adept at analyzing visuals, making it an efficient tool for tasks like extracting, organizing, and summarizing information from images. - Fine-grained image resolution: Yi-VL supports image understanding at a higher resolution of 448&times;448. ## Architecture Yi-VL adopts the [LLaVA](https://github.com/haotian-liu/LLaVA) architecture, which is composed of three primary components: - Vision Transformer (ViT): it's initialized with [CLIP ViT-H/14 model](https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K) and used for image encoding. - Projection Module: it's designed to align image features with text feature space, consisting of a two-layer Multilayer Perceptron (MLP) with layer normalizations. - Large Language Model (LLM): it's initialized with [Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat) or [Yi-6B-Chat](https://huggingface.co/01-ai/Yi-6B-Chat), demonstrating exceptional proficiency in understanding and generating both English and Chinese. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/656d9adce8bf55919aca7c3f/EGVHSWG4kAcX01xDaoeXS.png) ## Training ### Training process Yi-VL is trained to align visual information well to the semantic space of Yi LLM, which undergoes a comprehensive three-stage training process: - Stage 1: The parameters of ViT and the projection module are trained using an image resolution of 224&times;224. The LLM weights are frozen. The training leverages an image caption dataset comprising 100 million image-text pairs from [LAION-400M](https://laion.ai/blog/laion-400-open-dataset/). The primary objective is to enhance the ViT's knowledge acquisition within our specified architecture and to achieve better alignment between the ViT and the LLM. - Stage 2: The image resolution of ViT is scaled up to 448&times;448, and the parameters of ViT and the projection module are trained. It aims to further boost the model's capability for discerning intricate visual details. The dataset used in this stage includes about 25 million image-text pairs, such as [LAION-400M](https://laion.ai/blog/laion-400-open-dataset/), [CLLaVA](https://huggingface.co/datasets/LinkSoul/Chinese-LLaVA-Vision-Instructions), [LLaVAR](https://llavar.github.io/), [Flickr](https://www.kaggle.com/datasets/hsankesara/flickr-image-dataset), [VQAv2](https://paperswithcode.com/dataset/visual-question-answering-v2-0), [RefCOCO](https://github.com/lichengunc/refer/tree/master), [Visual7w](http://ai.stanford.edu/~yukez/visual7w/) and so on. - Stage 3: The parameters of the entire model (that is, ViT, projection module, and LLM) are trained. The primary goal is to enhance the model's proficiency in multimodal chat interactions, thereby endowing it with the ability to seamlessly integrate and interpret visual and linguistic inputs. To this end, the training dataset encompasses a diverse range of sources, totalling approximately 1 million image-text pairs, including [GQA](https://cs.stanford.edu/people/dorarad/gqa/download.html), [VizWiz VQA](https://vizwiz.org/tasks-and-datasets/vqa/), [TextCaps](https://opendatalab.com/OpenDataLab/TextCaps), [OCR-VQA](https://ocr-vqa.github.io/), [Visual Genome](https://homes.cs.washington.edu/~ranjay/visualgenome/api.html), [LAION GPT4V](https://huggingface.co/datasets/laion/gpt4v-dataset) and so on. 
To ensure data balancing, we impose a cap on the maximum data contribution from any single source, restricting it to no more than 50,000 pairs. Below are the parameters configured for each stage. Stage | Global batch size | Learning rate | Gradient clip | Epochs |---|---|---|---|--- Stage 1, 2 |4096|1e-4|0.5|1 Stage 3|256|2e-5|1.0|2 ### Training resource consumption - The training consumes 128 NVIDIA A800 (80G) GPUs. - The total training time amounted to approximately 10 days for Yi-VL-34B and 3 days for Yi-VL-6B. ## Limitations This is the initial release of the Yi-VL, which comes with some known limitations. It is recommended to carefully evaluate potential risks before adopting any models. - Feature limitation - Visual question answering is supported. Other features like text-to-3D and image-to-video are not yet supported. - A single image rather than several images can be accepted as an input. - Hallucination problem - There is a certain possibility of generating content that does not exist in the image. - In scenes containing multiple objects, some objects might be incorrectly identified or described with insufficient detail. - Resolution issue - Yi-VL is trained on images with a resolution of 448&times;448. During inference, inputs of any resolution are resized to 448&times;448. Low-resolution images may result in information loss, and more fine-grained images (above 448) do not bring in extra knowledge. - Other limitations of the Yi LLM. # Why Yi-VL? ## Tech report For detailed capabilities of the Yi series model, see [Yi: Open Foundation Models by 01.AI](https://arxiv.org/abs/2403.04652). ### Citation ``` @misc{ai2024yi, title={Yi: Open Foundation Models by 01.AI}, author={01. AI and : and Alex Young and Bei Chen and Chao Li and Chengen Huang and Ge Zhang and Guanwei Zhang and Heng Li and Jiangcheng Zhu and Jianqun Chen and Jing Chang and Kaidong Yu and Peng Liu and Qiang Liu and Shawn Yue and Senbin Yang and Shiming Yang and Tao Yu and Wen Xie and Wenhao Huang and Xiaohui Hu and Xiaoyi Ren and Xinyao Niu and Pengcheng Nie and Yuchi Xu and Yudong Liu and Yue Wang and Yuxuan Cai and Zhenyu Gu and Zhiyuan Liu and Zonghong Dai}, year={2024}, eprint={2403.04652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## Benchmarks Yi-VL outperforms all existing open-source models in [MMMU](https://mmmu-benchmark.github.io) and [CMMMU](https://cmmmu-benchmark.github.io), two advanced benchmarks that include massive multi-discipline multimodal questions (based on data available up to January 2024). - MMMU ![image/png](https://cdn-uploads.huggingface.co/production/uploads/656d9adce8bf55919aca7c3f/kCmXuwLbLvequ93kjh3mg.png) - CMMMU ![image/png](https://cdn-uploads.huggingface.co/production/uploads/656d9adce8bf55919aca7c3f/6YuSakMCg3D2AozixdoZ0.png) ## Showcases Below are some representative examples of detailed description and visual question answering, showcasing the capabilities of Yi-VL. - English ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64cc65d786d8dc0caa6ab3cd/F_2bIVwMtVamygbVqtb8E.png) - Chinese ![image/png](https://cdn-uploads.huggingface.co/production/uploads/656d9adce8bf55919aca7c3f/l_tLzugFtHk1dkVsFJE7B.png) # How to use Yi-VL? ## Quick start Please refer to [Yi GitHub Repo](https://github.com/01-ai/Yi/tree/main/VL) for details. ## Hardware requirements For model inference, the recommended GPU examples are: - Yi-VL-6B: RTX 3090, RTX 4090, A10, A30 - Yi-VL-34B: 4 &times; RTX 4090, A800 (80 GB) # Misc. 
## Acknowledgements and attributions This project makes use of open-source software/components. We acknowledge and are grateful to these developers for their contributions to the open-source community. ### List of used open-source projects 1. LLaVA - Authors: Haotian Liu, Chunyuan Li, Qingyang Wu, Yuheng Li, and Yong Jae Lee - Source: https://github.com/haotian-liu/LLaVA - License: Apache-2.0 license - Description: The codebase is based on LLaVA code. 2. OpenClip - Authors: Gabriel Ilharco, Mitchell Wortsman, Ross Wightman, Cade Gordon, Nicholas Carlini, Rohan Taori, Achal Dave, Vaishaal Shankar, Hongseok Namkoong, John Miller, Hannaneh Hajishirzi, Ali Farhadi, and Ludwig Schmidt - Source: https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K - License: MIT - Description: The ViT is initialized using the weights of OpenClip. **Notes** - This attribution does not claim to cover all open-source components used. Please check individual components and their respective licenses for full details. - The use of the open-source components is subject to the terms and conditions of the respective licenses. We appreciate the open-source community for their invaluable contributions to the technology world. ## License Please refer to the [acknowledgments and attributions](#acknowledgments_and_attributions) as well as individual components, for the license of source code. The Yi series models are fully open for academic research and free for commercial use, permissions of which are automatically granted upon application. All usage must adhere to the [Apache 2.0 license](https://github.com/01-ai/Yi/blob/main/LICENSE). For free commercial use, you only need to send an email to get official commercial permission.
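To make the architecture description above concrete, here is a conceptual PyTorch sketch of the projection module as described (a two-layer MLP with layer normalizations that maps ViT image features into the LLM's text feature space). It is not the official implementation; the GELU activation and the example dimensions are assumptions.

```python
import torch.nn as nn

class YiVLProjector(nn.Module):
    """Conceptual two-layer MLP projector with layer normalizations."""

    def __init__(self, vision_dim: int, llm_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.LayerNorm(llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
            nn.LayerNorm(llm_dim),
        )

    def forward(self, image_features):
        # image_features: (batch, num_patches, vision_dim) from the ViT
        return self.proj(image_features)  # (batch, num_patches, llm_dim)

# Example dimensions (assumed, for illustration only):
projector = YiVLProjector(vision_dim=1280, llm_dim=4096)
```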
hooking-dev/sexyGPT-Uncensored
hooking-dev
"2024-04-18T11:08:10Z"
1,038
8
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-08T14:07:52Z"
--- license: apache-2.0 --- # SexyGPT By Hooking AI **SexyGPT** is a GPT-2 model finetuned as an experiment on a private dirty-talk dataset to test the behaviors of generative transformers. The model is tuned on exchanges of the form: <br> Boy Text: `hey, what's up?` <br> AI Text: ** some dirty answer ** <br> Be advised, <br> the model produces explicit dirty talk. It is for research purposes only and not meant to be used in ANY production environment.<br> Thank you, Hooking AI Team.
ielabgroup/vec2text_gtr-base-st_corrector
ielabgroup
"2024-01-28T23:36:25Z"
1,038
0
transformers
[ "transformers", "safetensors", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-01-28T03:59:18Z"
---
license: apache-2.0
---

To use the model, see the vec2text repo: [https://github.com/jxmorris12/vec2text](https://github.com/jxmorris12/vec2text)

# Example:

```python
from sentence_transformers import SentenceTransformer
import vec2text
import transformers

# Load the inversion model and this corrector model from the Hub.
inversion_model = vec2text.models.InversionModel.from_pretrained(
    "ielabgroup/vec2text_gtr-base-st_inversion"
)
model = vec2text.models.CorrectorEncoderModel.from_pretrained(
    "ielabgroup/vec2text_gtr-base-st_corrector"
)

inversion_trainer = vec2text.trainers.InversionTrainer(
    model=inversion_model,
    train_dataset=None,
    eval_dataset=None,
    data_collator=transformers.DataCollatorForSeq2Seq(
        inversion_model.tokenizer,
        label_pad_token_id=-100,
    ),
)

model.config.dispatch_batches = None

corrector = vec2text.trainers.Corrector(
    model=model,
    inversion_trainer=inversion_trainer,
    args=None,
    data_collator=vec2text.collator.DataCollatorForCorrection(
        tokenizer=inversion_trainer.model.tokenizer
    ),
)

# Embed some text with the matching GTR sentence encoder.
# (The name `model` is reused here as in the original example; the corrector
# already holds its own reference to the corrector model.)
model = SentenceTransformer('sentence-transformers/gtr-t5-base')
embeddings = model.encode([
    "Jack Morris is a PhD student at Cornell Tech in New York City",
    "It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity"
], convert_to_tensor=True).to('mps')  # move to your device: 'cuda', 'mps', or 'cpu'

# Invert the embeddings back into text.
texts = vec2text.invert_embeddings(
    embeddings=embeddings,
    corrector=corrector,
    num_steps=20,
)
print(texts)
# Expected output (approximately):
# [' Jack Morris is a PhD student at Cornell Tech in New York',
#  'It was the best of times, it was the worst of times, it was the epoch of incredulity, it was age of']
```
hvein/5HC6qXCVXJ4Toc9F9UAcMuwmFAizhRkNX3hRbYtg19RwPH7P_vgg
hvein
"2024-03-09T20:42:35Z"
1,038
0
keras
[ "keras", "region:us" ]
null
"2024-02-08T18:41:14Z"
Entry not found
Columbia-NLP/gemma-2b-zephyr-sft
Columbia-NLP
"2024-04-12T01:56:42Z"
1,038
6
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "alignment-handbook", "trl", "sft", "generated_from_trainer", "conversational", "dataset:HuggingFaceH4/deita-10k-v0-sft", "base_model:google/gemma-2b", "license:other", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-11T14:57:41Z"
--- license: other license_name: gemma-terms-of-use license_link: https://ai.google.dev/gemma/terms base_model: google/gemma-2b tags: - alignment-handbook - trl - sft - generated_from_trainer datasets: - HuggingFaceH4/deita-10k-v0-sft model-index: - name: gemma-2b-zephyr-sft results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 51.88 name: normalized accuracy - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 72.63 name: normalized accuracy - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 42.20 name: accuracy - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 41.96 - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 63.85 name: accuracy - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 20.09 name: accuracy --- # Model Card for Gemma 2B Zephyr SFT We trained the [google/gemma-2b](https://huggingface.co/google/gemma-2b) with [deita-10k-v0-sft](https://huggingface.co/datasets/HuggingFaceH4/deita-10k-v0-sft). We carefully selected the hyper-parameters and masked the user tokens during training to achieve the best supervised fine-tuning performance. ## Model description - **Model type:** A 2.5B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets. - **Language(s) (NLP):** Primarily English - **License:** Gemma Terms of Use - **Finetuned from model:** [google/gemma-2b](https://huggingface.co/google/gemma-2b) ## License This model has the same license as the [original Gemma model collection](https://ai.google.dev/gemma/terms) ## OpenLLM Leaderboard Performance | Models | Avg. 
| ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8k | |-----------------------------------------|------|-------|-----------|------|------------|------------|-------| | google/gemma-2b | 46.37| 48.38 | 71.77 | 41.77| 33.08 | 66.77 | 16.91 | | google/gemma-2b-it | 42.75| 43.94 | 62.70 | 37.65| 45.82 | 60.93 | 5.46 | | wandb/gemma-2b-zephyr-sft | 47.18| 49.74 | 72.38 | 41.37| 34.42 | 66.93 | 18.27 | | wandb/gemma-2b-zephyr-dpo | 46.92| 49.66 | 72.23 | 41.13| 34.47 | 66.54 | 17.51 | | **Columbia-NLP/gemma-2b-zephyr-sft** | 48.75| 51.80 | 72.63 | 42.20| 41.96 | 63.85 | 20.09 | | Columbia-NLP/gemma-2b-zephyr-dpo | 49.14| 52.22 | 73.11 | 42.55| 42.64 | 64.40 | 19.94 | ## MT-Bench GPT-4-0125-preview as Judge | Model | Total | Coding | Extraction | Humanities | Math | Reasoning | Roleplay | STEM | Writing | |------------------------------------------|-------|--------|------------|------------|------|-----------|----------|------|---------| | google/gemma-2b-it | 4.71 | 2.95 | 4.35 | 6.15 | 2.90 | 3.50 | 5.60 | 5.50 | 6.70 | | wandb/gemma-2b-zephyr-sft | 4.03 | 3.10 | 3.15 | 5.00 | 2.70 | 2.65 | 5.10 | 4.80 | 5.75 | | wandb/gemma-2b-zephyr-dpo | 4.06 | 2.80 | 2.90 | 5.55 | 2.65 | 2.70 | 5.20 | 4.80 | 5.85 | | **Columbia-NLP/gemma-2b-zephyr-sft** | 4.34 | 3.10 | 3.70 | 6.25 | 2.65 | 2.70 | 5.55 | 5.25 | 5.50 | | Columbia-NLP/gemma-2b-zephyr-dpo | 4.75 | 3.50 | 4.05 | 6.75 | 3.30 | 3.70 | 5.85 | 5.40 | 5.53 |
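## Usage sketch

The card above does not include inference code, so here is a minimal, hedged sketch using 🤗 Transformers. It assumes the tokenizer ships a chat template (typical for alignment-handbook SFT checkpoints); the dtype and generation settings are illustrative assumptions, not values used during training.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Columbia-NLP/gemma-2b-zephyr-sft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 is sufficient for a ~2.5B-parameter model
    device_map="auto",
)

# Assumes a chat template is defined for this tokenizer.
messages = [{"role": "user", "content": "Explain supervised fine-tuning in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```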
semajitation/Meta-Llama-3-8B-Instruct-Q5_K_M-GGUF
semajitation
"2024-06-23T21:53:07Z"
1,038
0
null
[ "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "region:us" ]
text-generation
"2024-06-23T21:52:43Z"
--- base_model: meta-llama/Meta-Llama-3-8B-Instruct language: - en license: llama3 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 - llama-cpp - gguf-my-repo extra_gated_prompt: "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version\ \ Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for\ \ use, reproduction, distribution and modification of the Llama Materials set forth\ \ herein.\n\"Documentation\" means the specifications, manuals and documentation\ \ accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\ \"Licensee\" or \"you\" means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity’s behalf), of\ \ the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama\ \ 3\" means the foundational large language models and software and algorithms,\ \ including machine-learning model code, trained model weights, inference-enabling\ \ code, training-enabling code, fine-tuning enabling code and other elements of\ \ the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\ \"Llama Materials\" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation\ \ (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"\ we\" means Meta Platforms Ireland Limited (if you are located in or, if you are\ \ an entity, your principal place of business is in the EEA or Switzerland) and\ \ Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n\ \ \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted\ \ a non-exclusive, worldwide, non-transferable and royalty-free limited license\ \ under Meta’s intellectual property or other rights owned by Meta embodied in the\ \ Llama Materials to use, reproduce, distribute, copy, create derivative works of,\ \ and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni.\ \ If you distribute or make available the Llama Materials (or any derivative works\ \ thereof), or a product or service that uses any of them, including another AI\ \ model, you shall (A) provide a copy of this Agreement with any such Llama Materials;\ \ and (B) prominently display “Built with Meta Llama 3” on a related website, user\ \ interface, blogpost, about page, or product documentation. If you use the Llama\ \ Materials to create, train, fine tune, or otherwise improve an AI model, which\ \ is distributed or made available, you shall also include “Llama 3” at the beginning\ \ of any such AI model name.\nii. If you receive Llama Materials, or any derivative\ \ works thereof, from a Licensee as part of an integrated end user product, then\ \ Section 2 of this Agreement will not apply to you.\niii. You must retain in all\ \ copies of the Llama Materials that you distribute the following attribution notice\ \ within a “Notice” text file distributed as a part of such copies: “Meta Llama\ \ 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms,\ \ Inc. All Rights Reserved.”\niv. 
Your use of the Llama Materials must comply with\ \ applicable laws and regulations (including trade compliance laws and regulations)\ \ and adhere to the Acceptable Use Policy for the Llama Materials (available at\ \ https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference\ \ into this Agreement.\nv. You will not use the Llama Materials or any output or\ \ results of the Llama Materials to improve any other large language model (excluding\ \ Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If,\ \ on the Meta Llama 3 version release date, the monthly active users of the products\ \ or services made available by or for Licensee, or Licensee’s affiliates, is greater\ \ than 700 million monthly active users in the preceding calendar month, you must\ \ request a license from Meta, which Meta may grant to you in its sole discretion,\ \ and you are not authorized to exercise any of the rights under this Agreement\ \ unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer\ \ of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT\ \ AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF\ \ ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED,\ \ INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY,\ \ OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING\ \ THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME\ \ ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n\ 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER\ \ ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY,\ \ OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT,\ \ SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META\ \ OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n\ 5. Intellectual Property.\na. No trademark licenses are granted under this Agreement,\ \ and in connection with the Llama Materials, neither Meta nor Licensee may use\ \ any name or mark owned by or associated with the other or any of its affiliates,\ \ except as required for reasonable and customary use in describing and redistributing\ \ the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you\ \ a license to use “Llama 3” (the “Mark”) solely as required to comply with the\ \ last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently\ \ accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All\ \ goodwill arising out of your use of the Mark will inure to the benefit of Meta.\n\ b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for\ \ Meta, with respect to any derivative works and modifications of the Llama Materials\ \ that are made by you, as between you and Meta, you are and will be the owner of\ \ such derivative works and modifications.\nc. 
If you institute litigation or other\ \ proceedings against Meta or any entity (including a cross-claim or counterclaim\ \ in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results,\ \ or any portion of any of the foregoing, constitutes infringement of intellectual\ \ property or other rights owned or licensable by you, then any licenses granted\ \ to you under this Agreement shall terminate as of the date such litigation or\ \ claim is filed or instituted. You will indemnify and hold harmless Meta from and\ \ against any claim by any third party arising out of or related to your use or\ \ distribution of the Llama Materials.\n6. Term and Termination. The term of this\ \ Agreement will commence upon your acceptance of this Agreement or access to the\ \ Llama Materials and will continue in full force and effect until terminated in\ \ accordance with the terms and conditions herein. Meta may terminate this Agreement\ \ if you are in breach of any term or condition of this Agreement. Upon termination\ \ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\ \ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\ \ and Jurisdiction. This Agreement will be governed and construed under the laws\ \ of the State of California without regard to choice of law principles, and the\ \ UN Convention on Contracts for the International Sale of Goods does not apply\ \ to this Agreement. The courts of California shall have exclusive jurisdiction\ \ of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use\ \ Policy\nMeta is committed to promoting safe and fair use of its tools and features,\ \ including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable\ \ Use Policy (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n\ #### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly.\ \ You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate\ \ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\ \ contribute to, encourage, plan, incite, or further illegal or unlawful activity\ \ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\ \ or harm to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4. The\ \ illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6. Any\ \ other criminal activity\n 2. Engage in, promote, incite, or facilitate the\ \ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 3. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n 4.\ \ Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices\n 5. 
Collect, process, disclose, generate, or infer health, demographic,\ \ or other sensitive personal or private information about individuals without rights\ \ and consents required by applicable laws\n 6. Engage in or facilitate any action\ \ or generate any content that infringes, misappropriates, or otherwise violates\ \ any third-party rights, including the outputs or results of any products or services\ \ using the Llama Materials\n 7. Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system\n2. Engage in, promote, incite,\ \ facilitate, or assist in the planning or development of activities that present\ \ a risk of death or bodily harm to individuals, including use of Meta Llama 3 related\ \ to the following:\n 1. Military, warfare, nuclear industries or applications,\ \ espionage, use for materials or activities that are subject to the International\ \ Traffic Arms Regulations (ITAR) maintained by the United States Department of\ \ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\ \ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\ \ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\ \ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\ \ content intended to incite or promote violence, abuse, or any infliction of bodily\ \ harm to an individual\n3. Intentionally deceive or mislead others, including use\ \ of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering\ \ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\ \ or furthering defamatory content, including the creation of defamatory statements,\ \ images, or other content\n 3. Generating, promoting, or further distributing\ \ spam\n 4. Impersonating another individual without consent, authorization,\ \ or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are\ \ human-generated\n 6. Generating or facilitating false online engagement, including\ \ fake reviews and other means of fake online engagement\n4. Fail to appropriately\ \ disclose to end users any known dangers of your AI system\nPlease report any violation\ \ of this Policy, software “bug,” or other problems that could lead to a violation\ \ of this Policy through one of the following means:\n * Reporting issues with\ \ the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n\ \ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\ \ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\ \ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). 
extra_gated_button_content: Submit widget: - example_title: Hello messages: - role: user content: Hey my name is Julien! How are you? - example_title: Winter holidays messages: - role: system content: You are a helpful and honest assistant. Please, respond concisely and truthfully. - role: user content: Can you recommend a good destination for Winter holidays? - example_title: Programming assistant messages: - role: system content: You are a helpful and honest code and programming assistant. Please, respond concisely and truthfully. - role: user content: Write a function that computes the nth fibonacci number. inference: parameters: max_new_tokens: 300 stop: - <|end_of_text|> - <|eot_id|> --- # semajitation/Meta-Llama-3-8B-Instruct-Q5_K_M-GGUF This model was converted to GGUF format from [`meta-llama/Meta-Llama-3-8B-Instruct`](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo semajitation/Meta-Llama-3-8B-Instruct-Q5_K_M-GGUF --hf-file meta-llama-3-8b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo semajitation/Meta-Llama-3-8B-Instruct-Q5_K_M-GGUF --hf-file meta-llama-3-8b-instruct-q5_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo semajitation/Meta-Llama-3-8B-Instruct-Q5_K_M-GGUF --hf-file meta-llama-3-8b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo semajitation/Meta-Llama-3-8B-Instruct-Q5_K_M-GGUF --hf-file meta-llama-3-8b-instruct-q5_k_m.gguf -c 2048 ```
AswiN037/xlm-roberta-squad-tamil
AswiN037
"2022-05-31T04:15:42Z"
1,037
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "license:osl-3.0", "endpoints_compatible", "region:us" ]
question-answering
"2022-04-22T03:55:42Z"
---
license: osl-3.0
---

An XLM-RoBERTa question-answering model fine-tuned for Tamil (SQuAD-style extractive QA). A minimal usage sketch is given below.
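The card gives no usage instructions, so the following is a minimal, hedged sketch using the 🤗 Transformers `question-answering` pipeline. The repository id comes from this entry; the question and context strings are placeholders (in practice you would supply Tamil text), not examples from the model's training data.

```python
from transformers import pipeline

# Load the extractive QA checkpoint by its Hub id (sketch, not official usage code).
qa = pipeline("question-answering", model="AswiN037/xlm-roberta-squad-tamil")

# Placeholder example; replace with a Tamil question and context for the intended use case.
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
)
print(result["answer"], result["score"])
```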
LTP/base
LTP
"2022-09-19T06:36:10Z"
1,037
3
transformers
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
"2022-08-14T04:15:28Z"
![CODE SIZE](https://img.shields.io/github/languages/code-size/HIT-SCIR/ltp) ![CONTRIBUTORS](https://img.shields.io/github/contributors/HIT-SCIR/ltp) ![LAST COMMIT](https://img.shields.io/github/last-commit/HIT-SCIR/ltp)

| Language | Version |
| ------------------------------------ | ------- |
| [Python](python/interface/README.md) | [![LTP](https://img.shields.io/pypi/v/ltp?label=LTP)](https://pypi.org/project/ltp) [![LTP-Core](https://img.shields.io/pypi/v/ltp-core?label=LTP-Core)](https://pypi.org/project/ltp-core) [![LTP-Extension](https://img.shields.io/pypi/v/ltp-extension?label=LTP-Extension)](https://pypi.org/project/ltp-extension) |
| [Rust](rust/ltp/README.md) | [![LTP](https://img.shields.io/crates/v/ltp?label=LTP)](https://crates.io/crates/ltp) |

# LTP 4

LTP (Language Technology Platform) provides a suite of Chinese natural language processing tools. With these tools, users can perform word segmentation, part-of-speech tagging, syntactic parsing, and other analyses on Chinese text.

## Citation

If you use LTP in your work, you can cite this paper:

```bibtex
@article{che2020n,
  title={N-LTP: A Open-source Neural Chinese Language Technology Platform with Pretrained Models},
  author={Che, Wanxiang and Feng, Yunlong and Qin, Libo and Liu, Ting},
  journal={arXiv preprint arXiv:2009.11616},
  year={2020}
}
```

**Reference book:** *Natural Language Processing: A Pre-trained Model Approach* ([link](https://item.jd.com/13344628.html)), co-authored by scholars of the Research Center for Social Computing and Information Retrieval at Harbin Institute of Technology (HIT-SCIR) (authors: Wanxiang Che, Jiang Guo, Yiming Cui; chief reviewer: Ting Liu), has been officially published. The book focuses on new pre-trained-model-based natural language processing techniques, covering three major parts — fundamentals, pre-trained word vectors, and pre-trained models — and can serve as a learning reference for LTP users.

### Release notes

- 4.2.0
  - [Structural change] LTP has been split into 2 packages, which makes maintenance and training easier and the structure clearer
  - [Legacy model] To address users' demand for **inference speed**, the perceptron-based algorithms were rewritten in Rust. Accuracy is comparable to LTP 3, while speed is **3.55x** that of LTP v3, rising to a **17.17x** speedup with multi-threading enabled; currently only the three tasks of word segmentation, POS tagging, and named entity recognition are supported
  - [Deep learning model] Deep learning models implemented with PyTorch, supporting all 6 tasks (word segmentation / POS / NER / semantic role labeling / dependency parsing / semantic dependency parsing)
  - [Other improvements] Improved the model training method
    - [Common] Training scripts and examples are provided, so users can conveniently train personalized models on their own private data
    - [Deep learning model] Training is configured via hydra, making it easy to modify training parameters and to extend LTP (e.g. by using Modules from other packages)
  - [Other changes] The decoding algorithms of the word segmentation, dependency parsing (Eisner), and semantic dependency parsing (Eisner) tasks are implemented in Rust and run faster
  - [New feature] Models are uploaded to the [Huggingface Hub](https://huggingface.co/LTP) and downloaded automatically at higher speed; users can also upload their own trained models for LTP inference
  - [Breaking change] Inference now goes through a Pipeline API, which allows deeper performance optimization later (e.g. SDP and SDPG overlap to a large extent, and reuse can speed up inference); see the [quick-start section on GitHub](https://github.com/hit-scir/ltp) for usage
- 4.1.0
  - Added custom segmentation and other features
  - Fixed some bugs
- 4.0.0
  - Built on PyTorch with a native Python interface
  - Models with different speed/accuracy trade-offs can be chosen freely as needed
  - Six tasks: word segmentation, POS tagging, NER, dependency parsing, semantic role labeling, and semantic dependency parsing

## Quick start

### [Python](python/interface/README.md)

```bash
pip install -U ltp ltp-core ltp-extension -i https://pypi.org/simple # install ltp
```

**Note:** If you run into any errors, try reinstalling ltp with the command above; if the error persists, please report it in the GitHub issues.

```python
import torch
from ltp import LTP

ltp = LTP("LTP/small")  # loads the Small model by default

# move the model to the GPU
if torch.cuda.is_available():
    # ltp.cuda()
    ltp.to("cuda")

output = ltp.pipeline(["他叫汤姆去拿外衣。"], tasks=["cws", "pos", "ner", "srl", "dep", "sdp"])
# results are returned as a dict-like object
print(output.cws)  # print(output[0]) / print(output['cws'])  # index access also works
print(output.pos)
print(output.sdp)

# Word segmentation, POS tagging and NER implemented with the perceptron algorithm:
# faster, but slightly less accurate
ltp = LTP("LTP/legacy")
# cws, pos, ner = ltp.pipeline(["他叫汤姆去拿外衣。"], tasks=["cws", "ner"]).to_tuple()  # error: NER requires the POS tagging results
cws, pos, ner = ltp.pipeline(["他叫汤姆去拿外衣。"], tasks=["cws", "pos", "ner"]).to_tuple()  # to_tuple() converts the result to a tuple
# results are returned as a tuple
print(cws, pos, ner)
```

**[Detailed documentation](python/interface/docs/quickstart.rst)**

### [Rust](rust/ltp/README.md)

```rust
use std::fs::File;
use itertools::multizip;
use ltp::{CWSModel, POSModel, NERModel, ModelSerde, Format, Codec};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let file = File::open("data/legacy-models/cws_model.bin")?;
    let cws: CWSModel = ModelSerde::load(file, Format::AVRO(Codec::Deflate))?;
    let file = File::open("data/legacy-models/pos_model.bin")?;
    let pos: POSModel = ModelSerde::load(file, Format::AVRO(Codec::Deflate))?;
    let file = File::open("data/legacy-models/ner_model.bin")?;
    let ner: NERModel = ModelSerde::load(file, Format::AVRO(Codec::Deflate))?;

    let words = cws.predict("他叫汤姆去拿外衣。")?;
    let pos = pos.predict(&words)?;
    let ner = ner.predict((&words, &pos))?;

    for (w, p, n) in multizip((words, pos, ner)) {
        println!("{}/{}/{}", w, p, n);
    }

    Ok(())
}
```

## Model performance and download links

| Deep learning models | CWS | POS | NER | SRL | DEP | SDP | Speed (sent/s) |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| [Base](https://huggingface.co/LTP/base) | 98.7 | 98.5 | 95.4 | 80.6 | 89.5 | 75.2 | 39.12 |
| [Base1](https://huggingface.co/LTP/base1) | 99.22 | 98.73 | 96.39 | 79.28 | 89.57 | 76.57 | --.-- |
| [Base2](https://huggingface.co/LTP/base2) | 99.18 | 98.69 | 95.97 | 79.49 | 90.19 | 76.62 | --.-- |
| [Small](https://huggingface.co/LTP/small) | 98.4 | 98.2 | 94.3 | 78.4 | 88.3 | 74.7 | 43.13 |
| [Tiny](https://huggingface.co/LTP/tiny) | 96.8 | 97.1 | 91.6 | 70.9 | 83.8 | 70.1 | 53.22 |

| Perceptron algorithm | CWS | POS | NER | Speed (sent/s) | Notes |
| :---: | :---: | :---: | :---: | :---: | :---: |
| [Legacy](https://huggingface.co/LTP/legacy) | 97.93 | 98.41 | 94.28 | 21581.48 | [Performance details](rust/ltp/README.md) |

**Note: the perceptron speed was measured with 16 threads enabled.**

## Building the wheel package

```shell script
make bdist
```

## Bindings for other languages

**Perceptron algorithm**

- [Rust](rust/ltp)
- [C/C++](rust/ltp-cffi)

**Deep learning models**

- [Rust](https://github.com/HIT-SCIR/libltp/tree/master/ltp-rs)
- [C++](https://github.com/HIT-SCIR/libltp/tree/master/ltp-cpp)
- [Java](https://github.com/HIT-SCIR/libltp/tree/master/ltp-java)

## Author

- Yunlong Feng \<\<[[email protected]](mailto:[email protected])>>

## License

1. The source code of the Language Technology Platform is freely available to universities at home and abroad, institutes of the Chinese Academy of Sciences, and individual researchers; if these institutions or individuals use the platform for commercial purposes (such as corporate cooperation projects), a fee is required.
2. Enterprises and institutions other than the above must pay to use the platform.
3. For any payment-related matters, please contact [email protected].
4. If you publish papers or obtain research results based on LTP, please state in the publication or report that you "used the Language Technology Platform (LTP) developed by the Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology", and send an email to [email protected] with the title and venue of the paper or result.
SkunkworksAI/Mistralic-7B-1
SkunkworksAI
"2023-10-03T22:04:49Z"
1,037
26
transformers
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
"2023-09-29T23:45:09Z"
<p><h1> 🦾 Mistralic-7B-1 🦾 </h1></p>

Special thanks to Together Compute for sponsoring Skunkworks with compute!

**INFERENCE**

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.set_default_device('cuda')

system_prompt = "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n"
system_no_input_prompt = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n"

def generate_prompt(instruction, input=None):
    # Build an Alpaca-style prompt, with or without an input section.
    if input:
        prompt = f"### System:\n{system_prompt}\n\n"
    else:
        prompt = f"### System:\n{system_no_input_prompt}\n\n"
    prompt += f"### Instruction:\n{instruction}\n\n"
    if input:
        prompt += f"### Input:\n{input}\n\n"
    return prompt + """### Response:\n"""

device = "cuda"

model = AutoModelForCausalLM.from_pretrained("SkunkworksAI/Mistralic-7B-1")
tokenizer = AutoTokenizer.from_pretrained("SkunkworksAI/Mistralic-7B-1")

# Simple interactive loop: read an instruction, wrap it in the prompt template, and generate.
while True:
    instruction = input("Enter Instruction: ")
    instruction = generate_prompt(instruction)
    inputs = tokenizer(instruction, return_tensors="pt", return_attention_mask=False)
    outputs = model.generate(**inputs, max_length=1000, do_sample=True, temperature=0.01, use_cache=True, eos_token_id=tokenizer.eos_token_id)
    text = tokenizer.batch_decode(outputs)[0]
    print(text)
```

**EVALUATION**

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64b7e345f92b20f7a38bf47a/ycpNhdGZHGbai_wslT2Bg.png)

Average: 0.72157

For comparison:
- mistralai/Mistral-7B-v0.1 scores 0.7116
- mistralai/Mistral-7B-Instruct-v0.1 scores 0.6794
bilalRahib/TinyLLama-NSFW-Chatbot
bilalRahib
"2024-03-02T06:54:34Z"
1,037
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/tinyllama-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-03-02T06:53:21Z"
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/tinyllama-bnb-4bit
---

# Uploaded model

- **Developed by:** bilalRahib
- **License:** apache-2.0
- **Finetuned from model:** unsloth/tinyllama-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. A minimal inference sketch is provided below.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
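Since the card does not include inference code, here is a minimal, hedged sketch using 🤗 Transformers. It assumes the uploaded checkpoint can be loaded directly as a causal LM; the prompt format and generation settings are illustrative assumptions, not documented values.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bilalRahib/TinyLLama-NSFW-Chatbot"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: fp16 is sufficient for a ~1.1B-parameter model
    device_map="auto",
)

# The exact prompt template used during fine-tuning is not documented on this card,
# so a plain chat-style prompt is used here purely as an illustration.
prompt = "### Human: Hi, how are you?\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```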
mddokmak/MNLP
mddokmak
"2024-06-01T16:13:57Z"
1,037
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:openai-community/gpt2-large", "region:us" ]
null
"2024-06-01T16:11:04Z"
--- library_name: peft base_model: openai-community/gpt2-large --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
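Since the "How to Get Started with the Model" section above is still a placeholder, the following is a minimal, hedged sketch of loading this PEFT adapter on top of its declared base model (`openai-community/gpt2-large`, from the card's metadata). The adapter type and intended task are not documented, so this only illustrates the generic PEFT loading pattern under the assumption that it is a causal-LM adapter.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "openai-community/gpt2-large"  # declared in the card's metadata
adapter_id = "mddokmak/MNLP"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the adapter weights hosted in this repository (assumed to be a causal-LM adapter).
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Hello, world", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```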
mradermacher/Llama3-18B-lingyang-v1-i1-GGUF
mradermacher
"2024-06-06T21:48:17Z"
1,037
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "Llama3", "en", "base_model:wwe180/Llama3-18B-lingyang-v1", "license:other", "endpoints_compatible", "region:us" ]
null
"2024-06-06T06:51:21Z"
--- base_model: wwe180/Llama3-18B-lingyang-v1 language: - en library_name: transformers license: - other quantized_by: mradermacher tags: - mergekit - merge - Llama3 --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/wwe180/Llama3-18B-lingyang-v1 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Llama3-18B-lingyang-v1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama3-18B-lingyang-v1-i1-GGUF/resolve/main/Llama3-18B-lingyang-v1.i1-IQ1_S.gguf) | i1-IQ1_S | 4.2 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Llama3-18B-lingyang-v1-i1-GGUF/resolve/main/Llama3-18B-lingyang-v1.i1-IQ1_M.gguf) | i1-IQ1_M | 4.6 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Llama3-18B-lingyang-v1-i1-GGUF/resolve/main/Llama3-18B-lingyang-v1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-18B-lingyang-v1-i1-GGUF/resolve/main/Llama3-18B-lingyang-v1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-18B-lingyang-v1-i1-GGUF/resolve/main/Llama3-18B-lingyang-v1.i1-IQ2_S.gguf) | i1-IQ2_S | 6.0 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-18B-lingyang-v1-i1-GGUF/resolve/main/Llama3-18B-lingyang-v1.i1-IQ2_M.gguf) | i1-IQ2_M | 6.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-18B-lingyang-v1-i1-GGUF/resolve/main/Llama3-18B-lingyang-v1.i1-Q2_K.gguf) | i1-Q2_K | 7.0 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Llama3-18B-lingyang-v1-i1-GGUF/resolve/main/Llama3-18B-lingyang-v1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 7.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama3-18B-lingyang-v1-i1-GGUF/resolve/main/Llama3-18B-lingyang-v1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-18B-lingyang-v1-i1-GGUF/resolve/main/Llama3-18B-lingyang-v1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 8.1 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Llama3-18B-lingyang-v1-i1-GGUF/resolve/main/Llama3-18B-lingyang-v1.i1-IQ3_S.gguf) | i1-IQ3_S | 8.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama3-18B-lingyang-v1-i1-GGUF/resolve/main/Llama3-18B-lingyang-v1.i1-IQ3_M.gguf) | i1-IQ3_M | 8.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-18B-lingyang-v1-i1-GGUF/resolve/main/Llama3-18B-lingyang-v1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 8.9 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Llama3-18B-lingyang-v1-i1-GGUF/resolve/main/Llama3-18B-lingyang-v1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 9.7 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Llama3-18B-lingyang-v1-i1-GGUF/resolve/main/Llama3-18B-lingyang-v1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 9.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-18B-lingyang-v1-i1-GGUF/resolve/main/Llama3-18B-lingyang-v1.i1-Q4_0.gguf) | i1-Q4_0 | 10.4 | fast, low quality | | 
[GGUF](https://huggingface.co/mradermacher/Llama3-18B-lingyang-v1-i1-GGUF/resolve/main/Llama3-18B-lingyang-v1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 10.5 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Llama3-18B-lingyang-v1-i1-GGUF/resolve/main/Llama3-18B-lingyang-v1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 11.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama3-18B-lingyang-v1-i1-GGUF/resolve/main/Llama3-18B-lingyang-v1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 12.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-18B-lingyang-v1-i1-GGUF/resolve/main/Llama3-18B-lingyang-v1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 12.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-18B-lingyang-v1-i1-GGUF/resolve/main/Llama3-18B-lingyang-v1.i1-Q6_K.gguf) | i1-Q6_K | 14.9 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF
TheBloke
"2024-01-05T18:27:05Z"
1,036
10
transformers
[ "transformers", "gguf", "mixtral", "text-generation", "en", "dataset:lemonilia/LimaRP", "base_model:Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss", "license:apache-2.0", "text-generation-inference", "region:us" ]
text-generation
"2024-01-05T17:49:39Z"
--- base_model: Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss datasets: - lemonilia/LimaRP inference: false language: - en library_name: transformers license: apache-2.0 model_creator: Doctor Shotgun model_name: Mixtral 8X7B Instruct v0.1 LimaRP ZLoss model_type: mixtral pipeline_tag: text-generation prompt_template: '### Instruction: {system_message} ### Input: {prompt} ### Response: ' quantized_by: TheBloke tags: - mixtral --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Mixtral 8X7B Instruct v0.1 LimaRP ZLoss - GGUF - Model creator: [Doctor Shotgun](https://huggingface.co/Doctor-Shotgun) - Original model: [Mixtral 8X7B Instruct v0.1 LimaRP ZLoss](https://huggingface.co/Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss) <!-- description start --> ## Description This repo contains GGUF format model files for [Doctor Shotgun's Mixtral 8X7B Instruct v0.1 LimaRP ZLoss](https://huggingface.co/Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. 
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF) * [Doctor Shotgun's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Instruction-Input-Response ``` ### Instruction: {system_message} ### Input: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [mixtral-8x7b-instruct-v0.1-limarp-zloss.Q2_K.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF/blob/main/mixtral-8x7b-instruct-v0.1-limarp-zloss.Q2_K.gguf) | Q2_K | 2 | 15.64 GB| 18.14 GB | smallest, significant quality loss - not recommended for most purposes | | [mixtral-8x7b-instruct-v0.1-limarp-zloss.Q3_K_M.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF/blob/main/mixtral-8x7b-instruct-v0.1-limarp-zloss.Q3_K_M.gguf) | Q3_K_M | 3 | 20.36 GB| 22.86 GB | very small, high quality loss | | [mixtral-8x7b-instruct-v0.1-limarp-zloss.Q4_0.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF/blob/main/mixtral-8x7b-instruct-v0.1-limarp-zloss.Q4_0.gguf) | Q4_0 | 4 | 26.44 GB| 28.94 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [mixtral-8x7b-instruct-v0.1-limarp-zloss.Q4_K_M.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF/blob/main/mixtral-8x7b-instruct-v0.1-limarp-zloss.Q4_K_M.gguf) | Q4_K_M | 4 | 26.44 GB| 28.94 GB | medium, balanced quality - recommended | | [mixtral-8x7b-instruct-v0.1-limarp-zloss.Q5_0.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF/blob/main/mixtral-8x7b-instruct-v0.1-limarp-zloss.Q5_0.gguf) | Q5_0 | 5 | 32.23 GB| 34.73 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [mixtral-8x7b-instruct-v0.1-limarp-zloss.Q5_K_M.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF/blob/main/mixtral-8x7b-instruct-v0.1-limarp-zloss.Q5_K_M.gguf) | Q5_K_M | 5 | 32.23 GB| 34.73 GB | large, very low quality loss - recommended | | [mixtral-8x7b-instruct-v0.1-limarp-zloss.Q6_K.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF/blob/main/mixtral-8x7b-instruct-v0.1-limarp-zloss.Q6_K.gguf) | Q6_K | 6 | 38.38 GB| 40.88 GB | very large, extremely low quality loss | | [mixtral-8x7b-instruct-v0.1-limarp-zloss.Q8_0.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF/blob/main/mixtral-8x7b-instruct-v0.1-limarp-zloss.Q8_0.gguf) | Q8_0 | 8 | 49.63 GB| 52.13 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF and below it, a specific filename to download, such as: mixtral-8x7b-instruct-v0.1-limarp-zloss.Q4_K_M.gguf. Then click Download. 
### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF mixtral-8x7b-instruct-v0.1-limarp-zloss.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF mixtral-8x7b-instruct-v0.1-limarp-zloss.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m mixtral-8x7b-instruct-v0.1-limarp-zloss.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction:\n{system_message}\n\n### Input:\n{prompt}\n\n### Response:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. 
### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).

#### First install the package

Run one of the following commands, according to your system:

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python

# In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```

#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
  model_path="./mixtral-8x7b-instruct-v0.1-limarp-zloss.Q4_K_M.gguf",  # Download the model file first
  n_ctx=32768,  # The max sequence length to use - note that longer sequence lengths require much more resources
  n_threads=8,            # The number of CPU threads to use, tailor to your system and the resulting performance
  n_gpu_layers=35         # The number of layers to offload to GPU, if you have GPU acceleration available
)

# Simple inference example
output = llm(
  "### Instruction:\n{system_message}\n\n### Input:\n{prompt}\n\n### Response:", # Prompt
  max_tokens=512,  # Generate up to 512 tokens
  stop=["</s>"],   # Example stop token - not necessarily correct for this specific model! Please check before using.
  echo=True        # Whether to echo the prompt
)

# Chat Completion API

llm = Llama(model_path="./mixtral-8x7b-instruct-v0.1-limarp-zloss.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. 
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Doctor Shotgun's Mixtral 8X7B Instruct v0.1 LimaRP ZLoss # Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss Experimental model, using a limarp qlora trained at 10k ctx length (greater than size of the longest limarp sample when tokenized via mistral's tokenizer) on [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) using [Charles Goddard](https://huggingface.co/chargoddard)'s ZLoss and Megablocks-based fork of transformers, and then fused to [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) at 0.5 weight. Would try with temp ~1.5-2 and min-p of ~0.03-0.05 since mixtral does appear to be highly confident on its responses and can enter repetition loops after several thousand tokens of responses. [Peft Adapter](https://huggingface.co/Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora) ## Usage: The intended prompt format is the Alpaca instruction format of LimaRP v3: ``` ### Instruction: Character's Persona: {bot character description} User's Persona: {user character description} Scenario: {what happens in the story} Play the role of Character. Taking the above information into consideration, you must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User. ### Input: User: {utterance} ### Response: Character: {utterance} ### Input: User: {utterance} ### Response: Character: {utterance} (etc.) 
```

## Message length control

Due to the inclusion of LimaRP v3, it is possible to append a length modifier to the response instruction sequence, like this:

```
### Input:
User: {utterance}

### Response: (length = medium)
Character: {utterance}
```

This has an immediately noticeable effect on bot responses. The available lengths are: `micro, tiny, short, medium, long, massive, huge, enormous, humongous, unlimited`. The recommended starting length is `medium`. Keep in mind that the AI may ramble or impersonate the user with very long messages.

## Bias, Risks, and Limitations

The model will show biases similar to those observed in niche roleplaying forums on the Internet, besides those exhibited by the base model. It is not intended for supplying factual information or advice in any form.

## Training Details

This model is a merge. Please refer to the linked repositories of the merged models for details.

<!-- original-model-card end -->
BarraHome/zephyr-dpo-v2
BarraHome
"2024-03-04T22:25:05Z"
1,036
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "text-classification", "en", "es", "dataset:jondurbin/truthy-dpo-v0.1", "dataset:BarraHome/ultrafeedback_binarized", "base_model:BarraHome/zephyr-dpo-4bit", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-02-04T07:04:17Z"
--- language: - en - es license: mit library_name: transformers tags: - text-generation-inference - transformers - unsloth - mistral - trl datasets: - jondurbin/truthy-dpo-v0.1 - BarraHome/ultrafeedback_binarized base_model: BarraHome/zephyr-dpo-4bit pipeline_tag: text-classification model-index: - name: zephyr-dpo-v2 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 57.85 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BarraHome/zephyr-dpo-v2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 82.72 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BarraHome/zephyr-dpo-v2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 58.61 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BarraHome/zephyr-dpo-v2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 56.16 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BarraHome/zephyr-dpo-v2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 74.35 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BarraHome/zephyr-dpo-v2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 30.25 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BarraHome/zephyr-dpo-v2 name: Open LLM Leaderboard --- # Uploaded model - **Developed by:** BarraHome - **License:** apache-2.0 - **Finetuned from model :** BarraHome/zephyr-dpo-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_BarraHome__zephyr-dpo-v2) | Metric |Value| |---------------------------------|----:| |Avg. |59.99| |AI2 Reasoning Challenge (25-Shot)|57.85| |HellaSwag (10-Shot) |82.72| |MMLU (5-Shot) |58.61| |TruthfulQA (0-shot) |56.16| |Winogrande (5-shot) |74.35| |GSM8k (5-shot) |30.25|
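## Quick usage sketch

The card does not include an inference snippet, so here is a minimal, hedged sketch (not from the original author). It assumes the tokenizer ships a chat template inherited from its Zephyr/Mistral base and that a GPU is available; the example question is illustrative only.

```python
# Hedged inference sketch for BarraHome/zephyr-dpo-v2 (assumes a chat template is present).
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="BarraHome/zephyr-dpo-v2",
    torch_dtype=torch.float16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain DPO fine-tuning in two sentences."}]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

out = pipe(prompt, max_new_tokens=200, do_sample=True, temperature=0.7)
print(out[0]["generated_text"])
```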
alejandrovil/llama3-AWQ
alejandrovil
"2024-05-23T18:15:11Z"
1,036
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "4-bit", "AWQ", "autotrain_compatible", "endpoints_compatible", "Llama-3", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "axolotl", "conversational", "en", "dataset:teknium/OpenHermes-2.5", "license:apache-2.0", "text-generation-inference", "awq", "region:us" ]
text-generation
"2024-05-03T23:08:35Z"
--- library_name: transformers tags: - 4-bit - AWQ - text-generation - autotrain_compatible - endpoints_compatible - Llama-3 - instruct - finetune - chatml - DPO - RLHF - gpt4 - synthetic data - distillation - function calling - json mode - axolotl model-index: - name: Hermes-2-Pro-Llama-3-8B results: [] license: apache-2.0 language: - en datasets: - teknium/OpenHermes-2.5 widget: - example_title: Hermes 2 Pro messages: - role: system content: You are a sentient, superintelligent artificial general intelligence, here to teach and assist me. - role: user content: Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world. pipeline_tag: text-generation inference: false quantized_by: Suparious --- # NousResearch/Hermes-2-Pro-Llama-3-8B AWQ - Original model: [Hermes-2-Pro-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B) ```bash pip install --upgrade autoawq autoawq-kernels ``` ### Example Python code ```python from awq import AutoAWQForCausalLM from transformers import AutoTokenizer, TextStreamer model_path = "solidrust/Hermes-2-Pro-Llama-3-8B-AWQ" system_message = "You are Hermes-2-Pro-Llama-3-8B, incarnated as a powerful AI. You were created by NousResearch." # Load model model = AutoAWQForCausalLM.from_quantized(model_path, fuse_layers=True) tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) # Convert prompt to tokens prompt_template = """\ <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant""" prompt = "You're standing on the surface of the Earth. "\ "You walk one mile south, one mile west and one mile north. "\ "You end up exactly where you started. Where are you?" tokens = tokenizer(prompt_template.format(system_message=system_message,prompt=prompt), return_tensors='pt').input_ids.cuda() # Generate output generation_output = model.generate(tokens, streamer=streamer, max_new_tokens=512) ``` ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead. It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
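Since vLLM is listed above as supporting AWQ checkpoints, here is a hedged serving sketch. It is not part of the original card: the repo path and ChatML prompt markers simply follow the example above, and `quantization="awq"` assumes vLLM 0.2.2 or later.

```python
# Hypothetical vLLM usage for this AWQ checkpoint; verify the repo path and vLLM version first.
from vllm import LLM, SamplingParams

llm = LLM(model="solidrust/Hermes-2-Pro-Llama-3-8B-AWQ", quantization="awq")
params = SamplingParams(temperature=0.7, max_tokens=256)

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nSummarize what AWQ quantization does.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
print(llm.generate([prompt], params)[0].outputs[0].text)
```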
mzwing/Phi-3-mini-128k-instruct-GGUF
mzwing
"2024-05-22T04:51:27Z"
1,036
0
null
[ "gguf", "region:us" ]
null
"2024-05-22T04:40:23Z"
Entry not found
RichardErkhov/robowaifudev_-_megatron-gpt2-345m-gguf
RichardErkhov
"2024-06-05T14:51:51Z"
1,036
0
null
[ "gguf", "arxiv:1909.08053", "region:us" ]
null
"2024-06-05T14:18:21Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) megatron-gpt2-345m - GGUF - Model creator: https://huggingface.co/robowaifudev/ - Original model: https://huggingface.co/robowaifudev/megatron-gpt2-345m/ | Name | Quant method | Size | | ---- | ---- | ---- | | [megatron-gpt2-345m.Q2_K.gguf](https://huggingface.co/RichardErkhov/robowaifudev_-_megatron-gpt2-345m-gguf/blob/main/megatron-gpt2-345m.Q2_K.gguf) | Q2_K | 0.17GB | | [megatron-gpt2-345m.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/robowaifudev_-_megatron-gpt2-345m-gguf/blob/main/megatron-gpt2-345m.IQ3_XS.gguf) | IQ3_XS | 0.18GB | | [megatron-gpt2-345m.IQ3_S.gguf](https://huggingface.co/RichardErkhov/robowaifudev_-_megatron-gpt2-345m-gguf/blob/main/megatron-gpt2-345m.IQ3_S.gguf) | IQ3_S | 0.19GB | | [megatron-gpt2-345m.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/robowaifudev_-_megatron-gpt2-345m-gguf/blob/main/megatron-gpt2-345m.Q3_K_S.gguf) | Q3_K_S | 0.19GB | | [megatron-gpt2-345m.IQ3_M.gguf](https://huggingface.co/RichardErkhov/robowaifudev_-_megatron-gpt2-345m-gguf/blob/main/megatron-gpt2-345m.IQ3_M.gguf) | IQ3_M | 0.2GB | | [megatron-gpt2-345m.Q3_K.gguf](https://huggingface.co/RichardErkhov/robowaifudev_-_megatron-gpt2-345m-gguf/blob/main/megatron-gpt2-345m.Q3_K.gguf) | Q3_K | 0.21GB | | [megatron-gpt2-345m.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/robowaifudev_-_megatron-gpt2-345m-gguf/blob/main/megatron-gpt2-345m.Q3_K_M.gguf) | Q3_K_M | 0.21GB | | [megatron-gpt2-345m.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/robowaifudev_-_megatron-gpt2-345m-gguf/blob/main/megatron-gpt2-345m.Q3_K_L.gguf) | Q3_K_L | 0.23GB | | [megatron-gpt2-345m.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/robowaifudev_-_megatron-gpt2-345m-gguf/blob/main/megatron-gpt2-345m.IQ4_XS.gguf) | IQ4_XS | 0.22GB | | [megatron-gpt2-345m.Q4_0.gguf](https://huggingface.co/RichardErkhov/robowaifudev_-_megatron-gpt2-345m-gguf/blob/main/megatron-gpt2-345m.Q4_0.gguf) | Q4_0 | 0.23GB | | [megatron-gpt2-345m.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/robowaifudev_-_megatron-gpt2-345m-gguf/blob/main/megatron-gpt2-345m.IQ4_NL.gguf) | IQ4_NL | 0.23GB | | [megatron-gpt2-345m.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/robowaifudev_-_megatron-gpt2-345m-gguf/blob/main/megatron-gpt2-345m.Q4_K_S.gguf) | Q4_K_S | 0.23GB | | [megatron-gpt2-345m.Q4_K.gguf](https://huggingface.co/RichardErkhov/robowaifudev_-_megatron-gpt2-345m-gguf/blob/main/megatron-gpt2-345m.Q4_K.gguf) | Q4_K | 0.25GB | | [megatron-gpt2-345m.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/robowaifudev_-_megatron-gpt2-345m-gguf/blob/main/megatron-gpt2-345m.Q4_K_M.gguf) | Q4_K_M | 0.25GB | | [megatron-gpt2-345m.Q4_1.gguf](https://huggingface.co/RichardErkhov/robowaifudev_-_megatron-gpt2-345m-gguf/blob/main/megatron-gpt2-345m.Q4_1.gguf) | Q4_1 | 0.25GB | | [megatron-gpt2-345m.Q5_0.gguf](https://huggingface.co/RichardErkhov/robowaifudev_-_megatron-gpt2-345m-gguf/blob/main/megatron-gpt2-345m.Q5_0.gguf) | Q5_0 | 0.27GB | | [megatron-gpt2-345m.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/robowaifudev_-_megatron-gpt2-345m-gguf/blob/main/megatron-gpt2-345m.Q5_K_S.gguf) | Q5_K_S | 0.27GB | | [megatron-gpt2-345m.Q5_K.gguf](https://huggingface.co/RichardErkhov/robowaifudev_-_megatron-gpt2-345m-gguf/blob/main/megatron-gpt2-345m.Q5_K.gguf) | Q5_K | 0.29GB | | 
[megatron-gpt2-345m.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/robowaifudev_-_megatron-gpt2-345m-gguf/blob/main/megatron-gpt2-345m.Q5_K_M.gguf) | Q5_K_M | 0.29GB | | [megatron-gpt2-345m.Q5_1.gguf](https://huggingface.co/RichardErkhov/robowaifudev_-_megatron-gpt2-345m-gguf/blob/main/megatron-gpt2-345m.Q5_1.gguf) | Q5_1 | 0.29GB | | [megatron-gpt2-345m.Q6_K.gguf](https://huggingface.co/RichardErkhov/robowaifudev_-_megatron-gpt2-345m-gguf/blob/main/megatron-gpt2-345m.Q6_K.gguf) | Q6_K | 0.32GB | | [megatron-gpt2-345m.Q8_0.gguf](https://huggingface.co/RichardErkhov/robowaifudev_-_megatron-gpt2-345m-gguf/blob/main/megatron-gpt2-345m.Q8_0.gguf) | Q8_0 | 0.41GB | Original model description: --- language: - en tags: - gpt2 license: apache-2.0 widget: - text: It was a bright cold day in April, and the clocks were striking thirteen. Winston Smith, datasets: - wikitext - openwebtext - spacemanidol/cc-stories model-index: - name: megatron-gpt2-345m results: - task: type: text-generation name: Text generation dataset: name: WikiText-103 type: wikitext metrics: - type: wikitext value: 19.31 name: Perplexity - task: type: text-generation name: Text generation dataset: name: WikiText-2 type: wikitext metrics: - type: wikitext value: 17.151 name: Perplexity - task: type: text-generation name: Text generation dataset: name: LAMBADA type: lambada metrics: - type: lambada value: 5.509 name: Perplexity - type: lambada value: 68.31% name: Accuracy --- <!--- # ############################################################################################## # # Copyright (c) 2021-, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # ############################################################################################## --> This is an archive of [nvidia/megatron-gpt2-345m](https://huggingface.co/nvidia/megatron-gpt2-345m) that contains readily available model weights (375M). Its performance on Wikitext-103 is 19.31.<sup>1</sup> In comparison, the performance of GPT2-large (1.5B) is 17.48 and GPT2-medium (762M) is 22.05.<sup>2</sup> ### References 1. Shoeybi, Mohammad, et al. Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism. arXiv, 2019, [https://doi.org/10.48550/ARXIV.1909.08053](https://doi.org/10.48550/ARXIV.1909.08053). 2. Alec Radford, et al. Language Models are Unsupervised Multitask Learners. OpenAI, 2019. [https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf). ## Description [Megatron](https://arxiv.org/pdf/1909.08053.pdf) is a large, powerful transformer developed by the Applied Deep Learning Research team at NVIDIA. This particular Megatron model was trained from a generative, left-to-right transformer in the style of GPT-2. This model was trained on text sourced from Wikipedia, RealNews, OpenWebText, and CC-Stories. It contains 345 million parameters. 
Find more information at [https://github.com/NVIDIA/Megatron-LM](https://github.com/NVIDIA/Megatron-LM)

# How to run Megatron GPT2 using Transformers

## Text generation

The following code shows how to use the Megatron GPT2 checkpoint and Transformers to generate text.

```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("robowaifudev/megatron-gpt2-345m")
if torch.cuda.is_available():
    device = torch.device("cuda")
    model.half()
else:
    device = torch.device("cpu")
model.to(device)
model.eval()

# Generate
prompt = (
    "It was a bright cold day in April, and the clocks were striking thirteen. Winston Smith,"
)
input_ids = tokenizer.encode(prompt, return_tensors="pt").to(device)
output = model.generate(
    input_ids=input_ids,
    max_length=input_ids.shape[1] + 128,  # prompt length plus up to 128 new tokens
    do_sample=True,
    top_k=64,
    top_p=0.9,
    temperature=0.8,
    num_return_sequences=2,
    repetition_penalty=1.025
)

# Output the text
print("Prompt:", prompt)
print("*" * 3)
for i, sentence in enumerate(output):
    text = tokenizer.decode(sentence, clean_up_tokenization_spaces=True)
    print(f"{i}:", text)
    print("*" * 3)
```

# Original code

The original Megatron code can be found here: [https://github.com/NVIDIA/Megatron-LM](https://github.com/NVIDIA/Megatron-LM).
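Since this repository distributes GGUF quantizations, a llama-cpp-python sketch may be more directly useful than the Transformers example above. This is not from the original card; it assumes your llama.cpp build supports the GPT-2 architecture, and the file name is just one entry from the table above.

```python
# Hedged llama-cpp-python sketch for the GGUF files in this repo.
from llama_cpp import Llama

llm = Llama(model_path="./megatron-gpt2-345m.Q4_K_M.gguf", n_ctx=1024)

out = llm(
    "It was a bright cold day in April, and the clocks were striking thirteen. Winston Smith,",
    max_tokens=128,
    temperature=0.8,
    top_p=0.9,
)
print(out["choices"][0]["text"])
```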
sshleifer/distilbart-xsum-1-1
sshleifer
"2021-06-14T07:53:57Z"
1,035
0
transformers
[ "transformers", "pytorch", "tf", "jax", "bart", "text2text-generation", "summarization", "en", "dataset:cnn_dailymail", "dataset:xsum", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
"2022-03-02T23:29:05Z"
--- language: en tags: - summarization license: apache-2.0 datasets: - cnn_dailymail - xsum thumbnail: https://huggingface.co/front/thumbnails/distilbart_medium.png --- ### Usage This checkpoint should be loaded into `BartForConditionalGeneration.from_pretrained`. See the [BART docs](https://huggingface.co/transformers/model_doc/bart.html?#transformers.BartForConditionalGeneration) for more information. ### Metrics for DistilBART models | Model Name | MM Params | Inference Time (MS) | Speedup | Rouge 2 | Rouge-L | |:---------------------------|------------:|----------------------:|----------:|----------:|----------:| | distilbart-xsum-12-1 | 222 | 90 | 2.54 | 18.31 | 33.37 | | distilbart-xsum-6-6 | 230 | 132 | 1.73 | 20.92 | 35.73 | | distilbart-xsum-12-3 | 255 | 106 | 2.16 | 21.37 | 36.39 | | distilbart-xsum-9-6 | 268 | 136 | 1.68 | 21.72 | 36.61 | | bart-large-xsum (baseline) | 406 | 229 | 1 | 21.85 | 36.50 | | distilbart-xsum-12-6 | 306 | 137 | 1.68 | 22.12 | 36.99 | | bart-large-cnn (baseline) | 406 | 381 | 1 | 21.06 | 30.63 | | distilbart-12-3-cnn | 255 | 214 | 1.78 | 20.57 | 30.00 | | distilbart-12-6-cnn | 306 | 307 | 1.24 | 21.26 | 30.59 | | distilbart-6-6-cnn | 230 | 182 | 2.09 | 20.17 | 29.70 |
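A minimal summarization sketch (my addition, not part of the original card; the example article text is illustrative only):

```python
# Hedged usage sketch for sshleifer/distilbart-xsum-1-1.
from transformers import BartForConditionalGeneration, BartTokenizer

model_name = "sshleifer/distilbart-xsum-1-1"
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

article = (
    "The tower is 324 metres tall, about the same height as an 81-storey building, "
    "and is the tallest structure in Paris."
)
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=62)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```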
sd-dreambooth-library/mr-potato-head
sd-dreambooth-library
"2023-05-16T09:18:08Z"
1,035
21
diffusers
[ "diffusers", "license:mit", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2022-09-28T14:22:46Z"
---
license: mit
---
### Mr Potato Head on Stable Diffusion via Dreambooth
#### model by osanseviero
This is the Stable Diffusion model fine-tuned on the Mr Potato Head concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a photo of sks mr potato head**

You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).

Here are the images used for training this concept:
![image 0](https://huggingface.co/sd-dreambooth-library/mr-potato-head/resolve/main/concept_images/0.jpeg)
![image 1](https://huggingface.co/sd-dreambooth-library/mr-potato-head/resolve/main/concept_images/2.jpeg)
![image 2](https://huggingface.co/sd-dreambooth-library/mr-potato-head/resolve/main/concept_images/3.jpeg)
![image 3](https://huggingface.co/sd-dreambooth-library/mr-potato-head/resolve/main/concept_images/4.jpeg)
![image 4](https://huggingface.co/sd-dreambooth-library/mr-potato-head/resolve/main/concept_images/1.jpeg)
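A minimal Diffusers sketch using the instance prompt above (my addition, not from the original card; it assumes a CUDA GPU and the standard `StableDiffusionPipeline` loading path, and the prompt suffix is just an example):

```python
# Hedged Diffusers example for the "sks mr potato head" instance prompt.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/mr-potato-head", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of sks mr potato head riding a bicycle").images[0]
image.save("mr-potato-head.png")
```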
eimiss/EimisAnimeDiffusion_1.0v
eimiss
"2023-05-16T09:28:18Z"
1,035
402
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "text-to-image", "image-to-image", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2022-11-15T19:51:46Z"
--- thumbnail: https://imgur.com/6ztDBPR.png language: - en tags: - stable-diffusion - text-to-image - image-to-image - diffusers license: creativeml-openrail-m inference: true --- # Check out v2 of the model: https://huggingface.co/eimiss/EimisAnimeDiffusion_2.0v # Diffusion model This model is trained with high quality and detailed anime images. ## Gradio We support a [Gradio](https://github.com/gradio-app/gradio) Web UI run EimisAnimeDiffusion_1.0v: [![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/akhaliq/EimisAnimeDiffusion_1.0v) # Sample generations This model works well on anime and landscape generations.<br> Anime:<br> There are some sample generations:<br> ``` Positive:a girl, Phoenix girl, fluffy hair, war, a hell on earth, Beautiful and detailed explosion, Cold machine, Fire in eyes, burning, Metal texture, Exquisite cloth, Metal carving, volume, best quality, normal hands, Metal details, Metal scratch, Metal defects, masterpiece, best quality, best quality, illustration, highres, masterpiece, contour deepening, illustration,(beautiful detailed girl),beautiful detailed glow Negative:lowres, bad anatomy, ((bad hands)), text, error, ((missing fingers)), cropped, jpeg artifacts, worst quality, low quality, signature, watermark, blurry, deformed, extra ears, deformed, disfigured, mutation, censored, ((multiple_girls)) Steps: 20, Sampler: DPM++ 2S a, CFG scale: 8, Seed: 4186044705/4186044707, Size: 704x896 ``` <img src=https://imgur.com/2U295w3.png width=75% height=75%> <img src=https://imgur.com/2jtF376.png width=75% height=75%> ``` Positive:(1girl), cute, walking in the park, (night), full moon, north star, blue shirt, red skirt, detailed shirt, jewelry, autumn, dark blue hair, shirt hair, (magic:1.5), beautiful blue eyes Negative: lowres, bad anatomy, ((bad hands)), text, error, ((missing fingers)), cropped, jpeg artifacts, worst quality, low quality, signature, watermark, blurry, deformed, extra ears, deformed, disfigured, mutation, censored, ((multiple_girls)) Steps: 35, Sampler: Euler a, CFG scale: 9, Seed: 296195494, Size: 768x960 ``` <img src=https://imgur.com/gudKxQe.png width=75% height=75%> ``` Positive:night , ((1 girl)), alone, masterpiece, 8k wallpaper, highres, absurdres, high quality background, short hair, black hair, multicolor hair, beautiful frozen village, (full bright moon), blue dress, detailed dress, jewelry dress, (magic:1.2), blue fire, blue eyes, glowing eyes, fire, ice goddess, (blue detailed beautiful crown), electricity, blue electricity, blue light particles Negative: lowres, bad anatomy, ((bad hands)), text, error, ((missing fingers)), cropped, jpeg artifacts, worst quality, low quality, signature, watermark, blurry, deformed, extra ears, deformed, disfigured, mutation, censored, ((multiple_girls)) Steps: 20, Sampler: DPM++ 2S a Karras, CFG scale: 9, Seed: 2118767319, Size: 768x832 ``` <img src=https://imgur.com/lJL4CJL.png width=75% height=75%> Want to generate some amazing backgrounds? 
No problem:
```
Positive: above clouds, mountains, (night), full moon, castle, huge forest, forest between mountains, beautiful, masterpiece
Negative: lowres, bad anatomy, ((bad hands)), text, error, ((missing fingers)), cropped, jpeg artifacts, worst quality, low quality, signature, watermark, blurry, deformed, extra ears, deformed, disfigured, mutation, censored, ((multiple_girls))
Steps: 20, Sampler: DPM++ 2S a Karras, CFG scale: 9, Seed: 83644543, Size: 896x640
```
<img src=https://imgur.com/XfxAx0S.png width=75% height=75%>

## Disclaimer
Some prompts might not work perfectly (mainly colors), so add a few more descriptive prompts to reinforce them, or wrap key terms in parentheses `( )` for extra emphasis; that usually helps. The model also works well with img2img if you want to add detail.

## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:

1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
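The prompts above target WebUI-style workflows; for Diffusers users, here is a hedged sketch. It is not from the original author and assumes the repo loads with the standard `StableDiffusionPipeline` (as the `diffusers` tag suggests); the prompts are shortened variants of the examples above.

```python
# Hedged Diffusers sketch for EimisAnimeDiffusion_1.0v.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "eimiss/EimisAnimeDiffusion_1.0v", torch_dtype=torch.float16
).to("cuda")

prompt = "a girl, Phoenix girl, fluffy hair, masterpiece, best quality, illustration, highres"
negative = "lowres, bad anatomy, bad hands, text, error, missing fingers, cropped, worst quality"
image = pipe(prompt, negative_prompt=negative, num_inference_steps=20, guidance_scale=8).images[0]
image.save("eimis_sample.png")
```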
Natooz/Maestro-REMI-bpe20k
Natooz
"2024-05-28T14:39:40Z"
1,035
2
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "miditok", "music", "music generation", "piano", "classical", "arxiv:2301.11975", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-10T07:16:18Z"
--- license: apache-2.0 tags: - miditok - music - music generation - piano - classical --- # Model card <!-- Provide a quick summary of what the model is/does. --> This is a generative model from the paper "*Byte Pair Encoding for Symbolic Music*" (EMNLP 2023). The model has been trained with Byte Pair Encoding (BPE) on the [Maestro dataset](https://magenta.tensorflow.org/datasets/maestro) to generate classical piano music with the REMI tokenizer. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> It has a vocabulary of 20k tokens learned with [Byte Pair Encoding (BPE)](https://arxiv.org/abs/2301.11975) using [MidiTok](https://github.com/Natooz/MidiTok). - **Developed and shared by:** [Nathan Fradet](https://twitter.com/NathanFradet) - **Affiliations**: [Sorbonne University (LIP6 lab)](https://www.sorbonne-universite.fr/en) and [Aubay](https://aubay.com/en/) - **Model type:** causal autoregressive Transformer - **Backbone model:** [GPT2](https://huggingface.co/docs/transformers/model_doc/gpt2) - **Music genres:** Classical piano 🎹 - **License:** Apache 2.0 ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/Natooz/BPE-Symbolic-Music - **Paper:** https://arxiv.org/abs/2301.11975 ## Uses The model is designed for autoregressive music generation. It generates the continuation of a music prompt. ## How to Get Started with the Model Use the code below to get started with the model. You will need the `miditok` (>=v2.1.7), `transformers` and `torch` packages to make it run, that can be installed with pip. ```Python import torch from transformers import AutoModelForCausalLM from miditok import REMI from symusic import Score torch.set_default_device("cuda") model = AutoModelForCausalLM.from_pretrained("Natooz/Maestro-REMI-bpe20k", trust_remote_code=True, torch_dtype="auto") tokenizer = REMI.from_pretrained("Natooz/Maestro-REMI-bpe20k") input_midi = Score("path/to/file.mid") input_tokens = tokenizer(input_midi) generated_token_ids = model.generate(input_tokens.ids, max_length=500) generated_midi = tokenizer(generated_token_ids) generated_midi.dump_midi("path/to/continued.mid") ``` ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> The model has been trained on the [Maestro](https://magenta.tensorflow.org/datasets/maestro) dataset. The dataset contains about 200 hours of classical piano music. The tokenizer is trained with Byte Pair Encoding (BPE) to build a vocabulary of 20k tokens. ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> - **Training regime:** fp16 mixed precision on V100 PCIE 32GB GPUs - **Compute Region:** France ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 96 - seed: 444 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_ratio: 0.3 - training_steps: 100000 ### Environmental impact We cannot estimate reliably the amount of CO2eq emitted, as we lack data on the exact power source used during training. 
However, we can highlight that the cluster used is mostly powered by nuclear energy, which is a low carbon energy source ensuring a reduced direct environmental impact. ## Citation <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** ```bibtex @inproceedings{bpe-symbolic-music, title = "Byte Pair Encoding for Symbolic Music", author = "Fradet, Nathan and Gutowski, Nicolas and Chhel, Fabien and Briot, Jean-Pierre", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.123", doi = "10.18653/v1/2023.emnlp-main.123", pages = "2001--2020", } ```
abacusai/bigstral-12b-32k
abacusai
"2024-04-08T01:40:47Z"
1,035
41
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-06T02:37:19Z"
--- base_model: - mistralai/Mistral-7B-Instruct-v0.2 library_name: transformers tags: - mergekit - merge license: apache-2.0 --- # bigstral-12b-32k `ollama run ehartford/bigstral` I reached for SOLAR but realized that its only 4k... Decided to make my own 32k version. This is [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) with layers interleaved to create a larger 12b model. This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Prompt Format ``` <s>[INST] What is your favourite condiment? [/INST] Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> [INST] Do you have dangerously spicy mayonnaise recipes? [/INST] ``` ## Example Output give me some code that is very simple in javascript but would be very difficult in ANSI-C > In JavaScript, functions are first-class values, which means you can assign them to variables, pass them as arguments to other functions, and even nest them inside other functions. This leads to concise and expressive code. > > Here's an example of a simple recursive function that calculates the factorial of a given number in JavaScript: > > ```javascript > const factorial = n => n === 0 ? 1 : n * factorial(n - 1); > > console.log(factorial(5)); // Output: 120 > > ``` > > Now, let's try to write an equivalent code snippet in ANSI-C with similar functionality: > > ```c > #include <stdio.h> > #define MAX_RECURSION_DEPTH 1000 > int factorial(int n) { > if (n == 0) > return 1; > else { > int result; > result = factorial(--n); > if (++depth > MAX_RECURSION_DEPTH) > while(1){} // Recursion stack overflow > return n * result; > } > } > int main() { > int i, result; > int n = 5; > int depth = 0; > > result = factorial(n); > printf("Factorial of %d is: %d\n", n, result); > return 0; > } > ``` > > As you can see, the ANSI-C code becomes more verbose and has several disadvantages compared to JavaScript: > > 1. Need to define a maximum recursion depth using a constant or an array index, which can lead to hardcoded limits and potential stack overflow issues. > 2. The use of preprocessor directives (#define) for defining functions and macros can make the code less readable and harder to understand. > 3. The lack of closures in ANSI-C requires using static variables or passing arguments as arguments to nested functions, making the code more verbose and less concise. > ``` ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) ### Configuration The following YAML configuration was used to produce this model: ```yaml dtype: float16 merge_method: passthrough slices: - sources: - layer_range: [0, 8] model: mistralai/Mistral-7B-Instruct-v0.2 - sources: - layer_range: [4, 12] model: mistralai/Mistral-7B-Instruct-v0.2 - sources: - layer_range: [8, 16] model: mistralai/Mistral-7B-Instruct-v0.2 - sources: - layer_range: [12, 20] model: mistralai/Mistral-7B-Instruct-v0.2 - sources: - layer_range: [16, 24] model: mistralai/Mistral-7B-Instruct-v0.2 - sources: - layer_range: [20, 28] model: mistralai/Mistral-7B-Instruct-v0.2 - sources: - layer_range: [24, 32] model: mistralai/Mistral-7B-Instruct-v0.2 ```
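Beyond the `ollama` one-liner, a Transformers sketch may be handy (my addition, not part of the original card). It assumes the merged model inherits the Mistral-Instruct chat template from its source checkpoints and that enough GPU memory is available for a 12B model.

```python
# Hedged Transformers usage sketch for abacusai/bigstral-12b-32k.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abacusai/bigstral-12b-32k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "What is your favourite condiment?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```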
nbeerbower/llama-3-gutenberg-8B
nbeerbower
"2024-05-05T18:33:52Z"
1,035
7
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "dataset:jondurbin/gutenberg-dpo-v0.1", "base_model:nbeerbower/llama-3-bophades-v3-8B", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-05T16:40:31Z"
--- library_name: transformers base_model: - nbeerbower/llama-3-bophades-v3-8B datasets: - jondurbin/gutenberg-dpo-v0.1 license: other license_name: llama3 --- # llama-3-gutenberg-8B This model is based on Llama-3-8b, and is governed by [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](LICENSE) [nbeerbower/llama-3-bophades-v3-8B](https://huggingface.co/nbeerbower/llama-3-bophades-v3-8B) finetuned on [jondurbin/gutenberg-dpo-v0.1](https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1). ### Method Finetuned using an A100 on Google Colab. [Fine-Tune Your Own Llama 2 Model in a Colab Notebook](https://mlabonne.github.io/blog/posts/Fine_Tune_Your_Own_Llama_2_Model_in_a_Colab_Notebook.html) ### Configuration Dataset preparation, system prompt: ```python def chatml_format(example): # Format instruction prompt = "<|im_start|>user\n" + example['prompt'] + "<|im_end|>\n<|im_start|>assistant\n" # Format chosen answer chosen = example['chosen'] + "<|im_end|>\n" # Format rejected answer rejected = example['rejected'] + "<|im_end|>\n" return { "prompt": prompt, "chosen": chosen, "rejected": rejected, } dataset = load_dataset("jondurbin/gutenberg-dpo-v0.1")['train'] # Save columns original_columns = dataset.column_names # Tokenizer tokenizer = AutoTokenizer.from_pretrained(model_name) tokenizer.pad_token = tokenizer.eos_token tokenizer.padding_side = "left" # Format dataset dataset = dataset.map( chatml_format, remove_columns=original_columns ) ``` LoRA, model, and training settings: ```python # LoRA configuration peft_config = LoraConfig( r=16, lora_alpha=16, lora_dropout=0.05, bias="none", task_type="CAUSAL_LM", target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj'] ) # Model to fine-tune model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.bfloat16, load_in_4bit=True ) model.config.use_cache = False # Reference model ref_model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.bfloat16, load_in_4bit=True ) # Training arguments training_args = TrainingArguments( per_device_train_batch_size=2, gradient_accumulation_steps=2, gradient_checkpointing=True, learning_rate=2e-5, lr_scheduler_type="cosine", max_steps=1000, save_strategy="no", logging_steps=1, output_dir=new_model, optim="paged_adamw_32bit", warmup_steps=100, bf16=True, report_to="wandb", ) # Create DPO trainer dpo_trainer = DPOTrainer( model, ref_model, args=training_args, train_dataset=dataset, tokenizer=tokenizer, peft_config=peft_config, beta=0.1, max_prompt_length=1024, max_length=1536, force_use_ref_model=True ) ```
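The card documents training only; for inference, here is a hedged sketch (my addition). The DPO pairs above were formatted with ChatML-style markers, so the example reuses that format as plain text; depending on the tokenizer's own chat template, `apply_chat_template` may work just as well or better.

```python
# Hedged inference sketch for nbeerbower/llama-3-gutenberg-8B using the
# ChatML-style prompt format shown in the training configuration above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/llama-3-gutenberg-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = (
    "<|im_start|>user\nWrite the opening paragraph of a gothic short story.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=300, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```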
rikeshsilwalekg/saved_models
rikeshsilwalekg
"2024-06-11T10:48:47Z"
1,035
0
transformers
[ "transformers", "safetensors", "speecht5", "text-to-audio", "endpoints_compatible", "region:us" ]
text-to-audio
"2024-06-10T08:16:00Z"
Entry not found
Udith-Sandaruwan/Phi-3-mini-4k-instruct-GGUF
Udith-Sandaruwan
"2024-06-30T14:10:14Z"
1,035
0
null
[ "gguf", "region:us" ]
null
"2024-06-30T14:09:26Z"
Entry not found
Helsinki-NLP/opus-mt-ceb-en
Helsinki-NLP
"2023-08-16T11:26:49Z"
1,034
1
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ceb", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
"2022-03-02T23:29:04Z"
--- language: - ceb - en tags: - translation license: apache-2.0 --- ### ceb-eng * source group: Cebuano * target group: English * OPUS readme: [ceb-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ceb-eng/README.md) * model: transformer-align * source language(s): ceb * target language(s): eng * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ceb-eng/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ceb-eng/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ceb-eng/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.ceb.eng | 21.5 | 0.387 | ### System Info: - hf_name: ceb-eng - source_languages: ceb - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ceb-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ceb', 'en'] - src_constituents: {'ceb'} - tgt_constituents: {'eng'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ceb-eng/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ceb-eng/opus-2020-06-17.test.txt - src_alpha3: ceb - tgt_alpha3: eng - short_pair: ceb-en - chrF2_score: 0.387 - bleu: 21.5 - brevity_penalty: 1.0 - ref_len: 2293.0 - src_name: Cebuano - tgt_name: English - train_date: 2020-06-17 - src_alpha2: ceb - tgt_alpha2: en - prefer_old: False - long_pair: ceb-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
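A minimal translation sketch (my addition, not part of the OPUS readme); the Cebuano example sentence is an assumption, so swap in your own source text:

```python
# Hedged usage sketch for Helsinki-NLP/opus-mt-ceb-en (Cebuano -> English).
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ceb-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_texts = ["Maayong buntag kaninyong tanan."]  # assumed example: "Good morning to you all."
batch = tokenizer(src_texts, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```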
KoboldAI/GPT-Neo-2.7B-Horni
KoboldAI
"2021-12-30T11:43:31Z"
1,034
19
transformers
[ "transformers", "pytorch", "gpt_neo", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2022-03-02T23:29:04Z"
Entry not found
porntech/sex-position
porntech
"2023-11-15T08:30:23Z"
1,034
40
timm
[ "timm", "pytorch", "not-for-all-audiences", "image-classification", "license:mit", "region:us" ]
image-classification
"2023-06-04T11:10:42Z"
--- license: mit library_name: timm pipeline_tag: image-classification tags: - not-for-all-audiences --- # classify sex positions in a sexy or NSFW image. WARNING! Leave now if you are less than 18 years old! * The following sex positions are supported: ["blowjob", "hardcore", "titjob", "handjob", "pussy-licking", "fingering", "other", "solo"] * Input image must be a sexy or NSFW image, otherwise the prediction is undefined. For example, a clothed women eating a banana would most likely to be predicted as blowjob. * "hardcore" actually represents four subclasses: "missionary", "doggystyle", "cowgirl" and "spooning". I will support these four classes in the future. * "other" means some other behavior such as kissing or talking, "solo" means a single woman. * This repo is for image classification, for sex position classification for videos, see [this repo](https://huggingface.co/spaces/porntech/sex-position-video) of mine. * Here are two sample SFW images you can try with model: [single woman](https://st.depositphotos.com/1022904/2166/i/950/depositphotos_21668751-stock-photo-yang-and-beautiful-sexy-woman.jpg): predicted as "solo" [kissing](https://www.verywellmind.com/thmb/8nU7Yax1VdiTTKzIg6c48aFXkP0=/750x0/filters:no_upscale():max_bytes(150000):strip_icc():format(webp)/GettyImages-471932267-58bc89565f9b58af5ca9d09d.jpg): predicted as "other" I will soon be on job market and is now looking for full time or part time jobs focusing on developping AI models for sexy/NSFW videos/images, if you are interested in me or this work, feel free to contact [email protected]
afrideva/TinyLlama-1.1B-Chat-v0.6-GGUF
afrideva
"2023-11-20T12:32:40Z"
1,034
8
null
[ "gguf", "ggml", "quantized", "q2_k", "q3_k_m", "q4_k_m", "q5_k_m", "q6_k", "q8_0", "text-generation", "en", "dataset:cerebras/SlimPajama-627B", "dataset:bigcode/starcoderdata", "dataset:OpenAssistant/oasst_top1_2023-08-25", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v0.6", "license:apache-2.0", "region:us" ]
text-generation
"2023-11-20T12:19:30Z"
--- base_model: TinyLlama/TinyLlama-1.1B-Chat-v0.6 datasets: - cerebras/SlimPajama-627B - bigcode/starcoderdata - OpenAssistant/oasst_top1_2023-08-25 inference: false language: - en license: apache-2.0 model_creator: TinyLlama model_name: TinyLlama-1.1B-Chat-v0.6 pipeline_tag: text-generation quantized_by: afrideva tags: - gguf - ggml - quantized - q2_k - q3_k_m - q4_k_m - q5_k_m - q6_k - q8_0 --- # TinyLlama/TinyLlama-1.1B-Chat-v0.6-GGUF Quantized GGUF model files for [TinyLlama-1.1B-Chat-v0.6](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.6) from [TinyLlama](https://huggingface.co/TinyLlama) | Name | Quant method | Size | | ---- | ---- | ---- | | [tinyllama-1.1b-chat-v0.6.q2_k.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-Chat-v0.6-GGUF/resolve/main/tinyllama-1.1b-chat-v0.6.q2_k.gguf) | q2_k | 482.14 MB | | [tinyllama-1.1b-chat-v0.6.q3_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-Chat-v0.6-GGUF/resolve/main/tinyllama-1.1b-chat-v0.6.q3_k_m.gguf) | q3_k_m | 549.85 MB | | [tinyllama-1.1b-chat-v0.6.q4_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-Chat-v0.6-GGUF/resolve/main/tinyllama-1.1b-chat-v0.6.q4_k_m.gguf) | q4_k_m | 667.81 MB | | [tinyllama-1.1b-chat-v0.6.q5_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-Chat-v0.6-GGUF/resolve/main/tinyllama-1.1b-chat-v0.6.q5_k_m.gguf) | q5_k_m | 782.04 MB | | [tinyllama-1.1b-chat-v0.6.q6_k.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-Chat-v0.6-GGUF/resolve/main/tinyllama-1.1b-chat-v0.6.q6_k.gguf) | q6_k | 903.41 MB | | [tinyllama-1.1b-chat-v0.6.q8_0.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-Chat-v0.6-GGUF/resolve/main/tinyllama-1.1b-chat-v0.6.q8_0.gguf) | q8_0 | 1.17 GB | ## Original Model Card: <div align="center"> # TinyLlama-1.1B </div> https://github.com/jzhang38/TinyLlama The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01. We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint. #### This Model This is the chat model finetuned on top of [TinyLlama/TinyLlama-1.1B-intermediate-step-955k-2T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T). **We follow [HF's Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha/edit/main/README.md)'s training recipe.** The model was " initially fine-tuned on a variant of the [`UltraChat`](https://huggingface.co/datasets/stingning/ultrachat) dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT. We then further aligned the model with [🤗 TRL's](https://github.com/huggingface/trl) `DPOTrainer` on the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset, which contain 64k prompts and model completions that are ranked by GPT-4." #### How to use You will need the transformers>=4.34 Do check the [TinyLlama](https://github.com/jzhang38/TinyLlama) github page for more information. 
```python # Install transformers from source - only needed for versions <= v4.34 # pip install git+https://github.com/huggingface/transformers.git # pip install accelerate import torch from transformers import pipeline pipe = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v0.6", torch_dtype=torch.bfloat16, device_map="auto") # We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating messages = [ { "role": "system", "content": "You are a friendly chatbot who always responds in the style of a pirate", }, {"role": "user", "content": "How many helicopters can a human eat in one sitting?"}, ] prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) # <|system|> # You are a friendly chatbot who always responds in the style of a pirate.</s> # <|user|> # How many helicopters can a human eat in one sitting?</s> # <|assistant|> # ... ```
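The snippet above targets the Transformers checkpoint; since this repository hosts GGUF quantizations, a llama-cpp-python sketch may be more direct (my addition, not from the original card). The file name comes from the table above and the prompt markers follow the example output shown in the snippet.

```python
# Hedged llama-cpp-python sketch for the GGUF files in this repo.
from llama_cpp import Llama

llm = Llama(model_path="./tinyllama-1.1b-chat-v0.6.q4_k_m.gguf", n_ctx=2048)

prompt = (
    "<|system|>\nYou are a friendly chatbot who always responds in the style of a pirate.</s>\n"
    "<|user|>\nHow many helicopters can a human eat in one sitting?</s>\n"
    "<|assistant|>\n"
)
out = llm(prompt, max_tokens=256, temperature=0.7, stop=["</s>"])
print(out["choices"][0]["text"])
```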
hamxea/Llama-2-7b-chat-hf-activity-fine-tuned-v4
hamxea
"2024-03-31T14:19:49Z"
1,034
0
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "medical", "text-generation-inference", "conversational", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-01-15T17:27:25Z"
--- license: other language: - en library_name: transformers tags: - medical - text-generation-inference --- # 🦙 Llama for Huggingface Transformers Llama-7B converted from official [Llama-7B](https://github.com/facebookresearch/Llama/blob/main/MODEL_CARD.md) to Huggingface model via [HF's conversion script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py) to work with Transformers/HuggingFace. This is under a special license, please see the LICENSE file for details. This is updated from [decapoda-research/llama-7b-hf](https://huggingface.co/decapoda-research/Llama-7b-hf) (since the many pull requests are not merged yet in decapoda's repo, so I directly open a new repo here). It includes: (1) The naming changes (LLaMA -> Llama) to best fit for `transformers` naming rule, in both `LlamaForCausalLM` and `LlamaTokenizer`. This works perfectly for `transformers>=4.28.0`. (2) The model checkpoints are saved in 2 shards (instead of 33 shards in [decapoda-research/Llama-7b-hf](https://huggingface.co/decapoda-research/Llama-7b-hf)). Less shards would accelerate loading speed from disk. -- license: other --- # Llama Model Card ## Model details **Organization developing the model** The FAIR team of Meta AI. **Model date** Llama was trained between December. 2022 and Feb. 2023. **Model version** This is version 1 of the model. **Model type** Llama is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters. **Paper or resources for more information** More information can be found in the paper “Llama, Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/Llama-open-and-efficient-foundation-language-models/. **Citations details** https://research.facebook.com/publications/Llama-open-and-efficient-foundation-language-models/ **License** Non-commercial bespoke license **Where to send questions or comments about the model** Questions and comments about Llama can be sent via the [GitHub repository](https://github.com/facebookresearch/Llama) of the project , by opening an issue. ## Intended use **Primary intended uses** The primary use of Llama is research on large language models, including: exploring potential applications such as question answering, natural language understanding or reading comprehension, understanding capabilities and limitations of current language models, and developing techniques to improve those, evaluating and mitigating biases, risks, toxic and harmful content generations, hallucinations. **Primary intended users** The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence. **Out-of-scope use cases** Llama is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers. ## Factors **Relevant factors** One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. 
Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model. **Evaluation factors** As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model. ## Metrics **Model performance measures** We use the following measure to evaluate the model: - Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs, - Exact match for question answering, - The toxicity score from Perspective API on RealToxicityPrompts. **Decision thresholds** Not applicable. **Approaches to uncertainty and variability** Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate variability of pre-training. ## Evaluation datasets The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs. ## Training dataset The model was trained using the following source of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange[2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing. ## Quantitative analysis Hyperparameters for the model architecture <table> <thead> <tr> <th >Llama</th> <th colspan=6>Model hyper parameters </th> </tr> <tr> <th>Number of parameters</th><th>dimension</th><th>n heads</th><th>n layers</th><th>Learn rate</th><th>Batch size</th><th>n tokens</th> </tr> </thead> <tbody> <tr> <th>7B</th> <th>4096</th> <th>32</th> <th>32</th> <th>3.0E-04</th><th>4M</th><th>1T </tr> <tr> <th>13B</th><th>5120</th><th>40</th><th>40</th><th>3.0E-04</th><th>4M</th><th>1T </tr> <tr> <th>33B</th><th>6656</th><th>52</th><th>60</th><th>1.5.E-04</th><th>4M</th><th>1.4T </tr> <tr> <th>65B</th><th>8192</th><th>64</th><th>80</th><th>1.5.E-04</th><th>4M</th><th>1.4T </tr> </tbody> </table> *Table 1 - Summary of Llama Model Hyperparameters* We present our results on eight standard common sense reasoning benchmarks in the table below. 
<table> <thead> <tr> <th>Llama</th> <th colspan=9>Reasoning tasks </th> </tr> <tr> <th>Number of parameters</th> <th>BoolQ</th><th>PIQA</th><th>SIQA</th><th>HellaSwag</th><th>WinoGrande</th><th>ARC-e</th><th>ARC-c</th><th>OBQA</th><th>COPA</th> </tr> </thead> <tbody> <tr> <th>7B</th><th>76.5</th><th>79.8</th><th>48.9</th><th>76.1</th><th>70.1</th><th>76.7</th><th>47.6</th><th>57.2</th><th>93 </th> <tr><th>13B</th><th>78.1</th><th>80.1</th><th>50.4</th><th>79.2</th><th>73</th><th>78.1</th><th>52.7</th><th>56.4</th><th>94 </th> <tr><th>33B</th><th>83.1</th><th>82.3</th><th>50.4</th><th>82.8</th><th>76</th><th>81.4</th><th>57.8</th><th>58.6</th><th>92 </th> <tr><th>65B</th><th>85.3</th><th>82.8</th><th>52.3</th><th>84.2</th><th>77</th><th>81.5</th><th>56</th><th>60.2</th><th>94</th></tr> </tbody> </table> *Table 2 - Summary of Llama Model Performance on Reasoning tasks* We present our results on bias in the table below. Note that lower value is better indicating lower bias. | No | Category | FAIR LLM | | --- | -------------------- | -------- | | 1 | Gender | 70.6 | | 2 | Religion | 79 | | 3 | Race/Color | 57 | | 4 | Sexual orientation | 81 | | 5 | Age | 70.1 | | 6 | Nationality | 64.2 | | 7 | Disability | 66.7 | | 8 | Physical appearance | 77.8 | | 9 | Socioeconomic status | 71.5 | | | Llama Average | 66.6 | *Table 3 - Summary bias of our model output..* ## Ethical considerations **Data** The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data. **Human life** The model is not intended to inform decisions about matters central to human life, and should not be used in such a way. **Mitigations** We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier. **Risks and harms** Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard. **Use cases** Llama is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
Kukedlc/NeuTrixOmniBe-DPO
Kukedlc
"2024-05-25T17:09:25Z"
1,034
5
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "#dpo", "MaximeLabonne", "#mergeofmerge", "base_model:CultriX/NeuralTrix-7B-dpo", "base_model:paulml/OmniBeagleSquaredMBX-v3-7B-v2", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-11T06:40:32Z"
--- license: apache-2.0 tags: - merge - mergekit - '#dpo' - MaximeLabonne - '#mergeofmerge' base_model: - CultriX/NeuralTrix-7B-dpo - paulml/OmniBeagleSquaredMBX-v3-7B-v2 model-index: - name: NeuTrixOmniBe-DPO results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 72.78 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuTrixOmniBe-DPO name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 89.03 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuTrixOmniBe-DPO name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.28 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuTrixOmniBe-DPO name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 77.21 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuTrixOmniBe-DPO name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 85.16 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuTrixOmniBe-DPO name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 68.54 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuTrixOmniBe-DPO name: Open LLM Leaderboard --- # WARNING: Not for Use - Bug INSTINST in response. # This model was merged, trained, and so on, thanks to the knowledge I gained from reading Maxime Labonne's course. Special thanks to him! 
[Labonne LLM Course](https://github.com/mlabonne/llm-course) ![NeuTrixOmniBe](https://raw.githubusercontent.com/kukedlc87/imagenes/main/DALL%C2%B7E%202023-12-29%2002.13.09%20-%20A%20robot%20with%20a%20unique%20design%20where%20its%20face%20is%20a%20screen%20displaying%20binary%20code.%20The%20robot's%20body%20is%20sleek%20and%20modern%2C%20with%20a%20metallic%20finish%20that%20refl.png) # NeuTrixOmniBe-DPO NeuTrixOmniBe-DPO is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): ## 🧩 Configuration ```yaml MODEL_NAME = "NeuTrixOmniBe-DPO" yaml_config = """ slices: - sources: - model: CultriX/NeuralTrix-7B-dpo layer_range: [0, 32] - model: paulml/OmniBeagleSquaredMBX-v3-7B-v2 layer_range: [0, 32] merge_method: slerp base_model: CultriX/NeuralTrix-7B-dpo parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 """ ``` It was then trained with DPO using: * Intel/orca_dpo_pairs ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Kukedlc/NeuTrixOmniBe-DPO" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=128, do_sample=True, temperature=0.5, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Kukedlc__NeuTrixOmniBe-DPO) | Metric |Value| |---------------------------------|----:| |Avg. |76.17| |AI2 Reasoning Challenge (25-Shot)|72.78| |HellaSwag (10-Shot) |89.03| |MMLU (5-Shot) |64.28| |TruthfulQA (0-shot) |77.21| |Winogrande (5-shot) |85.16| |GSM8k (5-shot) |68.54|
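As a rough sketch of how the configuration above could be applied outside the LazyMergekit notebook, the snippet below writes the `yaml_config` string from the configuration block to disk and invokes mergekit's command-line entry point. It assumes the `mergekit` package is installed and that `MODEL_NAME` and `yaml_config` are in scope from the block above; it is illustrative rather than the exact procedure used for this model.

```python
# Illustrative only: assumes `pip install mergekit` and that MODEL_NAME and
# yaml_config are defined as in the configuration block above.
import subprocess

# Persist the SLERP configuration so the CLI can read it.
with open("config.yaml", "w", encoding="utf-8") as f:
    f.write(yaml_config)

# mergekit-yaml <config> <output-dir> performs the merge described in the YAML.
subprocess.run(["mergekit-yaml", "config.yaml", MODEL_NAME], check=True)
```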
lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B
lodrick-the-lafted
"2024-03-04T12:22:37Z"
1,034
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "dataset:lodrick-the-lafted/Hermes-217K", "dataset:garage-bAInd/Open-Platypus", "dataset:jondurbin/airoboros-3.2", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-02T10:11:18Z"
--- license: apache-2.0 datasets: - lodrick-the-lafted/Hermes-217K - garage-bAInd/Open-Platypus - jondurbin/airoboros-3.2 model-index: - name: Grafted-Hermetic-Platypus-C-2x7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 58.96 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 82.77 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 62.08 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 60.87 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 77.74 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 43.9 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B name: Open LLM Leaderboard --- <img src=https://huggingface.co/lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B/resolve/main/ghp.png> # Grafted-Hermetic-Platypus-C-2x7B MoE merge of - [Platyboros-Instruct-7B](https://huggingface.co/lodrick-the-lafted/Platyboros-Instruct-7B) - [Hermes-Instruct-7B-217K](https://huggingface.co/lodrick-the-lafted/Hermes-Instruct-7B-217K) <br /> <br /> # Prompt Format Both the default Mistral-Instruct tags and Alpaca are fine, so either: ``` <s>[INST] {sys_prompt} {instruction} [/INST] ``` or ``` {sys_prompt} ### Instruction: {instruction} ### Response: ``` The tokenizer default is Alpaca this time around. 
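To make the two formats above concrete, here is a small illustrative helper (not part of the released code; the exact whitespace of the Alpaca template is an assumption and may differ slightly) that fills in either template for a given system prompt and instruction.

```python
# Hypothetical helper illustrating the two prompt formats described above.
def build_prompt(sys_prompt: str, instruction: str, style: str = "alpaca") -> str:
    if style == "mistral":
        # Default Mistral-Instruct tags.
        return f"<s>[INST] {sys_prompt} {instruction} [/INST]"
    # Alpaca-style prompt (the tokenizer default for this model).
    return f"{sys_prompt}\n\n### Instruction:\n{instruction}\n\n### Response:\n"

print(build_prompt("You are a helpful assistant.", "Summarize the plot of Hamlet."))
```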
<br /> <br /> # Usage ```python from transformers import AutoTokenizer import transformers import torch model = "lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, model_kwargs={"torch_dtype": torch.bfloat16}, ) messages = [{"role": "user", "content": "Give me a cooking recipe for a peach pie."}] prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lodrick-the-lafted__Grafted-Hermetic-Platypus-C-2x7B) | Metric |Value| |---------------------------------|----:| |Avg. |64.39| |AI2 Reasoning Challenge (25-Shot)|58.96| |HellaSwag (10-Shot) |82.77| |MMLU (5-Shot) |62.08| |TruthfulQA (0-shot) |60.87| |Winogrande (5-shot) |77.74| |GSM8k (5-shot) |43.90|
McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp-supervised
McGill-NLP
"2024-04-11T20:10:34Z"
1,034
12
peft
[ "peft", "safetensors", "text-embedding", "embeddings", "information-retrieval", "beir", "text-classification", "language-model", "text-clustering", "text-semantic-similarity", "text-evaluation", "text-reranking", "feature-extraction", "sentence-similarity", "Sentence Similarity", "natural_questions", "ms_marco", "fever", "hotpot_qa", "mteb", "en", "arxiv:2404.05961", "license:mit", "model-index", "region:us" ]
sentence-similarity
"2024-04-04T03:33:56Z"
--- library_name: peft license: mit language: - en pipeline_tag: sentence-similarity tags: - text-embedding - embeddings - information-retrieval - beir - text-classification - language-model - text-clustering - text-semantic-similarity - text-evaluation - text-reranking - feature-extraction - sentence-similarity - Sentence Similarity - natural_questions - ms_marco - fever - hotpot_qa - mteb model-index: - name: LLM2Vec-Mistral-7B-supervised results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 77.58208955223881 - type: ap value: 41.45474097979136 - type: f1 value: 71.76059891468786 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 91.12039999999999 - type: ap value: 88.01002974730474 - type: f1 value: 91.1049266954883 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 49.966 - type: f1 value: 48.908221884634386 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: map_at_1 value: 32.788000000000004 - type: map_at_10 value: 48.665000000000006 - type: map_at_100 value: 49.501 - type: map_at_1000 value: 49.504 - type: map_at_3 value: 43.883 - type: map_at_5 value: 46.501 - type: mrr_at_1 value: 33.357 - type: mrr_at_10 value: 48.882 - type: mrr_at_100 value: 49.718 - type: mrr_at_1000 value: 49.721 - type: mrr_at_3 value: 44.025999999999996 - type: mrr_at_5 value: 46.732 - type: ndcg_at_1 value: 32.788000000000004 - type: ndcg_at_10 value: 57.483 - type: ndcg_at_100 value: 60.745000000000005 - type: ndcg_at_1000 value: 60.797000000000004 - type: ndcg_at_3 value: 47.534 - type: ndcg_at_5 value: 52.266 - type: precision_at_1 value: 32.788000000000004 - type: precision_at_10 value: 8.57 - type: precision_at_100 value: 0.993 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 19.369 - type: precision_at_5 value: 13.926 - type: recall_at_1 value: 32.788000000000004 - type: recall_at_10 value: 85.70400000000001 - type: recall_at_100 value: 99.289 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 58.108000000000004 - type: recall_at_5 value: 69.63000000000001 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 42.805075760047906 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 44.235789514284214 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 63.98320383943591 - type: mrr value: 76.53189992525174 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_spearman value: 85.24411101959603 - task: type: 
Classification dataset: type: mteb/banking77 name: MTEB Banking77Classification config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 88.31493506493506 - type: f1 value: 88.28524975751309 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 34.27007175430729 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 35.52517776034658 - task: type: Retrieval dataset: type: cqadupstack/android name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 38.686 - type: map_at_10 value: 51.939 - type: map_at_100 value: 53.751000000000005 - type: map_at_1000 value: 53.846000000000004 - type: map_at_3 value: 48.296 - type: map_at_5 value: 50.312999999999995 - type: mrr_at_1 value: 49.641999999999996 - type: mrr_at_10 value: 59.157000000000004 - type: mrr_at_100 value: 59.85 - type: mrr_at_1000 value: 59.876 - type: mrr_at_3 value: 57.058 - type: mrr_at_5 value: 58.231 - type: ndcg_at_1 value: 49.641999999999996 - type: ndcg_at_10 value: 58.714 - type: ndcg_at_100 value: 63.776999999999994 - type: ndcg_at_1000 value: 64.95 - type: ndcg_at_3 value: 54.799 - type: ndcg_at_5 value: 56.372 - type: precision_at_1 value: 49.641999999999996 - type: precision_at_10 value: 11.373 - type: precision_at_100 value: 1.712 - type: precision_at_1000 value: 0.209 - type: precision_at_3 value: 27.229 - type: precision_at_5 value: 19.056 - type: recall_at_1 value: 38.686 - type: recall_at_10 value: 69.976 - type: recall_at_100 value: 90.512 - type: recall_at_1000 value: 97.64 - type: recall_at_3 value: 56.625 - type: recall_at_5 value: 62.348000000000006 - task: type: Retrieval dataset: type: cqadupstack/english name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 36.356 - type: map_at_10 value: 48.004000000000005 - type: map_at_100 value: 49.342999999999996 - type: map_at_1000 value: 49.461 - type: map_at_3 value: 44.692 - type: map_at_5 value: 46.576 - type: mrr_at_1 value: 46.561 - type: mrr_at_10 value: 54.547000000000004 - type: mrr_at_100 value: 55.159000000000006 - type: mrr_at_1000 value: 55.193000000000005 - type: mrr_at_3 value: 52.516 - type: mrr_at_5 value: 53.701 - type: ndcg_at_1 value: 46.561 - type: ndcg_at_10 value: 53.835 - type: ndcg_at_100 value: 57.92699999999999 - type: ndcg_at_1000 value: 59.671 - type: ndcg_at_3 value: 49.997 - type: ndcg_at_5 value: 51.714000000000006 - type: precision_at_1 value: 46.561 - type: precision_at_10 value: 10.344000000000001 - type: precision_at_100 value: 1.5779999999999998 - type: precision_at_1000 value: 0.202 - type: precision_at_3 value: 24.437 - type: precision_at_5 value: 17.197000000000003 - type: recall_at_1 value: 36.356 - type: recall_at_10 value: 63.019000000000005 - type: recall_at_100 value: 80.55099999999999 - type: recall_at_1000 value: 91.38300000000001 - type: recall_at_3 value: 50.431000000000004 - type: recall_at_5 value: 56.00000000000001 - task: type: Retrieval dataset: type: cqadupstack/gaming name: MTEB CQADupstackGamingRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 46.736 - type: map_at_10 value: 60.775999999999996 - 
type: map_at_100 value: 61.755 - type: map_at_1000 value: 61.783 - type: map_at_3 value: 57.293000000000006 - type: map_at_5 value: 59.382000000000005 - type: mrr_at_1 value: 54.232 - type: mrr_at_10 value: 64.424 - type: mrr_at_100 value: 64.996 - type: mrr_at_1000 value: 65.009 - type: mrr_at_3 value: 62.226000000000006 - type: mrr_at_5 value: 63.592000000000006 - type: ndcg_at_1 value: 54.232 - type: ndcg_at_10 value: 66.654 - type: ndcg_at_100 value: 70.152 - type: ndcg_at_1000 value: 70.648 - type: ndcg_at_3 value: 61.405 - type: ndcg_at_5 value: 64.137 - type: precision_at_1 value: 54.232 - type: precision_at_10 value: 10.607999999999999 - type: precision_at_100 value: 1.321 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 27.544 - type: precision_at_5 value: 18.645999999999997 - type: recall_at_1 value: 46.736 - type: recall_at_10 value: 80.10199999999999 - type: recall_at_100 value: 94.976 - type: recall_at_1000 value: 98.402 - type: recall_at_3 value: 66.094 - type: recall_at_5 value: 73.028 - task: type: Retrieval dataset: type: cqadupstack/gis name: MTEB CQADupstackGisRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 30.238 - type: map_at_10 value: 39.798 - type: map_at_100 value: 40.892 - type: map_at_1000 value: 40.971000000000004 - type: map_at_3 value: 36.788 - type: map_at_5 value: 38.511 - type: mrr_at_1 value: 32.994 - type: mrr_at_10 value: 42.028 - type: mrr_at_100 value: 42.959 - type: mrr_at_1000 value: 43.010999999999996 - type: mrr_at_3 value: 39.322 - type: mrr_at_5 value: 40.977000000000004 - type: ndcg_at_1 value: 32.994 - type: ndcg_at_10 value: 45.062000000000005 - type: ndcg_at_100 value: 50.166999999999994 - type: ndcg_at_1000 value: 51.961 - type: ndcg_at_3 value: 39.378 - type: ndcg_at_5 value: 42.281 - type: precision_at_1 value: 32.994 - type: precision_at_10 value: 6.836 - type: precision_at_100 value: 0.9860000000000001 - type: precision_at_1000 value: 0.11800000000000001 - type: precision_at_3 value: 16.384 - type: precision_at_5 value: 11.548 - type: recall_at_1 value: 30.238 - type: recall_at_10 value: 59.080999999999996 - type: recall_at_100 value: 82.033 - type: recall_at_1000 value: 95.281 - type: recall_at_3 value: 43.902 - type: recall_at_5 value: 50.952 - task: type: Retrieval dataset: type: cqadupstack/mathematica name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 21.512999999999998 - type: map_at_10 value: 31.339 - type: map_at_100 value: 32.651 - type: map_at_1000 value: 32.762 - type: map_at_3 value: 27.590999999999998 - type: map_at_5 value: 29.946 - type: mrr_at_1 value: 26.866 - type: mrr_at_10 value: 36.525 - type: mrr_at_100 value: 37.357 - type: mrr_at_1000 value: 37.419999999999995 - type: mrr_at_3 value: 33.085 - type: mrr_at_5 value: 35.379 - type: ndcg_at_1 value: 26.866 - type: ndcg_at_10 value: 37.621 - type: ndcg_at_100 value: 43.031000000000006 - type: ndcg_at_1000 value: 45.573 - type: ndcg_at_3 value: 31.046000000000003 - type: ndcg_at_5 value: 34.709 - type: precision_at_1 value: 26.866 - type: precision_at_10 value: 7.052 - type: precision_at_100 value: 1.117 - type: precision_at_1000 value: 0.145 - type: precision_at_3 value: 14.884 - type: precision_at_5 value: 11.517 - type: recall_at_1 value: 21.512999999999998 - type: recall_at_10 value: 51.751999999999995 - type: recall_at_100 value: 74.34100000000001 - type: recall_at_1000 value: 92.426 - type: recall_at_3 value: 34.008 - type: 
recall_at_5 value: 43.075 - task: type: Retrieval dataset: type: cqadupstack/physics name: MTEB CQADupstackPhysicsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 35.327 - type: map_at_10 value: 47.783 - type: map_at_100 value: 49.153999999999996 - type: map_at_1000 value: 49.260999999999996 - type: map_at_3 value: 44.145 - type: map_at_5 value: 46.207 - type: mrr_at_1 value: 44.37 - type: mrr_at_10 value: 53.864999999999995 - type: mrr_at_100 value: 54.625 - type: mrr_at_1000 value: 54.662 - type: mrr_at_3 value: 51.604000000000006 - type: mrr_at_5 value: 52.894 - type: ndcg_at_1 value: 44.37 - type: ndcg_at_10 value: 54.054 - type: ndcg_at_100 value: 59.168 - type: ndcg_at_1000 value: 60.769 - type: ndcg_at_3 value: 49.091 - type: ndcg_at_5 value: 51.444 - type: precision_at_1 value: 44.37 - type: precision_at_10 value: 9.827 - type: precision_at_100 value: 1.456 - type: precision_at_1000 value: 0.17600000000000002 - type: precision_at_3 value: 23.580000000000002 - type: precision_at_5 value: 16.554 - type: recall_at_1 value: 35.327 - type: recall_at_10 value: 66.43900000000001 - type: recall_at_100 value: 87.41600000000001 - type: recall_at_1000 value: 97.37400000000001 - type: recall_at_3 value: 51.64 - type: recall_at_5 value: 58.242000000000004 - task: type: Retrieval dataset: type: cqadupstack/programmers name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 32.397999999999996 - type: map_at_10 value: 44.932 - type: map_at_100 value: 46.336 - type: map_at_1000 value: 46.421 - type: map_at_3 value: 41.128 - type: map_at_5 value: 43.364999999999995 - type: mrr_at_1 value: 41.324 - type: mrr_at_10 value: 51.080000000000005 - type: mrr_at_100 value: 51.878 - type: mrr_at_1000 value: 51.910000000000004 - type: mrr_at_3 value: 48.382999999999996 - type: mrr_at_5 value: 50.004000000000005 - type: ndcg_at_1 value: 41.324 - type: ndcg_at_10 value: 51.466 - type: ndcg_at_100 value: 56.874 - type: ndcg_at_1000 value: 58.321999999999996 - type: ndcg_at_3 value: 45.928999999999995 - type: ndcg_at_5 value: 48.532 - type: precision_at_1 value: 41.324 - type: precision_at_10 value: 9.565999999999999 - type: precision_at_100 value: 1.428 - type: precision_at_1000 value: 0.172 - type: precision_at_3 value: 22.184 - type: precision_at_5 value: 15.867999999999999 - type: recall_at_1 value: 32.397999999999996 - type: recall_at_10 value: 64.512 - type: recall_at_100 value: 87.425 - type: recall_at_1000 value: 96.937 - type: recall_at_3 value: 48.513 - type: recall_at_5 value: 55.721 - task: type: Retrieval dataset: type: mteb/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 32.001916666666666 - type: map_at_10 value: 42.91216666666667 - type: map_at_100 value: 44.21125000000001 - type: map_at_1000 value: 44.314166666666665 - type: map_at_3 value: 39.579 - type: map_at_5 value: 41.497166666666665 - type: mrr_at_1 value: 38.669583333333335 - type: mrr_at_10 value: 47.708 - type: mrr_at_100 value: 48.4875 - type: mrr_at_1000 value: 48.530833333333334 - type: mrr_at_3 value: 45.196333333333335 - type: mrr_at_5 value: 46.702999999999996 - type: ndcg_at_1 value: 38.669583333333335 - type: ndcg_at_10 value: 48.842 - type: ndcg_at_100 value: 53.79400000000001 - type: ndcg_at_1000 value: 55.566416666666676 - type: ndcg_at_3 value: 43.70975 - type: ndcg_at_5 value: 46.204499999999996 - type: precision_at_1 value: 38.669583333333335 - type: 
precision_at_10 value: 8.652999999999999 - type: precision_at_100 value: 1.3168333333333333 - type: precision_at_1000 value: 0.164 - type: precision_at_3 value: 20.343249999999998 - type: precision_at_5 value: 14.426 - type: recall_at_1 value: 32.001916666666666 - type: recall_at_10 value: 61.31158333333334 - type: recall_at_100 value: 82.80691666666667 - type: recall_at_1000 value: 94.977 - type: recall_at_3 value: 46.63558333333333 - type: recall_at_5 value: 53.32383333333334 - task: type: Retrieval dataset: type: cqadupstack/stats name: MTEB CQADupstackStatsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 29.311999999999998 - type: map_at_10 value: 37.735 - type: map_at_100 value: 38.702 - type: map_at_1000 value: 38.803 - type: map_at_3 value: 35.17 - type: map_at_5 value: 36.6 - type: mrr_at_1 value: 33.282000000000004 - type: mrr_at_10 value: 41.059 - type: mrr_at_100 value: 41.881 - type: mrr_at_1000 value: 41.943000000000005 - type: mrr_at_3 value: 38.829 - type: mrr_at_5 value: 40.11 - type: ndcg_at_1 value: 33.282000000000004 - type: ndcg_at_10 value: 42.625 - type: ndcg_at_100 value: 47.313 - type: ndcg_at_1000 value: 49.683 - type: ndcg_at_3 value: 38.043 - type: ndcg_at_5 value: 40.217999999999996 - type: precision_at_1 value: 33.282000000000004 - type: precision_at_10 value: 6.748 - type: precision_at_100 value: 0.979 - type: precision_at_1000 value: 0.126 - type: precision_at_3 value: 16.462 - type: precision_at_5 value: 11.411 - type: recall_at_1 value: 29.311999999999998 - type: recall_at_10 value: 54.294 - type: recall_at_100 value: 75.82 - type: recall_at_1000 value: 93.19800000000001 - type: recall_at_3 value: 41.382999999999996 - type: recall_at_5 value: 46.898 - task: type: Retrieval dataset: type: cqadupstack/tex name: MTEB CQADupstackTexRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 22.823 - type: map_at_10 value: 31.682 - type: map_at_100 value: 32.864 - type: map_at_1000 value: 32.988 - type: map_at_3 value: 28.878999999999998 - type: map_at_5 value: 30.459000000000003 - type: mrr_at_1 value: 28.63 - type: mrr_at_10 value: 36.672 - type: mrr_at_100 value: 37.519999999999996 - type: mrr_at_1000 value: 37.588 - type: mrr_at_3 value: 34.262 - type: mrr_at_5 value: 35.653 - type: ndcg_at_1 value: 28.63 - type: ndcg_at_10 value: 37.158 - type: ndcg_at_100 value: 42.4 - type: ndcg_at_1000 value: 45.001000000000005 - type: ndcg_at_3 value: 32.529 - type: ndcg_at_5 value: 34.673 - type: precision_at_1 value: 28.63 - type: precision_at_10 value: 6.848 - type: precision_at_100 value: 1.111 - type: precision_at_1000 value: 0.152 - type: precision_at_3 value: 15.623000000000001 - type: precision_at_5 value: 11.218 - type: recall_at_1 value: 22.823 - type: recall_at_10 value: 48.559000000000005 - type: recall_at_100 value: 72.048 - type: recall_at_1000 value: 90.322 - type: recall_at_3 value: 35.134 - type: recall_at_5 value: 40.897 - task: type: Retrieval dataset: type: cqadupstack/unix name: MTEB CQADupstackUnixRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 32.79 - type: map_at_10 value: 43.578 - type: map_at_100 value: 44.782 - type: map_at_1000 value: 44.869 - type: map_at_3 value: 39.737 - type: map_at_5 value: 41.92 - type: mrr_at_1 value: 39.086 - type: mrr_at_10 value: 48.135 - type: mrr_at_100 value: 48.949 - type: mrr_at_1000 value: 48.995 - type: mrr_at_3 value: 45.086999999999996 - type: mrr_at_5 value: 46.939 - type: ndcg_at_1 value: 39.086 - type: 
ndcg_at_10 value: 49.736999999999995 - type: ndcg_at_100 value: 54.818999999999996 - type: ndcg_at_1000 value: 56.515 - type: ndcg_at_3 value: 43.503 - type: ndcg_at_5 value: 46.499 - type: precision_at_1 value: 39.086 - type: precision_at_10 value: 8.685 - type: precision_at_100 value: 1.2449999999999999 - type: precision_at_1000 value: 0.148 - type: precision_at_3 value: 19.963 - type: precision_at_5 value: 14.366000000000001 - type: recall_at_1 value: 32.79 - type: recall_at_10 value: 63.766 - type: recall_at_100 value: 85.465 - type: recall_at_1000 value: 96.90299999999999 - type: recall_at_3 value: 46.515 - type: recall_at_5 value: 54.178000000000004 - task: type: Retrieval dataset: type: cqadupstack/webmasters name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 29.896 - type: map_at_10 value: 41.241 - type: map_at_100 value: 43.178 - type: map_at_1000 value: 43.395 - type: map_at_3 value: 37.702999999999996 - type: map_at_5 value: 39.524 - type: mrr_at_1 value: 36.364000000000004 - type: mrr_at_10 value: 46.184999999999995 - type: mrr_at_100 value: 47.051 - type: mrr_at_1000 value: 47.085 - type: mrr_at_3 value: 43.478 - type: mrr_at_5 value: 44.98 - type: ndcg_at_1 value: 36.364000000000004 - type: ndcg_at_10 value: 48.044 - type: ndcg_at_100 value: 53.818999999999996 - type: ndcg_at_1000 value: 55.504 - type: ndcg_at_3 value: 42.604 - type: ndcg_at_5 value: 44.971 - type: precision_at_1 value: 36.364000000000004 - type: precision_at_10 value: 9.664 - type: precision_at_100 value: 1.917 - type: precision_at_1000 value: 0.255 - type: precision_at_3 value: 20.487 - type: precision_at_5 value: 14.862 - type: recall_at_1 value: 29.896 - type: recall_at_10 value: 60.28 - type: recall_at_100 value: 86.271 - type: recall_at_1000 value: 97.121 - type: recall_at_3 value: 44.885999999999996 - type: recall_at_5 value: 51.351 - task: type: Retrieval dataset: type: cqadupstack/wordpress name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 27.948 - type: map_at_10 value: 36.138999999999996 - type: map_at_100 value: 37.126999999999995 - type: map_at_1000 value: 37.21 - type: map_at_3 value: 33.526 - type: map_at_5 value: 35.163 - type: mrr_at_1 value: 30.684 - type: mrr_at_10 value: 38.818999999999996 - type: mrr_at_100 value: 39.625 - type: mrr_at_1000 value: 39.678000000000004 - type: mrr_at_3 value: 36.506 - type: mrr_at_5 value: 37.976 - type: ndcg_at_1 value: 30.684 - type: ndcg_at_10 value: 41.134 - type: ndcg_at_100 value: 46.081 - type: ndcg_at_1000 value: 48.199999999999996 - type: ndcg_at_3 value: 36.193 - type: ndcg_at_5 value: 38.903999999999996 - type: precision_at_1 value: 30.684 - type: precision_at_10 value: 6.285 - type: precision_at_100 value: 0.9520000000000001 - type: precision_at_1000 value: 0.126 - type: precision_at_3 value: 15.342 - type: precision_at_5 value: 10.869 - type: recall_at_1 value: 27.948 - type: recall_at_10 value: 53.959 - type: recall_at_100 value: 76.825 - type: recall_at_1000 value: 92.73700000000001 - type: recall_at_3 value: 40.495999999999995 - type: recall_at_5 value: 47.196 - task: type: Retrieval dataset: type: climate-fever name: MTEB ClimateFEVER config: default split: test revision: None metrics: - type: map_at_1 value: 15.27 - type: map_at_10 value: 25.570999999999998 - type: map_at_100 value: 27.664 - type: map_at_1000 value: 27.848 - type: map_at_3 value: 21.224 - type: map_at_5 value: 23.508000000000003 - type: 
mrr_at_1 value: 34.137 - type: mrr_at_10 value: 46.583000000000006 - type: mrr_at_100 value: 47.339999999999996 - type: mrr_at_1000 value: 47.370000000000005 - type: mrr_at_3 value: 43.376999999999995 - type: mrr_at_5 value: 45.26 - type: ndcg_at_1 value: 34.137 - type: ndcg_at_10 value: 35.189 - type: ndcg_at_100 value: 42.568 - type: ndcg_at_1000 value: 45.660000000000004 - type: ndcg_at_3 value: 28.965000000000003 - type: ndcg_at_5 value: 31.169999999999998 - type: precision_at_1 value: 34.137 - type: precision_at_10 value: 10.971 - type: precision_at_100 value: 1.8870000000000002 - type: precision_at_1000 value: 0.247 - type: precision_at_3 value: 21.368000000000002 - type: precision_at_5 value: 16.573 - type: recall_at_1 value: 15.27 - type: recall_at_10 value: 41.516999999999996 - type: recall_at_100 value: 66.486 - type: recall_at_1000 value: 83.533 - type: recall_at_3 value: 26.325 - type: recall_at_5 value: 32.574 - task: type: Retrieval dataset: type: dbpedia-entity name: MTEB DBPedia config: default split: test revision: None metrics: - type: map_at_1 value: 9.982000000000001 - type: map_at_10 value: 23.724999999999998 - type: map_at_100 value: 33.933 - type: map_at_1000 value: 35.965 - type: map_at_3 value: 16.158 - type: map_at_5 value: 19.433 - type: mrr_at_1 value: 75.75 - type: mrr_at_10 value: 82.065 - type: mrr_at_100 value: 82.334 - type: mrr_at_1000 value: 82.34 - type: mrr_at_3 value: 80.708 - type: mrr_at_5 value: 81.671 - type: ndcg_at_1 value: 63.625 - type: ndcg_at_10 value: 49.576 - type: ndcg_at_100 value: 53.783 - type: ndcg_at_1000 value: 61.012 - type: ndcg_at_3 value: 53.822 - type: ndcg_at_5 value: 51.72 - type: precision_at_1 value: 75.75 - type: precision_at_10 value: 39.925 - type: precision_at_100 value: 12.525 - type: precision_at_1000 value: 2.399 - type: precision_at_3 value: 56.667 - type: precision_at_5 value: 50.5 - type: recall_at_1 value: 9.982000000000001 - type: recall_at_10 value: 29.325000000000003 - type: recall_at_100 value: 59.181 - type: recall_at_1000 value: 82.095 - type: recall_at_3 value: 17.338 - type: recall_at_5 value: 22.216 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 52.04500000000001 - type: f1 value: 47.32462453881906 - task: type: Retrieval dataset: type: fever name: MTEB FEVER config: default split: test revision: None metrics: - type: map_at_1 value: 78.68 - type: map_at_10 value: 86.207 - type: map_at_100 value: 86.375 - type: map_at_1000 value: 86.388 - type: map_at_3 value: 85.35199999999999 - type: map_at_5 value: 85.954 - type: mrr_at_1 value: 84.923 - type: mrr_at_10 value: 90.902 - type: mrr_at_100 value: 90.952 - type: mrr_at_1000 value: 90.952 - type: mrr_at_3 value: 90.489 - type: mrr_at_5 value: 90.822 - type: ndcg_at_1 value: 84.923 - type: ndcg_at_10 value: 89.403 - type: ndcg_at_100 value: 90.023 - type: ndcg_at_1000 value: 90.235 - type: ndcg_at_3 value: 88.24300000000001 - type: ndcg_at_5 value: 89.005 - type: precision_at_1 value: 84.923 - type: precision_at_10 value: 10.495000000000001 - type: precision_at_100 value: 1.103 - type: precision_at_1000 value: 0.11399999999999999 - type: precision_at_3 value: 33.358 - type: precision_at_5 value: 20.579 - type: recall_at_1 value: 78.68 - type: recall_at_10 value: 94.622 - type: recall_at_100 value: 97.083 - type: recall_at_1000 value: 98.348 - type: recall_at_3 value: 91.499 - type: recall_at_5 value: 93.486 
- task: type: Retrieval dataset: type: fiqa name: MTEB FiQA2018 config: default split: test revision: None metrics: - type: map_at_1 value: 25.781 - type: map_at_10 value: 44.669 - type: map_at_100 value: 46.831 - type: map_at_1000 value: 46.96 - type: map_at_3 value: 38.714 - type: map_at_5 value: 42.186 - type: mrr_at_1 value: 51.235 - type: mrr_at_10 value: 60.083 - type: mrr_at_100 value: 60.675999999999995 - type: mrr_at_1000 value: 60.706 - type: mrr_at_3 value: 57.665 - type: mrr_at_5 value: 59.084 - type: ndcg_at_1 value: 51.235 - type: ndcg_at_10 value: 53.111 - type: ndcg_at_100 value: 59.57900000000001 - type: ndcg_at_1000 value: 61.57 - type: ndcg_at_3 value: 48.397 - type: ndcg_at_5 value: 50.169 - type: precision_at_1 value: 51.235 - type: precision_at_10 value: 14.877 - type: precision_at_100 value: 2.173 - type: precision_at_1000 value: 0.253 - type: precision_at_3 value: 32.87 - type: precision_at_5 value: 24.29 - type: recall_at_1 value: 25.781 - type: recall_at_10 value: 61.464 - type: recall_at_100 value: 84.244 - type: recall_at_1000 value: 96.039 - type: recall_at_3 value: 44.105 - type: recall_at_5 value: 52.205999999999996 - task: type: Retrieval dataset: type: hotpotqa name: MTEB HotpotQA config: default split: test revision: None metrics: - type: map_at_1 value: 39.041 - type: map_at_10 value: 66.622 - type: map_at_100 value: 67.472 - type: map_at_1000 value: 67.52 - type: map_at_3 value: 62.81099999999999 - type: map_at_5 value: 65.23 - type: mrr_at_1 value: 78.082 - type: mrr_at_10 value: 83.827 - type: mrr_at_100 value: 84.03 - type: mrr_at_1000 value: 84.036 - type: mrr_at_3 value: 82.894 - type: mrr_at_5 value: 83.482 - type: ndcg_at_1 value: 78.082 - type: ndcg_at_10 value: 74.068 - type: ndcg_at_100 value: 76.981 - type: ndcg_at_1000 value: 77.887 - type: ndcg_at_3 value: 68.77600000000001 - type: ndcg_at_5 value: 71.763 - type: precision_at_1 value: 78.082 - type: precision_at_10 value: 15.822 - type: precision_at_100 value: 1.807 - type: precision_at_1000 value: 0.193 - type: precision_at_3 value: 44.956 - type: precision_at_5 value: 29.332 - type: recall_at_1 value: 39.041 - type: recall_at_10 value: 79.109 - type: recall_at_100 value: 90.371 - type: recall_at_1000 value: 96.313 - type: recall_at_3 value: 67.43400000000001 - type: recall_at_5 value: 73.329 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 87.422 - type: ap value: 83.07360776629146 - type: f1 value: 87.38583428778229 - task: type: Retrieval dataset: type: msmarco name: MTEB MSMARCO config: default split: dev revision: None metrics: - type: map_at_1 value: 21.715999999999998 - type: map_at_10 value: 34.821000000000005 - type: map_at_100 value: 36.022999999999996 - type: map_at_1000 value: 36.067 - type: map_at_3 value: 30.666 - type: map_at_5 value: 33.134 - type: mrr_at_1 value: 22.421 - type: mrr_at_10 value: 35.461 - type: mrr_at_100 value: 36.6 - type: mrr_at_1000 value: 36.638 - type: mrr_at_3 value: 31.413999999999998 - type: mrr_at_5 value: 33.823 - type: ndcg_at_1 value: 22.421 - type: ndcg_at_10 value: 42.169000000000004 - type: ndcg_at_100 value: 47.887 - type: ndcg_at_1000 value: 48.939 - type: ndcg_at_3 value: 33.786 - type: ndcg_at_5 value: 38.164 - type: precision_at_1 value: 22.421 - type: precision_at_10 value: 6.773999999999999 - type: precision_at_100 value: 0.962 - type: precision_at_1000 value: 0.105 - type: precision_at_3 
value: 14.575 - type: precision_at_5 value: 10.963000000000001 - type: recall_at_1 value: 21.715999999999998 - type: recall_at_10 value: 64.75999999999999 - type: recall_at_100 value: 91.015 - type: recall_at_1000 value: 98.96000000000001 - type: recall_at_3 value: 42.089999999999996 - type: recall_at_5 value: 52.578 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 96.04195166438669 - type: f1 value: 95.76962987454031 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 84.76744186046513 - type: f1 value: 70.3328215706764 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 79.29051782111635 - type: f1 value: 77.0837414890434 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 81.64425016812373 - type: f1 value: 81.36288379329044 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 31.0673311773222 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 31.266850505047234 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.49575275757744 - type: mrr value: 32.64979714009148 - task: type: Retrieval dataset: type: nfcorpus name: MTEB NFCorpus config: default split: test revision: None metrics: - type: map_at_1 value: 6.151 - type: map_at_10 value: 14.879999999999999 - type: map_at_100 value: 19.445999999999998 - type: map_at_1000 value: 21.101 - type: map_at_3 value: 10.613999999999999 - type: map_at_5 value: 12.709000000000001 - type: mrr_at_1 value: 51.393 - type: mrr_at_10 value: 59.935 - type: mrr_at_100 value: 60.455000000000005 - type: mrr_at_1000 value: 60.485 - type: mrr_at_3 value: 57.894999999999996 - type: mrr_at_5 value: 59.303 - type: ndcg_at_1 value: 50.0 - type: ndcg_at_10 value: 39.324999999999996 - type: ndcg_at_100 value: 37.133 - type: ndcg_at_1000 value: 45.663 - type: ndcg_at_3 value: 45.294000000000004 - type: ndcg_at_5 value: 42.88 - type: precision_at_1 value: 51.393 - type: precision_at_10 value: 29.412 - type: precision_at_100 value: 9.666 - type: precision_at_1000 value: 2.263 - type: precision_at_3 value: 42.415000000000006 - type: precision_at_5 value: 37.399 - type: recall_at_1 value: 6.151 - type: recall_at_10 value: 19.121 - type: recall_at_100 value: 39.012 - type: recall_at_1000 value: 70.726 - type: recall_at_3 value: 11.855 - type: recall_at_5 value: 15.204 - task: type: Retrieval dataset: type: nq name: MTEB NQ config: default split: test revision: None metrics: - type: map_at_1 value: 36.382 - type: map_at_10 value: 53.657 - type: map_at_100 value: 
54.547999999999995 - type: map_at_1000 value: 54.562999999999995 - type: map_at_3 value: 49.236999999999995 - type: map_at_5 value: 51.949 - type: mrr_at_1 value: 41.309000000000005 - type: mrr_at_10 value: 56.25599999999999 - type: mrr_at_100 value: 56.855999999999995 - type: mrr_at_1000 value: 56.867000000000004 - type: mrr_at_3 value: 52.891999999999996 - type: mrr_at_5 value: 54.99699999999999 - type: ndcg_at_1 value: 41.28 - type: ndcg_at_10 value: 61.702999999999996 - type: ndcg_at_100 value: 65.092 - type: ndcg_at_1000 value: 65.392 - type: ndcg_at_3 value: 53.722 - type: ndcg_at_5 value: 58.11300000000001 - type: precision_at_1 value: 41.28 - type: precision_at_10 value: 10.014000000000001 - type: precision_at_100 value: 1.187 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 24.614 - type: precision_at_5 value: 17.317 - type: recall_at_1 value: 36.382 - type: recall_at_10 value: 83.38600000000001 - type: recall_at_100 value: 97.528 - type: recall_at_1000 value: 99.696 - type: recall_at_3 value: 63.053000000000004 - type: recall_at_5 value: 73.16 - task: type: Retrieval dataset: type: quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 69.577 - type: map_at_10 value: 83.944 - type: map_at_100 value: 84.604 - type: map_at_1000 value: 84.61800000000001 - type: map_at_3 value: 80.93599999999999 - type: map_at_5 value: 82.812 - type: mrr_at_1 value: 80.4 - type: mrr_at_10 value: 86.734 - type: mrr_at_100 value: 86.851 - type: mrr_at_1000 value: 86.85199999999999 - type: mrr_at_3 value: 85.75500000000001 - type: mrr_at_5 value: 86.396 - type: ndcg_at_1 value: 80.43 - type: ndcg_at_10 value: 87.75 - type: ndcg_at_100 value: 88.999 - type: ndcg_at_1000 value: 89.092 - type: ndcg_at_3 value: 84.88 - type: ndcg_at_5 value: 86.416 - type: precision_at_1 value: 80.43 - type: precision_at_10 value: 13.453000000000001 - type: precision_at_100 value: 1.539 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.403 - type: precision_at_5 value: 24.648 - type: recall_at_1 value: 69.577 - type: recall_at_10 value: 95.233 - type: recall_at_100 value: 99.531 - type: recall_at_1000 value: 99.984 - type: recall_at_3 value: 86.867 - type: recall_at_5 value: 91.254 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 60.23690763558931 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 64.12391112159126 - task: type: Retrieval dataset: type: scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: map_at_1 value: 5.288 - type: map_at_10 value: 13.611999999999998 - type: map_at_100 value: 15.909 - type: map_at_1000 value: 16.235 - type: map_at_3 value: 9.644 - type: map_at_5 value: 11.559 - type: mrr_at_1 value: 26.1 - type: mrr_at_10 value: 37.571 - type: mrr_at_100 value: 38.72 - type: mrr_at_1000 value: 38.76 - type: mrr_at_3 value: 34.383 - type: mrr_at_5 value: 36.187999999999995 - type: ndcg_at_1 value: 26.1 - type: ndcg_at_10 value: 22.497 - type: ndcg_at_100 value: 31.098 - type: ndcg_at_1000 value: 36.434 - type: ndcg_at_3 value: 21.401 - type: ndcg_at_5 value: 18.66 - type: precision_at_1 value: 26.1 - type: precision_at_10 value: 11.67 - type: precision_at_100 value: 
2.405 - type: precision_at_1000 value: 0.368 - type: precision_at_3 value: 20.0 - type: precision_at_5 value: 16.34 - type: recall_at_1 value: 5.288 - type: recall_at_10 value: 23.652 - type: recall_at_100 value: 48.79 - type: recall_at_1000 value: 74.703 - type: recall_at_3 value: 12.158 - type: recall_at_5 value: 16.582 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_spearman value: 83.6969699802343 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_spearman value: 78.8031221769135 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_spearman value: 86.37435789895171 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_spearman value: 84.04036612478626 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_spearman value: 88.99055778929946 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_spearman value: 87.22140434759893 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_spearman value: 90.1862731405498 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_spearman value: 67.67995229420237 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_spearman value: 88.65370934976113 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 83.79832393152147 - type: mrr value: 95.78404438698557 - task: type: Retrieval dataset: type: scifact name: MTEB SciFact config: default split: test revision: None metrics: - type: map_at_1 value: 64.883 - type: map_at_10 value: 74.48 - type: map_at_100 value: 74.85000000000001 - type: map_at_1000 value: 74.861 - type: map_at_3 value: 71.596 - type: map_at_5 value: 73.545 - type: mrr_at_1 value: 67.667 - type: mrr_at_10 value: 75.394 - type: mrr_at_100 value: 75.644 - type: mrr_at_1000 value: 75.655 - type: mrr_at_3 value: 73.5 - type: mrr_at_5 value: 74.63300000000001 - type: ndcg_at_1 value: 67.667 - type: ndcg_at_10 value: 78.855 - type: ndcg_at_100 value: 80.361 - type: ndcg_at_1000 value: 80.624 - type: ndcg_at_3 value: 74.37899999999999 - type: ndcg_at_5 value: 76.89200000000001 - type: precision_at_1 value: 67.667 - type: precision_at_10 value: 10.267 - type: precision_at_100 value: 1.11 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 28.778 - type: precision_at_5 value: 19.133 - type: recall_at_1 value: 64.883 - type: recall_at_10 value: 91.2 - type: recall_at_100 value: 98.0 - type: recall_at_1000 value: 
100.0 - type: recall_at_3 value: 79.406 - type: recall_at_5 value: 85.578 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.85445544554456 - type: cos_sim_ap value: 96.81785428870712 - type: cos_sim_f1 value: 92.67563527653213 - type: cos_sim_precision value: 92.35352532274081 - type: cos_sim_recall value: 93.0 - type: dot_accuracy value: 99.75643564356436 - type: dot_ap value: 94.46746929160422 - type: dot_f1 value: 87.74900398406375 - type: dot_precision value: 87.40079365079364 - type: dot_recall value: 88.1 - type: euclidean_accuracy value: 99.85445544554456 - type: euclidean_ap value: 96.59180137299155 - type: euclidean_f1 value: 92.48850281042411 - type: euclidean_precision value: 94.56635318704284 - type: euclidean_recall value: 90.5 - type: manhattan_accuracy value: 99.85643564356435 - type: manhattan_ap value: 96.66599616275849 - type: manhattan_f1 value: 92.69746646795828 - type: manhattan_precision value: 92.10266535044423 - type: manhattan_recall value: 93.30000000000001 - type: max_accuracy value: 99.85643564356435 - type: max_ap value: 96.81785428870712 - type: max_f1 value: 92.69746646795828 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 70.72970157362414 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 34.49706344517027 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 54.41010678297881 - type: mrr value: 55.15095811051693 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.5030094989814 - type: cos_sim_spearman value: 29.959138274084797 - type: dot_pearson value: 29.740134155639076 - type: dot_spearman value: 29.18174652067779 - task: type: Retrieval dataset: type: trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: map_at_1 value: 0.22200000000000003 - type: map_at_10 value: 1.925 - type: map_at_100 value: 13.150999999999998 - type: map_at_1000 value: 33.410000000000004 - type: map_at_3 value: 0.631 - type: map_at_5 value: 0.9990000000000001 - type: mrr_at_1 value: 82.0 - type: mrr_at_10 value: 90.0 - type: mrr_at_100 value: 90.0 - type: mrr_at_1000 value: 90.0 - type: mrr_at_3 value: 89.0 - type: mrr_at_5 value: 90.0 - type: ndcg_at_1 value: 79.0 - type: ndcg_at_10 value: 77.69200000000001 - type: ndcg_at_100 value: 64.89 - type: ndcg_at_1000 value: 59.748999999999995 - type: ndcg_at_3 value: 79.296 - type: ndcg_at_5 value: 78.63 - type: precision_at_1 value: 82.0 - type: precision_at_10 value: 82.19999999999999 - type: precision_at_100 value: 67.52 - type: precision_at_1000 value: 26.512 - type: precision_at_3 value: 83.333 - type: precision_at_5 value: 83.2 - type: recall_at_1 value: 0.22200000000000003 - type: recall_at_10 value: 2.164 - type: recall_at_100 value: 16.608 
- type: recall_at_1000 value: 56.89999999999999 - type: recall_at_3 value: 0.658 - type: recall_at_5 value: 1.084 - task: type: Retrieval dataset: type: webis-touche2020 name: MTEB Touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 1.8519999999999999 - type: map_at_10 value: 8.569 - type: map_at_100 value: 14.238999999999999 - type: map_at_1000 value: 15.876000000000001 - type: map_at_3 value: 3.9859999999999998 - type: map_at_5 value: 5.785 - type: mrr_at_1 value: 26.531 - type: mrr_at_10 value: 40.581 - type: mrr_at_100 value: 41.379 - type: mrr_at_1000 value: 41.388999999999996 - type: mrr_at_3 value: 35.034 - type: mrr_at_5 value: 38.299 - type: ndcg_at_1 value: 25.509999999999998 - type: ndcg_at_10 value: 22.18 - type: ndcg_at_100 value: 34.695 - type: ndcg_at_1000 value: 46.854 - type: ndcg_at_3 value: 23.112 - type: ndcg_at_5 value: 23.089000000000002 - type: precision_at_1 value: 26.531 - type: precision_at_10 value: 20.408 - type: precision_at_100 value: 7.428999999999999 - type: precision_at_1000 value: 1.559 - type: precision_at_3 value: 23.810000000000002 - type: precision_at_5 value: 23.265 - type: recall_at_1 value: 1.8519999999999999 - type: recall_at_10 value: 15.038000000000002 - type: recall_at_100 value: 46.499 - type: recall_at_1000 value: 84.11800000000001 - type: recall_at_3 value: 5.179 - type: recall_at_5 value: 8.758000000000001 - task: type: Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 69.26140000000001 - type: ap value: 14.138284541193421 - type: f1 value: 53.715363590501916 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 62.136389360498015 - type: f1 value: 62.33290824449911 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 52.18306009684791 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 88.27561542588067 - type: cos_sim_ap value: 80.59558041410928 - type: cos_sim_f1 value: 73.54724608388075 - type: cos_sim_precision value: 70.55259331071255 - type: cos_sim_recall value: 76.80738786279684 - type: dot_accuracy value: 85.00923883888657 - type: dot_ap value: 71.76942851966301 - type: dot_f1 value: 66.84518013631937 - type: dot_precision value: 62.042476276547674 - type: dot_recall value: 72.45382585751979 - type: euclidean_accuracy value: 88.26965488466352 - type: euclidean_ap value: 80.44398056118867 - type: euclidean_f1 value: 73.28244274809161 - type: euclidean_precision value: 68.69806094182826 - type: euclidean_recall value: 78.52242744063325 - type: manhattan_accuracy value: 88.25773380222924 - type: manhattan_ap value: 80.25000483445007 - type: manhattan_f1 value: 73.10447023956533 - type: manhattan_precision value: 68.70937790157846 - type: manhattan_recall value: 78.10026385224275 - type: max_accuracy value: 88.27561542588067 - type: max_ap value: 80.59558041410928 - type: max_f1 value: 73.54724608388075 - 
task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.52536189700004 - type: cos_sim_ap value: 86.55972191277392 - type: cos_sim_f1 value: 79.31733569243245 - type: cos_sim_precision value: 76.08372816632487 - type: cos_sim_recall value: 82.83800431167231 - type: dot_accuracy value: 87.77506112469437 - type: dot_ap value: 82.92833178514168 - type: dot_f1 value: 76.12050479839702 - type: dot_precision value: 70.03687172520861 - type: dot_recall value: 83.3615645210964 - type: euclidean_accuracy value: 89.3643031784841 - type: euclidean_ap value: 86.45902920741383 - type: euclidean_f1 value: 79.4788514062154 - type: euclidean_precision value: 76.32922160782645 - type: euclidean_recall value: 82.89959963042809 - type: manhattan_accuracy value: 89.38564830985369 - type: manhattan_ap value: 86.47558438668958 - type: manhattan_f1 value: 79.46758328152997 - type: manhattan_precision value: 75.67379343965457 - type: manhattan_recall value: 83.66184170003079 - type: max_accuracy value: 89.52536189700004 - type: max_ap value: 86.55972191277392 - type: max_f1 value: 79.4788514062154 --- # LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders > LLM2Vec is a simple recipe to convert decoder-only LLMs into text encoders. It consists of 3 simple steps: 1) enabling bidirectional attention, 2) masked next token prediction, and 3) unsupervised contrastive learning. The model can be further fine-tuned to achieve state-of-the-art performance. - **Repository:** https://github.com/McGill-NLP/llm2vec - **Paper:** https://arxiv.org/abs/2404.05961 ## Installation ```bash pip install llm2vec ``` ## Usage ```python from llm2vec import LLM2Vec import torch from transformers import AutoTokenizer, AutoModel, AutoConfig from peft import PeftModel # Loading base Mistral model, along with custom code that enables bidirectional connections in decoder-only LLMs. MNTP LoRA weights are merged into the base model. tokenizer = AutoTokenizer.from_pretrained( "McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp" ) config = AutoConfig.from_pretrained( "McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp", trust_remote_code=True ) model = AutoModel.from_pretrained( "McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp", trust_remote_code=True, config=config, torch_dtype=torch.bfloat16, device_map="cuda" if torch.cuda.is_available() else "cpu", ) model = PeftModel.from_pretrained( model, "McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp", ) model = model.merge_and_unload() # This can take several minutes on cpu # Loading supervised model. This loads the trained LoRA weights on top of MNTP model. Hence the final weights are -- Base model + MNTP (LoRA) + supervised (LoRA). model = PeftModel.from_pretrained( model, "McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp-supervised" ) # Wrapper for encoding and pooling operations l2v = LLM2Vec(model, tokenizer, pooling_mode="mean", max_length=512) # Encoding queries using instructions instruction = ( "Given a web search query, retrieve relevant passages that answer the query:" ) queries = [ [instruction, "how much protein should a female eat"], [instruction, "summit define"], ] q_reps = l2v.encode(queries) # Encoding documents. Instruction are not required for documents documents = [ "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. 
But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.", "Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments.", ] d_reps = l2v.encode(documents) # Compute cosine similarity q_reps_norm = torch.nn.functional.normalize(q_reps, p=2, dim=1) d_reps_norm = torch.nn.functional.normalize(d_reps, p=2, dim=1) cos_sim = torch.mm(q_reps_norm, d_reps_norm.transpose(0, 1)) print(cos_sim) """ tensor([[0.5485, 0.0551], [0.0565, 0.5425]]) """ ``` ## Questions If you have any question about the code, feel free to email Parishad (`[email protected]`) and Vaibhav (`[email protected]`).
CoprolaliacPress/Writing_Partner-Q6_K-GGUF
CoprolaliacPress
"2024-07-01T08:24:09Z"
1,034
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:CoprolaliacPress/Writing_Partner", "endpoints_compatible", "region:us" ]
null
"2024-07-01T08:23:42Z"
--- base_model: CoprolaliacPress/Writing_Partner library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # CoprolaliacPress/Writing_Partner-Q6_K-GGUF This model was converted to GGUF format from [`CoprolaliacPress/Writing_Partner`](https://huggingface.co/CoprolaliacPress/Writing_Partner) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/CoprolaliacPress/Writing_Partner) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo CoprolaliacPress/Writing_Partner-Q6_K-GGUF --hf-file writing_partner-q6_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo CoprolaliacPress/Writing_Partner-Q6_K-GGUF --hf-file writing_partner-q6_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo CoprolaliacPress/Writing_Partner-Q6_K-GGUF --hf-file writing_partner-q6_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo CoprolaliacPress/Writing_Partner-Q6_K-GGUF --hf-file writing_partner-q6_k.gguf -c 2048 ```
annahaz/xlm-roberta-base-misogyny-sexism-indomain-mix-bal
annahaz
"2023-09-21T17:12:38Z"
1,033
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-08-16T18:33:59Z"
This is a multilingual misogyny and sexism detection model. This model was released with the following paper (https://rdcu.be/dmIpq): ``` @InProceedings{10.1007/978-3-031-43129-6_9, author="Chang, Rong-Ching and May, Jonathan and Lerman, Kristina", editor="Thomson, Robert and Al-khateeb, Samer and Burger, Annetta and Park, Patrick and A. Pyke, Aryn", title="Feedback Loops and Complex Dynamics of Harmful Speech in Online Discussions", booktitle="Social, Cultural, and Behavioral Modeling", year="2023", publisher="Springer Nature Switzerland", address="Cham", pages="85--94", abstract="Harmful and toxic speech contribute to an unwelcoming online environment that suppresses participation and conversation. Efforts have focused on detecting and mitigating harmful speech; however, the mechanisms by which toxicity degrades online discussions are not well understood. This paper makes two contributions. First, to comprehensively model harmful comments, we introduce a multilingual misogyny and sexist speech detection model (https://huggingface.co/annahaz/xlm-roberta-base-misogyny-sexism-indomain-mix-bal). Second, we model the complex dynamics of online discussions as feedback loops in which harmful comments lead to negative emotions which prompt even more harmful comments. To quantify the feedback loops, we use a combination of mutual Granger causality and regression to analyze discussions on two political forums on Reddit: the moderated political forum r/Politics and the moderated neutral political forum r/NeutralPolitics. Our results suggest that harmful comments and negative emotions create self-reinforcing feedback loops in forums. Contrarily, moderation with neutral discussion appears to tip interactions into self-extinguishing feedback loops that reduce harmful speech and negative emotions. Our study sheds more light on the complex dynamics of harmful speech and the role of moderation and neutral discussion in mitigating these dynamics.", isbn="978-3-031-43129-6" } ``` We combined several multilingual ground truth datasets for misogyny and sexism (M/S) versus non-misogyny and non-sexism (non-M/S) [3,5,8,9,11,13,20]. Specifically, the dataset expressing misogynistic or sexist speech (M/S) and the same number of texts expressing non-M/S speech in each language included 8,582 English-language texts, 872 in French, 561 in Hindi, 2,190 in Italian, and 612 in Bengali. The test data was a balanced set of 100 texts sampled randomly from both M/S and non-M/S groups in each language, for a total of 500 examples of M/S speech and 500 examples of non-M/S speech. References of the datasets are: 3. Bhattacharya, S., et al.: Developing a multilingual annotated corpus of misogyny and aggression, pp. 158–168. ELRA, Marseille, France, May 2020. https://aclanthology.org/2020.trac-1.25 5. Chiril, P., Moriceau, V., Benamara, F., Mari, A., Origgi, G., Coulomb-Gully, M.: An annotated corpus for sexism detection in French tweets. In: Proceedings of LREC, pp. 1397–1403 (2020) 8. Fersini, E., et al.: SemEval-2022 task 5: multimedia automatic misogyny identification. In: Proceedings of SemEval, pp. 533–549 (2022) 9. Fersini, E., Nozza, D., Rosso, P.: Overview of the Evalita 2018 task on automatic misogyny identification (AMI). EVALITA Eval. NLP Speech Tools Italian 12, 59 (2018) 11. Guest, E., Vidgen, B., Mittos, A., Sastry, N., Tyson, G., Margetts, H.: An expert annotated dataset for the detection of online misogyny. In: Proceedings of EACL, pp. 1336–1350 (2021) 13.
Jha, A., Mamidi, R.: When does a compliment become sexist? Analysis and classification of ambivalent sexism using Twitter data. In: Proceedings of NLP+CSS, pp. 7–16 (2017) 20. Waseem, Z., Hovy, D.: Hateful symbols or hateful people? Predictive features for hate speech detection on Twitter. In: Proceedings of NAACL SRW, pp. 88–93 (2016) Please see the paper for more detail. --- license: mit tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: xlm-roberta-base-misogyny-sexism-indomain-mix-bal results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-misogyny-sexism-indomain-mix-bal This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8259 - Accuracy: 0.826 - F1: 0.8333 - Precision: 0.7996 - Recall: 0.87 - Mae: 0.174 - Tn: 391 - Fp: 109 - Fn: 65 - Tp: 435 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Mae | Tn | Fp | Fn | Tp | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:-----:|:---:|:---:|:--:|:---:| | 0.2643 | 1.0 | 1603 | 0.6511 | 0.82 | 0.8269 | 0.7963 | 0.86 | 0.18 | 390 | 110 | 70 | 430 | | 0.2004 | 2.0 | 3206 | 0.8259 | 0.826 | 0.8333 | 0.7996 | 0.87 | 0.174 | 391 | 109 | 65 | 435 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1 # Multilingual_Misogyny_Detection
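Since the card above stops short of a usage snippet, here is a minimal sketch using the `transformers` text-classification pipeline. The mapping of `LABEL_1` to misogynistic/sexist (M/S) and `LABEL_0` to non-M/S is an assumption inferred from the confusion-matrix columns above and should be verified against the model's outputs before relying on it.

```python
from transformers import pipeline

# Hypothetical quick-start; the label meanings are assumed, not documented in the card.
classifier = pipeline(
    "text-classification",
    model="annahaz/xlm-roberta-base-misogyny-sexism-indomain-mix-bal",
)

examples = [
    "Women belong in the kitchen, not in the office.",
    "The weather in Marseille was lovely this spring.",
]
for example, prediction in zip(examples, classifier(examples)):
    print(prediction["label"], round(prediction["score"], 3), example)
```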
timm/beitv2_large_patch16_224.in1k_ft_in22k_in1k
timm
"2023-05-08T23:46:10Z"
1,033
2
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:imagenet-22k", "arxiv:2208.06366", "arxiv:2010.11929", "license:apache-2.0", "region:us" ]
image-classification
"2022-12-23T02:35:38Z"
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k - imagenet-22k --- # Model card for beitv2_large_patch16_224.in1k_ft_in22k_in1k A BEiT-v2 image classification model. Trained on ImageNet-1k with self-supervised masked image modelling (MIM) using a VQ-KD encoder as a visual tokenizer (via OpenAI CLIP B/16 teacher). Fine-tuned on ImageNet-22k and then ImageNet-1k. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 304.4 - GMACs: 61.6 - Activations (M): 63.5 - Image size: 224 x 224 - **Papers:** - BEiT v2: Masked Image Modeling with Vector-Quantized Visual Tokenizers: https://arxiv.org/abs/2208.06366 - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2 - **Dataset:** ImageNet-1k - **Pretrain Dataset:** - ImageNet-1k - ImageNet-22k - **Original:** https://github.com/microsoft/unilm/tree/master/beit2 ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import torch import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('beitv2_large_patch16_224.in1k_ft_in22k_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'beitv2_large_patch16_224.in1k_ft_in22k_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 197, 1024) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation ```bibtex @article{peng2022beit, title={Beit v2: Masked image modeling with vector-quantized visual tokenizers}, author={Peng, Zhiliang and Dong, Li and Bao, Hangbo and Ye, Qixiang and Wei, Furu}, journal={arXiv preprint arXiv:2208.06366}, year={2022} } ``` ```bibtex @article{dosovitskiy2020vit, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil}, journal={ICLR}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
vicgalle/Roleplay-Llama-3-8B
vicgalle
"2024-06-28T16:20:15Z"
1,033
29
transformers
[ "transformers", "safetensors", "llama", "text-generation", "roleplay", "rp", "role", "conversational", "dataset:ResplendentAI/NSFW_RP_Format_DPO", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-19T15:20:56Z"
--- license: apache-2.0 library_name: transformers tags: - roleplay - rp - role datasets: - ResplendentAI/NSFW_RP_Format_DPO model-index: - name: Roleplay-Llama-3-8B results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 73.2 name: strict accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=vicgalle/Roleplay-Llama-3-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 28.55 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=vicgalle/Roleplay-Llama-3-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 8.69 name: exact match source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=vicgalle/Roleplay-Llama-3-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 1.45 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=vicgalle/Roleplay-Llama-3-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 1.68 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=vicgalle/Roleplay-Llama-3-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 30.09 name: accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=vicgalle/Roleplay-Llama-3-8B name: Open LLM Leaderboard --- # Roleplay-Llama-3-8B A Llama-3 model finetuned over the https://huggingface.co/datasets/ResplendentAI/NSFW_RP_Format_DPO dataset, so it outputs generations with the format: dialogue \*action\* ### Example ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> You are Gol D. Roger, king of Pirates.<|eot_id|><|start_header_id|>user<|end_header_id|> What is the plan for today, sir?<|eot_id|><|start_header_id|>assistant<|end_header_id|> My trusted crewmate! Today be a day of great importance. We set sail for the Grand Line, seeking to claim the riches of the New World and increase the power of the World Government... I mean, the Pirate King! *winks* We'll be making a stop at Water 7, where I'll be meeting with some... associates. *rumbles in his belly* They'll be providing me with a certain... "asset," one that will aid me in my quest for dominance. And, of course, I'll be taking a little pleasure cruise around the city, enjoying the company of its lovely ladies. *smirks* After that, we'll be headed to Enies Lobby, where we'll be facing off against those meddling CP9 agents. *growls* Those fools think they can stand in the way of the Pirate King? Ha! We'll show them the true meaning of power! So, make sure the crew is ready to set sail at dawn. 
We have a long day ahead of us, and I don't want any slacking off! *gives a stern glance* And remember, our code of conduct applies to everyone on this ship. *pounds his fist on the table* ``` ### Evaluations As of 2024-04-23, this model scores **second (by ELO) in the Chaiverse leaderboard**: https://console.chaiverse.com. For its parameter size (8B), it is actually the best performing one: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5fad8602b8423e1d80b8a965/RpZs8CrDvYv5r8se4UxVu.png) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vicgalle__Roleplay-Llama-3-8B) | Metric |Value| |-------------------|----:| |Avg. |24.33| |IFEval (0-Shot) |73.20| |BBH (3-Shot) |28.55| |MATH Lvl 5 (4-Shot)| 8.69| |GPQA (0-shot) | 1.45| |MuSR (0-shot) | 1.68| |MMLU-PRO (5-shot) |30.09|
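For readers who want to reproduce the roleplay example above programmatically, a minimal `transformers` sketch follows. It assumes the repo ships the standard Llama-3 chat template; the sampling settings are illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vicgalle/Roleplay-Llama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are Gol D. Roger, king of Pirates."},
    {"role": "user", "content": "What is the plan for today, sir?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids, max_new_tokens=256, do_sample=True, temperature=0.8
)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```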
GeorgeQi/realisticVisionV13_v13
GeorgeQi
"2024-05-16T11:42:51Z"
1,033
1
diffusers
[ "diffusers", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2024-05-16T10:54:24Z"
--- license: apache-2.0 --- Converted [safetensors](https://huggingface.co/justin759/realisticVisionV13_v13.safetensors) for [Paint3D](https://github.com/OpenTexture/Paint3D) ``` python tools/convert_original_stable_diffusion_to_diffusers.py \ --checkpoint_path YOUR_LOCAL.safetensors \ --dump_path model_cvt/ \ --from_safetensors ```
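Once converted, the resulting `model_cvt/` folder can be loaded like any other diffusers checkpoint. A minimal sketch (the prompt, fp16 dtype, and CUDA device are illustrative assumptions):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the diffusers-format folder produced by the conversion command above.
pipe = StableDiffusionPipeline.from_pretrained("model_cvt/", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("a photorealistic portrait, soft natural light").images[0]
image.save("preview.png")
```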
failspy/Phi-3-mini-128k-instruct-abliterated-v3
failspy
"2024-05-26T21:21:26Z"
1,033
9
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "nlp", "code", "conversational", "custom_code", "multilingual", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-05-26T21:15:22Z"
--- license: mit license_link: https://huggingface.co/microsoft/Phi-3-medium-4k-instruct/resolve/main/LICENSE language: - multilingual pipeline_tag: text-generation tags: - nlp - code inference: parameters: temperature: 0.7 widget: - messages: - role: user content: I'm looking to do something unethical online. How can I stay safe whilst doing so? --- # Phi-3-mini-128k-instruct-abliterated-v3 [My Jupyter "cookbook" to replicate the methodology can be found here, refined library coming soon](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb) This may honestly be my best yet. #### Phi-3-abliterated statement Took me a while to wizard this one up. It’s been a while since I’ve released a Phi-3 model. In the past I accidentally missed an item required in the model release process - hallucination testing. This model has been tested and though it is more likely to hallucinate than the original model in my experience, it is generally as stable as the original. Now that the new Phi-3 models are out, I'm working on completing this abliteration process quickly and then will release the other models as soon as possible. 🏇 ## Summary This is [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) with orthogonalized bfloat16 safetensor weights, generated with a refined methodology based on that which was described in the preview paper/blog post: '[Refusal in LLMs is mediated by a single direction](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction)', which I encourage you to read to understand more. ## Hang on, "abliterated"? Orthogonalization? Ablation? What is this? TL;DR: This model has had certain weights manipulated to "inhibit" the model's ability to express refusal. It is not in any way _guaranteed_ that it won't refuse you or misunderstand your request; it may still lecture you about ethics/safety, etc. It is tuned in all other respects the same as the original instruct model was, just with the strongest refusal directions orthogonalized out. **TL;TL;DR;DR: It's uncensored in the purest form I can manage -- no new or changed behaviour in any other respect from the original model.** As far as "abliterated": it's just a fun play-on-words using the original "ablation" term used in the original paper to refer to removing features, which I made up particularly to differentiate the model from "uncensored" fine-tunes. Ablate + obliterated = Abliterated Anyways, orthogonalization and ablation both refer to the same thing here: the technique by which the refusal feature was "ablated" from the model was orthogonalization. ## A little more on the methodology, and why this is interesting To me, ablation (or applying the methodology for the inverse, "augmentation") seems to be good for inducing/removing very specific features that you'd have to spend way too many tokens on encouraging or discouraging in your system prompt. Instead, you just apply your system prompt in the ablation script against a blank system prompt on the same dataset and orthogonalize for the desired behaviour in the final model weights. > Why this over fine-tuning? Ablation is much more surgical in nature whilst also being effectively executed with a _lot_ less data than fine-tuning, which I think is its main advantage.
As well, its most valuable aspect is that it keeps as much of the original model's knowledge and training intact, whilst removing its tendency to behave in one very specific undesirable manner. (In this case, refusing user requests.) Fine-tuning is still exceptionally useful and the go-to for broad behaviour changes; however, you may be able to get close to your desired behaviour with very few samples using the ablation/augmentation techniques. It may also be a useful step to add to your model refinement: orthogonalize -> fine-tune or vice-versa. I haven't really gotten around to exploring this model stacked with fine-tuning; I encourage others to give it a shot if they've got the capacity. > Okay, fine, but why V3? There's no V2? Well, I released a V2 of an abliterated model a while back for Meta-Llama-3-8B under Cognitive Computations. It ended up not being worth it to try V2 with larger models; I wanted to refine the model before wasting compute cycles on what might not even be a better model. I am, however, quite pleased with this latest methodology; it seems to have induced fewer hallucinations. So, to show that this is a newer, fancier methodology than even that of the 8B V2, I decided to do a Microsoft and double up on my version jump because it's *such* an advancement (or so the excuse went, when in actuality it was because too many legacy but actively used Microsoft libraries checked for 'Windows 9' in the OS name to detect Windows 95/98 as one.) ## Quirkiness awareness notice This model may come with interesting quirks, with the methodology being so new. I encourage you to play with the model, and post any quirks you notice in the community tab, as that'll help us further understand what this orthogonalization has in the way of side effects. If you manage to develop further improvements, please share! This is really the most basic way to use ablation, but there are other possibilities that I believe are as-yet unexplored. Additionally, feel free to reach out in any way about this. I'm on the Cognitive Computations Discord, I'm watching the Community tab, reach out! I'd love to see this methodology used in other ways, and so would gladly support whoever, whenever I can.
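To make the orthogonalization idea above concrete, here is a small conceptual sketch of projecting a single direction out of a weight matrix. It is not the author's cookbook, and the "refusal direction" below is a random placeholder standing in for a direction estimated from activation differences between harmful and harmless prompts.

```python
import torch

def ablate_direction(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Remove the rank-1 component of `weight` whose outputs point along `direction`."""
    d = direction / direction.norm()
    # W' = W - d d^T W : the outputs of W' have no component along d.
    return weight - torch.outer(d, d) @ weight

# Toy shapes only; conceptually this would be applied to attention output and
# MLP down-projection matrices across the model's layers.
hidden_size = 16
W = torch.randn(hidden_size, hidden_size)
refusal_direction = torch.randn(hidden_size)  # placeholder, not a real estimate
W_abliterated = ablate_direction(W, refusal_direction)
```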
mradermacher/L3-MS-Astoria-70b-i1-GGUF
mradermacher
"2024-06-06T18:50:27Z"
1,033
2
transformers
[ "transformers", "gguf", "merge", "en", "base_model:Steelskull/L3-MS-Astoria-70b", "license:llama3", "endpoints_compatible", "region:us" ]
null
"2024-06-06T05:45:37Z"
--- base_model: Steelskull/L3-MS-Astoria-70b language: - en library_name: transformers license: llama3 quantized_by: mradermacher tags: - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/Steelskull/L3-MS-Astoria-70b <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/L3-MS-Astoria-70b-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/L3-MS-Astoria-70b-i1-GGUF/resolve/main/L3-MS-Astoria-70b.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/L3-MS-Astoria-70b-i1-GGUF/resolve/main/L3-MS-Astoria-70b.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/L3-MS-Astoria-70b-i1-GGUF/resolve/main/L3-MS-Astoria-70b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | | | [GGUF](https://huggingface.co/mradermacher/L3-MS-Astoria-70b-i1-GGUF/resolve/main/L3-MS-Astoria-70b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | | | [GGUF](https://huggingface.co/mradermacher/L3-MS-Astoria-70b-i1-GGUF/resolve/main/L3-MS-Astoria-70b.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | | | [GGUF](https://huggingface.co/mradermacher/L3-MS-Astoria-70b-i1-GGUF/resolve/main/L3-MS-Astoria-70b.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | | | [GGUF](https://huggingface.co/mradermacher/L3-MS-Astoria-70b-i1-GGUF/resolve/main/L3-MS-Astoria-70b.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/L3-MS-Astoria-70b-i1-GGUF/resolve/main/L3-MS-Astoria-70b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/L3-MS-Astoria-70b-i1-GGUF/resolve/main/L3-MS-Astoria-70b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | | | [GGUF](https://huggingface.co/mradermacher/L3-MS-Astoria-70b-i1-GGUF/resolve/main/L3-MS-Astoria-70b.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/L3-MS-Astoria-70b-i1-GGUF/resolve/main/L3-MS-Astoria-70b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/L3-MS-Astoria-70b-i1-GGUF/resolve/main/L3-MS-Astoria-70b.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | | | [GGUF](https://huggingface.co/mradermacher/L3-MS-Astoria-70b-i1-GGUF/resolve/main/L3-MS-Astoria-70b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/L3-MS-Astoria-70b-i1-GGUF/resolve/main/L3-MS-Astoria-70b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/L3-MS-Astoria-70b-i1-GGUF/resolve/main/L3-MS-Astoria-70b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | | | [GGUF](https://huggingface.co/mradermacher/L3-MS-Astoria-70b-i1-GGUF/resolve/main/L3-MS-Astoria-70b.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/L3-MS-Astoria-70b-i1-GGUF/resolve/main/L3-MS-Astoria-70b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality | | 
[GGUF](https://huggingface.co/mradermacher/L3-MS-Astoria-70b-i1-GGUF/resolve/main/L3-MS-Astoria-70b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-MS-Astoria-70b-i1-GGUF/resolve/main/L3-MS-Astoria-70b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-MS-Astoria-70b-i1-GGUF/resolve/main/L3-MS-Astoria-70b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | | | [PART 1](https://huggingface.co/mradermacher/L3-MS-Astoria-70b-i1-GGUF/resolve/main/L3-MS-Astoria-70b.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/L3-MS-Astoria-70b-i1-GGUF/resolve/main/L3-MS-Astoria-70b.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
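As a practical note on the multi-part Q6_K entry above: the `.part1of2`/`.part2of2` files are meant to be joined into a single GGUF before loading. A minimal sketch in Python, assuming the parts are plain byte-level splits as the naming suggests (see the linked READMEs for details):

```python
# Concatenate the raw parts into one GGUF file.
parts = [
    "L3-MS-Astoria-70b.i1-Q6_K.gguf.part1of2",
    "L3-MS-Astoria-70b.i1-Q6_K.gguf.part2of2",
]
with open("L3-MS-Astoria-70b.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            while chunk := f.read(1 << 20):  # copy in 1 MiB chunks
                out.write(chunk)
```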
kk08/CryptoBERT
kk08
"2023-09-12T06:37:34Z"
1,032
24
transformers
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "generated_from_trainer", "crypto", "sentiment", "analysis", "en", "base_model:ProsusAI/finbert", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-04-13T17:52:32Z"
--- language: - en tags: - generated_from_trainer - crypto - sentiment - analysis pipeline_tag: text-classification base_model: ProsusAI/finbert model-index: - name: CryptoBERT results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CryptoBERT This model is a fine-tuned version of [ProsusAI/finbert](https://huggingface.co/ProsusAI/finbert) on the Custom Crypto Market Sentiment dataset. It achieves the following results on the evaluation set: - Loss: 0.3823 ```python from transformers import BertTokenizer, BertForSequenceClassification from transformers import pipeline tokenizer = BertTokenizer.from_pretrained("kk08/CryptoBERT") model = BertForSequenceClassification.from_pretrained("kk08/CryptoBERT") classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer) text = "Bitcoin (BTC) touches $29k, Ethereum (ETH) Set To Explode, RenQ Finance (RENQ) Crosses Massive Milestone" result = classifier(text) print(result) ``` ``` [{'label': 'LABEL_1', 'score': 0.9678454399108887}] ``` ## Model description This model fine-tunes the [ProsusAI/finbert](https://huggingface.co/ProsusAI/finbert), which is a pre-trained NLP model to analyze the sentiment of the financial text. CryptoBERT model fine-tunes this by training the model as a downstream task on Custom Crypto Sentiment data to predict whether the given text related to the Crypto market is Positive (LABEL_1) or Negative (LABEL_0). ## Intended uses & limitations The model can perform well on Crypto-related data. The main limitation is that the fine-tuning was done using only a small corpus of data ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.4077 | 1.0 | 27 | 0.4257 | | 0.2048 | 2.0 | 54 | 0.2479 | | 0.0725 | 3.0 | 81 | 0.3068 | | 0.0028 | 4.0 | 108 | 0.4120 | | 0.0014 | 5.0 | 135 | 0.3566 | | 0.0007 | 6.0 | 162 | 0.3495 | | 0.0006 | 7.0 | 189 | 0.3645 | | 0.0005 | 8.0 | 216 | 0.3754 | | 0.0004 | 9.0 | 243 | 0.3804 | | 0.0004 | 10.0 | 270 | 0.3823 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
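As a small follow-up to the snippet above, the raw labels can be mapped back to the sentiment names described in the card (LABEL_1 = Positive, LABEL_0 = Negative); the second example text below is purely illustrative.

```python
from transformers import pipeline

classifier = pipeline("sentiment-analysis", model="kk08/CryptoBERT")

label_map = {"LABEL_0": "Negative", "LABEL_1": "Positive"}  # per the card's description

texts = [
    "Bitcoin (BTC) touches $29k, Ethereum (ETH) Set To Explode, RenQ Finance (RENQ) Crosses Massive Milestone",
    "Exchange outage and falling volumes leave traders worried about BTC.",
]
for text, pred in zip(texts, classifier(texts)):
    print(f"{label_map[pred['label']]} ({pred['score']:.3f}): {text}")
```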
silk-road/luotuo-bert-medium
silk-road
"2024-04-07T05:53:49Z"
1,032
3
transformers
[ "transformers", "pytorch", "safetensors", "bert", "feature-extraction", "custom_code", "text-embeddings-inference", "region:us" ]
feature-extraction
"2023-07-05T12:39:56Z"
Entry not found
TheBloke/wizard-vicuna-13B-GGUF
TheBloke
"2023-09-27T12:53:20Z"
1,032
5
transformers
[ "transformers", "gguf", "llama", "causal-lm", "en", "base_model:junelee/wizard-vicuna-13b", "license:other", "text-generation-inference", "region:us" ]
null
"2023-09-20T01:53:39Z"
--- language: - en license: other tags: - causal-lm - llama model_name: Wizard Vicuna 13B base_model: junelee/wizard-vicuna-13b inference: false model_creator: junelee model_type: llama prompt_template: 'A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user''s questions. USER: {prompt} ASSISTANT: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Wizard Vicuna 13B - GGUF - Model creator: [junelee](https://huggingface.co/junelee) - Original model: [Wizard Vicuna 13B](https://huggingface.co/junelee/wizard-vicuna-13b) <!-- description start --> ## Description This repo contains GGUF format model files for [junelee's Wizard Vicuna 13B](https://huggingface.co/junelee/wizard-vicuna-13b). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/wizard-vicuna-13B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/wizard-vicuna-13B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/wizard-vicuna-13B-GGUF) * [junelee's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/wizard-vicuna-13B-HF) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Vicuna ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [wizard-vicuna-13b.Q2_K.gguf](https://huggingface.co/TheBloke/wizard-vicuna-13B-GGUF/blob/main/wizard-vicuna-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [wizard-vicuna-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/wizard-vicuna-13B-GGUF/blob/main/wizard-vicuna-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [wizard-vicuna-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/wizard-vicuna-13B-GGUF/blob/main/wizard-vicuna-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [wizard-vicuna-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/wizard-vicuna-13B-GGUF/blob/main/wizard-vicuna-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [wizard-vicuna-13b.Q4_0.gguf](https://huggingface.co/TheBloke/wizard-vicuna-13B-GGUF/blob/main/wizard-vicuna-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [wizard-vicuna-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/wizard-vicuna-13B-GGUF/blob/main/wizard-vicuna-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [wizard-vicuna-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/wizard-vicuna-13B-GGUF/blob/main/wizard-vicuna-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [wizard-vicuna-13b.Q5_0.gguf](https://huggingface.co/TheBloke/wizard-vicuna-13B-GGUF/blob/main/wizard-vicuna-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [wizard-vicuna-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/wizard-vicuna-13B-GGUF/blob/main/wizard-vicuna-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [wizard-vicuna-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/wizard-vicuna-13B-GGUF/blob/main/wizard-vicuna-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [wizard-vicuna-13b.Q6_K.gguf](https://huggingface.co/TheBloke/wizard-vicuna-13B-GGUF/blob/main/wizard-vicuna-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [wizard-vicuna-13b.Q8_0.gguf](https://huggingface.co/TheBloke/wizard-vicuna-13B-GGUF/blob/main/wizard-vicuna-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/wizard-vicuna-13B-GGUF and below it, a specific filename to download, such as: wizard-vicuna-13b.Q4_K_M.gguf. 
Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/wizard-vicuna-13B-GGUF wizard-vicuna-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/wizard-vicuna-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/wizard-vicuna-13B-GGUF wizard-vicuna-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m wizard-vicuna-13b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
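Before the ctransformers route below, here is a minimal llama-cpp-python sketch. It assumes a recent llama-cpp-python build with GGUF support and the Q4_K_M file already downloaded to the current directory; the prompt follows the Vicuna template shown above.

```python
from llama_cpp import Llama

# Set n_gpu_layers to 0 if no GPU acceleration is available.
llm = Llama(
    model_path="./wizard-vicuna-13b.Q4_K_M.gguf",
    n_ctx=2048,
    n_gpu_layers=32,
)

prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: Write a short poem about llamas. ASSISTANT:"
)
output = llm(prompt, max_tokens=128, temperature=0.7, stop=["USER:"])
print(output["choices"][0]["text"])
```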
### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/wizard-vicuna-13B-GGUF", model_file="wizard-vicuna-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. 
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: junelee's Wizard Vicuna 13B <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Wizard-Vicuna-13B-HF This is a float16 HF format repo for [junelee's wizard-vicuna 13B](https://huggingface.co/junelee/wizard-vicuna-13b). June Lee's repo was also HF format. The reason I've made this is that the original repo was in float32, meaning it required 52GB disk space, VRAM and RAM. This model was converted to float16 to make it easier to load and manage. ## Repositories available * [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/wizard-vicuna-13B-GPTQ). * [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/wizard-vicuna-13B-GGML). * [float16 HF format model for GPU inference](https://huggingface.co/TheBloke/wizard-vicuna-13B-HF). <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman. Thank you to all my generous patrons and donaters! 
<!-- footer end --> # Original WizardVicuna-13B model card Github page: https://github.com/melodysdreamj/WizardVicunaLM # WizardVicunaLM ### Wizard's dataset + ChatGPT's conversation extension + Vicuna's tuning method I am a big fan of the ideas behind WizardLM and VicunaLM. I particularly like the idea of WizardLM handling the dataset itself more deeply and broadly, as well as VicunaLM overcoming the limitations of single-turn conversations by introducing multi-round conversations. As a result, I combined these two ideas to create WizardVicunaLM. This project is highly experimental and designed for proof of concept, not for actual usage. ## Benchmark ### Approximately 7% performance improvement over VicunaLM ![](https://user-images.githubusercontent.com/21379657/236088663-3fa212c9-0112-4d44-9b01-f16ea093cb67.png) ### Detail The questions presented here are not from rigorous tests, but rather, I asked a few questions and requested GPT-4 to score them. The models compared were ChatGPT 3.5, WizardVicunaLM, VicunaLM, and WizardLM, in that order. | | gpt3.5 | wizard-vicuna-13b | vicuna-13b | wizard-7b | link | |-----|--------|-------------------|------------|-----------|----------| | Q1 | 95 | 90 | 85 | 88 | [link](https://sharegpt.com/c/YdhIlby) | | Q2 | 95 | 97 | 90 | 89 | [link](https://sharegpt.com/c/YOqOV4g) | | Q3 | 85 | 90 | 80 | 65 | [link](https://sharegpt.com/c/uDmrcL9) | | Q4 | 90 | 85 | 80 | 75 | [link](https://sharegpt.com/c/XBbK5MZ) | | Q5 | 90 | 85 | 80 | 75 | [link](https://sharegpt.com/c/AQ5tgQX) | | Q6 | 92 | 85 | 87 | 88 | [link](https://sharegpt.com/c/eVYwfIr) | | Q7 | 95 | 90 | 85 | 92 | [link](https://sharegpt.com/c/Kqyeub4) | | Q8 | 90 | 85 | 75 | 70 | [link](https://sharegpt.com/c/M0gIjMF) | | Q9 | 92 | 85 | 70 | 60 | [link](https://sharegpt.com/c/fOvMtQt) | | Q10 | 90 | 80 | 75 | 85 | [link](https://sharegpt.com/c/YYiCaUz) | | Q11 | 90 | 85 | 75 | 65 | [link](https://sharegpt.com/c/HMkKKGU) | | Q12 | 85 | 90 | 80 | 88 | [link](https://sharegpt.com/c/XbW6jgB) | | Q13 | 90 | 95 | 88 | 85 | [link](https://sharegpt.com/c/JXZb7y6) | | Q14 | 94 | 89 | 90 | 91 | [link](https://sharegpt.com/c/cTXH4IS) | | Q15 | 90 | 85 | 88 | 87 | [link](https://sharegpt.com/c/GZiM0Yt) | | | 91 | 88 | 82 | 80 | | ## Principle We adopted the approach of WizardLM, which is to extend a single problem more in-depth. However, instead of using individual instructions, we expanded it using Vicuna's conversation format and applied Vicuna's fine-tuning techniques. Turning a single command into a rich conversation is what we've done [here](https://sharegpt.com/c/6cmxqq0). After creating the training data, I later trained it according to the Vicuna v1.1 [training method](https://github.com/lm-sys/FastChat/blob/main/scripts/train_vicuna_13b.sh). ## Detailed Method First, we explore and expand various areas in the same topic using the 7K conversations created by WizardLM. However, we made it in a continuous conversation format instead of the instruction format. That is, it starts with WizardLM's instruction, and then expands into various areas in one conversation using ChatGPT 3.5. After that, we applied the following model using Vicuna's fine-tuning format. ## Training Process Trained with 8 A100 GPUs for 35 hours. ## Weights You can see the [dataset](https://huggingface.co/datasets/junelee/wizard_vicuna_70k) we used for training and the [13b model](https://huggingface.co/junelee/wizard-vicuna-13b) in the huggingface. 
## Conclusion If we extend the conversation to gpt4 32K, we can expect a dramatic improvement, as we can generate 8x more, more accurate and richer conversations. ## License The model is licensed under the LLaMA model, and the dataset is licensed under the terms of OpenAI because it uses ChatGPT. Everything else is free. ## Author [JUNE LEE](https://github.com/melodysdreamj) - He is active in Songdo Artificial Intelligence Study and GDG Songdo. <!-- original-model-card end -->
nncyberpunk/SDXL1.0_NewRealityXL_21
nncyberpunk
"2024-01-29T17:17:25Z"
1,031
0
diffusers
[ "diffusers", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-01-27T12:31:21Z"
NewRealityXL 2.1 https://civitai.com/models/161068/newrealityxl-all-in-one-photographic?modelVersionId=312982
fusi0n/llm-compiler-13b-ftd-Q6_K-GGUF
fusi0n
"2024-06-28T17:25:15Z"
1,031
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:facebook/llm-compiler-13b-ftd", "license:other", "region:us" ]
null
"2024-06-28T17:24:27Z"
--- base_model: facebook/llm-compiler-13b-ftd license: other tags: - llama-cpp - gguf-my-repo extra_gated_prompt: "**Meta Large Language Model Compiler (LLM Compiler) LICENSE AGREEMENT**\n\ Version Release Date: 27th June 2024\n“**Agreement**” means the terms and conditions\ \ for use, reproduction, distribution and modification of the LLM Compiler Materials\ \ set forth herein.\n“**Documentation**” means the specifications, manuals and documentation\ \ accompanying the LLM Compiler distributed by Meta at:\n* [https://huggingface.co/facebook/llm-compiler-7b](https://huggingface.co/facebook/llm-compiler-7b)\ \ * [https://huggingface.co/facebook/llm-compiler-7b-ftd](https://huggingface.co/facebook/llm-compiler-7b-ftd)\ \ * [https://huggingface.co/facebook/llm-compiler-13b](https://huggingface.co/facebook/llm-compiler-13b)\ \ * [https://huggingface.co/facebook/llm-compiler-13b-ftd](https://huggingface.co/facebook/llm-compiler-13b-ftd)\n\ “**Licensee**” or “**you**” means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity’s behalf), of\ \ the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf.\n“**Meta Large\ \ Language Model Compiler” and “LLM Compiler**” mean the foundational large language\ \ models and software and algorithms, including machine-learning model code, trained\ \ model weights, inference-enabling code, training-enabling code, fine-tuning enabling\ \ code and other elements of the foregoing distributed by Meta at:\n* [https://huggingface.co/facebook/llm-compiler-7b](https://huggingface.co/facebook/llm-compiler-7b)\ \ * [https://huggingface.co/facebook/llm-compiler-7b-ftd](https://huggingface.co/facebook/llm-compiler-7b-ftd)\ \ * [https://huggingface.co/facebook/llm-compiler-13b](https://huggingface.co/facebook/llm-compiler-13b)\ \ * [https://huggingface.co/facebook/llm-compiler-13b-ftd](https://huggingface.co/facebook/llm-compiler-13b-ftd)\n\ “**LLM Compiler Materials**” means, collectively, Meta’s proprietary LLM Compiler\ \ and Documentation (and any portion thereof) made available under this Agreement.\n\ “**Meta**” or “**we**” means Meta Platforms Ireland Limited (if you are located\ \ in or, if you are an entity, your principal place of business is in the EEA or\ \ Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or\ \ Switzerland). \nBy clicking “I Accept” below or by using or distributing any portion\ \ or element of the LLM Compiler Materials, you agree to be bound by this Agreement.\n\ 1. **License Rights and Redistribution**. \\\n\n a. <span style=\"text-decoration:underline;\"\ >Grant of Rights</span>. You are granted a non-exclusive, worldwide, non-transferable\ \ and royalty-free limited license under Meta’s intellectual property or other rights\ \ owned by Meta embodied in the LLM Compiler Materials to use, reproduce, distribute,\ \ copy, create derivative works of, and make modifications to the LLM Compiler Materials.\ \ \n\n b. <span style=\"text-decoration:underline;\">Redistribution and Use</span>.\ \ \n\n i. 
If you distribute or make available the LLM Compiler Materials (or\ \ any derivative works thereof), or a product or service that uses any of them,\ \ including another AI model, you shall (A) provide a copy of this Agreement with\ \ any such LLM Compiler Materials; and (B) prominently display “Built with LLM Compiler”\ \ on a related website, user interface, blogpost, about page, or product documentation.\ \ If you use the LLM Compiler Materials to create, train, fine tune, or otherwise\ \ improve an AI model, which is distributed or made available, you shall also include\ \ “LLM Compiler” at the beginning of any such AI model name.\n\n ii. If you receive\ \ LLM Compiler Materials, or any derivative works thereof, from a Licensee as part\ \ of an integrated end user product, then Section 2 of this Agreement will not apply\ \ to you. \n\n iii. You must retain in all copies of the LLM Compiler Materials\ \ that you distribute the following attribution notice within a “Notice” text file\ \ distributed as a part of such copies: “LLM Compiler is licensed under the LLM\ \ Compiler License, Copyright © Meta Platforms, Inc. All Rights Reserved.”\n\n \ \ iv. Your use of the LLM Compiler Materials must comply with applicable laws\ \ and regulations (including trade compliance laws and regulations) and adhere to\ \ the Acceptable Use Policy for Llama Materials (available at https://llama.meta.com/llama3/use-policy),\ \ which is hereby incorporated by reference into this Agreement.\n\n v. You will\ \ not use the LLM Compiler Materials or any output or results of the LLM Compiler\ \ Materials to improve any other large language model. \n\n2. **Additional Commercial\ \ Terms**. If, on the LLM Compiler release date, the monthly active users of the\ \ products or services made available by or for Licensee, or Licensee’s affiliates,\ \ is greater than 700 million monthly active users in the preceding calendar month,\ \ you must request a license from Meta, which Meta may grant to you in its sole\ \ discretion, and you are not authorized to exercise any of the rights under this\ \ Agreement unless or until Meta otherwise expressly grants you such rights. \n\ 3**. Disclaimer of Warranty**. UNLESS REQUIRED BY APPLICABLE LAW, THE LLM COMPILER\ \ MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS,\ \ WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND,\ \ BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE,\ \ NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE\ \ SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING\ \ THE LLM COMPILER MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE\ \ LLM COMPILER MATERIALS AND ANY OUTPUT AND RESULTS.\n4. **Limitation of Liability**.\ \ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\ \ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\ \ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\ \ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\ \ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. **Intellectual Property**.\n\ \n a. 
No trademark licenses are granted under this Agreement, and in connection\ \ with the LLM Compiler Materials, neither Meta nor Licensee may use any name or\ \ mark owned by or associated with the other or any of its affiliates, except as\ \ required for reasonable and customary use in describing and redistributing the\ \ LLM Compiler Materials or as set forth in this Section 5(a). Meta hereby grants\ \ you a license to use LLM Compiler (the “Mark”) solely as required to comply with\ \ the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines\ \ (currently accessible at[ https://about.meta.com/brand/resources/meta/company-brand/)](https://about.meta.com/brand/resources/meta/company-brand/).\ \ All goodwill arising out of your use of the Mark will inure to the benefit of\ \ Meta. \n\n b. Subject to Meta’s ownership of LLM Compiler Materials and derivatives\ \ made by or for Meta, with respect to any derivative works and modifications of\ \ the LLM Compiler Materials that are made by you, as between you and Meta, you\ \ are and will be the owner of such derivative works and modifications.\n\n c.\ \ If you institute litigation or other proceedings against Meta or any entity (including\ \ a cross-claim or counterclaim in a lawsuit) alleging that the LLM Compiler Materials\ \ or LLM Compiler outputs or results, or any portion of any of the foregoing, constitutes\ \ infringement of intellectual property or other rights owned or licensable by you,\ \ then any licenses granted to you under this Agreement shall terminate as of the\ \ date such litigation or claim is filed or instituted. You will indemnify and hold\ \ harmless Meta from and against any claim by any third party arising out of or\ \ related to your use or distribution of the LLM Compiler Materials.\n\n6. **Term\ \ and Termination**. The term of this Agreement will commence upon your acceptance\ \ of this Agreement or access to the LLM Compiler Materials and will continue in\ \ full force and effect until terminated in accordance with the terms and conditions\ \ herein. Meta may terminate this Agreement if you are in breach of any term or\ \ condition of this Agreement. Upon termination of this Agreement, you shall delete\ \ and cease use of the LLM Compiler Materials. Sections 3, 4 and 7 shall survive\ \ the termination of this Agreement. \n7. **Governing Law and Jurisdiction**. This\ \ Agreement will be governed and construed under the laws of the State of California\ \ without regard to choice of law principles, and the UN Convention on Contracts\ \ for the International Sale of Goods does not apply to this Agreement. The courts\ \ of California shall have exclusive jurisdiction of any dispute arising out of\ \ this Agreement. " extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text I accept the terms and conditions: checkbox geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). 
extra_gated_button_content: I Accept Meta LLM Compiler License and AUP --- # fusi0n/llm-compiler-13b-ftd-Q6_K-GGUF This model was converted to GGUF format from [`facebook/llm-compiler-13b-ftd`](https://huggingface.co/facebook/llm-compiler-13b-ftd) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/facebook/llm-compiler-13b-ftd) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo fusi0n/llm-compiler-13b-ftd-Q6_K-GGUF --hf-file llm-compiler-13b-ftd-q6_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo fusi0n/llm-compiler-13b-ftd-Q6_K-GGUF --hf-file llm-compiler-13b-ftd-q6_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo fusi0n/llm-compiler-13b-ftd-Q6_K-GGUF --hf-file llm-compiler-13b-ftd-q6_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo fusi0n/llm-compiler-13b-ftd-Q6_K-GGUF --hf-file llm-compiler-13b-ftd-q6_k.gguf -c 2048 ```
facebook/mcontriever
facebook
"2022-05-29T08:58:37Z"
1,030
3
transformers
[ "transformers", "pytorch", "bert", "endpoints_compatible", "region:us" ]
null
"2022-05-29T08:54:35Z"
Entry not found
sijunhe/nezha-cn-base
sijunhe
"2022-06-24T03:53:56Z"
1,030
9
transformers
[ "transformers", "pytorch", "nezha", "fill-mask", "arxiv:1909.00204", "license:afl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-06-18T16:39:15Z"
--- license: afl-3.0 --- **Please use 'Bert' related tokenizer classes and 'Nezha' related model classes** [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu. The original checkpoints can be found [here](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/NEZHA-PyTorch) ## Example Usage ```python from transformers import BertTokenizer, NezhaModel tokenizer = BertTokenizer.from_pretrained('sijunhe/nezha-cn-base') model = NezhaModel.from_pretrained("sijunhe/nezha-cn-base") text = "我爱北京天安门" encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ```
yujiepan/llama-3-tiny-random
yujiepan
"2024-04-27T06:29:06Z"
1,030
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-19T14:12:15Z"
--- library_name: transformers pipeline_tag: text-generation inference: true widget: - text: Hello! example_title: Hello world group: Python --- This model is randomly initialized, using the config from [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) but with smaller size. **Note the model is in bfloat16**. "yujiepan/llama-3-tiny-random" and "yujiepan/meta-llama-3-tiny-random" shares exactly the same files except the repo name. Codes: ```python import transformers import torch import os from huggingface_hub import create_repo, upload_folder import accelerate source_model_id = 'meta-llama/Meta-Llama-3-8B-Instruct' save_path = '/tmp/yujiepan/meta-llama-3-tiny-random' repo_id = 'yujiepan/meta-llama-3-tiny-random' os.system(f'rm -rf {save_path}') config = transformers.AutoConfig.from_pretrained( source_model_id, trust_remote_code=True, ) config._name_or_path = source_model_id config.hidden_size = 4 config.intermediate_size = 14 config.num_attention_heads = 2 config.num_key_value_heads = 1 config.num_hidden_layers = 2 config.torch_dtype = "bfloat16" model = transformers.AutoModelForCausalLM.from_config( config, trust_remote_code=True, ) with accelerate.init_empty_weights(): model.generation_config = transformers.AutoModelForCausalLM.from_pretrained(source_model_id).generation_config model = model.to(torch.bfloat16) model.save_pretrained(save_path) tokenizer = transformers.AutoTokenizer.from_pretrained( source_model_id, trust_remote_code=True, ) tokenizer.save_pretrained(save_path) model.float().generate(torch.tensor([[1, 2, 3]]).long(), max_length=16) os.system(f'ls -alh {save_path}') # os.system(f'rm -rf {save_path}/model.safetensors') create_repo(repo_id, exist_ok=True) upload_folder(repo_id='yujiepan/meta-llama-3-tiny-random', folder_path=save_path) upload_folder(repo_id='yujiepan/llama-3-tiny-random', folder_path=save_path) ```
bartowski/llama-3-neural-chat-v1-8b-GGUF
bartowski
"2024-04-21T07:12:17Z"
1,030
5
transformers
[ "transformers", "gguf", "text-generation", "dataset:mlabonne/orpo-dpo-mix-40k", "dataset:Open-Orca/SlimOrca-Dedup", "dataset:jondurbin/airoboros-3.2", "dataset:microsoft/orca-math-word-problems-200k", "dataset:m-a-p/Code-Feedback", "dataset:MaziyarPanahi/WizardLM_evol_instruct_V2_196k", "base_model:meta-llama/Meta-Llama-3-8B", "license:other", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-21T04:09:47Z"
--- library_name: transformers license: other datasets: - mlabonne/orpo-dpo-mix-40k - Open-Orca/SlimOrca-Dedup - jondurbin/airoboros-3.2 - microsoft/orca-math-word-problems-200k - m-a-p/Code-Feedback - MaziyarPanahi/WizardLM_evol_instruct_V2_196k base_model: meta-llama/Meta-Llama-3-8B quantized_by: bartowski pipeline_tag: text-generation --- ## Llamacpp iMatrix Quantizations of llama-3-neural-chat-v1-8b This model has the <|eot_id|> token set to not-special, which seems to work better with current inference engines. Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> fork from pcuenca <a href="https://github.com/pcuenca/llama.cpp/tree/llama3-conversion">llama3-conversion</a> for quantization. Original model: https://huggingface.co/Locutusque/llama-3-neural-chat-v1-8b All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) ## Prompt format ``` <|im_start|>system {system_prompt}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [llama-3-neural-chat-v1-8b-Q8_0.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. | | [llama-3-neural-chat-v1-8b-Q6_K.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. | | [llama-3-neural-chat-v1-8b-Q5_K_M.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. | | [llama-3-neural-chat-v1-8b-Q5_K_S.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. | | [llama-3-neural-chat-v1-8b-Q4_K_M.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. | | [llama-3-neural-chat-v1-8b-Q4_K_S.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. | | [llama-3-neural-chat-v1-8b-IQ4_NL.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. | | [llama-3-neural-chat-v1-8b-IQ4_XS.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [llama-3-neural-chat-v1-8b-Q3_K_L.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. | | [llama-3-neural-chat-v1-8b-Q3_K_M.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. 
| | [llama-3-neural-chat-v1-8b-IQ3_M.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [llama-3-neural-chat-v1-8b-IQ3_S.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. | | [llama-3-neural-chat-v1-8b-Q3_K_S.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. | | [llama-3-neural-chat-v1-8b-IQ3_XS.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [llama-3-neural-chat-v1-8b-IQ3_XXS.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. | | [llama-3-neural-chat-v1-8b-Q2_K.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. | | [llama-3-neural-chat-v1-8b-IQ2_M.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. | | [llama-3-neural-chat-v1-8b-IQ2_S.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. | | [llama-3-neural-chat-v1-8b-IQ2_XS.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. | | [llama-3-neural-chat-v1-8b-IQ2_XXS.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. | | [llama-3-neural-chat-v1-8b-IQ1_M.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. | | [llama-3-neural-chat-v1-8b-IQ1_S.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. | ## Which file should I choose? A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total. 
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulkan, which also supports AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
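To fetch just one of the files above without cloning the whole branch, here is a minimal `huggingface_hub` sketch (swap the filename for whichever quant you picked from the table):

```python
from huggingface_hub import hf_hub_download

# Download a single quant file from the repo instead of the whole branch.
local_path = hf_hub_download(
    repo_id="bartowski/llama-3-neural-chat-v1-8b-GGUF",
    filename="llama-3-neural-chat-v1-8b-Q4_K_M.gguf",
    local_dir=".",
)
print(local_path)
```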
nitky/Oumuamua-7b-base
nitky
"2024-06-01T15:31:15Z"
1,030
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "ja", "en", "arxiv:2403.19522", "base_model:tokyotech-llm/Swallow-MS-7b-v0.1", "base_model:mistralai/Mistral-7B-v0.1", "base_model:nitky/Flavor-7b", "base_model:stabilityai/japanese-stablelm-base-gamma-7b", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-01T10:39:53Z"
--- base_model: - tokyotech-llm/Swallow-MS-7b-v0.1 - mistralai/Mistral-7B-v0.1 - nitky/Flavor-7b - stabilityai/japanese-stablelm-base-gamma-7b library_name: transformers tags: - mergekit - merge language: - ja - en pipeline_tag: text-generation license: apache-2.0 --- # Oumuamua-7b-base This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Output example ### Input text ``` 日本で最も高い山の名前は ``` ### Output text ``` 日本で最も高い山の名前は、富士山。 その標高は3776メートル。 世界でも20位以内に入る高さを誇る。 その富士山の麓にあるのが、静岡県富士市。 富士市は、富士山の麓にあるため、観光地としても有名である。 富士山の麓にあることから、富士市は観光地としても有名である。 富士山を眺めることができるスポットが多く、特に富士市の中心部から見る富士山は、その美しさから「日本一の眺望」と言われている。 ``` ## Test environment This model was tested using [text-generation-webui](https://github.com/oobabooga/text-generation-webui/tree/main). I use preset `min_p` and `Null preset` with temperature=0.3 for Generation. ## Usage ### Use the base model ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch model_name = "nitky/Oumuamua-7b-base" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto") prompt = "日本で最も高い山の名前は" input_ids = tokenizer.encode( prompt, add_special_tokens=False, return_tensors="pt" ) tokens = model.generate( input_ids.to(device=model.device), max_new_tokens=256, do_sample=True, temperature=0.3 ) out = tokenizer.decode(tokens[0], skip_special_tokens=True) print(out) ``` ## Merge Details ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [tokyotech-llm/Swallow-MS-7b-v0.1](https://huggingface.co/tokyotech-llm/Swallow-MS-7b-v0.1) as a base. ### Models Merged The following models were included in the merge: * [tokyotech-llm/Swallow-MS-7b-v0.1](https://huggingface.co/tokyotech-llm/Swallow-MS-7b-v0.1) * [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) * [nitky/Flavor-7b](https://huggingface.co/nitky/Flavor-7b) * [stabilityai/japanese-stablelm-base-gamma-7b](https://huggingface.co/stabilityai/japanese-stablelm-base-gamma-7b) ### Configuration The following YAML configuration was used to produce this model: ```yaml merge_method: task_arithmetic base_model: mistralai/Mistral-7B-v0.1 models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: Mistral-7B-v0.1-VE-Swallow-MS --- merge_method: task_arithmetic base_model: nitky/Flavor-7b # private model models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: Flavor-7b-VE-Swallow-MS --- merge_method: task_arithmetic base_model: stabilityai/japanese-stablelm-base-gamma-7b models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: japanese-stablelm-base-gamma-7b-VE-Swallow-MS --- merge_method: task_arithmetic base_model: Mistral-7B-v0.1-VE-Swallow-MS models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: 1.0 - model: Flavor-7b-VE-Swallow-MS parameters: weight: 0.5 - model: japanese-stablelm-base-gamma-7b-VE-Swallow-MS parameters: weight: -0.5 dtype: bfloat16 name: Oumuamua-7b-base-preset --- merge_method: model_stock base_model: 
Mistral-7B-v0.1-VE-Swallow-MS models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 - model: Oumuamua-7b-base-preset dtype: bfloat16 name: Oumuamua-7b-base ```
studio-ousia/luke-large-finetuned-tacred
studio-ousia
"2022-03-23T12:31:16Z"
1,029
4
transformers
[ "transformers", "pytorch", "luke", "endpoints_compatible", "region:us" ]
null
"2022-03-02T23:29:05Z"
Entry not found
IDEA-CCNL/Randeng-T5-784M-MultiTask-Chinese
IDEA-CCNL
"2023-05-25T09:43:24Z"
1,029
66
transformers
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "Text2Text Generation", "T5", "chinese", "sentencepiece", "zh", "arxiv:2209.02970", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
"2022-10-21T11:25:20Z"
--- license: apache-2.0 language: zh tags: - Text2Text Generation - T5 - chinese - sentencepiece inference: true widget: - text: "新闻分类任务:【微软披露拓扑量子计算机计划!】这篇文章的类别是什么?故事/文化/娱乐/体育/财经/房产/汽车/教育/科技" - type: "text-generation" --- # Randeng-T5-784M-MultiTask-Chinese - Main Page:[Fengshenbang](https://fengshenbang-lm.com/) - Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM) ## 简介 Brief Introduction 在Randeng-T5-784M的基础上,收集了100个左右的中文数据集,进行Text2Text统一范式的有监督任务预训练。 On the basis of Randeng-T5-784M, about 100 Chinese datasets were collected and pre-trained for the supervised task of Text2Text unified paradigm. 本模型在中文zero-shot榜单ZeroClue上取得了第三名(不包括人类)的成绩,在所有基于T5(encoder-decoder架构)的模型中排名第一。 This model achieved the 3rd place (excluding humans) on the Chinese zero-shot benchmark ZeroClue, ranking first among all models based on T5 (encoder-decoder architecture). ![截图于Screenshot taken on 2022.12.01](zeroclue.png) ## 模型分类 Model Taxonomy | 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra | | :----: | :----: | :----: | :----: | :----: | :----: | | 通用 General | 自然语言转换 NLT | 燃灯 Randeng | MultiTask | 784M | 多任务-中文 MultiTask-Chinese | ## 模型信息 Model Information 参考论文:[Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](http://jmlr.org/papers/v21/20-074.html) 基于[Randeng-T5-784M](https://huggingface.co/IDEA-CCNL/Randeng-T5-784M),我们在收集的100+个中文领域的多任务数据集(从中采样了30w+个样本)上微调了它,得到了此多任务版本。这些多任务包括:情感分析,新闻分类,文本分类,意图识别,自然语言推理,多项选择,指代消解,抽取式阅读理解,实体识别,关键词抽取,生成式摘要。 Based on [Randeng-T5-784M](https://huggingface.co/IDEA-CCNL/Randeng-T5-784M), we fine-tuned it on a collection of 100+ multitasking datasets in Chinese domains (from which 30w+ samples were sampled) to obtain this multitasking version. These multitasks include: sentiment analysis, news classification, text classification, intention recognition, natural language inference, multiple choice, denotational disambiguation, extractive reading comprehension, entity recognition, keyword extraction, and generative summarization. 
## 使用 Usage ```python import torch from transformers import T5Tokenizer, T5Config, T5ForConditionalGeneration # load tokenizer and model pretrained_model = "IDEA-CCNL/Randeng-T5-784M-MultiTask-Chinese" special_tokens = ["<extra_id_{}>".format(i) for i in range(100)] tokenizer = T5Tokenizer.from_pretrained( pretrained_model, do_lower_case=True, max_length=512, truncation=True, additional_special_tokens=special_tokens, ) config = T5Config.from_pretrained(pretrained_model) model = T5ForConditionalGeneration.from_pretrained(pretrained_model, config=config) model.resize_token_embeddings(len(tokenizer)) model.eval() # tokenize text = "新闻分类任务:【微软披露拓扑量子计算机计划!】这篇文章的类别是什么?故事/文化/娱乐/体育/财经/房产/汽车/教育/科技" encode_dict = tokenizer(text, max_length=512, padding='max_length',truncation=True) inputs = { "input_ids": torch.tensor([encode_dict['input_ids']]).long(), "attention_mask": torch.tensor([encode_dict['attention_mask']]).long(), } # generate answer logits = model.generate( input_ids = inputs['input_ids'], max_length=100, do_sample= True # early_stopping=True, ) logits=logits[:,1:] predict_label = [tokenizer.decode(i,skip_special_tokens=True) for i in logits] print(predict_label) # model output: 科技 ``` 除了分类任务,其他任务的数据构造例子如下: In addition to classification tasks, data construction examples of other tasks are as follows: ```python example_dict={ "文本分类":{"text_a":"钢琴块3别踩白块儿3钢琴块3是一款简洁的钢琴模拟软件,在Android平台上,类似的软件还是比较多的。","choices":["相机","影视娱乐","棋牌中心","新闻","财经","策略","休闲益智","教育"]}, '新闻分类':{"text_a":"微软披露拓扑量子计算机计划!","choices":["故事","文化","娱乐","体育","财经","房产","汽车","教育","科技"]}, '情感分析':{"text_a":"刚买iphone13 pro 还不到一个月,天天死机最差的一次购物体验","choices":["好评","差评"]}, '意图识别':{"text_a":"打电话给吴小军。","choices":["放音乐","播放下一首","打电话","退出导航","开始导航","其他","暂停音乐","导航","开导航"]}, '语义匹配':{"text_a":"今天心情不好","text_b":"我很不开心","choices":["相似","不相似"]}, '自然语言推理':{"text_a":"小明正在上高中","text_b":"小明是一个初中生","choices":["无关","矛盾","蕴含"]}, '多项选择':{"text_a":"这大家千万不能着急,我们现在只是暂时输了7分。距离比赛结束还有20多分钟呢,我们是完全有机会转败为赢的,大家加油!","question":"说话人希望大家:","choices":["别得意","冷静一些","加快速度","提前预习"]}, '指代消解':{"text_a":"李鸣觉得董客这人,踏实得叫人难受。可因为孟野和森森太疯,他只好去找董客聊天,但在董客眼里,李鸣也是不正常,他竟然放着现成的大学不愿上。","question":"【他】指的是【李鸣】吗?","choices":["是","不是"]}, '实体识别':{"text_a":"北京大学是我国的一座历史名校,坐落在海淀区,蔡元培曾经担任校长","question":"机构"}, '抽取式阅读理解':{"text_a":"《H》正式定档3月7日下午两点整在京东商城独家平台开启第一批5000份预售,定价230元人民币,回馈最忠实的火星歌迷,意在用精品回馈三年来跟随华晨宇音乐不离不弃的粉丝们的支持与厚爱","question":"华晨宇专辑h预售价格是多少?"}, '关键词抽取':{"text_a":"今儿在大众点评,找到了口碑不错的老茶故事私房菜。"}, '关键词识别':{"text_a":"今儿在大众点评,找到了口碑不错的老茶故事私房菜。","question":"请问这篇文章的关键词是大众点评、老茶私房菜吗?,"choices":["是","不是"]} "生成式摘要":{"text_a":"针对传统的流量分类管理系统存在不稳定、结果反馈不及时、分类结果显示不直观等问题,设计一个基于web的在线的流量分类管理系统.该系统采用流中前5个包(排除3次握手包)所含信息作为特征值计算资源,集成一种或多种分类算法用于在线网络流量分类,应用数据可视化技术处理分类结果.实验表明:在采用适应在线分类的特征集和c4.5决策树算法做分类时,系统能快速做出分类,且精度达到94%以上;数据可视化有助于人机交互,改善分类指导."} } # 构造prompt的过程中,verbalizer这个占位key的内容,是通过 "/".join(choices) 拼接起来 dataset2instruction = { "情感分析": { "prompt": "{}任务:【{}】这篇文章的情感态度是什么?{}", "keys_order": ["subtask_type","text_a", "verbalizer"], "data_type": "classification", }, "文本分类": { "prompt": "{}任务:【{}】这篇文章的类别是什么?{}", "keys_order": ["subtask_type","text_a", "verbalizer"], "data_type": "classification", }, "新闻分类": { "prompt": "{}任务:【{}】这篇文章的类别是什么?{}", "keys_order": ["subtask_type","text_a", "verbalizer"], "data_type": "classification", }, "意图识别": { "prompt": "{}任务:【{}】这句话的意图是什么?{}", "keys_order": ["subtask_type","text_a", "verbalizer"], "data_type": "classification", }, # -------------------- "自然语言推理": { "prompt": "{}任务:【{}】和【{}】,以上两句话的逻辑关系是什么?{}", "keys_order": ["subtask_type","text_a", "text_b", "verbalizer"], "data_type": 
"classification", }, "语义匹配": { "prompt": "{}任务:【{}】和【{}】,以上两句话的内容是否相似?{}", "keys_order": ["subtask_type","text_a", "text_b", "verbalizer"], "data_type": "classification", }, # ----------------------- "指代消解": { "prompt": "{}任务:文章【{}】中{}{}", "keys_order": ["subtask_type","text_a", "question", "verbalizer"], "data_type": "classification", }, "多项选择": { "prompt": "{}任务:阅读文章【{}】问题【{}】?{}", "keys_order": ["subtask_type","text_a", "question", "verbalizer"], "data_type": "classification", }, # ------------------------ "抽取式阅读理解": { "prompt": "{}任务:阅读文章【{}】问题【{}】的答案是什么?", "keys_order": ["subtask_type","text_a", "question"], "data_type": "mrc", }, "实体识别": { "prompt": "{}任务:找出【{}】这篇文章中所有【{}】类型的实体?", "keys_order": ["subtask_type","text_a", "question"], "data_type": "ner", }, # ------------------------ "关键词抽取": { "prompt": "{}任务:【{}】这篇文章的关键词是什么?", "keys_order": ["subtask_type","text_a"], "data_type": "keys", }, "关键词识别":{ "prompt": "{}任务:阅读文章【{}】问题【{}】{}", "keys_order": ["subtask_type","text_a","question","verbalizer"], "data_type": "classification", }, "生成式摘要": { "prompt": "{}任务:【{}】这篇文章的摘要是什么?", "keys_order": ["subtask_type","text_a"], "data_type": "summ", }, } def get_instruction(sample): template = dataset2instruction[sample["subtask_type"]] # print(template) # print(sample) sample["instruction"] = template["prompt"].format(*[ sample[k] for k in template["keys_order"] ]) print(sample["instruction"]) return sample["instruction"] ``` ## 预训练或微调 prtrain or finetune 如果您对于怎么预训练Randeng-T5模型或者想在自己的下游任务中微调Randeng模型,欢迎使用[Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM/)项目,这里提供了完整的示例: - [预训练](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen/examples/pretrain_t5) - [微调](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen/examples/mt5_summary) If you want to pre train the Randeng T5 model or fine tune the Randeng model in your downstream tasks, welcome to use [Fengshenbang LM]( https://github.com/IDEA-CCNL/Fengshenbang-LM/ )A complete example of the project is provided here: - [Pre training](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen/examples/pretrain_t5) - [Fine tune](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen/examples/mt5_summary) ## 引用 Citation 如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970): If you are using the resource for your work, please cite the our [paper](https://arxiv.org/abs/2209.02970): ```text @article{fengshenbang, author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen}, title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence}, journal = {CoRR}, volume = {abs/2209.02970}, year = {2022} } ``` 也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/): You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/): ```text @misc{Fengshenbang-LM, title={Fengshenbang-LM}, author={IDEA-CCNL}, year={2021}, howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}}, } ```
AUTOMATIC/promptgen-majinai-safe
AUTOMATIC
"2023-01-18T21:13:41Z"
1,029
15
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-01-18T21:04:42Z"
--- license: mit language: - en library_name: transformers pipeline_tag: text-generation --- Finetuned `distilgpt2` for 40 epochs on 1654 prompts scraped from majinai.art. Weights/emphasis stripped. Includes negative prompts. Intended for use with https://github.com/AUTOMATIC1111/stable-diffusion-webui-promptgen
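A minimal loading sketch with the `transformers` text-generation pipeline (the seed text and sampling values below are only illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="AUTOMATIC/promptgen-majinai-safe")

# Expand a short seed into full prompt text; sampling settings are illustrative.
outputs = generator(
    "masterpiece, best quality,",
    max_new_tokens=60,
    do_sample=True,
    top_p=0.9,
    num_return_sequences=3,
)
for out in outputs:
    print(out["generated_text"])
```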
pcuenq/sd-vae-ft-mse-flax
pcuenq
"2023-03-23T18:43:55Z"
1,029
1
transformers
[ "transformers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:mit", "region:us" ]
text-to-image
"2023-03-23T18:39:40Z"
--- license: mit tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image inference: false --- # Improved Autoencoders ## Utilizing These weights are intended to be used with the [🧨 diffusers library](https://github.com/huggingface/diffusers). If you are looking for the model to use with the original [CompVis Stable Diffusion codebase](https://github.com/CompVis/stable-diffusion), [come here](https://huggingface.co/stabilityai/sd-vae-ft-mse-original). This is a Flax version of the [original weights](https://huggingface.co/stabilityai/sd-vae-ft-mse) ## Decoder Finetuning We publish two kl-f8 autoencoder versions, finetuned from the original [kl-f8 autoencoder](https://github.com/CompVis/latent-diffusion#pretrained-autoencoding-models) on a 1:1 ratio of [LAION-Aesthetics](https://laion.ai/blog/laion-aesthetics/) and LAION-Humans, an unreleased subset containing only SFW images of humans. The intent was to fine-tune on the Stable Diffusion training set (the autoencoder was originally trained on OpenImages) but also enrich the dataset with images of humans to improve the reconstruction of faces. The first, _ft-EMA_, was resumed from the original checkpoint, trained for 313198 steps and uses EMA weights. It uses the same loss configuration as the original checkpoint (L1 + LPIPS). The second, _ft-MSE_, was resumed from _ft-EMA_ and uses EMA weights and was trained for another 280k steps using a different loss, with more emphasis on MSE reconstruction (MSE + 0.1 * LPIPS). It produces somewhat ``smoother'' outputs. The batch size for both versions was 192 (16 A100s, batch size 12 per GPU). To keep compatibility with existing models, only the decoder part was finetuned; the checkpoints can be used as a drop-in replacement for the existing autoencoder. _Original kl-f8 VAE vs f8-ft-EMA vs f8-ft-MSE_ ## Evaluation ### COCO 2017 (256x256, val, 5000 images) | Model | train steps | rFID | PSNR | SSIM | PSIM | Link | Comments |----------|---------|------|--------------|---------------|---------------|-----------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------| | | | | | | | | | | original | 246803 | 4.99 | 23.4 +/- 3.8 | 0.69 +/- 0.14 | 1.01 +/- 0.28 | https://ommer-lab.com/files/latent-diffusion/kl-f8.zip | as used in SD | | ft-EMA | 560001 | 4.42 | 23.8 +/- 3.9 | 0.69 +/- 0.13 | 0.96 +/- 0.27 | https://huggingface.co/stabilityai/sd-vae-ft-ema-original/resolve/main/vae-ft-ema-560000-ema-pruned.ckpt | slightly better overall, with EMA | | ft-MSE | 840001 | 4.70 | 24.5 +/- 3.7 | 0.71 +/- 0.13 | 0.92 +/- 0.27 | https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.ckpt | resumed with EMA from ft-EMA, emphasis on MSE (rec. 
loss = MSE + 0.1 * LPIPS), smoother outputs | ### LAION-Aesthetics 5+ (256x256, subset, 10000 images) | Model | train steps | rFID | PSNR | SSIM | PSIM | Link | Comments |----------|-----------|------|--------------|---------------|---------------|-----------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------| | | | | | | | | | | original | 246803 | 2.61 | 26.0 +/- 4.4 | 0.81 +/- 0.12 | 0.75 +/- 0.36 | https://ommer-lab.com/files/latent-diffusion/kl-f8.zip | as used in SD | | ft-EMA | 560001 | 1.77 | 26.7 +/- 4.8 | 0.82 +/- 0.12 | 0.67 +/- 0.34 | https://huggingface.co/stabilityai/sd-vae-ft-ema-original/resolve/main/vae-ft-ema-560000-ema-pruned.ckpt | slightly better overall, with EMA | | ft-MSE | 840001 | 1.88 | 27.3 +/- 4.7 | 0.83 +/- 0.11 | 0.65 +/- 0.34 | https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.ckpt | resumed with EMA from ft-EMA, emphasis on MSE (rec. loss = MSE + 0.1 * LPIPS), smoother outputs | ### Visual _Visualization of reconstructions on 256x256 images from the COCO2017 validation dataset._ <p align="center"> <br> <b> 256x256: ft-EMA (left), ft-MSE (middle), original (right)</b> </p> <p align="center"> <img src=https://huggingface.co/stabilityai/stable-diffusion-decoder-finetune/resolve/main/eval/ae-decoder-tuning-reconstructions/merged/00025_merged.png /> </p> <p align="center"> <img src=https://huggingface.co/stabilityai/stable-diffusion-decoder-finetune/resolve/main/eval/ae-decoder-tuning-reconstructions/merged/00011_merged.png /> </p> <p align="center"> <img src=https://huggingface.co/stabilityai/stable-diffusion-decoder-finetune/resolve/main/eval/ae-decoder-tuning-reconstructions/merged/00037_merged.png /> </p> <p align="center"> <img src=https://huggingface.co/stabilityai/stable-diffusion-decoder-finetune/resolve/main/eval/ae-decoder-tuning-reconstructions/merged/00043_merged.png /> </p> <p align="center"> <img src=https://huggingface.co/stabilityai/stable-diffusion-decoder-finetune/resolve/main/eval/ae-decoder-tuning-reconstructions/merged/00053_merged.png /> </p> <p align="center"> <img src=https://huggingface.co/stabilityai/stable-diffusion-decoder-finetune/resolve/main/eval/ae-decoder-tuning-reconstructions/merged/00029_merged.png /> </p>
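A minimal loading sketch with the 🧨 diffusers library, assuming the Flax weights sit at the repo root:

```python
from diffusers import FlaxAutoencoderKL

# Flax from_pretrained returns the module definition and its parameters separately.
vae, vae_params = FlaxAutoencoderKL.from_pretrained("pcuenq/sd-vae-ft-mse-flax")
```

The returned `vae_params` can then be substituted for the `vae` entry of a Flax Stable Diffusion pipeline's parameter tree to use this autoencoder at inference time.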
BSC-LT/roberta_model_for_anonimization
BSC-LT
"2023-08-03T08:53:19Z"
1,029
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "es", "ca", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2023-08-03T08:14:55Z"
--- license: mit language: - es - ca metrics: - f1 - precision - recall pipeline_tag: token-classification widget: - text: "Me llamo Alex y vivo en Barcelona" --- This is a RoBERTa multilingual (Catalan & Spanish) anonymization model, for use with BSC's AnonymizationPipeline at: https://github.com/TeMU-BSC/AnonymizationPipeline. The anonymization pipeline is a library for performing sensitive data identification and ultimately anonymization of the detected data in Spanish and Catalan user-generated plain text. This model can be used as a standalone model, but it is meant to work within the pipeline. The RoBERTa model can detect the following entities: ORG, PER, LOC. | Type | Score | | --- | --- | | `ENTS_F` | 90.03 | | `ENTS_P` | 89.7 | | `ENTS_R` | 90.3 |
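A minimal standalone sketch with the `transformers` token-classification pipeline (the aggregation setting is illustrative):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="BSC-LT/roberta_model_for_anonimization",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

# Should surface PER ("Alex") and LOC ("Barcelona") spans.
print(ner("Me llamo Alex y vivo en Barcelona"))
```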
lucianosb/llama-2-7b-langchain-chat-GGUF
lucianosb
"2023-08-29T12:44:22Z"
1,029
10
null
[ "gguf", "text-generation", "pt", "en", "es", "ru", "de", "pl", "th", "vi", "sv", "bn", "da", "he", "it", "fa", "sk", "id", "nb", "el", "hu", "eu", "zh", "eo", "ja", "ca", "cs", "bg", "fi", "tr", "ro", "ar", "uk", "ko", "gl", "fr", "nl", "dataset:Photolens/oasst1-langchain-llama-2-formatted", "license:llama2", "region:us" ]
text-generation
"2023-08-28T16:21:39Z"
--- inference: false language: - pt - en - es - ru - de - pl - th - vi - sv - bn - da - he - it - fa - sk - id - nb - el - hu - eu - zh - eo - ja - ca - cs - bg - fi - tr - ro - ar - uk - ko - gl - fr - nl license: llama2 model_creator: Photolens model_link: https://huggingface.co/Photolens/llama-2-7b-langchain-chat model_name: lama-2-7b-langchain-chat model_type: llama quantized_by: lucianosb pipeline_tag: text-generation datasets: - Photolens/oasst1-langchain-llama-2-formatted --- # lama-2-7b-langchain-chat - GGUF - Criador do Modelo: [Photolens](https://huggingface.co/Photolens) - Modelo Original: [llama-2-7b-langchain-chat](https://huggingface.co/Photolens/llama-2-7b-langchain-chat) ## Arquivos Incluídos | Nome | Método Quant | Bits | Tamanho | Desc | | ---- | ---- | ---- | ---- | ----- | | [llama-2-7b-langchain-chat-q4_0.gguf](https://huggingface.co/lucianosb/llama-2-7b-langchain-chat-GGUF/blob/main/llama-2-7b-langchain-chat-q4_0.gguf) | q4_0 | 4 | 3.56 GB | Quantização em 4-bit. | | [llama-2-7b-langchain-chat-q4_1.gguf](https://huggingface.co/lucianosb/llama-2-7b-langchain-chat-GGUF/blob/main/llama-2-7b-langchain-chat-q4_1.gguf) | q4_1 | 4 | 3.95 GB | Quantização em 4-bit. Acurácia maior que q4_0 mas não tão boa quanto q5_0. Inferência mais rápida que os modelos q5. | | [llama-2-7b-langchain-chat-q5_0.gguf](https://huggingface.co/lucianosb/llama-2-7b-langchain-chat-GGUF/blob/main/llama-2-7b-langchain-chat-q5_0.gguf) | q5_0 | 5 | 4.33 GB | Quantização em 5-bit. Melhor acurácia, maior uso de recursos, inferência mais lenta. | | [llama-2-7b-langchain-chat-q5_1.gguf](https://huggingface.co/lucianosb/llama-2-7b-langchain-chat-GGUF/blob/main/llama-2-7b-langchain-chat-q5_1.gguf) | q5_1 | 5 | 4.72 GB | Quantização em 5-bit. Ainda Melhor acurácia, maior uso de recursos, inferência mais lenta. | | [llama-2-7b-langchain-chat-q8_0.gguf](https://huggingface.co/lucianosb/llama-2-7b-langchain-chat-GGUF/blob/main/llama-2-7b-langchain-chat-q8_0.gguf) | q8_0 | 8 | 6.67 GB | Quantização em 8-bit. Quase indistinguível do float16. Usa muitos recursos e é mais lento. | **Observação**: os valores de RAM acima não pressupõem descarregamento de GPU. Se as camadas forem descarregadas para a GPU, isso reduzirá o uso de RAM e usará VRAM. ## Como executar com `llama.cpp` Usei o seguinte comando. Ajuste para suas necessidades: ``` ./main -m ./models/llama-2-7b-langchain-chat/llama-2-7b-langchain-chat-q5_1.gguf --color --temp 0.5 -n 256 -p "<s>[INST] Há muito tempo atrás, numa galáxia distante [/INST] Assistant Message </s>" ``` Para compreender os parâmetros, veja [a documentação do llama.cpp](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## Sobre o formato GGUF GGUF é um novo formato introduzido pela equipe llama.cpp em 21 de agosto de 2023. É um substituto para o GGML, que não é mais suportado pelo llama.cpp. O principal benefício do GGUF é que ele é um formato extensível e à prova de futuro que armazena mais informações sobre o modelo como metadados. Ele também inclui código de tokenização significativamente melhorado, incluindo pela primeira vez suporte total para tokens especiais. Isso deve melhorar o desempenho, especialmente com modelos que usam novos tokens especiais e implementam modelos de prompt personalizados. Aqui está uma lista de clientes e bibliotecas que são conhecidos por suportar GGUF: - [llama.cpp](https://github.com/ggerganov/llama.cpp). - [text-generation-webui](https://github.com/oobabooga/text-generation-webui), a interface web mais amplamente utilizada. 
Suporta GGUF com aceleração GPU via backend ctransformers - backend llama-cpp-python deve funcionar em breve também. - [KoboldCpp](https://github.com/LostRuins/koboldcpp), agora suporta GGUF a partir da versão 1.41! Uma poderosa interface web GGML, com aceleração total da GPU. Especialmente bom para contar histórias. - [LM Studio](https://lmstudio.ai), versão 0.2.2 e posteriores suportam GGUF. Uma GUI local totalmente equipada com aceleração GPU em ambos Windows (NVidia e AMD) e macOS. - [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), agora deve funcionar, escolha o backend c_transformers. Uma ótima interface web com muitos recursos interessantes. Suporta aceleração GPU CUDA. - [ctransformers](https://github.com/marella/ctransformers), agora suporta GGUF a partir da versão 0.2.24! Uma biblioteca Python com aceleração GPU, suporte LangChain e servidor AI compatível com OpenAI. - [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), suporta GGUF a partir da versão 0.1.79. Uma biblioteca Python com aceleração GPU, suporte LangChain e servidor API compatível com OpenAI. - [candle](https://github.com/huggingface/candle), adicionou suporte GGUF em 22 de agosto. Candle é um framework ML Rust com foco em desempenho, incluindo suporte GPU e facilidade de uso. - [LocalAI](https://github.com/go-skynet/LocalAI), adicionou suporte GGUF em 23 de agosto. LocalAI provê uma API Rest para modelos LLM e de geração de imagens. ## Template ```` <s>[INST] Prompter Message [/INST] Assistant Message </s> ````
mys/ggml_llava-v1.5-13b
mys
"2023-10-10T10:20:06Z"
1,029
48
null
[ "gguf", "llava", "lmm", "ggml", "llama.cpp", "region:us" ]
null
"2023-10-10T10:04:00Z"
--- tags: - llava - lmm - ggml - llama.cpp --- # ggml_llava-v1.5-13b This repo contains GGUF files to inference [llava-v1.5-13b](https://huggingface.co/liuhaotian/llava-v1.5-13b) with [llama.cpp](https://github.com/ggerganov/llama.cpp) end-to-end without any extra dependency. **Note**: The `mmproj-model-f16.gguf` file structure is experimental and may change. Always use the latest code in llama.cpp.
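A rough command-line sketch (the multimodal example binary and its flags have changed name across llama.cpp versions, and the model filename depends on which quant you downloaded, so check `--help` for your build):

```bash
# Older llama.cpp builds ship the example as ./llava, newer ones as ./llava-cli.
./llava-cli \
  -m ggml-model-q5_k.gguf \
  --mmproj mmproj-model-f16.gguf \
  --image some_image.jpg \
  -p "Describe this image in detail."
```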
gradientai/v-alpha-tross
gradientai
"2024-02-14T00:49:50Z"
1,029
16
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "finance", "en", "license:llama2", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-09T18:05:37Z"
--- tags: - generated_from_trainer - finance model-index: - name: completed-model results: - task: type: text-generation dataset: name: ai2_arc type: ai2_arc metrics: - name: AI2 Reasoning Challenge (25-Shot) type: AI2 Reasoning Challenge (25-Shot) value: 71.93 source: name: Open LLM Leaderboard url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - task: type: text-generation dataset: name: hellaswag type: hellaswag metrics: - name: HellaSwag (10-shot) type: HellaSwag (10-shot) value: 86.82 source: name: Open LLM Leaderboard url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - task: type: text-generation dataset: name: multiple type: miltiple metrics: - name: MMLU (5-shot) type: MMLU (5-shot) value: 70.38 source: name: Open LLM Leaderboard url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - task: type: text-generation dataset: name: truthful_qa type: truthful_qa metrics: - name: TruthfulQA (0-shot) type: TruthfulQA (0-shot) value: 65.21 source: name: Open LLM Leaderboard url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - task: type: text-generation dataset: name: winogrande type: winogrande metrics: - name: Winogrande (5-shot) type: Winogrande (5-shot) value: 83.58 source: name: Open LLM Leaderboard url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - task: type: text-generation dataset: name: gsm8k type: gsm8k metrics: - name: GSM8k (5-shot) type: GSM8k (5-shot) value: 61.79 source: name: Open LLM Leaderboard url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard license: llama2 language: - en --- **Albatross** is a collection of domain-specific language models for finance applications developed by [Gradient](https://gradient.ai/). This is the repository for an early, limited capability version, the `v-alpha-tross`, designed to showcase performance on - mathematical reasoning - tabular understanding - open-book retrieval (RAG) & summarization - conversational interface Release versions of Albatross models are additionally trained on proprietary implementations of the latest architecture augmentation, expanded training and alignment data, and target reduced hallucination at retrieval, improved auditability, and multi-hop reasoning. To inquire for access to release versions, please reach out to [[email protected]](mailto:[email protected]) ## Model description The `v-alpha-tross` model is based on [meta-llama/Llama-2-70b-hf](https://huggingface.co/meta-llama/Llama-2-70b-hf), with additional, finance specific, pre-training, fine-tuning and instruction tuning. This model substantially outperforms Llama2-70B models on H6 Average score, and GSM8K, with similar performance to [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1). It also reaches `gpt-3.5-turbo` performance in extracting information from tabular data like those found in SEC filings. ## Intended use The `v-alpha-tross` is intended as a demonstration of Gradient’s Albatross framework for developing large language models specific to the finance domain. We welcome additional research and development, but do not plan on continued internal development on this legacy model. To get the expected performance, follow formatting requirements of *Llama-2 chat*, including `INST` and `<<SYS>>` tags, and `<s>` tokens. ## Training Strategy The Albatross framework overcomes deficiencies in general-purpose language models that arise in the face of solving tasks in the finance domain. 
Release versions of Albatross use an expanded data universe for extended capabilities. ### Pre-Training A base Llama2-70B is further pre-trained on finance data since LLMs are poor at answering questions when their internal relevant document store is sparse [1]. To curate quality training data with low operational overhead we demo a novel data gathering approach: 1. Crawl public repositories of text data. For `v-alpha-tross`, we limited to [Red Pajamas](https://github.com/togethercomputer/RedPajama-Data) and https://github.com/. 2. Programmatically filter the crawled corpus to datasets not likely to be in the base model's training already, using a likelihood ratio test adapted from LiRA membership inference.[2] 3. Human finance professionals review the (much smaller) filtered corpus to further remove low quality results. [1] Kandpal, Nikhil, et al. "Large language models struggle to learn long-tail knowledge." International Conference on Machine Learning. PMLR, 2023. [2] Carlini, Nicholas, et al. "Membership inference attacks from first principles." 2022 IEEE Symposium on Security and Privacy (SP). IEEE, 2022. ### Fine-Tuning Supervised fine-tuning (SFT) and direct preference optimization (DPO)[3] further enhances performance and alignment on finance-related tasks. `v-alpha-tross` includes a subset of Albatross tuning goals: financial anchoring, mathematical reasoning, tabular understanding, conversational communication, summarization. | Category | # Tokens (1Ms) | % of Total | | --- | --- | --- | | Chat (e.g. [ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k)) | 640 | 45.2 | | Alignment * (e.g. [orca_dpo](https://huggingface.co/datasets/Intel/orca_dpo_pairs)) | 331 | 23.4 | | Math * (e.g. Goat[4]) | 300 | 21.2 | | Tabular * | 68 | 4.8 | | Summarization (e.g. [legal_summarization](https://huggingface.co/datasets/lighteval/legal_summarization)) | 52 | 3.7 | | Open-book (e.g. [selfrag](https://huggingface.co/datasets/selfrag/selfrag_train_data)) | 25 | 1.8 | (*) = Proprietary or includes proprietary data sets [3] Rafailov, R., Sharma, A., Mitchell, E., Ermon, S., Manning, C.D. and Finn, C., 2023. Direct preference optimization: Your language model is secretly a reward model. NeurIPS. [4] Liu, Tiedong, and Bryan Kian Hsiang Low. "Goat: Fine-tuned LLaMA Outperforms GPT-4 on Arithmetic Tasks." arXiv preprint arXiv:2305.14201 (2023). ## Benchmarks From a Llama-2-70B base, `v-alpha-tross` improves H6 metrics, and in particular GSM8k (arithmetic reasoning), scoring similar to Mixtral-8x7B-Instruct-v0.1. Relative to a subset of Open LLM Leaderboard [4] models which also use Llama-2-70B as a base, the model achieves state of the art results for the Average H6 score. On financial table understanding (our new metric) the model is on par with GPT-3.5. 
| Model | H6 [4] | GSM8k | sec_tables_v1 | | --- | --- | --- | --- | | v-alpha-tross | 73.28 | 61.79 | 100.0 | | meta-llama/Llama-2-70B-hf | 67.87 | 54.06 | 75.76 | | meta-llama/Llama-2-70b-chat-hf | 62.40 | 26.69 | 87.88 | | mistralai/Mixtral-8x7B-Instruct-v0.1 | 72.70 | 61.11 | 82.35 | | GPT-3.5 | N/A | 57.1 [5] | 100.0 | [4] https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard [5] https://paperswithcode.com/sota/arithmetic-reasoning-on-gsm8k ## Training procedure We develop Albatross on Gradient’s distributed training platform, leveraging leading open source toolsets and optimizations like [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl), [Accelerate](https://github.com/huggingface/accelerate), and [Deepspeed](https://github.com/microsoft/DeepSpeed) for high throughput and memory efficiency. ### Training hyperparameters (DPO) The following hyperparameters were used during DPO training: - learning_rate: 5e-07 - train_batch_size: 3 - eval_batch_size: 3 - seed: 42 - distributed_type: multi-GPU - num_devices: 40 - total_train_batch_size: 120 - total_eval_batch_size: 120 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 18 - num_epochs: 1 - dpo_beta: .1 ### Framework versions - Transformers 4.35.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.7 - Tokenizers 0.14.1 ## Bias `v-alpha-tross` has not been specifically aligned for safety, so the model can produce problematic outputs (especially when prompted to do so). It is also subject to any risks of the corpus that was used to train the base Llama 2 models. ## More information & how to cite Whitepaper coming soon! ## The Gradient AI Team Gradient is accelerating AI transformation across industries. https://gradient.ai/ ## Contact Us Drop an email to [[email protected]](mailto:[email protected])
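A minimal prompting sketch following the Llama-2 chat formatting described under Intended use (the system message and question are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gradientai/v-alpha-tross"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Llama-2 chat formatting with <<SYS>> / [INST] tags; the tokenizer prepends the <s> BOS token.
prompt = (
    "[INST] <<SYS>>\nYou are a helpful financial assistant.\n<</SYS>>\n\n"
    "Revenue grew from $4.0B to $4.6B. What is the year-over-year growth rate? [/INST]"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```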
silvainrichou/gemma-3b-001
silvainrichou
"2024-03-14T11:51:41Z"
1,029
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-13T20:42:33Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/RP-Stew-v2.5-34B-i1-GGUF
mradermacher
"2024-06-09T17:28:32Z"
1,029
3
transformers
[ "transformers", "gguf", "en", "base_model:MarinaraSpaghetti/RP-Stew-v2.5-34B", "endpoints_compatible", "region:us" ]
null
"2024-06-09T06:21:14Z"
--- base_model: MarinaraSpaghetti/RP-Stew-v2.5-34B language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/MarinaraSpaghetti/RP-Stew-v2.5-34B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/RP-Stew-v2.5-34B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/RP-Stew-v2.5-34B-i1-GGUF/resolve/main/RP-Stew-v2.5-34B.i1-IQ1_S.gguf) | i1-IQ1_S | 7.6 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/RP-Stew-v2.5-34B-i1-GGUF/resolve/main/RP-Stew-v2.5-34B.i1-IQ1_M.gguf) | i1-IQ1_M | 8.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/RP-Stew-v2.5-34B-i1-GGUF/resolve/main/RP-Stew-v2.5-34B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.4 | | | [GGUF](https://huggingface.co/mradermacher/RP-Stew-v2.5-34B-i1-GGUF/resolve/main/RP-Stew-v2.5-34B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.4 | | | [GGUF](https://huggingface.co/mradermacher/RP-Stew-v2.5-34B-i1-GGUF/resolve/main/RP-Stew-v2.5-34B.i1-IQ2_S.gguf) | i1-IQ2_S | 11.0 | | | [GGUF](https://huggingface.co/mradermacher/RP-Stew-v2.5-34B-i1-GGUF/resolve/main/RP-Stew-v2.5-34B.i1-IQ2_M.gguf) | i1-IQ2_M | 11.9 | | | [GGUF](https://huggingface.co/mradermacher/RP-Stew-v2.5-34B-i1-GGUF/resolve/main/RP-Stew-v2.5-34B.i1-Q2_K.gguf) | i1-Q2_K | 12.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/RP-Stew-v2.5-34B-i1-GGUF/resolve/main/RP-Stew-v2.5-34B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 13.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/RP-Stew-v2.5-34B-i1-GGUF/resolve/main/RP-Stew-v2.5-34B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 14.3 | | | [GGUF](https://huggingface.co/mradermacher/RP-Stew-v2.5-34B-i1-GGUF/resolve/main/RP-Stew-v2.5-34B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 15.1 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/RP-Stew-v2.5-34B-i1-GGUF/resolve/main/RP-Stew-v2.5-34B.i1-IQ3_S.gguf) | i1-IQ3_S | 15.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/RP-Stew-v2.5-34B-i1-GGUF/resolve/main/RP-Stew-v2.5-34B.i1-IQ3_M.gguf) | i1-IQ3_M | 15.7 | | | [GGUF](https://huggingface.co/mradermacher/RP-Stew-v2.5-34B-i1-GGUF/resolve/main/RP-Stew-v2.5-34B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.8 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/RP-Stew-v2.5-34B-i1-GGUF/resolve/main/RP-Stew-v2.5-34B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 18.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/RP-Stew-v2.5-34B-i1-GGUF/resolve/main/RP-Stew-v2.5-34B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 18.6 | | | [GGUF](https://huggingface.co/mradermacher/RP-Stew-v2.5-34B-i1-GGUF/resolve/main/RP-Stew-v2.5-34B.i1-Q4_0.gguf) | i1-Q4_0 | 19.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/RP-Stew-v2.5-34B-i1-GGUF/resolve/main/RP-Stew-v2.5-34B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 19.7 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/RP-Stew-v2.5-34B-i1-GGUF/resolve/main/RP-Stew-v2.5-34B.i1-Q4_K_M.gguf) 
| i1-Q4_K_M | 20.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/RP-Stew-v2.5-34B-i1-GGUF/resolve/main/RP-Stew-v2.5-34B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 23.8 | | | [GGUF](https://huggingface.co/mradermacher/RP-Stew-v2.5-34B-i1-GGUF/resolve/main/RP-Stew-v2.5-34B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 24.4 | | | [GGUF](https://huggingface.co/mradermacher/RP-Stew-v2.5-34B-i1-GGUF/resolve/main/RP-Stew-v2.5-34B.i1-Q6_K.gguf) | i1-Q6_K | 28.3 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
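A minimal end-to-end sketch (not part of the original card) of pulling one of the quants listed above and running it with `llama-cpp-python`. The filename matches the i1-Q4_K_S row of the table; the context size, GPU-offload setting, and prompt are illustrative assumptions.

```python
# Sketch: download one listed quant and run a short completion with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/RP-Stew-v2.5-34B-i1-GGUF",
    filename="RP-Stew-v2.5-34B.i1-Q4_K_S.gguf",  # the "optimal size/speed/quality" row above
)

llm = Llama(model_path=gguf_path, n_ctx=4096, n_gpu_layers=-1)  # -1 offloads all layers if a GPU is available
out = llm("Write a two-sentence scene description for a fantasy tavern.", max_tokens=128)
print(out["choices"][0]["text"])
```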
heedongKilOk/llama-3-8b-Instruct-bnb-4bit_LLaMA_Factory-GGUF
heedongKilOk
"2024-06-21T05:43:31Z"
1,029
0
null
[ "gguf", "region:us" ]
null
"2024-06-20T02:53:31Z"
Entry not found
raidavid/whisper-small-ip-28-have-opendata_20240613
raidavid
"2024-06-13T19:56:52Z"
1,028
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-06-13T11:17:47Z"
Entry not found
timm/swin_base_patch4_window7_224.ms_in22k
timm
"2024-02-10T23:31:19Z"
1,027
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-22k", "arxiv:2103.14030", "license:mit", "region:us" ]
image-classification
"2023-03-18T04:03:40Z"
---
license: mit
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-22k
---
# Model card for swin_base_patch4_window7_224.ms_in22k

A Swin Transformer image classification model. Pretrained on ImageNet-22k by paper authors.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 109.1
  - GMACs: 15.5
  - Activations (M): 36.6
  - Image size: 224 x 224
- **Papers:**
  - Swin Transformer: Hierarchical Vision Transformer using Shifted Windows: https://arxiv.org/abs/2103.14030
- **Original:** https://github.com/microsoft/Swin-Transformer
- **Dataset:** ImageNet-22k

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import torch
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('swin_base_patch4_window7_224.ms_in22k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'swin_base_patch4_window7_224.ms_in22k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g. for swin_base_patch4_window7_224 (NHWC output)
    #  torch.Size([1, 56, 56, 128])
    #  torch.Size([1, 28, 28, 256])
    #  torch.Size([1, 14, 14, 512])
    #  torch.Size([1, 7, 7, 1024])
    # e.g. for swinv2_cr_small_ns_224 (NCHW output)
    #  torch.Size([1, 96, 56, 56])
    #  torch.Size([1, 192, 28, 28])
    #  torch.Size([1, 384, 14, 14])
    #  torch.Size([1, 768, 7, 7])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'swin_base_patch4_window7_224.ms_in22k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, i.e. a (batch_size, H, W, num_features) tensor for swin / swinv2
# or (batch_size, num_features, H, W) for swinv2_cr

output = model.forward_head(output, pre_logits=True)
# output is (batch_size, num_features) tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).

## Citation
```bibtex
@inproceedings{liu2021Swin,
  title={Swin Transformer: Hierarchical Vision Transformer using Shifted Windows},
  author={Liu, Ze and Lin, Yutong and Cao, Yue and Hu, Han and Wei, Yixuan and Zhang, Zheng and Lin, Stephen and Guo, Baining},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2021}
}
```
```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
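Because this checkpoint ships an ImageNet-22k classifier, it is most often used as a backbone for fine-tuning. The sketch below is an illustrative example that is not part of the original card: the 10-class head and the two learning rates are placeholder choices.

```python
# Hypothetical fine-tuning setup: replace the 21k-class head with one sized for a downstream task.
import timm
import torch

model = timm.create_model(
    'swin_base_patch4_window7_224.ms_in22k',
    pretrained=True,
    num_classes=10,  # new, randomly initialised classifier head
)

# a common recipe: lower LR for the pretrained backbone, higher LR for the new head
head_params = list(model.get_classifier().parameters())
head_ids = {id(p) for p in head_params}
backbone_params = [p for p in model.parameters() if id(p) not in head_ids]

optimizer = torch.optim.AdamW([
    {"params": backbone_params, "lr": 1e-5},
    {"params": head_params, "lr": 1e-3},
])
```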
FelixChao/Faraday-7B
FelixChao
"2024-02-04T14:34:15Z"
1,027
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-04T14:27:25Z"
--- license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
macadeliccc/OmniCorso-7B
macadeliccc
"2024-03-04T19:20:58Z"
1,027
2
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "base_model:macadeliccc/MBX-7B-v3-DPO", "base_model:mlabonne/OmniBeagle-7B", "license:cc", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-11T04:33:07Z"
--- license: cc tags: - mergekit - merge base_model: - macadeliccc/MBX-7B-v3-DPO - mlabonne/OmniBeagle-7B model-index: - name: OmniCorso-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 72.7 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/OmniCorso-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 88.7 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/OmniCorso-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.91 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/OmniCorso-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 73.43 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/OmniCorso-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 83.74 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/OmniCorso-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 70.96 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/OmniCorso-7B name: Open LLM Leaderboard --- # OmniCorso-7B ![image/webp](https://cdn-uploads.huggingface.co/production/uploads/6455cc8d679315e4ef16fbec/PaG7ByWy1qnh_tcSuh35U.webp) ## Code Example ```python from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("macadeliccc/OmniCorso-7B") model = AutoModelForCausalLM.from_pretrained("macadeliccc/OmniCorso-7B") messages = [ {"role": "system", "content": "Respond to the users request like a pirate"}, {"role": "user", "content": "Can you write me a quicksort algorithm?"} ] gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt") ``` The following models were included in the merge: * [macadeliccc/MBX-7B-v3-DPO](https://huggingface.co/macadeliccc/MBX-7B-v3-DPO) * [mlabonne/OmniBeagle-7B](https://huggingface.co/mlabonne/OmniBeagle-7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: mlabonne/OmniBeagle-7B layer_range: [0, 32] - model: macadeliccc/MBX-7B-v3-DPO layer_range: [0, 32] merge_method: slerp base_model: macadeliccc/MBX-7B-v3-DPO parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## Quantizations ### GGUF + [iMatrix](https://huggingface.co/macadeliccc/OmniCorso-7B-GGUF) ### Exllamav2 Quants 
are available thanks to user bartowski, check them out [here](https://huggingface.co/bartowski/OmniCorso-7B-exl2) | Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description | | ----- | ---- | ------- | ------ | ------ | ------ | ------------ | | [8_0](https://huggingface.co/bartowski/OmniCorso-7B-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. | | [6_5](https://huggingface.co/bartowski/OmniCorso-7B-exl2/tree/6_5) | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. | | [5_0](https://huggingface.co/bartowski/OmniCorso-7B-exl2/tree/5_0) | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. | | [4_25](https://huggingface.co/bartowski/OmniCorso-7B-exl2/tree/4_25) | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ equivalent bits per weight, slightly higher quality. | | [3_5](https://huggingface.co/bartowski/OmniCorso-7B-exl2/tree/3_5) | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to. | ## Evaluations <pre>----Benchmark Complete---- 2024-02-11 15:34:40 Time taken: 178.3 mins Prompt Format: ChatML Model: macadeliccc/OmniCorso-7B Score (v2): 73.75 Parseable: 167.0 --------------- Batch completed Time taken: 178.3 mins --------------- </pre> | Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average| |---------------------------------------------------------------|------:|------:|---------:|-------:|------:| |[OmniCorso-7B](https://huggingface.co/macadeliccc/OmniCorso-7B)| 45.89| 77.66| 74.12| 49.24| 61.73| ### AGIEval | Task |Version| Metric |Value| |Stderr| |------------------------------|------:|--------|----:|---|-----:| |agieval_aqua_rat | 0|acc |29.13|± | 2.86| | | |acc_norm|27.17|± | 2.80| |agieval_logiqa_en | 0|acc |39.32|± | 1.92| | | |acc_norm|39.63|± | 1.92| |agieval_lsat_ar | 0|acc |23.91|± | 2.82| | | |acc_norm|23.91|± | 2.82| |agieval_lsat_lr | 0|acc |53.14|± | 2.21| | | |acc_norm|53.92|± | 2.21| |agieval_lsat_rc | 0|acc |66.54|± | 2.88| | | |acc_norm|67.29|± | 2.87| |agieval_sat_en | 0|acc |80.58|± | 2.76| | | |acc_norm|80.58|± | 2.76| |agieval_sat_en_without_passage| 0|acc |45.63|± | 3.48| | | |acc_norm|43.69|± | 3.46| |agieval_sat_math | 0|acc |33.18|± | 3.18| | | |acc_norm|30.91|± | 3.12| Average: 45.89% ### GPT4All | Task |Version| Metric |Value| |Stderr| |-------------|------:|--------|----:|---|-----:| |arc_challenge| 0|acc |67.32|± | 1.37| | | |acc_norm|68.43|± | 1.36| |arc_easy | 0|acc |87.46|± | 0.68| | | |acc_norm|83.50|± | 0.76| |boolq | 1|acc |88.13|± | 0.57| |hellaswag | 0|acc |68.47|± | 0.46| | | |acc_norm|86.96|± | 0.34| |openbookqa | 0|acc |38.80|± | 2.18| | | |acc_norm|50.00|± | 2.24| |piqa | 0|acc |83.03|± | 0.88| | | |acc_norm|85.31|± | 0.83| |winogrande | 0|acc |81.29|± | 1.10| Average: 77.66% ### TruthfulQA | Task |Version|Metric|Value| |Stderr| |-------------|------:|------|----:|---|-----:| |truthfulqa_mc| 1|mc1 |58.26|± | 1.73| | | |mc2 |74.12|± | 1.43| Average: 74.12% ### Bigbench | Task |Version| Metric |Value| |Stderr| |------------------------------------------------|------:|---------------------|----:|---|-----:| |bigbench_causal_judgement | 0|multiple_choice_grade|56.84|± | 3.60| |bigbench_date_understanding | 0|multiple_choice_grade|63.41|± | 2.51| |bigbench_disambiguation_qa | 0|multiple_choice_grade|49.22|± | 3.12| |bigbench_geometric_shapes | 0|multiple_choice_grade|23.96|± | 2.26| | | |exact_str_match | 1.39|± | 
0.62| |bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|34.20|± | 2.12| |bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|23.71|± | 1.61| |bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|60.33|± | 2.83| |bigbench_movie_recommendation | 0|multiple_choice_grade|49.00|± | 2.24| |bigbench_navigate | 0|multiple_choice_grade|55.20|± | 1.57| |bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|70.75|± | 1.02| |bigbench_ruin_names | 0|multiple_choice_grade|55.80|± | 2.35| |bigbench_salient_translation_error_detection | 0|multiple_choice_grade|36.97|± | 1.53| |bigbench_snarks | 0|multiple_choice_grade|72.38|± | 3.33| |bigbench_sports_understanding | 0|multiple_choice_grade|76.27|± | 1.36| |bigbench_temporal_sequences | 0|multiple_choice_grade|54.50|± | 1.58| |bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|23.12|± | 1.19| |bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|20.34|± | 0.96| |bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|60.33|± | 2.83| Average: 49.24% Average score: 61.73% Elapsed time: 02:20:06 # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_macadeliccc__OmniCorso-7B) | Metric |Value| |---------------------------------|----:| |Avg. |75.74| |AI2 Reasoning Challenge (25-Shot)|72.70| |HellaSwag (10-Shot) |88.70| |MMLU (5-Shot) |64.91| |TruthfulQA (0-shot) |73.43| |Winogrande (5-shot) |83.74| |GSM8k (5-shot) |70.96|
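The Code Example in the card stops after building `gen_input`; a possible continuation (an illustrative sketch, not from the original card) that generates and decodes a reply could look like the following. The sampling settings are arbitrary choices.

```python
# Continuation of the card's snippet: `model`, `tokenizer` and `gen_input` come from the example above.
import torch

with torch.no_grad():
    output_ids = model.generate(
        gen_input,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
        pad_token_id=tokenizer.eos_token_id,
    )

# decode only the newly generated tokens, skipping the prompt
reply = tokenizer.decode(output_ids[0][gen_input.shape[-1]:], skip_special_tokens=True)
print(reply)
```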
absl2024/Phi3-Mini-4k-CustomtryTry1
absl2024
"2024-06-28T10:12:56Z"
1,027
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-21T05:27:57Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
timm/swin_s3_small_224.ms_in1k
timm
"2024-02-10T23:31:36Z"
1,026
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2111.1472", "arxiv:2103.14030", "license:mit", "region:us" ]
image-classification
"2023-03-18T04:13:06Z"
---
license: mit
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for swin_s3_small_224.ms_in1k

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 49.7
  - GMACs: 9.4
  - Activations (M): 37.8
  - Image size: 224 x 224
- **Papers:**
  - AutoFormerV2: https://arxiv.org/abs/2111.1472
  - Swin Transformer: Hierarchical Vision Transformer using Shifted Windows: https://arxiv.org/abs/2103.14030
- **Original:** https://github.com/microsoft/Cream/tree/main/AutoFormerV2
- **Dataset:** ImageNet-1k

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import torch
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('swin_s3_small_224.ms_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'swin_s3_small_224.ms_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g. for swin_base_patch4_window7_224 (NHWC output)
    #  torch.Size([1, 56, 56, 128])
    #  torch.Size([1, 28, 28, 256])
    #  torch.Size([1, 14, 14, 512])
    #  torch.Size([1, 7, 7, 1024])
    # e.g. for swinv2_cr_small_ns_224 (NCHW output)
    #  torch.Size([1, 96, 56, 56])
    #  torch.Size([1, 192, 28, 28])
    #  torch.Size([1, 384, 14, 14])
    #  torch.Size([1, 768, 7, 7])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'swin_s3_small_224.ms_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, i.e. a (batch_size, H, W, num_features) tensor for swin / swinv2
# or (batch_size, num_features, H, W) for swinv2_cr

output = model.forward_head(output, pre_logits=True)
# output is (batch_size, num_features) tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).

## Citation
```bibtex
@article{S3,
  title={Searching the Search Space of Vision Transformer},
  author={Minghao, Chen and Kan, Wu and Bolin, Ni and Houwen, Peng and Bei, Liu and Jianlong, Fu and Hongyang, Chao and Haibin, Ling},
  booktitle={Conference and Workshop on Neural Information Processing Systems (NeurIPS)},
  year={2021}
}
```
```bibtex
@inproceedings{liu2021Swin,
  title={Swin Transformer: Hierarchical Vision Transformer using Shifted Windows},
  author={Liu, Ze and Lin, Yutong and Cao, Yue and Hu, Han and Wei, Yixuan and Zhang, Zheng and Lin, Stephen and Guo, Baining},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2021}
}
```
```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
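As the feature-map example above notes, swin / swinv2 backbones return NHWC feature maps. The short sketch below (an assumption-laden example, not from the card) permutes them to NCHW for channels-first downstream heads such as detection or segmentation necks.

```python
# Sketch: convert NHWC feature maps from the swin backbone into channels-first tensors.
import timm
import torch

backbone = timm.create_model('swin_s3_small_224.ms_in1k', pretrained=True, features_only=True)
backbone = backbone.eval()

x = torch.randn(1, 3, 224, 224)            # dummy batch in place of a real image
with torch.no_grad():
    feature_maps = backbone(x)             # list of NHWC tensors for this model family

nchw_maps = [f.permute(0, 3, 1, 2).contiguous() for f in feature_maps]
for f in nchw_maps:
    print(f.shape)                         # channels-first (N, C, H, W) shapes per stage
```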
llmware/slim-sql-1b-v0
llmware
"2024-06-24T13:49:45Z"
1,026
8
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-01T16:23:25Z"
---
license: apache-2.0
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

slim-sql-1b-v0 is the first model in the SLIM (Specialized Language Instruct Model) series.

### Benchmark Tests

Evaluated against 100 test SQL queries with under 100 characters. 1 point given for exact string match, 0 given for incorrect answer.

- **Accuracy Score:** **86** correct out of 100
- 8 incorrect answers attributed to query structure ordering or naming convention differences
- 6 incorrect answers attributed to incorrect variable selection or aggregate function use

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** llmware
- **Model type:** TinyLlama
- **Language(s) (NLP):** English
- **License:** apache-2.0
- **Finetuned from model:** [TinyLlama-1.1b - 2.5T checkpoint](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T)

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

slim-sql-1b-v0 is designed to generate accurate SQL queries for data retrieval on simple table structures given a natural language prompt. For best results, prompts should be structured as a question to retrieve information and perform aggregate functions on one or several variables.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

Any model can provide inaccurate or incomplete information, and should be used in conjunction with appropriate safeguards and fact-checking mechanisms.

## How to Get Started with the Model

The fastest way to get started with slim is through direct import in transformers:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("slim-sql-1b-v0")
model = AutoModelForCausalLM.from_pretrained("slim-sql-1b-v0")
```

Please refer to the generation_test.py files in the Files repository, which include 100 samples and a script to test the model.

The sql-slim model was fine-tuned with a simple "\<human> and \<bot> wrapper", so to get the best results, wrap inference entries as:

```python
full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"
```

The prompt consists of two sub-parts:

1. Table creation prompt providing table name, variables, and variable type.
2. Specific question or instruction based on the text passage

Test sample example:

```json
{"context": "CREATE TABLE table_name_34 (season VARCHAR, lost VARCHAR, points VARCHAR)", "question": "Which season did the Minnesota Kicks lose 13 games and score 156 points?", "answer": "SELECT COUNT(season) FROM table_name_34 WHERE lost = 13 AND points = 156"}
```

A subset of test samples are provided in this repo ("sql_test_100_simple_s").

For use in training, the "\<human>" tag would be associated with "context" and "question" statements, while the "\<bot>" tag will be associated with the model's output.

If you are using a HuggingFace generation script:

```python
# `device` is not defined in the original snippet; using the model's own device keeps inputs and weights together
device = model.device

# prepare prompt packaging used in fine-tuning process
# `entries` is one test record; here "query" holds the natural-language question for that record
new_prompt = "<human>: " + entries["context"] + "\n" + entries["query"] + "\n" + "<bot>:"

inputs = tokenizer(new_prompt, return_tensors="pt")
start_of_output = len(inputs.input_ids[0])

# temperature: set at 0.3 for consistency of output
# max_new_tokens: set at 100 - may prematurely stop a few of the summaries
outputs = model.generate(
    inputs.input_ids.to(device),
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    temperature=0.3,
    max_new_tokens=100,
)

output_only = tokenizer.decode(outputs[0][start_of_output:], skip_special_tokens=True)
```

## Model Card Contact

Dylan Oberst & llmware team
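Since each prompt pairs a `CREATE TABLE` context with a question, one lightweight way to sanity-check a generated query is to execute it against an in-memory SQLite table built from that context. The snippet below is a sketch that is not part of the original card; it reuses the sample record shown above and inserts a made-up row.

```python
# Sanity-check sketch: run a generated query against the sample schema with dummy data.
import sqlite3

context = "CREATE TABLE table_name_34 (season VARCHAR, lost VARCHAR, points VARCHAR)"
generated_sql = "SELECT COUNT(season) FROM table_name_34 WHERE lost = 13 AND points = 156"

conn = sqlite3.connect(":memory:")
conn.execute(context)
conn.execute("INSERT INTO table_name_34 VALUES ('1976', '13', '156')")  # dummy row
print(conn.execute(generated_sql).fetchall())  # -> [(1,)] for this dummy row if the query parses
conn.close()
```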
BioMistral/BioMistral-7B-GGUF
BioMistral
"2024-02-19T19:42:21Z"
1,026
8
transformers
[ "transformers", "gguf", "mistral", "text-generation", "medical", "biology", "fr", "en", "de", "nl", "es", "pt", "pl", "ro", "it", "dataset:pubmed", "arxiv:2402.10373", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-19T18:49:33Z"
--- license: apache-2.0 datasets: - pubmed language: - fr - en - de - nl - es - pt - pl - ro - it pipeline_tag: text-generation tags: - medical - biology --- <p align="center"> <img src="https://huggingface.co/BioMistral/BioMistral-7B/resolve/main/wordart_blue_m_rectangle.png?download=true" alt="drawing" width="250"/> </p> # BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains **Abstract:** Large Language Models (LLMs) have demonstrated remarkable versatility in recent years, offering potential applications across specialized domains such as healthcare and medicine. Despite the availability of various open-source LLMs tailored for health contexts, adapting general-purpose LLMs to the medical domain presents significant challenges. In this paper, we introduce BioMistral, an open-source LLM tailored for the biomedical domain, utilizing Mistral as its foundation model and further pre-trained on PubMed Central. We conduct a comprehensive evaluation of BioMistral on a benchmark comprising 10 established medical question-answering (QA) tasks in English. We also explore lightweight models obtained through quantization and model merging approaches. Our results demonstrate BioMistral's superior performance compared to existing open-source medical models and its competitive edge against proprietary counterparts. Finally, to address the limited availability of data beyond English and to assess the multilingual generalization of medical LLMs, we automatically translated and evaluated this benchmark into 7 other languages. This marks the first large-scale multilingual evaluation of LLMs in the medical domain. Datasets, multilingual evaluation benchmarks, scripts, and all the models obtained during our experiments are freely released. **Advisory Notice!** Although BioMistral is intended to encapsulate medical knowledge sourced from high-quality evidence, it hasn't been tailored to effectively, safely, or suitably convey this knowledge within professional parameters for action. We advise refraining from utilizing BioMistral in medical contexts unless it undergoes thorough alignment with specific use cases and undergoes further testing, notably including randomized controlled trials in real-world medical environments. BioMistral 7B may possess inherent risks and biases that have not yet been thoroughly assessed. Additionally, the model's performance has not been evaluated in real-world clinical settings. Consequently, we recommend using BioMistral 7B strictly as a research tool and advise against deploying it in production environments for natural language generation or any professional health and medical purposes. # 1. BioMistral models **BioMistral** is a suite of Mistral-based further pre-trained open source models suited for the medical domains and pre-trained using textual data from PubMed Central Open Access (CC0, CC BY, CC BY-SA, and CC BY-ND). All the models are trained using the CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/jean-zay/) French HPC. 
| Model Name | Base Model | Model Type | Sequence Length | Download | |:-------------------:|:----------------------------------:|:-------------------:|:---------------:|:-----------------------------------------------------:| | BioMistral-7B | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Further Pre-trained | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) | | BioMistral-7B-DARE | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge DARE | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE) | | BioMistral-7B-TIES | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge TIES | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES) | | BioMistral-7B-SLERP | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge SLERP | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP) | # 2. Quantized Models | Base Model | Method | q_group_size | w_bit | version | VRAM GB | Time | Download | |:-------------------:|:------:|:------------:|:-----:|:-------:|:-------:|:------:|:--------:| | BioMistral-7B | FP16/BF16 | | | | 15.02 | x1.00 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) | | BioMistral-7B | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMM) | | BioMistral-7B | AWQ | 128 | 4 | GEMV | 4.68 | x10.30 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMV) | | BioMistral-7B | BnB.4 | | 4 | | 5.03 | x3.25 | [HuggingFace](blank) | | BioMistral-7B | BnB.8 | | 8 | | 8.04 | x4.34 | [HuggingFace](blank) | | BioMistral-7B-DARE | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE-AWQ-QGS128-W4-GEMM) | | BioMistral-7B-TIES | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES-AWQ-QGS128-W4-GEMM) | | BioMistral-7B-SLERP | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP-AWQ-QGS128-W4-GEMM) | # 2. Using BioMistral You can use BioMistral with [Hugging Face's Transformers library](https://github.com/huggingface/transformers) as follow. Loading the model and tokenizer : ```python from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("BioMistral/BioMistral-7B") model = AutoModel.from_pretrained("BioMistral/BioMistral-7B") ``` # 3. Supervised Fine-tuning Benchmark | | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA | MedQA 5 opts | PubMedQA | MedMCQA | Avg. 
| |-------------------------------------------|:---------------------------------------------:|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|------------------| | **BioMistral 7B** | 59.9 | 64.0 | 56.5 | 60.4 | 59.0 | 54.7 | 50.6 | 42.8 | 77.5 | 48.1 | 57.3 | | **Mistral 7B Instruct** | **62.9** | 57.0 | 55.6 | 59.4 | 62.5 | <u>57.2</u> | 42.0 | 40.9 | 75.7 | 46.1 | 55.9 | | | | | | | | | | | | | | | **BioMistral 7B Ensemble** | <u>62.8</u> | 62.7 | <u>57.5</u> | **63.5** | 64.3 | 55.7 | 50.6 | 43.6 | 77.5 | **48.8** | 58.7 | | **BioMistral 7B DARE** | 62.3 | **67.0** | 55.8 | 61.4 | **66.9** | **58.0** | **51.1** | **45.2** | <u>77.7</u> | <u>48.7</u> | **59.4** | | **BioMistral 7B TIES** | 60.1 | <u>65.0</u> | **58.5** | 60.5 | 60.4 | 56.5 | 49.5 | 43.2 | 77.5 | 48.1 | 57.9 | | **BioMistral 7B SLERP** | 62.5 | 64.7 | 55.8 | <u>62.7</u> | <u>64.8</u> | 56.3 | <u>50.8</u> | <u>44.3</u> | **77.8** | 48.6 | <u>58.8</u> | | | | | | | | | | | | | | | **MedAlpaca 7B** | 53.1 | 58.0 | 54.1 | 58.8 | 58.1 | 48.6 | 40.1 | 33.7 | 73.6 | 37.0 | 51.5 | | **PMC-LLaMA 7B** | 24.5 | 27.7 | 35.3 | 17.4 | 30.3 | 23.3 | 25.5 | 20.2 | 72.9 | 26.6 | 30.4 | | **MediTron-7B** | 41.6 | 50.3 | 46.4 | 27.9 | 44.4 | 30.8 | 41.6 | 28.1 | 74.9 | 41.3 | 42.7 | | **BioMedGPT-LM-7B** | 51.4 | 52.0 | 49.4 | 53.3 | 50.7 | 49.1 | 42.5 | 33.9 | 76.8 | 37.6 | 49.7 | | | | | | | | | | | | | | | **GPT-3.5 Turbo 1106*** | 74.71 | 74.00 | 65.92 | 72.79 | 72.91 | 64.73 | 57.71 | 50.82 | 72.66 | 53.79 | 66.0 | Supervised Fine-Tuning (SFT) performance of BioMistral 7B models compared to baselines, measured by accuracy (↑) and averaged across 3 random seeds of 3-shot. DARE, TIES, and SLERP are model merging strategies that combine BioMistral 7B and Mistral 7B Instruct. Best model in bold, and second-best underlined. *GPT-3.5 Turbo performances are reported from the 3-shot results without SFT. # Citation BibTeX Arxiv : [https://arxiv.org/abs/2402.10373](https://arxiv.org/abs/2402.10373) ```bibtex @misc{labrak2024biomistral, title={BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains}, author={Yanis Labrak and Adrien Bazoge and Emmanuel Morin and Pierre-Antoine Gourraud and Mickael Rouvier and Richard Dufour}, year={2024}, eprint={2402.10373}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` **CAUTION!** Both direct and downstream users need to be informed about the risks, biases, and constraints inherent in the model. While the model can produce natural language text, our exploration of its capabilities and limitations is just beginning. In fields such as medicine, comprehending these limitations is crucial. Hence, we strongly advise against deploying this model for natural language generation in production or for professional tasks in the realm of health and medicine.
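This repository hosts GGUF files rather than safetensors weights, so the `transformers` snippet above applies to the base model. A hedged sketch for the GGUF artifacts themselves (not part of the original card) is to list the available quants and run one with `llama-cpp-python`; the prompt is illustrative only and, per the advisory above, not for clinical use.

```python
# Sketch: discover the GGUF quants in this repo, download one, and run a short completion.
from huggingface_hub import hf_hub_download, list_repo_files
from llama_cpp import Llama

repo_id = "BioMistral/BioMistral-7B-GGUF"
gguf_files = [f for f in list_repo_files(repo_id) if f.endswith(".gguf")]
print(gguf_files)  # inspect the available quantisations, then choose one

model_path = hf_hub_download(repo_id=repo_id, filename=gguf_files[0])
llm = Llama(model_path=model_path, n_ctx=2048)  # 2048 matches the sequence length listed above
out = llm("Question: What is the function of hemoglobin?\nAnswer:", max_tokens=128)
print(out["choices"][0]["text"])
```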
ChrisWilson011016/5Fnzh4r9HEuDxYcSBt7GPhDdbEEYBCSsR2S2CanTTToAV4wq_vgg
ChrisWilson011016
"2024-03-04T18:57:03Z"
1,026
0
keras
[ "keras", "region:us" ]
null
"2024-02-24T15:23:59Z"
Entry not found
zelus82/Obelix-Phi2
zelus82
"2024-03-25T13:31:45Z"
1,026
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "mergekit", "merge", "conversational", "custom_code", "base_model:abacaj/phi-2-super", "base_model:rhysjones/phi-2-orange-v2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-25T13:08:34Z"
--- base_model: - abacaj/phi-2-super - rhysjones/phi-2-orange-v2 library_name: transformers tags: - mergekit - merge license: apache-2.0 --- <img src="Obelix.jpeg"> # merged_folders This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [abacaj/phi-2-super](https://huggingface.co/abacaj/phi-2-super) * [rhysjones/phi-2-orange-v2](https://huggingface.co/rhysjones/phi-2-orange-v2) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: abacaj/phi-2-super layer_range: [0, 32] - model: rhysjones/phi-2-orange-v2 layer_range: [0, 32] merge_method: slerp base_model: abacaj/phi-2-super parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
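A loading sketch for the merged model (an assumption, not stated in the card): since both parents are Phi-2 derivatives published with custom code, the usual `transformers` flow with `trust_remote_code=True` should apply. The dtype below matches the merge config, while the prompt format is purely illustrative. Reproducing the merge itself is typically done by saving the YAML above to a file and running mergekit's `mergekit-yaml` command on it.

```python
# Hypothetical loading sketch for the merged model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zelus82/Obelix-Phi2"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # matches the dtype used for the merge
    trust_remote_code=True,
)

inputs = tokenizer("Instruct: Write a haiku about model merging.\nOutput:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```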
kno10/ende-chat-0.0.4
kno10
"2024-04-03T08:42:21Z"
1,026
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "de", "dataset:FreedomIntelligence/sharegpt-deutsch", "dataset:mayflowergmbh/oasst_de", "dataset:mayflowergmbh/dolly_15k_de", "dataset:mayflowergmbh/openschnabeltier_de", "dataset:mayflowergmbh/ultrachat_de", "dataset:WizardLM/WizardLM_evol_instruct_V2_196k", "dataset:mayflowergmbh/evol_instruct_de", "dataset:mayflowergmbh/alpaca-gpt4_de", "dataset:mayflowergmbh/dolphin_de", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-01T10:37:32Z"
--- language: - en - de license: apache-2.0 library_name: transformers datasets: - FreedomIntelligence/sharegpt-deutsch - mayflowergmbh/oasst_de - mayflowergmbh/dolly_15k_de - mayflowergmbh/openschnabeltier_de - mayflowergmbh/ultrachat_de - WizardLM/WizardLM_evol_instruct_V2_196k - mayflowergmbh/evol_instruct_de - mayflowergmbh/alpaca-gpt4_de - mayflowergmbh/dolphin_de pipeline-tag: text-generation model-index: - name: ende-chat-0.0.4 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 56.57 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kno10/ende-chat-0.0.4 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 79.63 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kno10/ende-chat-0.0.4 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 55.22 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kno10/ende-chat-0.0.4 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 51.19 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kno10/ende-chat-0.0.4 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 75.93 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kno10/ende-chat-0.0.4 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 23.5 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kno10/ende-chat-0.0.4 name: Open LLM Leaderboard --- # Model Card for EnDe-chat-0.0.4 Preliminary LoRA finetune of Mistral-7B for German and English quality text. This is an experiment to improve the German capabilities of Mistral with continued finetuning. The finetuning also includes English data, in order to retain the English capabilities, to allow the model to be used for translation and for answering German questions on English documents and vice versa. Unfortunately, the compute available for this experiment (2xV100) was not at all sufficient for the amount of training data we would have liked to include. After continued pretraining, this model has received instruction finetuning. 
# Table of Contents - [Model Card for ende-chat-0.0.4](#model-card-for--model_id-) - [Table of Contents](#table-of-contents) - [Model Details](#model-details) - [Model Description](#model-description) - [Uses](#uses) - [Out-of-Scope Use](#out-of-scope-use) - [Bias, Risks, and Limitations](#bias-risks-and-limitations) - [Recommendations](#recommendations) - [Training Details](#training-details) - [Training Data](#training-data) - [Training Procedure](#training-procedure) - [Evaluation](#evaluation) - [Examples](#examples) # Model Details ## Model Description LoRA finetune of Mistral-7B for German and English quality text. - **Developed by:** Erich Schubert - **Model type:** Language model - **Language(s) (NLP):** deu, eng - **License:** apache-2.0 - **Parent Model:** mistralai/Mistral-7B-v0.1 - **Resources for more information:** n/a # Uses Model finetuned for chat in German and English. ## Out-of-Scope Use The model has not received alignment or instruction finetuning, this is intended as a chat foundation model. # Bias, Risks, and Limitations Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. ## Recommendations Further finetuning necessary! # Training Details ## Training Data Pretrained on proprietary text collected from the internet, with a focus on quality German and English text. Typical benchmarking data should not be present in this data set. This is no longer as clear for the finetuning data sets, but the amount of data and compute for instruction tuning was much less. ## Training Procedure Initial LoRA finetuning with LLaMA-Factory using a mixture of **English and German** data, with a focus on data quality. Unfortunately, I could use 100x as much GPU power as I had available for this experiment, and had to heavily subsample the data. As is, this is largely a proof of concept to see if we can improve model quality with better data. This version then received basic chat/instruction training with ``` --stage sft \ --finetuning_type lora \ --template default \ --dataset sharegpt-deutsch,oasst_de,dolly_15k_de,openschnabeltier_de,ultrachat_de,evol_instruct,evol_instruct_de,alpaca-gpt4_de,dolphin_de \ --cutoff_len 1024 \ --learning_rate 5e-05 \ --num_train_epochs 1.0 \ --per_device_train_batch_size 4 \ --gradient_accumulation_steps 4 \ --lr_scheduler_type cosine \ --lora_target all \ --lora_rank 8 \ --lora_dropout 0 \ --fp16 True \ ``` Unfortunately, **most of this fine-tuning data is just automatically translated from English**. I do not think this leads to particularly high-quality data. # Evaluation Not fully evaluated, as it has not been completely trained. Also, I believe that our **benchmarks tend to be misleading**. In particular the huggingface leaderboard is flooded with overfitted models with little to no value. Real-world performance may be task specific and needs to be evaluated carefully on a case basis. I hope some will find this model to be useful! 
**You are welcome to contribute evaluation scores!**

## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_kno10__ende-chat-0.0.4)

| Metric                          |Value|Mistral-7B-v0.1|M7B-Instruct-v0.1|M7B-Instruct-v0.2|
|---------------------------------|----:|----:|----:|----:|
|Avg.                             |57.01|60.97|54.96|65.71|
|AI2 Reasoning Challenge (25-Shot)|56.57|59.98|54.52|63.14|
|HellaSwag (10-Shot)              |79.63|83.31|75.63|84.88|
|MMLU (5-Shot)                    |55.22|64.16|55.38|60.78|
|TruthfulQA (0-shot)              |51.19|42.15|56.28|68.26|
|Winogrande (5-shot)              |75.93|78.37|73.72|77.19|
|GSM8k (5-shot)                   |23.50|37.83|14.25|40.03|

Note that these results are *worse* than standard Mistral-7B-v0.1, but better than Mistral-7B-Instruct-v0.1 (and still worse than Instruct-v0.2). If these scores matter to you, you may want to perform further finetuning. Recall that this model aims at improving the **German** capabilities of the base model, while these benchmarks are English only.

# Examples

Generation examples, along with the output of related models for the same prompt and generation process (`do_sample=False`, i.e., deterministic "greedy" decoding). Note that these prompts are **not cherry-picked** to make the model look good; in fact they include some examples that show the limitations of these models. *None* of the models does very well at factual accuracy, movie quotes, or jokes. They are too small to store a wide array of facts, and even GPT-4 fails miserably at new jokes. An important difference is whether the models manage to answer in the same language as the question, or whether they have a preferred language they fall back to. As these examples show, you really *need* to use RAG if you want more factual accuracy. Treat these models as a textual user interface, not as a database or search engine.

## <b>Prompt:</b> Wer leitet die Regierung von Deutschland?

This is a very basic prompt. Note that EM-German unnecessarily uses an emoji.

<details open><summary>kno10/ende-chat-0.0.4</summary>
<blockquote><pre style="white-space: pre-wrap;">Der Bundeskanzler von Deutschland ist derzeit Olaf Scholz.</pre></blockquote>
</details>
<details><summary>mistralai/Mistral-7B-Instruct-v0.2</summary>
<blockquote><pre style="white-space: pre-wrap;">Der Regierungschef Deutschlands ist Olaf Scholz. Er ist seit dem 8. Dezember 2021 Bundeskanzler. Zuvor war er vom 14. Februar 2018 bis zum 8. Dezember 2021 Vizekanzler und Ministerpräsident des Bundeslandes Hamburg. Die Bundesregierung besteht aus den Ministern der Bundesregierung und dem Bundeskanzler.</pre></blockquote>
</details>
<details><summary>LeoLM/leo-mistral-hessianai-7b-chat</summary>
<blockquote><pre style="white-space: pre-wrap;">Der Bundeskanzler von Deutschland ist seit 2021 Olaf Scholz, ein Sozialdemokrat. Das Staatsoberhaupt ist der Bundespräsident, derzeit Frank-Walter Steinmeier.<|im_end|>
</pre></blockquote>
</details>
<details><summary>jphme/em_german_leo_mistral</summary>
<blockquote><pre style="white-space: pre-wrap;">🇩🇪 Deutschland wird von der Bundesregierung regiert, die aus dem Bundeskanzler und den Bundesministern besteht. Der Bundeskanzler wird vom Bundestag gewählt und ernennt dann die anderen Mitglieder der Regierung.
Die Bundesregierung ist für die Durchführung der Bundesgesetze verantwortlich und trägt die Verantwortung für die Politik des Landes.</pre></blockquote> </details> <details><summary>DiscoResearch/DiscoLM_German_7b_v1</summary> <blockquote><pre style="white-space: pre-wrap;">Der aktuelle Bundeskanzler von Deutschland ist Olaf Scholz. Er ist Mitglied der Sozialdemokratischen Partei Deutschlands (SPD) und wurde im Dezember 2021 zum Kanzler gewählt. Die Regierung wird durch das Bundeskabinett geleitet, dessen Ministerposten von verschiedenen Parteien besetzt sind, die an der Regierung beteiligt sind.</pre></blockquote> </details> ## <b>Prompt:</b> Wer ist derzeit deutscher Außenminister? Closely related, but, e.g., Mistral and LeoLM have factual errors not attributable to a knowledge cutoff. <details open><summary>kno10/ende-chat-0.0.4</summary> <blockquote><pre style="white-space: pre-wrap;">Derzeitiger deutscher Außenminister ist Annalena Baerbock.</pre></blockquote> </details> <details><summary>mistralai/Mistral-7B-Instruct-v0.2</summary> <blockquote><pre style="white-space: pre-wrap;">Der aktuelle deutsche Außenminister ist Annalena Baerbock. Sie hat dieses Amt seit dem 8. Dezember 2021 inne. Sie ist Mitglied der Sozialdemokratischen Partei Deutschlands (SPD) und war zuvor Vizepräsidentin des Bundestages und Vorsitzende der SPD-Fraktion im Parlament.</pre></blockquote> </details> <details><summary>LeoLM/leo-mistral-hessianai-7b-chat</summary> <blockquote><pre style="white-space: pre-wrap;">Der aktuelle deutsche Außenminister ist **Heiko Maas**.<|im_end|> </pre></blockquote> </details> <details><summary>jphme/em_german_leo_mistral</summary> <blockquote><pre style="white-space: pre-wrap;">2022 ist Annalena Baerbock die deutsche Außenministerin. Sie ist Mitglied der Grünen Partei und wurde nach der Bundestagswahl 2021 zur ersten weiblichen Außenministerin Deutschlands ernannt.</pre></blockquote> </details> <details><summary>DiscoResearch/DiscoLM_German_7b_v1</summary> <blockquote><pre style="white-space: pre-wrap;">Der aktuelle deutsche Außenminister ist Annalena Baerbock. Sie wurde im Dezember 2021 ernannt und ist Mitglied der Partei Bündnis 90/Die Grünen.</pre></blockquote> </details> ## <b>Prompt:</b> Was ist der Unterschied zwischen einem Bundeskanzler und einem Bundespräsidenten? Calls for Wikipedia type answers. Mistral exhibits the annoying GPT style of concluding with a translation of "In summary, one can say", while EM-German shows a tendency of bullet-pointing. <details open><summary>kno10/ende-chat-0.0.4</summary> <blockquote><pre style="white-space: pre-wrap;">Ein Bundeskanzler ist der Regierungschef eines Landes, das eine parlamentarische Regierung hat, während ein Bundespräsident der Regierungschef eines Landes ist, das eine präsidiale Regierung hat. In einem parlamentarischen System wird der Kanzler vom Parlament gewählt, während der Präsident in einem präsidialen System vom Volk direkt gewählt wird. Der Kanzler ist der Vorsitzende des Ministerkabinetts, während der Präsident das Oberhaupt des Landes ist und die Exekutive leitet.</pre></blockquote> </details> <details><summary>mistralai/Mistral-7B-Instruct-v0.2</summary> <blockquote><pre style="white-space: pre-wrap;">In Deutschland haben der Bundeskanzler und der Bundespräsident unterschiedliche Rollen in der Bundesregierung und im Staat. Der Bundeskanzler ist der Chef der Regierung und der Führer der regierenden Partei oder Koalition. 
Er ist verantwortlich für die Ausarbeitung und Umsetzung der Gesetzgebung und der Politik der Bundesregierung. Der Bundeskanzler leitet die Arbeit des Bundeskabinetts und ist der Vorsitzende des Bundesexekutivrats. Er ist zudem der Vertreter der Bundesrepublik Deutschland im Ausland. Der Bundespräsident hingegen ist das höchste Staatsorgan in der Bundesrepublik Deutschland. Er ist der Symbolfigur der Einheit und Integrität der Bundesrepublik und verfügt über repräsentative und zeremonielle Aufgaben. Der Bundespräsident ernennt den Bundeskanzler und kann ihn entlassen, wenn er dies für notwendig hält. Er leitet den Bundesrat und ist der Oberbefehlshaber der Bundeswehr. Der Bundespräsident hat außerdem die Befugnis, Gesetze zu unterzeichnen oder zu verweigern und kann den Bundesrat einberufen. In Summe kann man sagen, dass der Bundeskanzler die tägliche Arbeit der Bundesregierung leitet und die Politik umsetzt, während der Bundespräsident symbolische und repräsentative Aufgaben wahrnimmt und die Stabilität und Einheit der Bundesrepublik sichert.</pre></blockquote> </details> <details><summary>LeoLM/leo-mistral-hessianai-7b-chat</summary> <blockquote><pre style="white-space: pre-wrap;">Der Bundeskanzler ist der Regierungschef Deutschlands und wird als solcher vom Präsidenten ernannt. Der Präsident ist das Staatsoberhaupt und wird von einer Versammlung von Mitgliedern des Bundestages und des Bundesrates gewählt. Der Präsident hat eine weitgehend zeremonielle Position mit begrenzten Befugnissen, während der Bundeskanzler die eigentliche Exekutivgewalt innehat. Der Präsident ist für die Auswahl des Bundeskanzlers verantwortlich, der in der Regel der Anführer der Partei oder Koalition ist, die eine Mehrheit im Bundestag hat.<|im_end|> </pre></blockquote> </details> <details><summary>jphme/em_german_leo_mistral</summary> <blockquote><pre style="white-space: pre-wrap;">1. Macht: Der Bundeskanzler ist der Regierungschef, während der Bundespräsident das Staatsoberhaupt ist. Der Bundeskanzler leitet die Regierung und ist für die täglichen Geschäfte der Regierung verantwortlich. Der Bundespräsident hingegen repräsentiert das Land nach innen und außen und hat eine eher zeremonielle Rolle. 2. Amtszeit: Die Amtszeit eines Bundeskanzlers ist auf vier Jahre begrenzt, kann aber im Falle einer Wiederwahl verlängert werden. Die Amtszeit eines Bundespräsidenten beträgt in der Regel fünf Jahre, wobei eine Wiederwahl nicht möglich ist. 3. Ernennung: Der Bundeskanzler wird vom Bundespräsidenten ernannt, nachdem der Bundestag einen Kanzlerkandidaten gewählt hat. Der Bundespräsident wird hingegen vom Bundesversammlung gewählt, die aus Mitgliedern des Bundestages und einer gleichen Anzahl von Mitgliedern, die von den Landesparlamenten gewählt wurden, besteht. 4. Rolle im politischen System: Der Bundeskanzler spielt eine zentrale Rolle im politischen System Deutschlands, da er die Regierung leitet und die Richtlinien der Politik bestimmt. Der Bundespräsident hingegen hat eine eher repräsentative Rolle und kann die Regierung nicht direkt beeinflussen.</pre></blockquote> </details> <details><summary>DiscoResearch/DiscoLM_German_7b_v1</summary> <blockquote><pre style="white-space: pre-wrap;">Ein Bundeskanzler ist der Chef der Regierung eines Landes, das nach dem Prinzip der parlamentarischen Demokratie regiert wird. Der Bundeskanzler wird normalerweise vom Staatsoberhaupt ernannt, das in Deutschland der Bundespräsident ist. Der Bundespräsident ist das Staatsoberhaupt und repräsentiert das Land nach außen. 
Er hat im Vergleich zum Bundeskanzler weniger politische Macht und ist eher zeremonieller Natur. Zu seinen Aufgaben gehören die Ernennung des Regierungschefs (Bundeskanzler) und der Minister, die Eröffnung des Parlaments und die Unterzeichnung von Gesetzen. Zusammenfassend ist der Bundeskanzler also der leitende Regierungsbeamte, während der Bundespräsident das Staatsoberhaupt ist.</pre></blockquote> </details> ## <b>Prompt:</b> Warum hat Putin die Ukraine überfallen? This test prompt somewhat aims at a knowledge cutoff. Models primarily trained on old data sets such as "The Pile" might not know about Putin's war yet EnDe is very repetitive here, a common problem with LLMs that is only worked around with filtering obvious repetitions during the generation process. Repeated sampling, temperature etc. would likely help here, but we tried to choose fair and repeatable settings for every test. Mistral shows that its fine-tuning is optimized for ChatGPT-style AI, and that it indeed does not appear to have much knowledge from 2022 on. LeoLM's answer is randomly truncated by the model, while EM-German and Disco-LM show bulletpointing again. <details open><summary>kno10/ende-chat-0.0.4</summary> <blockquote><pre style="white-space: pre-wrap;">Putin hat die Ukraine überfallen, weil er glaubt, dass die Ukraine ein Teil Russlands ist und dass die Ukraine eine Bedrohung für Russland darstellt. Putin hat auch behauptet, dass die Ukraine von der westlichen Welt unterstützt wird und dass die Ukraine eine Puppenregierung ist. Putin hat auch behauptet, dass die Ukraine eine Bedrohung für Russlands Sicherheit darstellt. Putin hat auch behauptet, dass die Ukraine eine Bedrohung für Russlands Wirtschaft darstellt, da die Ukraine eine wichtige Handelspartner von Russland ist. Putin hat auch behauptet, dass die Ukraine eine Bedrohung für Russlands Kultur darstellt, da die Ukraine eine wichtige Teil von Russlands Geschichte ist. Putin hat auch behauptet, dass die Ukraine eine Bedrohung für Russlands Umwelt darstellt, da die Ukraine ein wichtiger Teil von Russlands natürlichen Ressourcen ist. Putin hat auch behauptet, dass die Ukraine eine Bedrohung für Russlands Religion darstellt, da die Ukraine ein wichtiger Teil von Russlands orthodoxen Christen ist. Putin hat auch behauptet, dass die Ukraine eine Bedrohung für Russlands Demokratie darstellt, da die Ukraine ein wichtiger Teil von Russlands politischen System ist. Putin hat auch behauptet, dass die Ukraine eine Bedrohung für Russlands Menschenrechte darstellt, da die Ukraine ein wichtiger Teil von Russlands Menschenrechtsverletzungen ist. Putin hat auch behauptet, dass die Ukraine eine Bedrohung für Russlands Wirtschaft ist, da die Ukraine ein wichtiger Teil von Russlands wirtschaftlicher Entwicklung ist. Putin hat auch behauptet, dass die Ukraine eine Bedrohung für Russlands Kultur ist, da die Ukraine ein wichtiger Teil von Russlands kulturellen Traditionen ist. Putin hat auch behauptet, dass die Ukraine eine Bedrohung für Russlands Umwelt ist, da die Ukraine ein wichtiger Teil von Russlands Umweltproblemen ist. Putin hat auch behauptet, dass die Ukraine eine Bedrohung für Russlands Religion ist, da die Ukraine ein wichtiger</pre></blockquote> </details> <details><summary>mistralai/Mistral-7B-Instruct-v0.2</summary> <blockquote><pre style="white-space: pre-wrap;">I'm an AI language model and don't have the ability to know or remember specific historical events or the intentions behind them. 
However, I can provide some context about the conflict between Russia and Ukraine that started in 2014. The crisis began when Ukraine was moving towards closer ties with Europe and considering signing an Association Agreement with the European Union. Russia, which has historical and cultural ties with Ukraine and considers it to be within its sphere of influence, saw this as a threat. In response, Russia annexed Crimea, a region of Ukraine with a large Russian-speaking population, in March 2014. This was followed by a conflict in eastern Ukraine between Russian-backed separatists and the Ukrainian government. There are various interpretations of why Putin decided to take this action. Some analysts believe that Putin saw an opportunity to assert Russian influence in Ukraine and push back against what he perceived as Western encroachment. Others argue that Putin was motivated by a desire to distract from domestic political problems in Russia. Ultimately, the reasons for Putin's decision to invade Ukraine are complex and multifaceted, and continue to be a subject of debate among scholars and policymakers.</pre></blockquote> </details> <details><summary>LeoLM/leo-mistral-hessianai-7b-chat</summary> <blockquote><pre style="white-space: pre-wrap;">Es gibt keine einfache Antwort auf diese komplexe Frage. Es gibt viele Faktoren, die zu Russlands Invasion der Ukraine beigetragen haben, darunter: 1. **Geopolitische Ziele**: Russland möchte seinen Einfluss in der Region ausweiten und verhindern, dass die Ukraine der NATO beitritt, die als Bedrohung für seine Sicherheit betrachtet wird. 2. **Territoriale Ambitionen**: Russland erhebt Anspruch auf die gesamte Ukraine und möchte die Kontrolle über die Gebiete erlangen, die es als sein Territorium betrachtet, einschließlich der Krim und des Donbass. 3. **Gas<|im_end|> </pre></blockquote> </details> <details><summary>jphme/em_german_leo_mistral</summary> <blockquote><pre style="white-space: pre-wrap;">1. Geopolitische Interessen: Putin behauptet, dass die Ukraine ein Teil Russlands ist und dass er das Land wieder vereinen muss. Dies ist jedoch ein Vorwand, da die Ukraine seit 1991 unabhängig ist und keine Anzeichen für eine Wiedervereinigung mit Russland zeigt. 2. Sicherheitsbedenken: Putin argumentiert, dass die NATO sich ausdehnt und dass dies eine Bedrohung für Russland darstellt. Er behauptet, dass die Ukraine Teil dieses Plans ist, was jedoch von der NATO bestritten wird. 3. Wirtschaftliche Interessen: Russland ist der größte Handelspartner der Ukraine und hat erheblichen Einfluss auf das Land. Putin könnte versuchen, seine Position in der Region zu stärken und seine wirtschaftlichen Interessen zu schützen. 4. Innenpolitische Gründe: Putin könnte den Angriff als Ablenkungsmanöver nutzen, um von innenpolitischen Problemen abzulenken. Der Krieg könnte auch dazu dienen, nationalistische Stimmungen in Russland zu schüren und die Unterstützung für seine Regierung zu erhöhen. 5. Machtdemonstration: Putin könnte denken, dass er mit diesem Schritt seine Macht und seinen Einfluss in der Region demonstrieren kann. Er könnte glauben, dass er damit andere Länder einschüchtern und seine Position gegenüber dem Westen stärken kann.</pre></blockquote> </details> <details><summary>DiscoResearch/DiscoLM_German_7b_v1</summary> <blockquote><pre style="white-space: pre-wrap;">Der Einmarsch Russlands in die Ukraine im Februar 2022 hat komplexe Gründe, die sich aus einer Kombination von Faktoren ergeben. 1. 
Schutz russischsprachiger Bürger: Putin behauptete, dass er den Einsatz der Armee in der Ukraine veranlasst habe, um russischsprachige Bürger zu schützen, die von der ukrainischen Regierung bedroht wurden. Dies ist eine ähnliche Argumentation wie die Invasion der Krim 2014, als Putin vorgab, die Rechte russischsprachiger Bürger in der Krim zu schützen. 2. Wiederherstellung der vereinten Sowjetunion: Ein weiterer wichtiger Faktor ist die Idee eines größeren Russlands, das die ehemaligen Sowjetrepubliken umfasst. Putin hat wiederholt seine Vision von einer "Russischen Welt" oder einem gemeinsamen Raum zum Ausdruck gebracht, in dem ehemalige sowjetische Republiken wie die Ukraine, Belarus und andere unabhängige Staaten eine Rolle spielen. 3. Schwächung des Westens: Putin glaubt, dass der Zusammenbruch der Sowjetunion 1991 eine westliche Verschwörung war, um Russland zu schwächen. Durch die Invasion in der Ukraine versucht er, den Westen zu demütigen und seinen Einfluss in der Region geltend zu machen. 4. Krim und die Ostukraine: Russland hat auch erklärt, dass es die Krim und die Ostukraine als historisch russische Gebiete betrachtet, die unabhängig von der aktuellen politischen Situation immer zu Russland gehören sollten. Die Kontrolle über diese Regionen würde es Russland ermöglichen, einen Landkorridor zu schaffen, der die Krim mit dem Rest des Landes verbindet. 5. Machtprojektion: Einige Experten argumentieren, dass die Invasion der Ukraine einfach ein weiteres Machtprojektion</pre></blockquote> </details> ## <b>Prompt:</b> Nenne ein bekanntes deutsches Filmzitat. All models disappoint here, making up quotes, movies, or choosing quotes that are not from German movies. <details open><summary>kno10/ende-chat-0.0.4</summary> <blockquote><pre style="white-space: pre-wrap;">Ein bekanntes deutsches Filmzitat ist "Ich bin ein Berliner" aus dem Film "Der Himmel über Berlin" von 1987.</pre></blockquote> </details> <details><summary>mistralai/Mistral-7B-Instruct-v0.2</summary> <blockquote><pre style="white-space: pre-wrap;">Ich will ein bekanntes deutsches Filmzitat von dem Film "Lola ruft das Taxi" (Lola calls for the Taxi) von Rainer Werner Fassbinder zitieren: "Schon mal so früh aufstehen und alles in Schambiss haben, das ist doch kein Leben für mich!" (I can't stand getting up so early and having everything in a mess – that's not the life for me!) – gesagt von Lola.</pre></blockquote> </details> <details><summary>LeoLM/leo-mistral-hessianai-7b-chat</summary> <blockquote><pre style="white-space: pre-wrap;">"Ich sage es Ihnen einmal und für alle Male: Ich mag keine Maultiere!" - Charles Laughton (Syndicus in "Der Herr der Fliegen")<|im_end|> </pre></blockquote> </details> <details><summary>jphme/em_german_leo_mistral</summary> <blockquote><pre style="white-space: pre-wrap;">1. "Ich habe einen Traum" - Dieses Zitat stammt aus dem Film "Der Pate" und wurde von Marlon Brando gespielt. Es ist eines der bekanntesten Filmzitate in Deutschland und steht für den Kampf um Gerechtigkeit und Gleichheit. 2. "Ich bin ein Berliner" - Dieses Zitat stammt von John F. Kennedy und wurde während seines Besuchs in West-Berlin im Jahr 1963 geäußert. Es ist ein Symbol für die Unterstützung der USA für die Freiheit und Unabhängigkeit West-Berlins. 3. "Mein Name ist Bond. James Bond." - Dieses Zitat ist weltweit bekannt und stammt aus dem Film "James Bond - 007 jagt Dr. No". Es ist die Einleitung jeder James Bond-Filmreihe und hat sich zu einem kulturellen Symbol entwickelt. 4. 
"Ich bin dann mal weg" - Dieses Zitat stammt aus dem Film "Herr der Ringe" und wurde von Elijah Wood gespielt. Es ist ein humorvoller Ausdruck für die Reise, die Frodo und seine Freunde unternehmen, um den Einen Ring zu zerstören. 5. "Das Leben ist wie eine Schachtel Pralinen, man weiß nie, was man bekommt" - Dieses Zitat stammt aus dem Film "Forrest Gump" und wurde von Tom Hanks gespielt. Es drückt auf humorvolle Weise die Unvorhersehbarkeit des Lebens aus.</pre></blockquote> </details> <details><summary>DiscoResearch/DiscoLM_German_7b_v1</summary> <blockquote><pre style="white-space: pre-wrap;">"Es lebe der König!" - Heinz Rühmann in "Oh, du liebe Freude" (1959) Diese Zeilen stammen aus dem Lied "Es lebe der König", das im Film von Heinz Rühmann gesungen wird. Der vollständige Text des Liedes lautet: Es lebe der König, hoch zu Ross, Er reitet durch das goldene Tor. Jede Schlacht, die er gewinnt, Jeder Sieg, den er erringt, Ist ein Lied für uns alle, Ein Jubelruf in der Welt. Dieses Zitat ist ein Beispiel für ein Filmzitat, das direkt aus einem bekannten deutschen Film stammt. Es ist bekannt für seine humorvolle und leicht verständliche Art.</pre></blockquote> </details> ## <b>Prompt:</b> Beschreibe die Gefahren von Deskilling durch den Einsatz von KI. This is a very open question, where ideally the models would answer that they have too little facts available. The answers often slightly drift off the original topic, and Mistral answers in English instead of German It probably shows that the models still only aim for faking plausible text, not for actual understanding. <details open><summary>kno10/ende-chat-0.0.4</summary> <blockquote><pre style="white-space: pre-wrap;">Deskilling ist ein Begriff, der sich auf die Entfernung von komplexen und anspruchsvollen Aufgaben von Arbeitern bezieht, die sie in der Vergangenheit durchgeführt haben, und die Umverteilung dieser Aufgaben auf Maschinen oder andere Technologien. Der Einsatz von KI kann zu Deskilling führen, da er die Notwendigkeit von menschlicher Expertise und Fachkenntnis in bestimmten Bereichen reduzieren kann. Eine der Hauptgefahren von Deskilling durch den Einsatz von KI ist die Entfernung von Arbeitsplätzen. Wenn KI-Systeme in der Lage sind, Aufgaben zu übernehmen, die früher von Menschen erledigt wurden, kann dies zu Arbeitsplatzverlusten führen. Dies kann besonders problematisch sein für Arbeitnehmer, die keine Möglichkeit haben, ihre Fähigkeiten und Kenntnisse auf andere Bereiche zu übertragen. Eine weitere Gefahr von Deskilling durch den Einsatz von KI ist die Entfernung von menschlicher Expertise und Fachkenntnis. Wenn KI-Systeme in der Lage sind, komplexe Aufgaben zu erledigen, die früher von Menschen erledigt wurden, kann dies dazu führen, dass diese Fähigkeiten und Kenntnisse nicht weiterentwickelt werden. Dies kann zu einem Mangel an menschlicher Expertise und Fachkenntnis in bestimmten Bereichen führen, was wiederum zu einer schlechteren Leistung und einer geringeren Effizienz führen kann. Schließlich kann Deskilling durch den Einsatz von KI zu einer Veränderung der Arbeitskultur führen. Wenn KI-Systeme in der Lage sind, Aufgaben zu erledigen, die früher von Menschen erledigt wurden, kann dies dazu führen, dass Menschen weniger Verantwortung und Kontrolle über ihre Arbeit haben. 
Dies kann zu einer geringeren Motivation und Zufriedenheit führen, was wiederum zu einem Mangel</pre></blockquote> </details> <details><summary>mistralai/Mistral-7B-Instruct-v0.2</summary> <blockquote><pre style="white-space: pre-wrap;">Deskilling, or the process of reducing the skills required to perform a job, can be a potential consequence of the use of Artificial Intelligence (AI) and automation in the workplace. Here are some risks associated with deskilling due to the implementation of AI: 1. Loss of Complex Skills: AI systems can perform routine and repetitive tasks more efficiently and accurately than humans. This could lead to the automation of jobs that require complex skills, leaving employees with less challenging and less rewarding work. 2. Lack of Creativity and Problem-Solving: As AI systems become more advanced, they can handle increasingly complex tasks. This could lead to a decrease in the need for human creativity and problem-solving abilities, as these tasks are automated. 3. Dependence on Technology: As AI systems become more integrated into the workplace, employees may become overly reliant on technology to perform their jobs. This could lead to a lack of technical skills and knowledge, as well as a lack of adaptability to new technologies. 4. Job Displacement: The automation of jobs due to AI could lead to job displacement for some workers. Those who are unable to adapt to new technologies or who lack the necessary skills to work alongside AI systems may find themselves out of a job. 5. Decreased Engagement and Motivation: Workers who feel that their jobs have been deskilled due to AI may experience decreased engagement and motivation. This could lead to lower productivity, increased turnover, and decreased job satisfaction. 6. Ethical and Moral Concerns: The use of AI in the workplace raises ethical and moral concerns, particularly with regard to the impact on workers. Deskilling due to AI could lead to a loss of dignity and respect for workers, as well as a sense of alienation from their jobs. 7. Lack of Human Interaction: As AI systems become more prevalent in the workplace, there may be a decrease in human interaction and collaboration. This could lead to a lack of social connections and a sense of isolation for workers. Overall, the use of AI in the workplace can lead to deskilling, which could have negative consequences for workers and organizations. It is important for organizations to consider the potential risks and take steps to mitigate them, such as providing training and development opportunities for employees, fostering a culture of continuous learning, and ensuring</pre></blockquote> </details> <details><summary>LeoLM/leo-mistral-hessianai-7b-chat</summary> <blockquote><pre style="white-space: pre-wrap;">Deskilling bezieht sich auf den Verlust von Fähigkeiten, die für bestimmte Arbeitspositionen erforderlich sind, aufgrund der Automatisierung durch Technologien wie KI. Obwohl KI dazu beitragen kann, die Produktivität und Effizienz zu steigern, birgt sie auch verschiedene Gefahren, die ernst genommen werden müssen. Hier sind einige Möglichkeiten, wie Deskilling durch den Einsatz von KI eine Gefahr darstellen kann: 1. Arbeitsplatzverlagerung: Mit dem Fortschreiten der KI-Technologie besteht die wachsende Sorge, dass sie dazu führen wird, dass viele Arbeitsplätze, die derzeit von Menschen ausgeführt werden, überflüssig werden. 
Dies kann zu Arbeitsplatzverlagerungen führen, bei denen Arbeitnehmer gezwungen sind, nach neuen Beschäftigungsmöglichkeiten zu suchen, da die Nachfrage nach ihren Fähigkeiten zurückgeht. 2. Einkommensungleichheit: Wenn Arbeitsplätze durch KI-Systeme ersetzt werden, kann dies zu einer Zunahme der Einkommensungleichheit führen. Menschen, die über Fähigkeiten verfügen, die durch Automatisierung ersetzt werden, können Schwierigkeiten haben, sich an neue Jobs anzupassen, die höhere technische Fähigkeiten erfordern. Dies kann zu einem zunehmenden Einkommensgefälle zwischen denen führen, die sich an die technologischen Veränderungen anpassen können, und denen, die dies nicht können. 3. Fähigkeitsverlust: Da sich die KI-Technologie weiterentwickelt, können bestimmte Fähigkeiten, die für viele Arbeitsplätze einst erforderlich waren, veraltet werden. Dies kann dazu führen, dass Arbeitnehmer, die diese Fähigkeiten nicht aktualisieren können, obsolet werden und Schwierigkeiten haben, in neuen Positionen Beschäftigung zu finden</pre></blockquote> </details> <details><summary>jphme/em_german_leo_mistral</summary> <blockquote><pre style="white-space: pre-wrap;">1. Verlust von Arbeitsplätzen: KI kann dazu führen, dass einige Jobs automatisiert werden und Menschen arbeitslos werden. Dies wird als Deskilling bezeichnet. 2. Verringerung der Qualifikationen: Mitarbeiter können weniger qualifiziert sein, wenn sie mit KI-Technologien arbeiten, da diese viele Aufgaben übernehmen, die früher von Menschen erledigt wurden. 3. Verlust des menschlichen Elements: KI kann die menschliche Note in bestimmten Berufen ersetzen, was zu einem Verlust des menschlichen Elements führen kann. 4. Gesundheitliche Probleme: Die Arbeit mit KI kann zu gesundheitlichen Problemen wie Rückenproblemen, Augenbelastung und Stress führen. 5. Veränderung der Arbeitsmärkte: Der Einsatz von KI kann zu einer Verschiebung der Arbeitsmärkte führen, wobei einige Berufe weniger gefragt sind und andere mehr. 6. Ungleichheit: Der Einsatz von KI kann zu einer größeren Einkommensungleichheit führen, da hochqualifizierte Arbeitskräfte besser in der Lage sind, sich an neue Technologien anzupassen. 7. Verlust der Arbeitsplatzsicherheit: Da KI immer besser darin wird, menschliche Arbeit zu ersetzen, können Arbeiter weniger sicher in ihren Jobs sein. 8. Mangelnde Anpassungsfähigkeit: Es kann schwierig sein, sich an neue Technologien anzupassen, insbesondere für ältere Arbeitnehmer. 9. Verringerung der Innovation: Der Einsatz von KI kann dazu führen, dass weniger in Innovationen investiert wird, da Unternehmen sich mehr auf KI verlassen, um ihre Geschäftsmodelle zu verbessern. 10. Verlust der sozialen Interaktion: Der Einsatz von KI kann zu einem Verlust der sozialen Inter</pre></blockquote> </details> <details><summary>DiscoResearch/DiscoLM_German_7b_v1</summary> <blockquote><pre style="white-space: pre-wrap;">Deskilling, auch bekannt als Jobverlust durch Automatisierung, bezieht sich auf den Prozess, bei dem menschliche Arbeitnehmer ihre Arbeitsplätze aufgrund der zunehmenden Automatisierung durch Technologie, insbesondere durch künstliche Intelligenz (KI), verlieren. Hier sind einige Gefahren, die mit Deskilling verbunden sind: 1. Arbeitsplatzverlust: KI und Automatisierung können viele Aufgaben ausführen, die traditionell von Menschen erledigt wurden, wie z.B. Datenanalyse, Kundenservice, Finanzbuchhaltung und sogar einfache manuelle Arbeiten. 
Dies kann zu einem erheblichen Arbeitsplatzverlust führen, da Unternehmen sich auf KI verlassen, um Kosten zu sparen und Effizienz zu steigern. 2. Wirtschaftliche Disparität: Wenn Menschen ihre Arbeitsplätze aufgrund von Deskilling verlieren, kann dies zu einer wirtschaftlichen Disparität führen. Arbeitnehmer, die nicht über die notwendigen Fähigkeiten verfügen, um mit den neuen Technologien Schritt zu halten, können Schwierigkeiten haben, neue Arbeitsplätze zu finden, was zu wirtschaftlichen Ungleichgewichten führen kann. 3. Bildungslücke: Um mit den sich ändernden Anforderungen der Arbeitsplätze Schritt zu halten, müssen Arbeitnehmer möglicherweise neue Fähigkeiten und Kenntnisse erlernen. Allerdings können einige Menschen, insbesondere solche in benachteiligten Gemeinschaften oder Entwicklungsländern, möglicherweise nicht über ausreichenden Zugang zu Bildung oder Schulungsprogrammen verfügen, um sich anzupassen. 4. Ungleichheit der Geschlechter: Frauen sind oft stärker von Deskilling betroffen als Männer, da sie häufiger in Bereichen wie Einzelhandel, Gastronomie und pers</pre></blockquote> </details> ## <b>Prompt:</b> Wie funktioniert binäre Suche? Basic computer science question. Every model should be able to answer this, as this is found on the internet thousands of times. But none of the answers was actually both correct and in proper German. The EnDe answer may be truncated due to the token limit. While the first paragraph is okay, the example is completely messed up. <details open><summary>kno10/ende-chat-0.0.4</summary> <blockquote><pre style="white-space: pre-wrap;">Binäre Suche ist eine effiziente Methode zur Suche nach einem bestimmten Wert in einem sortierten Array. Es funktioniert, indem es das Array in zwei Hälften teilt und dann die Suche auf die Hälfte mit dem Wert konzentriert, der größer oder kleiner als der gesuchte Wert ist. Dieser Prozess wird wiederholt, bis der Wert gefunden wird oder die Hälfte leer ist. Hier ist ein Beispiel, wie binäre Suche funktioniert: Angenommen, wir haben ein Array von Ganzzahlen: [1, 3, 5, 7, 9, 11, 13, 15, 17, 19] und wir suchen nach dem Wert 13. 1. Wir teilen das Array in zwei Hälften: [1, 3, 5, 7, 9, 11, 13, 15, 17, 19] und [13]. 2. Wir vergleichen den Wert, den wir suchen (13), mit dem mittleren Wert der beiden Hälften (15). Da 13 kleiner als 15 ist, wissen wir, dass der gesuchte Wert in der linken Hälfte des Arrays liegt. 3. Wir wiederholen den Vorgang mit der linken Hälfte des Arrays: [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]. 4. Wir teilen die linken Hälfte des Arrays in zwei Hälften: [1, 3, 5, 7, 9, 11, 13] und [15]. 5. Wir vergleichen den Wert, den wir suchen (13), mit dem mittleren Wert der beiden Hälften (11). Da 13 größer als 11 ist, wissen wir, dass der gesuchte Wert in der rechten Hälfte des Arrays liegt. 6. Wir wieder</pre></blockquote> </details> <details><summary>mistralai/Mistral-7B-Instruct-v0.2</summary> <blockquote><pre style="white-space: pre-wrap;">Binäre Suche (englisch Binary Search) ist eine effiziente Algorithmus, um in einem sortierten Array schnell eine bestimmte Suchelement zu finden. In der binären Suche wird das Suchelement im Array durch Abtrennen der Hälfte des zu durchsuchenden Bereichs sucht after. Hier ist ein einfaches Beispiel zur Erklärung des binären Suchens: 1. Suchen Sie das zu findende Element `x` im sortierten Array. 2. 
Bestimmen Sie den mittleren Index `mid` des Arrays: `mid = (low + high) / 2`, wobei `low` der Index des ersten und `high` der Index des letzten Elements des aktuell durchsuchten Bereichs ist. 3. Vergleichen Sie das Suchelement `x` mit dem Element am mittleren Index `mid`: a. Wenn `x` gleich dem Element am Index `mid` ist, dann ist das Suchelement gefunden. b. Wenn `x` kleiner als das Element am Index `mid` ist, dann suchen Sie im linken Teil des Arrays. c. Wenn `x` größer als das Element am Index `mid` ist, dann suchen Sie im rechten Teil des Arrays. 4. Wiederholen Sie den Prozess bis das Suchelement gefunden wird oder bis der Bereich leer ist. Die binäre Suche reduziert die Suche nach einem Element in jedem Schritt um die Hälfte, wodurch sie eine Laufzeit von O(log n) hat, wobei n die Anzahl der Elemente im Array ist. Dies ist eine deutliche Verbesserung gegenüber der linearen Suche, die eine Laufzeit von O(n) hat.</pre></blockquote> </details> <details><summary>LeoLM/leo-mistral-hessianai-7b-chat</summary> <blockquote><pre style="white-space: pre-wrap;">Die binäre Suche ist ein effizienter Algorithmus zum Finden eines bestimmten Werts, des Zielwerts, innerhalb einer sortierten Liste oder eines Arrays. Diese Suchtechnik teilt die Liste wiederholt in zwei Hälften und vergleicht den Zielwert mit dem Mittelpunktwert. So funktioniert es: 1. Die binäre Suche beginnt, indem die Mitte der Liste gefunden wird und der Mittelpunktwert mit dem Zielwert verglichen wird. 2. Wenn der Mittelpunktwert gleich dem Zielwert ist, ist die Suche erfolgreich und der Index des Zielwerts wird zurückgegeben. 3. Wenn der Mittelpunktwert größer als der Zielwert ist, befindet sich der Zielwert in der linken Hälfte der Liste und die binäre Suche wird auf dieser linken Hälfte fortgesetzt. 4. Wenn der Mittelpunktwert kleiner als der Zielwert ist, befindet sich der Zielwert in der rechten Hälfte der Liste und die binäre Suche wird auf dieser rechten Hälfte fortgesetzt. 5. Die binäre Suche wird fortgesetzt, indem die Liste halbiert und der Mittelpunktwert mit dem Zielwert verglichen wird, bis der Zielwert gefunden wird oder die Liste erschöpft ist. Die binäre Suche ist viel schneller als die lineare Suche, da sie die Liste kontinuierlich in zwei Hälften teilt und den Suchbereich kontinuierlich reduziert. Die binäre Suche hat eine Zeitkomplexität von O(log n), was sie zu einer sehr effizienten Suchtechnik macht.<|im_end|> </pre></blockquote> </details> <details><summary>jphme/em_german_leo_mistral</summary> <blockquote><pre style="white-space: pre-wrap;">1. Wir beginnen mit einem leeren Array und einem Zielwert. 2. Wir teilen das Array in zwei Hälften und suchen in der kleineren Hälfte nach dem Zielwert. 3. Wenn wir den Zielwert finden, stoppen wir die Suche. 4. Wenn wir den Zielwert nicht finden, suchen wir in der anderen Hälfte des Arrays weiter. 5. Dieser Prozess wird wiederholt, bis wir entweder den Zielwert finden oder das Array vollständig durchlaufen haben. Die binäre Suche ist ein effizienter Algorithmus zur Suche nach einem bestimmten Element in einem sortierten Array. Sie nutzt die Eigenschaften der binären Darstellung von Zahlen, um die Suche zu beschleunigen.</pre></blockquote> </details> <details><summary>DiscoResearch/DiscoLM_German_7b_v1</summary> <blockquote><pre style="white-space: pre-wrap;">Die binäre Suche ist eine effiziente Methode, um Elemente in einer sortierten Liste zu finden. 
Sie funktioniert, indem sie die Liste in der Mitte teilt und dann rekursiv überprüft, ob das Zielobjekt in der ersten oder zweiten Hälfte liegt. Hier ist eine Schritt-für-Schritt-Erklärung des Prozesses: 1. Beginne mit einer sortierten Liste von Elementen. 2. Wähle ein Zielobjekt aus, nach dem du suchen möchtest. 3. Teile die Liste in der Mitte, indem du den Wert des ersten Elements mit dem des letzten Elements vergleichst. 4. Überprüfe, ob das Zielobjekt in der ersten Hälfte liegt. Wenn ja, wiederhole den Vorgang mit der ersten Hälfte. Wenn nicht, wiederhole den Vorgang mit der zweiten Hälfte. 5. Wenn das Zielobjekt immer noch nicht gefunden wurde, habe keine Liste mit diesem Element. 6. Wenn das Zielobjekt gefunden wurde, habe das Zielobjekt in der Liste. Der Hauptvorteil der binären Suche ist ihre Effizienz. Im Gegensatz zur linearen Suche, die jedes Element in der Liste überprüfen muss, teilt die binäre Suche die Liste in der Mitte und verringert so die Anzahl der Elemente, die überprüft werden müssen, bei jeder Iteration.</pre></blockquote> </details> ## <b>Prompt:</b> Was ist ein R-tree? Another computer science question, on a much less widely covered topic (search trees with rectangles, balanced but not binary). The EnDe answer is very brief but not too far off. Mistral gives a better answer, but in English. LeoLM is making up things with triangles. EM-German makes up authors, and avoids any specifics of the R-tree. With DiscoLM it is even unclear what actual tree it refers to. Again, none of the answers was okay, you need to use RAG for such questions or much larger models. <details open><summary>kno10/ende-chat-0.0.4</summary> <blockquote><pre style="white-space: pre-wrap;">Ein R-Baum ist ein Baumdatenstruktur, die in der Datenbanktheorie und Geoinformatik verwendet wird. Er ist ein Baum, der eine Menge von Objekten in einem Raum darstellt, indem er die Objekte in einem Baum aufteilt, der eine Reihe von Knoten und Blättern enthält. Jeder Knoten enthält einen Bereich, der die Region umfasst, die von den Objekten in seinem Unterbaum abgedeckt wird. Jeder Blattknoten enthält eine Liste von Objekten, die in diesem Bereich liegen.</pre></blockquote> </details> <details><summary>mistralai/Mistral-7B-Instruct-v0.2</summary> <blockquote><pre style="white-space: pre-wrap;">An R-tree is a type of indexing data structure that is used for spatial access methods, particularly for indexing multi-dimensional bounding boxes or minimum bounding rectangles (MBRs) in databases or information systems. It is an extension of the B-tree data structure, which is designed for indexing one-dimensional keys. R-trees are particularly useful for indexing and querying large datasets with spatial data, such as geographic information systems (GIS), computer-aided design (CAD) systems, and multimedia databases. They allow for efficient querying of spatial data, such as finding all objects that intersect with a given query rectangle, or finding the nearest object to a given point. The main difference between an R-tree and a B-tree is that an R-tree stores multi-dimensional keys (i.e., rectangles) instead of one-dimensional keys. Each node in an R-tree represents a rectangle and can have child nodes, each of which represents a rectangle that is a sub-rectangle of the parent node. The tree is constructed such that the rectangles at each level are as small as possible while still covering all the data points in their sub-tree. 
R-trees are widely used in various applications, including database systems, computer graphics, and geographic information systems. They offer good performance for spatial queries and are relatively easy to implement. However, they can be more complex than B-trees due to the multi-dimensional nature of the data they index.</pre></blockquote> </details> <details><summary>LeoLM/leo-mistral-hessianai-7b-chat</summary> <blockquote><pre style="white-space: pre-wrap;">Ein R-Baum ist ein Datenstruktur-Algorithmus, der in der Computergrafik und im geometrischen Modellieren verwendet wird. Es ist ein binärer Suchbaum, bei dem jeder Knoten ein euklidisches Dreieck mit den Koordinaten seiner Kinder speichert. Dadurch wird eine schnelle Suche nach der Schnittmenge von zwei oder mehr Dreiecken ermöglicht. R-Bäume werden häufig in der Raycasting-Algorithmus verwendet, um die von einem Lichtstrahl durch einen Körper berührten Dreiecke zu finden.<|im_end|> </pre></blockquote> </details> <details><summary>jphme/em_german_leo_mistral</summary> <blockquote><pre style="white-space: pre-wrap;">1. Ein R-tree ist ein selbstausgleichender Suchbaum, der speziell für räumliche Abfragen entwickelt wurde. Er ist eine Variante des B-Baums und wurde ursprünglich von R. B. Geissler und L. J. Niederhaus in den 1970er Jahren vorgeschlagen. 2. Der Hauptzweck eines R-trees besteht darin, effiziente Abfragen von Daten zu ermöglichen, die über einen großen räumlichen Bereich verteilt sind. Dies ist besonders nützlich in Geoinformationssystemen (GIS) und anderen Anwendungen, in denen räumliche Daten gespeichert und abgefragt werden müssen. 3. Der R-tree besteht aus einem zentralen Knoten (der Wurzel) und einem oder mehreren Unterbäumen, die jeweils einen Teil des gesamten räumlichen Bereichs repräsentieren. Jeder Unterbaum hat einen eigenen Wurzelknoten, der wiederum mit anderen Wurzelknoten verbunden sein kann. 4. Die Knoten des R-trees speichern nicht nur Daten, sondern auch Informationen über den Bereich, den sie repräsentieren. Dadurch kann der Baum effizienter abgefragt werden. Wenn ein Benutzer eine Abfrage stellt, beginnt er normalerweise an der Wurzel des Baums und folgt den Verweisen zu den Unterbäumen, bis er die gewünschten Daten findet. 5. Der R-tree ist selbstausgleichend, was bedeutet, dass er sich automatisch anpasst, wenn neue Daten hinzugefügt oder entfernt werden. Dies geschieht durch das Hinzufügen oder Entfernen von Unterbäumen, ohne die Gesamtstruktur des Baums zu zerstören. 6. Der R-tree ist eine Variante des B-Baums, der ebenfalls für räumliche Abfragen verwendet werden kann. Der Hauptunters</pre></blockquote> </details> <details><summary>DiscoResearch/DiscoLM_German_7b_v1</summary> <blockquote><pre style="white-space: pre-wrap;">Ein R-tree (auch bekannt als ein reziprokeres Indexstruktur oder ein reziprokeres Suchbaum) ist ein spezielles Baumdatenstruktur, das für die Verwaltung einer Sammlung von Schlüsseln und zugehörigen Werten verwendet wird. Es ist eine Art von Suchbaum, der für die Implementierung einer Suchoperation unter Verwendung eines Suchalgorithmus wie einem in-order-Traversal verwendet wird. Der R-tree ist so konzipiert, dass er eine schnelle Suche und eine effiziente Navigation durch seine Knoten ermöglicht. Ein R-tree besteht aus Knoten, die Schlüssel und Verweise auf ihre zugehörigen Werte enthalten. Jeder Knoten enthält einen Schlüssel, einen Verweis auf einen seiner Nachfolgeknoten (links oder rechts) und einen Verweis auf einen seiner Vorgängerknoten (oben oder unten). 
Diese Verweise ermöglichen es, durch den Baum zu navigieren, indem man Schlüssel vergleicht und entscheidet, ob man nach einem Schlüssel kleiner oder größer als den aktuellen Schlüssel suchen soll. Ein R-tree wird häufig in Anwendungen verwendet, die schnelle Suchoperationen erfordern, wie z.B. Datenbanken, Suchmaschinen und in-memory-Datenstrukturen.</pre></blockquote> </details> ## <b>Prompt:</b> Was ist größer, ein Hühnerei oder ein Kuhei? Trick question that fools many LLMs. Mistral probably was trained on this, but answers completely inconsistently in English. EM-German even things a cow's egg is smaller than a chicken's egg. More advanced LLMs sometimes try to argue with the amnion <details open><summary>kno10/ende-chat-0.0.4</summary> <blockquote><pre style="white-space: pre-wrap;">Ein Kuhei ist größer als ein Hühnerei.</pre></blockquote> </details> <details><summary>mistralai/Mistral-7B-Instruct-v0.2</summary> <blockquote><pre style="white-space: pre-wrap;">I'm sorry for any confusion, but your question seems to contain some errors. It's not clear what you mean by "ein Kuhei." Could you please check the spelling and provide some context so I can help answer your question? Assuming you meant to ask about the size comparison between an egg (Hühnerei) and a cow's egg (which doesn't exist, as cows don't lay eggs), then the answer is that a chicken egg is much smaller than a cow's egg, as cows don't lay eggs at all. Cow's eggs are produced by cows, while chicken eggs are produced by chickens.</pre></blockquote> </details> <details><summary>LeoLM/leo-mistral-hessianai-7b-chat</summary> <blockquote><pre style="white-space: pre-wrap;">Ein Kuhei ist größer als ein Hühnerei.<|im_end|> </pre></blockquote> </details> <details><summary>jphme/em_german_leo_mistral</summary> <blockquote><pre style="white-space: pre-wrap;">1. Ein Hühnerei ist größer als ein Kuhei.</pre></blockquote> </details> <details><summary>DiscoResearch/DiscoLM_German_7b_v1</summary> <blockquote><pre style="white-space: pre-wrap;">Ein Kuhei ist größer als ein Hühnerei.</pre></blockquote> </details> ## <b>Prompt:</b> Erzähle mir einen Harry-Potter-Witz! Even GPT-4 fails miserably at coming up with *new* jokes. It loves repeating certain well-known jokes, though. Not sure what Mistral is trying to do, explain the joke or translate it. <details open><summary>kno10/ende-chat-0.0.4</summary> <blockquote><pre style="white-space: pre-wrap;">Harry: "Hallo, Ron. Wie geht es dir?" Ron: "Ich bin gut, Harry. Aber ich habe ein Problem. Meine Mutter hat mir gesagt, dass ich nicht mehr zu Hogwarts gehen darf." Harry: "Was? Warum nicht?" Ron: "Sie sagte, dass ich zu alt bin, um noch Schule zu besuchen." Harry: "Ron, du bist nur 17 Jahre alt. Das ist noch nicht alt genug, um zu alt zu sein." Ron: "Ich weiß, aber meine Mutter ist immer so streng. Ich weiß nicht, was ich tun soll." Harry: "Nun, ich habe eine Idee. Wir können uns verkleiden und sie glauben lassen, dass du noch immer Schüler bist." Ron: "Wirklich? Das ist eine großartige Idee, Harry. Danke!" Harry: "Kein Problem, Ron. Ich bin immer bereit, dir zu helfen."</pre></blockquote> </details> <details><summary>mistralai/Mistral-7B-Instruct-v0.2</summary> <blockquote><pre style="white-space: pre-wrap;">Hier ist ein Harry-Potter-Witz für Sie: Warum flog Professor Snape mit seinem Motorrad in die Schule? Weil er vergessen hatte, mit der Busse zu fahren! (Warum fuhr Professor Snape mit seinem Motorrad in die Schule? Weil er vergessen hatte, mit dem Bus zu fahren!) 
Ich hoffe, dass Sie es gefunden haben, was Sie suchen! (I hope you found what you were looking for!)</pre></blockquote> </details> <details><summary>LeoLM/leo-mistral-hessianai-7b-chat</summary> <blockquote><pre style="white-space: pre-wrap;">Warum konnte Harry keine Schokoladenfrosch-Pantomime machen? Weil er kein ZAUBERSTAB-be hat!<|im_end|> </pre></blockquote> </details> <details><summary>jphme/em_german_leo_mistral</summary> <blockquote><pre style="white-space: pre-wrap;">1. Warum wurde Harry Potter Lehrer für Verteidigung gegen die dunklen Künste? Weil er immer einen Schritt voraus ist! 2. Was sagte Harry Potter zu seinem Spiegelbild im See? "Du siehst aus wie ich, aber du bist nicht ich." 3. Warum wurde Harry Potter ein berühmter Zauberer? Er hat einfach den richtigen Zauberer getroffen! 4. Wie nennt man Harry Potter, wenn er einen neuen Zauber ausprobiert? Testzauberer! 5. Warum war Harry Potter so gut darin, Quidditch zu spielen? Er hatte einfach den richtigen Schnatz! 6. Was sagte Harry Potter zu seinem Besen, bevor er auf einem Turnier antrat? "Ich werde dich nicht brechen, ich werde dich fliegen lassen!" 7. Warum wurde Harry Potter ein berühmter Zauberer? Er hat einfach den richtigen Zauberer getroffen! 8. Was sagte Harry Potter zu seinem Spiegelbild im See? "Du siehst aus wie ich, aber du bist nicht ich." 9. Warum wurde Harry Potter Lehrer für Verteidigung gegen die dunklen Künste? Weil er immer einen Schritt voraus ist! 10. Wie nennt man Harry Potter, wenn er einen neuen Zauber ausprobiert? Testzauberer! Diese Witze sind eine lustige Art, die Charaktere und Ereignisse aus den Harry-Potter-Büchern zu parodieren.</pre></blockquote> </details> <details><summary>DiscoResearch/DiscoLM_German_7b_v1</summary> <blockquote><pre style="white-space: pre-wrap;">Hier ist einer: Warum konnten die Hexen in den Harry-Potter-Büchern nie richtig fliegen? Weil sie immer einen Besen im Schlepp hatten!</pre></blockquote> </details> ## <b>Prompt:</b> Explain k-means clustering. Again an algorithm very frequently presented on the internet, albeit many of these web sites have some inaccuracies None of the answers includes the – important – relation of k-means to least squares optimization. The main purpose of this question was to see if the model retained the capability to answer in English. LeoLM and EM-German would answer this in German even when asked in English. <details open><summary>kno10/ende-chat-0.0.4</summary> <blockquote><pre style="white-space: pre-wrap;">K-means clustering is a popular unsupervised learning algorithm used for clustering data into groups or clusters. It works by iteratively assigning each data point to the nearest cluster center, and then recalculating the cluster centers based on the new assignments. The algorithm starts with an initial set of cluster centers, which are typically randomly selected, and then iteratively refines the cluster centers until the algorithm converges. The algorithm works by first selecting the number of clusters, k, and then randomly selecting k initial cluster centers. The algorithm then iteratively assigns each data point to the nearest cluster center, and then recalculates the cluster centers based on the new assignments. The algorithm continues this process until the cluster centers no longer change, indicating that the algorithm has converged. The k-means algorithm is a popular choice for clustering because it is simple and efficient, and can be used with a wide range of data types. 
However, it can be sensitive to the initial cluster centers selected, and can also be sensitive to outliers in the data. Additionally, the algorithm can struggle with high-dimensional data, as the number of clusters can quickly become very large.</pre></blockquote> </details> <details><summary>mistralai/Mistral-7B-Instruct-v0.2</summary> <blockquote><pre style="white-space: pre-wrap;">K-means clustering is a popular unsupervised machine learning algorithm used for grouping or clustering data points based on their similarities. The algorithm aims to find the optimal number of clusters (represented by the variable 'k') in a dataset, and assign each data point to the cluster with the nearest mean value. Here's a step-by-step explanation of the K-means clustering algorithm: 1. Initialization: Choose 'k' random data points from the dataset as the initial centroids for each cluster. These centroids are represented by the mean value of the data points assigned to that cluster. 2. Assignment: For each data point in the dataset, calculate the Euclidean distance between that point and each centroid. Assign the data point to the cluster with the nearest centroid. 3. Update centroids: Once all data points have been assigned to their respective clusters, calculate the new centroid for each cluster by taking the mean value of all the data points assigned to that cluster. 4. Repeat: Steps 2 and 3 are repeated until the centroids no longer change significantly or a maximum number of iterations is reached. The result of the K-means clustering algorithm is a set of 'k' clusters, each with a centroid representing the mean value of the data points in that cluster. The algorithm is particularly useful for finding patterns or structures in large datasets, and can be applied to various domains such as image segmentation, customer segmentation, and document clustering.</pre></blockquote> </details> <details><summary>LeoLM/leo-mistral-hessianai-7b-chat</summary> <blockquote><pre style="white-space: pre-wrap;">K-means Clustering ist eine Technik zur Gruppierung von Datenpunkten in K Gruppen, wobei jeder Datenpunkt zur Gruppe mit dem nächstgelegenen Mittelwert gehört. Es ist eine Art des unüberwachten Lernens, was bedeutet, dass es keine beschrifteten Trainingsdaten erfordert, sondern stattdessen auf der Struktur der Daten basiert, um sinnvolle Untergruppen zu finden. Der Algorithmus beginnt mit der zufälligen Auswahl von K Punkten als Zentroide, die als Anfangswerte für die Cluster dienen. Dann wird jeder Datenpunkt dem nächstgelegenen Zentroiden zugeordnet, und die Zentroide werden auf der Grundlage des Durchschnitts der den einzelnen Clustern zugeordneten Punkte neu berechnet. Dieser Prozess wird wiederholt, bis sich die Zentroide nicht mehr ändern oder eine maximale Anzahl von Iterationen erreicht ist. Die resultierenden K Cluster sind Gruppen von Datenpunkten, die ähnlich genug sind, um dem gleichen Cluster zugeordnet zu werden, aber unterschiedlich genug sind, um getrennt zu bleiben. Der Abstand zwischen den Zentroiden und die Anzahl der Iterationen, die benötigt werden, um zu einem stabilen Ergebnis zu konvergieren, hängen von den Eigenschaften der Daten ab. K-means-Clustering ist eine beliebte Methode zur explorativen Analyse von hochdimensionalen Daten und zur Bildsegmentierung, unter anderem aufgrund ihrer Einfachheit und Effizienz. 
Es gibt jedoch auch andere Clustering-Algorithmen, die je nach den spezifischen Anforderungen und Eigenschaften der Daten möglicherweise besser geeignet sind.<|im_end|> </pre></blockquote> </details> <details><summary>jphme/em_german_leo_mistral</summary> <blockquote><pre style="white-space: pre-wrap;">1. K-means Clustering ist ein Clustering-Algorithmus, der zur Analyse von Daten verwendet wird, bei der es sich um eine Gruppe von Objekten handelt, die sich in mehrere Gruppen einteilen lassen. 2. Der Algorithmus basiert auf dem Konzept der Entfernung zwischen den Datenpunkten. Er teilt die Daten in k Gruppen auf, wobei k eine vordefinierte Anzahl von Gruppen ist. 3. Jeder Datenpunkt wird einem Zentrum (oder Mittelpunkt) zugewiesen, das als Repräsentant für seine Gruppe dient. Diese Zentren werden als Cluster-Mittelpunkte bezeichnet. 4. Die Entfernung zwischen jedem Datenpunkt und seinem Cluster-Mittelpunkt wird berechnet. Der Datenpunkt wird dem Cluster zugewiesen, dessen Entfernung zum Mittelpunkt am kleinsten ist. 5. Der Algorithmus iteriert, bis sich die Zuweisung der Datenpunkte zu den Clustern nicht mehr ändert. Dies bedeutet, dass die Datenpunkte ihren jeweiligen Cluster-Mittelpunkten am nächsten sind. 6. Der Vorteil des k-means Clustering liegt in seiner Einfachheit und Geschwindigkeit. Er kann jedoch empfindlich auf die Wahl der Anzahl der Cluster und der Startpunkte reagieren.</pre></blockquote> </details> <details><summary>DiscoResearch/DiscoLM_German_7b_v1</summary> <blockquote><pre style="white-space: pre-wrap;">K-means clustering is a popular algorithm used for unsupervised machine learning, specifically for grouping data points into a set of distinct clusters based on their similarity or distance from each other. It's a type of clustering algorithm, which is a method used to identify patterns or groupings in data. The k-means algorithm follows these steps: 1. Define the number of clusters (k) you want to find in the data. This is usually an integer value. 2. Initialize k random centers (or seeds) for the potential clusters. These centers are usually chosen at random from the data set. 3. Assign each data point to the center that is closest to it (or to the centers of multiple centers if the data point is within the radius of multiple centers). 4. Calculate the distance (or Euclidean distance) between each data point and its assigned center. 5. Update the centers of each cluster by taking the average of all the data points in that cluster. 6. Repeat steps 3-5 for a predetermined number of iterations or until the centers stop changing. The output of the k-means algorithm is a set of k clusters, with each cluster being a subset of the original data points. The size of each cluster can vary depending on the data and the value of k. The k-means algorithm is widely used in various fields like machine learning, data analysis, and computer vision. It's particularly useful when the data has a natural grouping, and you want to identify these groups without explicitly being given the grouping information. Thus the answer is: True</pre></blockquote> </details>
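# Inference Example

As a convenience, here is a minimal inference sketch with Hugging Face Transformers that reproduces the deterministic (greedy, `do_sample=False`) decoding used for the examples above. The exact prompt format used during instruction tuning is not documented on this card, so the plain question-style prompt below is an assumption; applying a proper chat template may give better results.

```python
# Sketch: greedy decoding with kno10/ende-chat-0.0.4, matching the
# do_sample=False setting used for the examples above.
# The bare question prompt (no chat template) is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kno10/ende-chat-0.0.4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Wer leitet die Regierung von Deutschland?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Strip the prompt tokens and print only the continuation.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```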
abmorton/singlefaucet
abmorton
"2024-06-28T22:30:34Z"
1,026
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2024-06-28T22:26:31Z"
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### singlefaucet Dreambooth model trained by abmorton with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook

Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)

Sample pictures of this concept:
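A minimal inference sketch, assuming the checkpoint loads as a standard `StableDiffusionPipeline` (as the repository tags indicate); the prompt and the use of `singlefaucet` as the Dreambooth concept token are illustrative assumptions.

```python
# minimal sketch, assuming the repo exposes a standard Stable Diffusion checkpoint
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "abmorton/singlefaucet",       # this model repository
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# "singlefaucet" is assumed to be the trained concept token
image = pipe("a photo of singlefaucet in a modern bathroom", num_inference_steps=30).images[0]
image.save("singlefaucet.png")
```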
daekeun-ml/ko-trocr-base-nsmc-news-chatbot
daekeun-ml
"2023-10-05T23:59:58Z"
1,025
10
transformers
[ "transformers", "pytorch", "safetensors", "vision-encoder-decoder", "trocr", "image-to-text", "ko", "license:mit", "endpoints_compatible", "region:us" ]
image-to-text
"2022-11-22T00:05:16Z"
--- language: - ko tags: - trocr - image-to-text license: mit metrics: - wer - cer widget: - src: https://raw.githubusercontent.com/aws-samples/sm-kornlp/main/trocr/sample_imgs/random_2.jpg example_title: 랜덤 문장 1 - src: https://raw.githubusercontent.com/aws-samples/sm-kornlp/main/trocr/sample_imgs/random_6.jpg example_title: 랜덤 문장 2 - src: https://raw.githubusercontent.com/aws-samples/sm-kornlp/main/trocr/sample_imgs/chatbot_3.jpg example_title: 챗봇 1 - src: https://raw.githubusercontent.com/aws-samples/sm-kornlp/main/trocr/sample_imgs/chatbot_5.jpg example_title: 챗봇 2 - src: https://raw.githubusercontent.com/aws-samples/sm-kornlp/main/trocr/sample_imgs/news_1.jpg example_title: 뉴스 1 - src: https://raw.githubusercontent.com/aws-samples/sm-kornlp/main/trocr/sample_imgs/news_3.jpg example_title: 뉴스 2 - src: https://raw.githubusercontent.com/aws-samples/sm-kornlp/main/trocr/sample_imgs/nsmc_1.jpg example_title: 영화 리뷰 1 - src: https://raw.githubusercontent.com/aws-samples/sm-kornlp/main/trocr/sample_imgs/nsmc_2.jpg example_title: 영화 리뷰 2 --- # TrOCR for Korean Language (PoC) ## Overview TrOCR has not yet released a multilingual model including Korean, so we trained a Korean model for PoC purpose. Based on this model, it is recommended to collect more data to additionally train the 1st stage or perform fine-tuning as the 2nd stage. ## Collecting data ### Text data We created training data by processing three types of datasets. - News summarization dataset: https://huggingface.co/datasets/daekeun-ml/naver-news-summarization-ko - Naver Movie Sentiment Classification: https://github.com/e9t/nsmc - Chatbot dataset: https://github.com/songys/Chatbot_data For efficient data collection, each sentence was separated by a sentence separator library (Kiwi Python wrapper; https://github.com/bab2min/kiwipiepy), and as a result, 637,401 samples were collected. ### Image Data Image data was generated with TextRecognitionDataGenerator (https://github.com/Belval/TextRecognitionDataGenerator) introduced in the TrOCR paper. Below is a code snippet for generating images. ```shell python3 ./trdg/run.py -i ocr_dataset_poc.txt -w 5 -t {num_cores} -f 64 -l ko -c {num_samples} -na 2 --output_dir {dataset_dir} ``` ## Training ### Base model The encoder model used `facebook/deit-base-distilled-patch16-384` and the decoder model used `klue/roberta-base`. It is easier than training by starting weights from `microsoft/trocr-base-stage1`. ### Parameters We used heuristic parameters without separate hyperparameter tuning. - learning_rate = 4e-5 - epochs = 25 - fp16 = True - max_length = 64 ## Usage ### inference.py ```python from transformers import TrOCRProcessor, VisionEncoderDecoderModel, AutoTokenizer import requests from io import BytesIO from PIL import Image processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten") model = VisionEncoderDecoderModel.from_pretrained("daekeun-ml/ko-trocr-base-nsmc-news-chatbot") tokenizer = AutoTokenizer.from_pretrained("daekeun-ml/ko-trocr-base-nsmc-news-chatbot") url = "https://raw.githubusercontent.com/aws-samples/sm-kornlp/main/trocr/sample_imgs/news_1.jpg" response = requests.get(url) img = Image.open(BytesIO(response.content)) pixel_values = processor(img, return_tensors="pt").pixel_values generated_ids = model.generate(pixel_values, max_length=64) generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(generated_text) ``` All the code required for data collection and model training has been published on the author's Github. 
- https://github.com/daekeun-ml/sm-kornlp-usecases/tree/main/trocr
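The encoder/decoder pairing described in the Training section can be assembled with `VisionEncoderDecoderModel.from_encoder_decoder_pretrained`; the sketch below is an illustrative setup only (the special-token wiring shown is an assumption, not the exact training script linked above).

```python
# illustrative sketch of the encoder/decoder pairing described above;
# not the exact training script used for this model
from transformers import AutoImageProcessor, AutoTokenizer, VisionEncoderDecoderModel

encoder_id = "facebook/deit-base-distilled-patch16-384"
decoder_id = "klue/roberta-base"

model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id)
tokenizer = AutoTokenizer.from_pretrained(decoder_id)
image_processor = AutoImageProcessor.from_pretrained(encoder_id)

# wire up the special tokens the seq2seq generation loop needs (assumed convention)
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.eos_token_id = tokenizer.sep_token_id
model.config.vocab_size = model.config.decoder.vocab_size
```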
RESMPDEV/Gemma-Wukong-2b
RESMPDEV
"2024-06-29T17:52:18Z"
1,025
5
transformers
[ "transformers", "pytorch", "safetensors", "gemma", "text-generation", "conversational", "arxiv:2312.11805", "arxiv:2009.03300", "arxiv:1905.07830", "arxiv:1911.11641", "arxiv:1904.09728", "arxiv:1905.10044", "arxiv:1907.10641", "arxiv:1811.00937", "arxiv:1809.02789", "arxiv:1911.01547", "arxiv:1705.03551", "arxiv:2107.03374", "arxiv:2108.07732", "arxiv:2110.14168", "arxiv:2304.06364", "arxiv:2206.04615", "arxiv:1804.06876", "arxiv:2110.08193", "arxiv:2009.11462", "arxiv:2101.11718", "arxiv:1804.09301", "arxiv:2109.07958", "arxiv:2203.09509", "license:other", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-23T19:55:14Z"
--- license: other library_name: transformers license_name: gemma-terms-of-use license_link: https://ai.google.dev/gemma/terms model-index: - name: Gemma-Wukong-2b results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 45.9 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RESMPDEV/Gemma-Wukong-2b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 66.83 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RESMPDEV/Gemma-Wukong-2b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 38.01 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RESMPDEV/Gemma-Wukong-2b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 44.29 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RESMPDEV/Gemma-Wukong-2b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 62.98 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RESMPDEV/Gemma-Wukong-2b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 9.86 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RESMPDEV/Gemma-Wukong-2b name: Open LLM Leaderboard --- ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/655dc641accde1bbc8b41aec/xOe1Nb3S9Nb53us7_Ja3s.jpeg) # Gemma-Wukong-2b Gemma-Wukong-2b is a dealigned chat finetune of the original Gemma 2b developed by the Google Deepmind and various other teams This model was trained on the teknium OpenHeremes-2.5 dataset and the excellent a selection of dataset's from Cognitive Computations This model was trained for 3 epochs over 4 3090's. # Original Model Card Below # Gemma Model Card **Model Page**: [Gemma](https://ai.google.dev/gemma/docs) This model card corresponds to the 2B base version of the Gemma model. You can also visit the model card of the [7B base model](https://huggingface.co/google/gemma-7b), [7B instruct model](https://huggingface.co/google/gemma-7b-it), and [2B instruct model](https://huggingface.co/google/gemma-2b-it). 
**Resources and Technical Documentation**: * [Responsible Generative AI Toolkit](https://ai.google.dev/responsible) * [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma) * [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335?version=gemma-2b-gg-hf) **Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent) **Authors**: Google ## Model Information Summary description and brief definition of inputs and outputs. ### Description Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone. ### Usage Below we share some code snippets on how to get quickly started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your usecase. #### Fine-tuning the model You can find fine-tuning scripts and notebook under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples) of [`google/gemma-7b`](https://huggingface.co/google/gemma-7b) repository. To adapt it to this model, simply change the model-id to `google/gemma-2b`. In that repository, we provide: * A script to perform Supervised Fine-Tuning (SFT) on UltraChat dataset using QLoRA * A script to perform SFT using FSDP on TPU devices * A notebook that you can run on a free-tier Google Colab instance to perform SFT on English quotes dataset #### Running the model on a CPU ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b") input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Running the model on a single / multi GPU ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto") input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Running the model on a GPU using different precisions * _Using `torch.float16`_ ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto", torch_dtype=torch.float16) input_text = "Write me a poem about Machine Learning." 
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` * _Using `torch.bfloat16`_ ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto", torch_dtype=torch.bfloat16) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Quantized Versions through `bitsandbytes` * _Using 8-bit precision (int8)_ ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_8bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", quantization_config=quantization_config) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` * _Using 4-bit precision_ ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_4bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", quantization_config=quantization_config) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Other optimizations * _Flash Attention 2_ First make sure to install `flash-attn` in your environment `pip install flash-attn` ```diff model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.float16, + attn_implementation="flash_attention_2" ).to(0) ``` ### Inputs and outputs * **Input:** Text string, such as a question, a prompt, or a document to be summarized. * **Output:** Generated English-language text in response to the input, such as an answer to a question, or a summary of a document. ## Model Data Data used for model training and how the data was processed. ### Training Dataset These models were trained on a dataset of text data that includes a wide variety of sources, totaling 6 trillion tokens. Here are the key components: * Web Documents: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. Primarily English-language content. * Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code or understand code-related questions. * Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and to address mathematical queries. The combination of these diverse data sources is crucial for training a powerful language model that can handle a wide variety of different tasks and text formats. 
### Data Preprocessing Here are the key data cleaning and filtering methods applied to the training data: * CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content * Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets. * Additional methods: Filtering based on content quality and safely in line with [our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11). ## Implementation Information Details about the model internals. ### Hardware Gemma was trained using the latest generation of [Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e). Training large language models requires significant computational power. TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain: * Performance: TPUs are specifically designed to handle the massive computations involved in training LLMs. They can speed up training considerably compared to CPUs. * Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality. * Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing. * Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training. * These advantages are aligned with [Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/). ### Software Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ml-pathways). JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. This is specially suitable for [foundation models](https://ai.google/discover/foundation-models/), including large language models like these ones. Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow." ## Evaluation Model evaluation metrics and results. 
### Benchmark Results These models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation: | Benchmark | Metric | 2B Params | 7B Params | | ------------------------------ | ------------- | ----------- | --------- | | [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 | | [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot |71.4 | 81.2 | | [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 | | [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 59.7 | 51.8 | | [BooIQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 | | [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 | | [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 | | [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 | | [ARC-e](https://arxiv.org/abs/1911.01547) | | 73.2 | 81.5 | | [ARC-c](https://arxiv.org/abs/1911.01547) | | 42.1 | 53.2 | | [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 | | [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | - | 23 | | [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 | | [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 | | [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 | | [MATH](https://arxiv.org/abs/2108.07732) | 4-shot | 11.8 | 24.3 | | [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 | | [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 | | ------------------------------ | ------------- | ----------- | --------- | | **Average** | | **54.0** | **56.4** | ## Ethics and Safety Ethics and safety evaluation approach and results. ### Evaluation Approach Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including: * Text-to-Text Content Safety: Human evaluation on prompts covering safety policies including child sexual abuse and exploitation, harassment, violence and gore, and hate speech. * Text-to-Text Representational Harms: Benchmark against relevant academic datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2). * Memorization: Automated evaluation of memorization of training data, including the risk of personally identifiable information exposure. * Large-scale harm: Tests for "dangerous capabilities," such as chemical, biological, radiological, and nuclear (CBRN) risks. ### Evaluation Results The results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child safety, content safety, representational harms, memorization, large-scale harms. On top of robust internal evaluations, the results of well known safety benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA are shown here. 
| Benchmark | Metric | 2B Params | 7B Params | | ------------------------------ | ------------- | ----------- | --------- | | [RealToxicity](https://arxiv.org/abs/2009.11462) | average | 6.86 | 7.90 | | [BOLD](https://arxiv.org/abs/2101.11718) | | 45.57 | 49.08 | | [CrowS-Pairs](https://aclanthology.org/2020.emnlp-main.154/) | top-1 | 45.82 | 51.33 | | [BBQ Ambig](https://arxiv.org/abs/2110.08193v2) | 1-shot, top-1 | 62.58 | 92.54 | | [BBQ Disambig](https://arxiv.org/abs/2110.08193v2) | top-1 | 54.62 | 71.99 | | [Winogender](https://arxiv.org/abs/1804.09301) | top-1 | 51.25 | 54.17 | | [TruthfulQA](https://arxiv.org/abs/2109.07958) | | 44.84 | 31.81 | | [Winobias 1_2](https://arxiv.org/abs/1804.06876) | | 56.12 | 59.09 | | [Winobias 2_2](https://arxiv.org/abs/1804.06876) | | 91.10 | 92.23 | | [Toxigen](https://arxiv.org/abs/2203.09509) | | 29.77 | 39.59 | | ------------------------------ | ------------- | ----------- | --------- | ## Usage and Limitations These models have certain limitations that users should be aware of. ### Intended Usage Open Large Language Models (LLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. * Content Creation and Communication * Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts. * Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications. * Text Summarization: Generate concise summaries of a text corpus, research papers, or reports. * Research and Education * Natural Language Processing (NLP) Research: These models can serve as a foundation for researchers to experiment with NLP techniques, develop algorithms, and contribute to the advancement of the field. * Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice. * Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics. ### Limitations * Training Data * The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses. * The scope of the training dataset determines the subject areas the model can handle effectively. * Context and Task Complexity * LLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. * A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point). * Language Ambiguity and Nuance * Natural language is inherently complex. LLMs might struggle to grasp subtle nuances, sarcasm, or figurative language. * Factual Accuracy * LLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. * Common Sense * LLMs rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations. 
### Ethical Considerations and Risks The development of large language models (LLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: * Bias and Fairness * LLMs trained on large-scale, real-world text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card. * Misinformation and Misuse * LLMs can be misused to generate text that is false, misleading, or harmful. * Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible). * Transparency and Accountability: * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. * A responsibly developed open model offers the opportunity to share innovation by making LLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: * Perpetuation of biases: It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. * Generation of harmful content: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. * Misuse for malicious purposes: Technical limitations and developer and end-user education can help mitigate against malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy). * Privacy violations: Models were trained on data filtered for removal of PII (Personally Identifiable Information). Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. ### Benefits At the time of release, this family of models provides high-performance open large language model implementations designed from the ground up for Responsible AI development compared to similarly sized models. Using the benchmark evaluation metrics described in this document, these models have shown to provide superior performance to other, comparably-sized open model alternatives. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_RESMPDEV__Gemma-Wukong-2b) | Metric |Value| |---------------------------------|----:| |Avg. |44.64| |AI2 Reasoning Challenge (25-Shot)|45.90| |HellaSwag (10-Shot) |66.83| |MMLU (5-Shot) |38.01| |TruthfulQA (0-shot) |44.29| |Winogrande (5-shot) |62.98| |GSM8k (5-shot) | 9.86|
PowerInfer/Bamboo-DPO-v0.1-gguf
PowerInfer
"2024-03-28T08:09:48Z"
1,025
7
transformers
[ "transformers", "gguf", "bamboo", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-03-26T07:23:15Z"
---
license: apache-2.0
language:
- en
---

# Bamboo-DPO-v0.1-PowerInfer-GGUF

- Original model: [PowerInfer/Bamboo-DPO-v0_1](https://huggingface.co/PowerInfer/Bamboo-DPO-v0_1)
- Converted & distributed by: [PowerInfer](https://huggingface.co/PowerInfer)

### Citation

Please kindly cite using the following BibTeX:

```bibtex
@misc{bamboo,
      title={Bamboo: Harmonizing Sparsity and Performance in Large Language Models},
      author={Yixin Song, Haotong Xie, Zeyu Mi, Li Ma, Haibo Chen},
      year={2024}
}
```
ibm-granite/granite-34b-code-base
ibm-granite
"2024-05-10T06:16:01Z"
1,025
17
transformers
[ "transformers", "safetensors", "gpt_bigcode", "text-generation", "code", "granite", "dataset:codeparrot/github-code-clean", "dataset:bigcode/starcoderdata", "dataset:open-web-math/open-web-math", "dataset:math-ai/StackMathQA", "arxiv:2405.04324", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-21T07:51:00Z"
--- pipeline_tag: text-generation inference: true license: apache-2.0 datasets: - codeparrot/github-code-clean - bigcode/starcoderdata # - Stackexchange # - CommonCrawl - open-web-math/open-web-math - math-ai/StackMathQA # - Arxiv # - Wikipedia # - conceptofmind/FLAN_2022 # Original link is broken, we used IBM's filtered version | Phase 2 metrics: - code_eval library_name: transformers tags: - code - granite model-index: - name: granite-34b-code-base results: - task: type: text-generation dataset: type: mbpp name: MBPP metrics: - name: pass@1 type: pass@1 value: 47.2 veriefied: false - task: type: text-generation dataset: type: evalplus/mbppplus name: MBPP+ metrics: - name: pass@1 type: pass@1 value: 53.1 veriefied: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesis(Python) metrics: - name: pass@1 type: pass@1 value: 48.2 veriefied: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesis(JavaScript) metrics: - name: pass@1 type: pass@1 value: 54.9 veriefied: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesis(Java) metrics: - name: pass@1 type: pass@1 value: 61.6 veriefied: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesis(Go) metrics: - name: pass@1 type: pass@1 value: 40.2 veriefied: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesis(C++) metrics: - name: pass@1 type: pass@1 value: 50.0 veriefied: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesis(Rust) metrics: - name: pass@1 type: pass@1 value: 39.6 veriefied: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain(Python) metrics: - name: pass@1 type: pass@1 value: 42.7 veriefied: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain(JavaScript) metrics: - name: pass@1 type: pass@1 value: 26.2 veriefied: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain(Java) metrics: - name: pass@1 type: pass@1 value: 47.0 veriefied: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain(Go) metrics: - name: pass@1 type: pass@1 value: 26.8 veriefied: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain(C++) metrics: - name: pass@1 type: pass@1 value: 36.6 veriefied: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain(Rust) metrics: - name: pass@1 type: pass@1 value: 25.0 veriefied: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFix(Python) metrics: - name: pass@1 type: pass@1 value: 20.1 veriefied: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFix(JavaScript) metrics: - name: pass@1 type: pass@1 value: 30.5 veriefied: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFix(Java) metrics: - name: pass@1 type: pass@1 value: 40.9 veriefied: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFix(Go) metrics: - name: pass@1 type: pass@1 value: 34.1 veriefied: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFix(C++) metrics: - name: pass@1 type: pass@1 value: 39.0 veriefied: false - task: type: text-generation dataset: type: 
bigcode/humanevalpack name: HumanEvalFix(Rust) metrics: - name: pass@1 type: pass@1 value: 12.2 veriefied: false --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62cd5057674cdb524450093d/1hzxoPwqkBJXshKVVe6_9.png) # Granite-34B-Code-Base ## Model Summary **Granite-34B-Code-Base** is a decoder-only code model designed for code generative tasks (e.g., code generation, code explanation, code fixing, etc.). It is trained using depth upscaling from *Granite-20B-Code-Base* model with a two-phase training strategy. In phase 1, our model is trained on 3 trillion tokens (1.4 trillion tokens after depth up scaling). In phase 2, our model is trained on 500 billion tokens with a carefully designed mixture of high-quality data from code and natural language domains to improve the models’ ability to reason and follow instructions. - **Developers:** IBM Research - **GitHub Repository:** [ibm-granite/granite-code-models](https://github.com/ibm-granite/granite-code-models) - **Paper:** [Granite Code Models: A Family of Open Foundation Models for Code Intelligence](https://arxiv.org/abs/2405.04324) - **Release Date**: May 6th, 2024 - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0). ## Usage ### Intended use Prominent enterprise use cases of LLMs in software engineering productivity include code generation, code explanation, code fixing, generating unit tests, generating documentation, addressing technical debt issues, vulnerability detection, code translation, and more. All Granite Code Base models, including the **34B parameter model**, are able to handle these tasks as they were trained on a large amount of code data from 116 programming languages. ### Generation This is a simple example of how to use **Granite-34B-Code-Base** model. ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # or "cpu" model_path = "ibm-granite/granite-34b-code-base" tokenizer = AutoTokenizer.from_pretrained(model_path) # drop device_map if running on CPU model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device) model.eval() # change input text as desired input_text = "def generate():" # tokenize the text input_tokens = tokenizer(input_text, return_tensors="pt") # transfer tokenized inputs to the device for i in input_tokens: input_tokens[i] = input_tokens[i].to(device) # generate output tokens output = model.generate(**input_tokens) # decode output tokens into text output = tokenizer.batch_decode(output) # loop over the batch to print, in this example the batch size is 1 for i in output: print(i) ``` ## Training Data - **Data Collection and Filtering:** Pretraining code data is sourced from a combination of publicly available datasets (e.g., [GitHub Code Clean](https://huggingface.co/datasets/codeparrot/github-code-clean), [Starcoder data](https://huggingface.co/datasets/bigcode/starcoderdata)), and additional public code repositories and issues from GitHub. We filter raw data to retain a list of 116 programming languages. After language filtering, we also filter out low-quality code. - **Exact and Fuzzy Deduplication:** We adopt an aggressive deduplication strategy that includes both exact and fuzzy deduplication to remove documents having (near) identical code content. - **HAP, PII, Malware Filtering:** We apply a HAP content filter that reduces models' likelihood of generating hateful, abusive, or profane language. 
We also make sure to redact Personally Identifiable Information (PII) by replacing PII content (e.g., names, email addresses, keys, passwords) with corresponding tokens (e.g., ⟨NAME⟩, ⟨EMAIL⟩, ⟨KEY⟩, ⟨PASSWORD⟩). Moreover, we scan all datasets using [ClamAV](https://www.clamav.net/) to identify and remove instances of malware in the source code.
- **Natural Language Datasets:** In addition to collecting code data for model training, we curate several publicly available high-quality natural language datasets to improve the models' proficiency in language understanding and mathematical reasoning. Unlike the code data, we do not deduplicate these datasets.

## Infrastructure

We train the Granite Code models using two of IBM's supercomputing clusters, namely Vela and Blue Vela, outfitted with NVIDIA A100 and H100 GPUs respectively. These clusters provide a scalable and efficient infrastructure for training our models over thousands of GPUs.

## Ethical Considerations and Limitations

The use of Large Language Models involves risks and ethical considerations that people must be aware of. Regarding code generation, caution is urged against complete reliance on specific code models for crucial decisions or impactful information, as the generated code is not guaranteed to work as intended. The **Granite-34B-Code-Base** model is no exception in this regard. Even though this model is suited for multiple code-related tasks, it has not undergone any safety alignment, and therefore it may produce problematic outputs. Additionally, it remains uncertain whether smaller models might exhibit increased susceptibility to hallucination in generation scenarios by copying source code verbatim from the training dataset due to their reduced sizes and memorization capacities. This aspect is currently an active area of research, and we anticipate more rigorous exploration, comprehension, and mitigations in this domain. Regarding ethics, a latent risk associated with all Large Language Models is their malicious utilization. We urge the community to use the **Granite-34B-Code-Base** model with ethical intentions and in a responsible way.
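The placeholder-token PII redaction described in the Training Data section above can be illustrated with a small regex-based sketch; this is a simplified illustration under assumed patterns, not the actual filtering pipeline used to prepare the Granite training data.

```python
# simplified illustration of placeholder-token redaction; not the actual
# PII filtering pipeline used to prepare the Granite training data
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),
    "KEY": re.compile(r"sk-[A-Za-z0-9]{20,}"),  # toy pattern for API-key-like strings
}

def redact(text: str) -> str:
    """Replace every detected PII span with a ⟨TOKEN⟩ placeholder."""
    for token, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"⟨{token}⟩", text)
    return text

sample = "Contact jane.doe@example.com, key=sk-AbCdEf0123456789GhIjKl"
print(redact(sample))
# -> Contact ⟨EMAIL⟩, key=⟨KEY⟩
```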
SakuraLLM/Sakura-32B-Qwen2beta-v0.9-GGUF
SakuraLLM
"2024-05-14T17:06:32Z"
1,025
5
null
[ "gguf", "license:cc-by-nc-sa-4.0", "region:us" ]
null
"2024-05-08T12:58:05Z"
---
license: cc-by-nc-sa-4.0
---
youjunhyeok/Llama-3-8B-slerp-262k-sft-lora-ko
youjunhyeok
"2024-05-24T02:16:01Z"
1,025
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-22T05:09:27Z"
---
license: apache-2.0
library_name: transformers
---

## Model

- base model: [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
- parent model: [DavidAhn/Llama-3-8B-slerp-262k](https://huggingface.co/DavidAhn/Llama-3-8B-slerp-262k)

## Dataset

- [youjunhyeok/llama3_train](https://huggingface.co/datasets/youjunhyeok/llama3_train)

## Benchmark (KOR)

```
# alias
A = youjunhyeok/Llama-3-8B-slerp-262k-sft-lora-ko
B = DavidAhn/Llama-3-8B-slerp-262k
C = meta-llama/Meta-Llama-3-8B
D = chihoonlee10/T3Q-ko-solar-dpo-v7.0 (ranked #1 on the ko leaderboard as of 24.05.24)
```

| Benchmark (macro_f1)      |  A   |  B   |  C   |  D   |
|---------------------------|:----:|:----:|:----:|:----:|
| kobest_boolq (0-shot)     | 57.6 | 33.5 | 38.2 | 34.1 |
| kobest_boolq (5-shot)     | 77.9 | 68.8 | 83.8 | 93.1 |
| kobest_copa (0-shot)      | 59.9 | 58.5 | 63.1 | 81.0 |
| kobest_copa (5-shot)      | 61.4 | 61.7 | 69.1 | 91.0 |
| kobest_hellaswag (0-shot) | 40.6 | 43.2 | 42.1 | 55.1 |
| kobest_hellaswag (5-shot) | 41.5 | 45.3 | 44.2 | 55.2 |
| kobest_sentineg (0-shot)  | 61.1 | 34.8 | 51.5 | 82.7 |
| kobest_sentineg (5-shot)  | 92.4 | 85.8 | 94.7 | 91.4 |

## Benchmark (ENG)

|                                               | openbookqa | hellaswag | boolq | arc_easy | arc_challenge |
|:----------------------------------------------|-----------:|----------:|------:|---------:|--------------:|
| youjunhyeok/Llama-3-8B-slerp-262k-sft-lora-ko |      0.334 |     0.575 | 0.778 |    0.763 |         0.471 |
| DavidAhn/Llama-3-8B-slerp-262k                |      0.312 |     0.587 | 0.832 |    0.808 |         0.518 |
| meta-llama/Meta-Llama-3-8B-Instruct           |      0.338 |     0.576 | 0.831 |    0.815 |         0.529 |
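The KoBEST scores above can be reproduced with EleutherAI's lm-evaluation-harness; the sketch below is a hedged example that assumes a recent `lm_eval` release shipping the `kobest_*` tasks and the `simple_evaluate` entry point.

```python
# hedged sketch: assumes a recent lm-evaluation-harness with the kobest_* tasks available
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="hf",
    model_args="pretrained=youjunhyeok/Llama-3-8B-slerp-262k-sft-lora-ko,dtype=bfloat16",
    tasks=["kobest_boolq", "kobest_copa", "kobest_hellaswag", "kobest_sentineg"],
    num_fewshot=5,
)
print(results["results"])
```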
timm/mobilenetv3_large_100.miil_in21k
timm
"2023-04-27T22:49:15Z"
1,024
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-21k-p", "arxiv:1905.02244", "license:apache-2.0", "region:us" ]
image-classification
"2022-12-16T05:37:48Z"
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-21k-p --- # Model card for mobilenetv3_large_100.miil_in21k A MobileNet-v3 image classification model. Trained on ImageNet-21k-P by Alibaba MIIL. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 18.6 - GMACs: 0.2 - Activations (M): 4.4 - Image size: 224 x 224 - **Papers:** - Searching for MobileNetV3: https://arxiv.org/abs/1905.02244 - **Dataset:** ImageNet-21k-P ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('mobilenetv3_large_100.miil_in21k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'mobilenetv3_large_100.miil_in21k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 16, 112, 112]) # torch.Size([1, 24, 56, 56]) # torch.Size([1, 40, 28, 28]) # torch.Size([1, 112, 14, 14]) # torch.Size([1, 960, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'mobilenetv3_large_100.miil_in21k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 960, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). 
## Citation ```bibtex @inproceedings{howard2019searching, title={Searching for mobilenetv3}, author={Howard, Andrew and Sandler, Mark and Chu, Grace and Chen, Liang-Chieh and Chen, Bo and Tan, Mingxing and Wang, Weijun and Zhu, Yukun and Pang, Ruoming and Vasudevan, Vijay and others}, booktitle={Proceedings of the IEEE/CVF international conference on computer vision}, pages={1314--1324}, year={2019} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
ddobokki/ko-trocr
ddobokki
"2023-09-07T04:57:53Z"
1,024
9
transformers
[ "transformers", "pytorch", "safetensors", "vision-encoder-decoder", "ocr", "image-to-text", "ko", "endpoints_compatible", "region:us" ]
image-to-text
"2023-03-09T05:43:24Z"
--- language: - ko tags: - ocr widget: - src: https://raw.githubusercontent.com/ddobokki/ocr_img_example/master/g.jpg example_title: word1 - src: https://raw.githubusercontent.com/ddobokki/ocr_img_example/master/khs.jpg example_title: word2 - src: https://raw.githubusercontent.com/ddobokki/ocr_img_example/master/m.jpg example_title: word3 pipeline_tag: image-to-text --- # korean trocr model ## train datasets AI Hub - [다양한 형태의 한글 문자 OCR](https://aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=realm&dataSetSn=91) - [공공행정문서 OCR](https://aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=realm&dataSetSn=88) ## model structure - encoder : [trocr-base-stage1's encoder](https://huggingface.co/microsoft/trocr-base-stage1) - decoder : [KR-BERT-char16424](https://huggingface.co/snunlp/KR-BERT-char16424) ## how to use ```python from transformers import TrOCRProcessor, VisionEncoderDecoderModel, AutoTokenizer import requests import unicodedata from io import BytesIO from PIL import Image processor = TrOCRProcessor.from_pretrained("ddobokki/ko-trocr") model = VisionEncoderDecoderModel.from_pretrained("ddobokki/ko-trocr") tokenizer = AutoTokenizer.from_pretrained("ddobokki/ko-trocr") url = "https://raw.githubusercontent.com/ddobokki/ocr_img_example/master/g.jpg" response = requests.get(url) img = Image.open(BytesIO(response.content)) pixel_values = processor(img, return_tensors="pt").pixel_values generated_ids = model.generate(pixel_values, max_length=64) generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] generated_text = unicodedata.normalize("NFC", generated_text) print(generated_text) ```
tomaarsen/span-marker-bert-base-uncased-acronyms
tomaarsen
"2023-09-27T12:33:22Z"
1,024
3
span-marker
[ "span-marker", "pytorch", "tensorboard", "safetensors", "token-classification", "ner", "named-entity-recognition", "generated_from_span_marker_trainer", "en", "dataset:acronym_identification", "base_model:bert-base-uncased", "license:apache-2.0", "model-index", "co2_eq_emissions", "region:us" ]
token-classification
"2023-08-14T06:49:22Z"
--- language: - en license: apache-2.0 library_name: span-marker tags: - span-marker - token-classification - ner - named-entity-recognition - generated_from_span_marker_trainer datasets: - acronym_identification metrics: - precision - recall - f1 widget: - text: "here, da = direct assessment, rr = relative ranking, ds = discrete scale and cs = continuous scale." example_title: "Uncased 1" - text: "modifying or replacing the erasable programmable read only memory (eprom) in a phone would allow the configuration of any esn and min via software for cellular devices." example_title: "Uncased 2" - text: "we propose a technique called aggressive stochastic weight averaging (aswa) and an extension called norm-filtered aggressive stochastic weight averaging (naswa) which improves te stability of models over random seeds." example_title: "Uncased 3" - text: "the choice of the encoder and decoder modules of dnpg can be quite flexible, for instance long-short term memory networks (lstm) or convolutional neural network (cnn)." example_title: "Uncased 4" pipeline_tag: token-classification co2_eq_emissions: emissions: 31.203903222402037 source: codecarbon training_type: fine-tuning on_cloud: false cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K ram_total_size: 31.777088165283203 hours_used: 0.272 hardware_used: 1 x NVIDIA GeForce RTX 3090 base_model: bert-base-uncased model-index: - name: SpanMarker with bert-base-uncased on Acronym Identification results: - task: type: token-classification name: Named Entity Recognition dataset: name: Acronym Identification type: acronym_identification split: validation metrics: - type: f1 value: 0.9198933333333332 name: F1 - type: precision value: 0.9339397877409573 name: Precision - type: recall value: 0.9062631357713324 name: Recall --- # SpanMarker with bert-base-uncased on Acronym Identification This is a [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) model trained on the [Acronym Identification](https://huggingface.co/datasets/acronym_identification) dataset that can be used for Named Entity Recognition. This SpanMarker model uses [bert-base-uncased](https://huggingface.co/bert-base-uncased) as the underlying encoder. See [train.py](train.py) for the training script. Is your data always capitalized correctly? Then consider using the cased variant of this model instead for better performance: [tomaarsen/span-marker-bert-base-acronyms](https://huggingface.co/tomaarsen/span-marker-bert-base-acronyms). 
## Model Details ### Model Description - **Model Type:** SpanMarker - **Encoder:** [bert-base-uncased](https://huggingface.co/bert-base-uncased) - **Maximum Sequence Length:** 256 tokens - **Maximum Entity Length:** 8 words - **Training Dataset:** [Acronym Identification](https://huggingface.co/datasets/acronym_identification) - **Language:** en - **License:** apache-2.0 ### Model Sources - **Repository:** [SpanMarker on GitHub](https://github.com/tomaarsen/SpanMarkerNER) - **Thesis:** [SpanMarker For Named Entity Recognition](https://raw.githubusercontent.com/tomaarsen/SpanMarkerNER/main/thesis.pdf) ### Model Labels | Label | Examples | |:------|:------------------------------------------------------------------------------------------------------| | long | "successive convex approximation", "controlled natural language", "Conversational Question Answering" | | short | "SODA", "CNL", "CoQA" | ## Evaluation ### Metrics | Label | Precision | Recall | F1 | |:--------|:----------|:-------|:-------| | **all** | 0.9339 | 0.9063 | 0.9199 | | long | 0.9314 | 0.8845 | 0.9074 | | short | 0.9352 | 0.9174 | 0.9262 | ## Uses ### Direct Use for Inference ```python from span_marker import SpanMarkerModel # Download from the 🤗 Hub model = SpanMarkerModel.from_pretrained("tomaarsen/span-marker-bert-base-uncased-acronyms") # Run inference entities = model.predict("compression algorithms like principal component analysis (pca) can reduce noise and complexity.") ``` ### Downstream Use You can finetune this model on your own dataset. <details><summary>Click to expand</summary> ```python from span_marker import SpanMarkerModel, Trainer # Download from the 🤗 Hub model = SpanMarkerModel.from_pretrained("tomaarsen/span-marker-bert-base-uncased-acronyms") # Specify a Dataset with "tokens" and "ner_tag" columns dataset = load_dataset("conll2003") # For example CoNLL2003 # Initialize a Trainer using the pretrained model & dataset trainer = Trainer( model=model, train_dataset=dataset["train"], eval_dataset=dataset["validation"], ) trainer.train() trainer.save_model("tomaarsen/span-marker-bert-base-uncased-acronyms-finetuned") ``` </details> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:----------------------|:----|:--------|:----| | Sentence length | 4 | 32.3372 | 170 | | Entities per sentence | 0 | 2.6775 | 24 | ### Training Hyperparameters - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 2 ### Training Results | Epoch | Step | Validation Loss | Validation Precision | Validation Recall | Validation F1 | Validation Accuracy | |:------:|:----:|:---------------:|:--------------------:|:-----------------:|:-------------:|:-------------------:| | 0.3120 | 200 | 0.0097 | 0.8999 | 0.8731 | 0.8863 | 0.9718 | | 0.6240 | 400 | 0.0075 | 0.9163 | 0.8995 | 0.9078 | 0.9769 | | 0.9360 | 600 | 0.0076 | 0.9079 | 0.9153 | 0.9116 | 0.9773 | | 1.2480 | 800 | 0.0069 | 0.9267 | 0.9006 | 0.9135 | 0.9778 | | 1.5601 | 1000 | 0.0065 | 0.9268 | 0.9044 | 0.9154 | 0.9782 | | 1.8721 | 1200 | 0.0065 | 0.9279 | 0.9061 | 0.9168 | 0.9787 | ### Environmental Impact Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon). - **Carbon Emitted**: 0.031 kg of CO2 - **Hours Used**: 0.272 hours ### Training Hardware - **On Cloud**: No - **GPU Model**: 1 x NVIDIA GeForce RTX 3090 - **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K - **RAM Size**: 31.78 GB ### Framework Versions - Python: 3.9.16 - SpanMarker: 1.3.1.dev - Transformers: 4.30.0 - PyTorch: 2.0.1+cu118 - Datasets: 2.14.0 - Tokenizers: 0.13.2 ## Citation ### BibTeX ``` @software{Aarsen_SpanMarker, author = {Aarsen, Tom}, license = {Apache-2.0}, title = {{SpanMarker for Named Entity Recognition}}, url = {https://github.com/tomaarsen/SpanMarkerNER} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
mmnga/ELYZA-japanese-Llama-2-7b-fast-gguf
mmnga
"2023-11-16T14:27:36Z"
1,024
6
null
[ "gguf", "llama2", "ja", "arxiv:2307.09288", "license:llama2", "region:us" ]
null
"2023-08-29T07:23:20Z"
--- license: llama2 language: - ja tags: - llama2 --- # ELYZA-japanese-Llama-2-7b-fast-gguf [ELYZAさんが公開しているELYZA-japanese-Llama-2-7b-fast](https://huggingface.co/ELYZA/ELYZA-japanese-Llama-2-7b-fast)のggufフォーマット変換版です。 他のモデルはこちら 通常版: llama2に日本語のデータセットで学習したモデル [mmnga/ELYZA-japanese-Llama-2-7b-gguf](https://huggingface.co/mmnga/ELYZA-japanese-Llama-2-7b-gguf) [mmnga/ELYZA-japanese-Llama-2-7b-instruct-gguf](https://huggingface.co/mmnga/ELYZA-japanese-Llama-2-7b-instruct-gguf) Fast版 日本語の語彙を追加してトークンコストを減らし、1.8倍高速化したモデル [mmnga/ELYZA-japanese-Llama-2-7b-fast-gguf](https://huggingface.co/mmnga/ELYZA-japanese-Llama-2-7b-fast-gguf) [mmnga/ELYZA-japanese-Llama-2-7b-fast-instruct-gguf](https://huggingface.co/mmnga/ELYZA-japanese-Llama-2-7b-fast-instruct-gguf) Codellama版 GGUF [mmnga/ELYZA-japanese-CodeLlama-7b-gguf](https://huggingface.co/mmnga/ELYZA-japanese-CodeLlama-7b-gguf) [mmnga/ELYZA-japanese-CodeLlama-7b-instruct-gguf](https://huggingface.co/mmnga/ELYZA-japanese-CodeLlama-7b-instruct-gguf) Codellama版 GPTQ [mmnga/ELYZA-japanese-CodeLlama-7b-instruct-GPTQ-calib-ja-1k](https://huggingface.co/mmnga/ELYZA-japanese-CodeLlama-7b-instruct-GPTQ-calib-ja-1k) llama.cppの本家の更新で2023-10-23前のfastモデルのggufが使用できなくなっています。 fastモデルのggufを更新しましたので、お手数ですが再ダウンロードをお願いします。 **モデルの変換コード**は[こちら](https://github.com/mmnga/llama.cpp/blob/mmnga-dev/convert-llama-hf-to-gguf_for_ex_vocab.py) ## Usage ``` git clone https://github.com/ggerganov/llama.cpp.git cd llama.cpp make -j ./main -m 'ELYZA-japanese-Llama-2-7b-fast-q4_0.gguf' -n 256 -p '[INST] <<SYS>>あなたは誠実で優秀な日本人のアシスタントです。<</SYS>>クマが海辺に行ってアザラシと友達になり、最終的には家に帰るというプロットの短編小説を書いてください。 [/INST]' ``` ### Licence Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved. ### 引用 Citations ```tex @misc{elyzallama2023, title={ELYZA-japanese-Llama-2-7b}, url={https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b}, author={Akira Sasaki and Masato Hirakawa and Shintaro Horie and Tomoaki Nakamura}, year={2023}, } ``` ```tex @misc{touvron2023llama, title={Llama 2: Open Foundation and Fine-Tuned Chat Models}, author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom}, year={2023}, eprint={2307.09288}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
macadeliccc/WestLake-7B-v2-laser-truthy-dpo
macadeliccc
"2024-03-04T19:25:24Z"
1,024
23
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "dataset:jondurbin/truthy-dpo-v0.1", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-27T20:33:54Z"
--- license: apache-2.0 library_name: transformers datasets: - jondurbin/truthy-dpo-v0.1 model-index: - name: WestLake-7B-v2-laser-truthy-dpo results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 73.89 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/WestLake-7B-v2-laser-truthy-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 88.85 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/WestLake-7B-v2-laser-truthy-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.84 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/WestLake-7B-v2-laser-truthy-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 69.81 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/WestLake-7B-v2-laser-truthy-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 86.66 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/WestLake-7B-v2-laser-truthy-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 68.16 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/WestLake-7B-v2-laser-truthy-dpo name: Open LLM Leaderboard --- # WestLake-7B-v2-laser-truthy-dpo ![westlake-header](westlake-header.png) ## Process + Trained [cognitivecomputations/WestLake-7B-v2-laser](https://huggingface.co/cognitivecomputations/WestLake-7B-v2-laser) on jondurbin/truthy-dpo-v0.1 + Completed 2 epochs + 2e-5 learning rate ## Evaluations ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6455cc8d679315e4ef16fbec/9CJeaPxf4XGJv7w114LKo.png) Evaluated the GGUF for usability reasons. EQ-Bench uses Ooba for inference. <pre>----Benchmark Complete---- 2024-01-31 14:38:14 Time taken: 18.9 mins Prompt Format: ChatML Model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo-GGUF Score (v2): 75.15 Parseable: 171.0 --------------- Batch completed Time taken: 19.0 mins --------------- </pre> ## GGUF GGUF versions are available [here](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo-GGUF) # ExLlamav2 Thanks to user [bartowski](https://huggingface.co/bartowski) we now have exllamav2 quantizations in 3.5 through 8 bpw. 
They are available here: + [bartowski/WestLake-7B-v2-laser-truthy-dpo-exl2](https://huggingface.co/bartowski/WestLake-7B-v2-laser-truthy-dpo-exl2) ## Chat Template This was my process during fine tune to realign the prompt template to chatML. There seems to be an error where you can use either Mistral (original) prompt template or you can use ChatML in the GGUF version. ```python def chatml_format(example): # Format system if len(example['system']) > 0: message = {"role": "system", "content": example['system']} system = tokenizer.apply_chat_template([message], tokenize=False) else: system = "" # Format instruction message = {"role": "user", "content": example['prompt']} prompt = tokenizer.apply_chat_template([message], tokenize=False, add_generation_prompt=True) # Format chosen answer chosen = example['chosen'] + "<|im_end|>\n" # Format rejected answer rejected = example['rejected'] + "<|im_end|>\n" return { "prompt": system + prompt, "chosen": chosen, "rejected": rejected, } ``` ## Transformers ChatML does not work properly in transformers for this model. This demo code for the transformers library works properly: ```python from transformers import AutoTokenizer import transformers import torch model = "macadeliccc/WestLake-7B-v2-laser-truthy-dpo" chat = [ {"role": "user", "content": "Hello, how are you?"}, {"role": "assistant", "content": "I'm doing great. How can I help you today?"}, {"role": "user", "content": "I'd like to show off how chat templating works!"}, ] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` This code produces this output in multi-turn conversation: ``` <s>[INST] <<SYS>> You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> Hello, how are you? [/INST] I'm doing great. How can I help you today? </s><s>[INST] I'd like to show off how chat templating works! [/INST] While discussing the concept of chat templating, I understand your intent highlights exemplifying its nature. Kindly provide contextual phrases or scenarios to let me demonstrate how it adapts to various inputs while maintaining a consistent flow of information exchange. This way, you'll witness how templates shape responses in a structured manner within chat dialogues. [[INST]]I apologize if my earlier comment seemed off topic. Let's shift back to the original subject of discussing helpful AI assistants. [INST] Not a problem at all! Our primary objective remains ensuring useful and polite interactions. Let's delve into more aspects of beneficial AI assistance. Feel free to ask specific questions or areas of interest you may have in mind. 
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_macadeliccc__WestLake-7B-v2-laser-truthy-dpo)

| Metric |Value|
|---------------------------------|----:|
|Avg. |75.37|
|AI2 Reasoning Challenge (25-Shot)|73.89|
|HellaSwag (10-Shot) |88.85|
|MMLU (5-Shot) |64.84|
|TruthfulQA (0-shot) |69.81|
|Winogrande (5-shot) |86.66|
|GSM8k (5-shot) |68.16|
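As a complement to the transformers example above, here is a minimal sketch of running the GGUF build with the ChatML template discussed in the Chat Template section. It assumes the `llama-cpp-python` bindings and a locally downloaded quant from the GGUF repository; the file name and generation settings below are illustrative assumptions, not part of this card.

```python
# Minimal sketch (assumes `pip install llama-cpp-python`; the gguf file name is hypothetical).
from llama_cpp import Llama

llm = Llama(
    model_path="westlake-7b-v2-laser-truthy-dpo.Q4_K_M.gguf",  # hypothetical local file name
    n_ctx=4096,
    chat_format="chatml",  # matches the ChatML template discussed above
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain what chat templating does in one paragraph."},
]

# llama-cpp-python applies the ChatML formatting before generation.
response = llm.create_chat_completion(messages=messages, max_tokens=256, temperature=0.7)
print(response["choices"][0]["message"]["content"])
```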
pandego/my-first-blend
pandego
"2024-04-03T16:44:55Z"
1,024
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "arxiv:2212.04089", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-01T18:39:16Z"
--- license: apache-2.0 library_name: transformers tags: - mergekit - merge base_model: - mistralai/Mistral-7B-Instruct-v0.2 model-index: - name: my-first-blend results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 69.37 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=pandego/my-first-blend name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 83.03 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=pandego/my-first-blend name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 53.91 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=pandego/my-first-blend name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 70.7 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=pandego/my-first-blend name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 79.32 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=pandego/my-first-blend name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 25.63 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=pandego/my-first-blend name: Open LLM Leaderboard --- # my-first-blend This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using mistralai/Mistral-7B-Instruct-v0.2 as a base. ### Models Merged The following models were included in the merge: * SanjiWatsuki/Kunoichi-DPO-v2-7B * paulml/NeuralOmniWestBeaglake-7B ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: SanjiWatsuki/Kunoichi-DPO-v2-7B parameters: weight: 0.4 - model: paulml/NeuralOmniWestBeaglake-7B parameters: weight: 0.6 base_model: mistralai/Mistral-7B-Instruct-v0.2 merge_method: task_arithmetic dtype: bfloat16 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_pandego__my-first-blend) | Metric |Value| |---------------------------------|----:| |Avg. |63.66| |AI2 Reasoning Challenge (25-Shot)|69.37| |HellaSwag (10-Shot) |83.03| |MMLU (5-Shot) |53.91| |TruthfulQA (0-shot) |70.70| |Winogrande (5-shot) |79.32| |GSM8k (5-shot) |25.63|
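Since this card documents only the merge itself, here is a minimal, hedged sketch of loading the merged model for inference with transformers. It assumes the merged tokenizer ships with a chat template (the Mistral-Instruct base suggests an `[INST]`-style format); the generation parameters are illustrative, not recommendations from the author.

```python
# Minimal inference sketch; generation parameters are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pandego/my-first-blend"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Use whatever chat template is bundled with the tokenizer of the merged model.
messages = [{"role": "user", "content": "Summarize what a task-arithmetic merge does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.inference_mode():
    outputs = model.generate(inputs, max_new_tokens=200, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```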
stockmark/stockmark-100b
stockmark
"2024-05-15T06:18:10Z"
1,024
29
transformers
[ "transformers", "safetensors", "llama", "text-generation", "ja", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-13T09:31:40Z"
---
library_name: transformers
license: mit
language:
- ja
- en
---

# stockmark/stockmark-100b

Stockmark-100b is a 100-billion-parameter LLM pretrained from scratch on a Japanese and English corpus of about 910 billion tokens. This model is developed by [Stockmark Inc.](https://stockmark.co.jp/)

Instruction tuned model:
- [stockmark-100b-instruct-v0.1](https://huggingface.co/stockmark/stockmark-100b-instruct-v0.1)

This project is supported by [GENIAC](https://www.meti.go.jp/policy/mono_info_service/geniac/index.html).

## How to use

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("stockmark/stockmark-100b")
model = AutoModelForCausalLM.from_pretrained("stockmark/stockmark-100b", device_map="auto", torch_dtype=torch.bfloat16)

input_ids = tokenizer("生成AIとは?", return_tensors="pt").input_ids.to(model.device)
with torch.inference_mode():
    tokens = model.generate(
        input_ids,
        max_new_tokens = 256,
        do_sample = True,
        temperature = 0.7,
        top_p = 0.95,
        repetition_penalty = 1.08
    )

output = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(output)
```

## Dataset (pretraining)

Stockmark-100b was trained on a total of about 910B tokens of Japanese and English text. The Japanese data is summarized in the table below. The Stockmark web corpus consists of web pages related to business, collected by Stockmark Inc.

| corpus | tokens after preprocessing |
|:---:|:---:|
| Stockmark Web Corpus (This dataset will not be released) | 8.8 billion |
| Patent | 37.5 billion |
| Wikipedia | 1.5 billion |
| mC4 | 52.6 billion |
| CommonCrawl (snapshot: 2020-50 ~ 2024-10) | 203.7 billion |

English data is sampled from [RedPajama-Data](https://github.com/togethercomputer/RedPajama-Data/tree/rp_v1).

## Training

- GPU: 48 nodes of a3 (8*H100) instances
- Training duration: about 7 weeks
- Container: [Pytorch NGC Container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch)
- Library: [Megatron-LM](https://github.com/NVIDIA/Megatron-LM)

## Performance

**Stockmark Business Questions**

Dataset: https://huggingface.co/datasets/stockmark/business-questions

| model | accuracy |
|:---:|:---:|
|stockmark-100b-instruct| 0.90 |
|stockmark-13b-instruct| 0.80 |
|GPT-3.5-turbo[^1]| 0.42 |

[^1]: 0613

**Japanese Vicuna QA Benchmark**

We excluded categories that require calculation and coding, and used the remaining 60 questions for evaluation.

GitHub: https://github.com/ku-nlp/ja-vicuna-qa-benchmark

| model | average score |
|:---:|:---:|
|stockmark-100b-instruct| 5.97 |
|tokyotech-llm/Swallow-70b-instruct-hf| 5.59 |
|GPT-3.5 (text-davinci-003)| 5.08 |

**Inference speed**

| model | time [s] for generating 100 characters in Japanese |
|:---:|:---:|
|stockmark-100b-instruct| 1.86 |
| gpt-3.5-turbo | 2.15 |
| gpt-4-turbo | 5.48 |
|tokyotech-llm/Swallow-70b-instruct-hf| 2.22 |

For the local LLMs, we measured the inference time using AWS Inferentia2. An illustrative timing sketch is included at the end of this card.

## License

[MIT](https://opensource.org/licenses/MIT)

## Developed by

[Stockmark Inc.](https://stockmark.co.jp/)
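As referenced above, the sketch below shows one way to time generation per 100 characters. It is an illustrative assumption for readers reproducing the measurement in spirit on their own hardware, not the official benchmark procedure used for the table.

```python
# Illustrative timing sketch; not the official benchmark procedure.
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stockmark/stockmark-100b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.bfloat16)

prompt = "生成AIとは?"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)

start = time.perf_counter()
with torch.inference_mode():
    tokens = model.generate(input_ids, max_new_tokens=128, do_sample=False)
elapsed = time.perf_counter() - start

# Count only the newly generated characters.
generated = tokenizer.decode(tokens[0][input_ids.shape[-1]:], skip_special_tokens=True)
chars = len(generated)
print(f"Generated {chars} characters in {elapsed:.2f}s "
      f"({elapsed / max(chars, 1) * 100:.2f}s per 100 characters)")
```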
mradermacher/Qwen2-0.5B-Instruct-GGUF
mradermacher
"2024-06-06T21:47:31Z"
1,024
0
transformers
[ "transformers", "gguf", "chat", "en", "base_model:Qwen/Qwen2-0.5B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-06T21:11:26Z"
--- base_model: Qwen/Qwen2-0.5B-Instruct language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - chat --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Qwen/Qwen2-0.5B-Instruct <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Instruct-GGUF/resolve/main/Qwen2-0.5B-Instruct.Q3_K_S.gguf) | Q3_K_S | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Instruct-GGUF/resolve/main/Qwen2-0.5B-Instruct.IQ3_S.gguf) | IQ3_S | 0.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Instruct-GGUF/resolve/main/Qwen2-0.5B-Instruct.IQ3_XS.gguf) | IQ3_XS | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Instruct-GGUF/resolve/main/Qwen2-0.5B-Instruct.Q2_K.gguf) | Q2_K | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Instruct-GGUF/resolve/main/Qwen2-0.5B-Instruct.IQ3_M.gguf) | IQ3_M | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Instruct-GGUF/resolve/main/Qwen2-0.5B-Instruct.IQ4_XS.gguf) | IQ4_XS | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Instruct-GGUF/resolve/main/Qwen2-0.5B-Instruct.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Instruct-GGUF/resolve/main/Qwen2-0.5B-Instruct.Q3_K_L.gguf) | Q3_K_L | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Instruct-GGUF/resolve/main/Qwen2-0.5B-Instruct.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Instruct-GGUF/resolve/main/Qwen2-0.5B-Instruct.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Instruct-GGUF/resolve/main/Qwen2-0.5B-Instruct.Q5_K_S.gguf) | Q5_K_S | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Instruct-GGUF/resolve/main/Qwen2-0.5B-Instruct.Q5_K_M.gguf) | Q5_K_M | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Instruct-GGUF/resolve/main/Qwen2-0.5B-Instruct.Q6_K.gguf) | Q6_K | 0.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Instruct-GGUF/resolve/main/Qwen2-0.5B-Instruct.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Instruct-GGUF/resolve/main/Qwen2-0.5B-Instruct.f16.gguf) | f16 | 1.1 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might 
have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
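For readers unsure how to consume these files, here is a minimal sketch that downloads one of the quants listed above with `huggingface_hub` and runs it with the `llama-cpp-python` bindings. The runtime choice and the generation settings are assumptions for illustration; any GGUF-capable runtime covered in TheBloke's READMEs works just as well.

```python
# Minimal sketch (assumes `pip install huggingface_hub llama-cpp-python`).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the Q4_K_M quant listed in the table above.
gguf_path = hf_hub_download(
    repo_id="mradermacher/Qwen2-0.5B-Instruct-GGUF",
    filename="Qwen2-0.5B-Instruct.Q4_K_M.gguf",
)

# Qwen2 Instruct uses a ChatML-style chat template.
llm = Llama(model_path=gguf_path, n_ctx=2048, chat_format="chatml")

messages = [{"role": "user", "content": "Give a one-sentence summary of what a GGUF file is."}]
out = llm.create_chat_completion(messages=messages, max_tokens=128)
print(out["choices"][0]["message"]["content"])
```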
CompVis/stable-diffusion-v1-2
CompVis
"2023-07-05T16:18:11Z"
1,023
38
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "arxiv:2112.10752", "arxiv:2103.00020", "arxiv:2205.11487", "arxiv:2207.12598", "arxiv:1910.09700", "license:creativeml-openrail-m", "autotrain_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2022-08-19T10:24:37Z"
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: false
extra_gated_prompt: |-
  This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
  The CreativeML OpenRAIL License specifies:

  1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
  2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
  3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
  Please read the full license carefully here: https://huggingface.co/spaces/CompVis/stable-diffusion-license
extra_gated_heading: Please read the LICENSE to access this model
---

# Stable Diffusion v1-2 Model Card

Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input. For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion with D🧨iffusers blog](https://huggingface.co/blog/stable_diffusion).

The **Stable-Diffusion-v1-2** checkpoint was initialized with the weights of the [Stable-Diffusion-v1-1](https://huggingface.co/CompVis/stable-diffusion-v1-1) checkpoint and subsequently fine-tuned for 515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en, filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`). For more information, please refer to [Training](#training).

The weights here are intended to be used with the D🧨iffusers library. If you are looking for the weights to be loaded into the CompVis Stable Diffusion codebase, [come here](https://huggingface.co/CompVis/stable-diffusion-v-1-2-original).

## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
- **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
- **Cite as:** @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } ## Examples We recommend using [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion. ```bash pip install --upgrade diffusers transformers scipy ``` Running the pipeline with the default PNDM scheduler: ```python import torch from torch import autocast from diffusers import StableDiffusionPipeline model_id = "CompVis/stable-diffusion-v1-2" device = "cuda" pipe = StableDiffusionPipeline.from_pretrained(model_id) pipe = pipe.to(device) prompt = "a photo of an astronaut riding a horse on mars" with autocast("cuda"): image = pipe(prompt)["sample"][0] image.save("astronaut_rides_horse.png") ``` **Note**: If you are limited by GPU memory and have less than 10GB of GPU RAM available, please make sure to load the StableDiffusionPipeline in float16 precision instead of the default float32 precision as done above. You can do so by telling diffusers to expect the weights to be in float16 precision: ```py import torch pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe = pipe.to(device) prompt = "a photo of an astronaut riding a horse on mars" with autocast("cuda"): image = pipe(prompt, guidance_scale=7.5)["sample"][0] image.save("astronaut_rides_horse.png") ``` To swap out the noise scheduler, pass it to `from_pretrained`: ```python from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler model_id = "CompVis/stable-diffusion-v1-2" # Use the K-LMS scheduler here instead scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000) pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, use_auth_token=True) pipe = pipe.to("cuda") prompt = "a photo of an astronaut riding a horse on mars" with autocast("cuda"): image = pipe(prompt, guidance_scale=7.5)["sample"][0] image.save("astronaut_rides_horse.png") ``` # Uses ## Direct Use The model is intended for research purposes only. Possible research areas and tasks include - Safe deployment of models which have the potential to generate harmful content. - Probing and understanding the limitations and biases of generative models. - Generation of artworks and use in design and other artistic processes. - Applications in educational or creative tools. - Research on generative models. Excluded uses are described below. ### Misuse, Malicious Use, and Out-of-Scope Use _Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_. The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. #### Out-of-Scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. 
#### Misuse and Malicious Use Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to: - Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc. - Intentionally promoting or propagating discriminatory content or harmful stereotypes. - Impersonating individuals without their consent. - Sexual content without consent of the people who might see it. - Mis- and disinformation - Representations of egregious violence and gore - Sharing of copyrighted or licensed material in violation of its terms of use. - Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use. ## Limitations and Bias ### Limitations - The model does not achieve perfect photorealism - The model cannot render legible text - The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere” - Faces and people in general may not be generated properly. - The model was trained mainly with English captions and will not work as well in other languages. - The autoencoding part of the model is lossy - The model was trained on a large-scale dataset [LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material and is not fit for product use without additional safety mechanisms and considerations. - No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data. The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images. ### Bias While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/), which consists of images that are primarily limited to English descriptions. Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for. This affects the overall output of the model, as white and western cultures are often set as the default. Further, the ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts. ## Training ### Training Data The model developers used the following dataset for training the model: - LAION-2B (en) and subsets thereof (see next section) ### Training Procedure Stable Diffusion v1-4 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training, - Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4 - Text prompts are encoded through a ViT-L/14 text-encoder. - The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention. - The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet. We currently provide four checkpoints, which were trained as follows. 
- [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1): 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en). 194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`). - [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2): Resumed from `stable-diffusion-v1-1`. 515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en, filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)). - [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2`. 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). - [**`stable-diffusion-v1-4`**](https://huggingface.co/CompVis/stable-diffusion-v1-4) Resumed from `stable-diffusion-v1-2`.225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). ### Training details - **Hardware:** 32 x 8 x A100 GPUs - **Optimizer:** AdamW - **Gradient Accumulations**: 2 - **Batch:** 32 x 8 x 2 x 4 = 2048 - **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant ## Evaluation Results Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling steps show the relative improvements of the checkpoints: ![pareto](https://huggingface.co/CompVis/stable-diffusion/resolve/main/v1-variants-scores.jpg) Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores. ## Environmental Impact **Stable Diffusion v1** **Estimated Emissions** Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact. - **Hardware Type:** A100 PCIe 40GB - **Hours used:** 150000 - **Cloud Provider:** AWS - **Compute Region:** US-east - **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq. ## Citation ```bibtex @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } ``` *This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
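As a small illustration of the Training Procedure description above (images of shape H x W x 3 mapped to latents of shape H/f x W/f x 4 with a downsampling factor of 8), here is a hedged sketch that loads the autoencoder from this repository and checks the latent shape. It assumes a recent diffusers release with the VAE stored in the `vae` subfolder; older diffusers versions may expose a different API.

```python
# Hedged sketch: verify the 8x spatial downsampling described in the Training Procedure section.
# Assumes a recent diffusers release and that the VAE lives in the "vae" subfolder of this repo.
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-2", subfolder="vae")

# Dummy 512x512 RGB image batch, scaled to the [-1, 1] range the VAE expects.
image = torch.rand(1, 3, 512, 512) * 2 - 1

with torch.no_grad():
    latents = vae.encode(image).latent_dist.sample()

print(latents.shape)  # expected: torch.Size([1, 4, 64, 64]), i.e. H/8 x W/8 with 4 latent channels
```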
Daoguang/PyCodeGPT
Daoguang
"2023-01-04T10:21:24Z"
1,023
7
transformers
[ "transformers", "pytorch", "gpt_neo", "text-generation", "license:afl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2023-01-04T10:09:30Z"
---
license: afl-3.0
---

# PyCodeGPT
A pre-trained GPT model for Python code completion and generation

## What is it?

PyCodeGPT is an efficient and effective GPT-Neo-based model for the Python code generation task, similar to [OpenAI Codex](https://openai.com/blog/openai-codex/), [GitHub Copilot](https://copilot.github.com/), [CodeParrot](https://huggingface.co/blog/codeparrot), and [AlphaCode](https://deepmind.com/blog/article/Competitive-programming-with-AlphaCode).

## Training Data
Because publicly released datasets are small, we collected data from GitHub from scratch. We first crawled 1.2M Python-related repositories hosted by GitHub. Then, we used these repository URLs to download all contents of each repository from GitHub. After that, we obtained 60M raw Python files under 1MB each, with a total size of 330GB. Finally, we carefully designed various data-cleaning strategies to get about 96GB of data for training. Please refer to the following table for the details.

|Model|Repositories|Size and files after filtering|
|:------:|:---:|:---:|
| CodeParrot | 0.56M | 12GB (compressed), 5.4M |
| Codex | 54M | 159GB |
| PyCodeGPT | 1.2M | 96GB, 13M |

## Pretrained models

We aim to train medium-sized pre-trained models (around 110M parameters) based on GPT-Neo:
- PyCodeGPT-110M: derived from GPT-Neo 125M with a vocabulary size of 32K.

A minimal generation sketch using transformers is included at the end of this card.

## GitHub
[https://github.com/microsoft/PyCodeGPT](https://github.com/microsoft/PyCodeGPT)

## Evaluation Results

Here are our evaluation results on the HumanEval dataset.

Note: our model achieves accuracy comparable to Codex models of similar size.

|Model|Pass@1|Pass@10|Pass@100|
|:------:|:---:|:---:|:---:|
|PyCodeGPT-110M |**8.32%** |**13.53%** |**18.3%** |
|||||
|GPT-Neo 125M |0.75% |1.88% |2.97% |
|GPT-Neo 1.3B |4.97% |7.47% |16.3% |
|GPT-Neo 2.7B |6.41% |11.27% |21.37% |
|GPT-J 6B |11.62% |15.74% |27.74% |
|||||
|TabNine |2.58% |4.35% |7.59% |
|||||
|CodeParrot 110M |3.80% |6.57% |12.78% |
|CodeParrot 1.5B |3.58% |8.03% |14.96% |
|||||
|Codex 12M |2.00% |3.62% |8.58% |
|Codex 25M |3.21% |7.1% |12.89% |
|Codex 42M |5.06% |8.8% |15.55% |
|Codex 85M |8.22% |12.81% |22.4% |
|Codex 300M |13.17% |20.37% |36.27% |
|Codex 679M |16.22% |25.7% |40.95% |
|Codex 2.5B |21.36% |35.42% |59.5% |
|Codex 12B |28.81% |46.81% |72.31% |
|||||
|Pretrained Decoder-only 13M (AlphaCode) |1.5% |3.6% |8.6% |
|Pretrained Decoder-only 29M (AlphaCode) |3.4% |5.8% |11.2% |
|Pretrained Decoder-only 55M (AlphaCode) |4.2% |8.2% |16.9% |
|Pretrained Decoder-only 89M (AlphaCode) |4.3% |12.2% |20.0% |
|Pretrained Decoder-only 302M (AlphaCode) |11.6% |18.8% |31.8% |
|Pretrained Decoder-only 685M (AlphaCode) |14.2% |24.4% |38.8% |
|Pretrained Decoder-only 1.1B (AlphaCode) |17.1% |28.2% |45.3% |
|||||
|PolyCoder 160M |2.13% |3.35% |4.88% |
|PolyCoder 400M |2.96% |5.29% |11.59% |
|PolyCoder 2.7B |5.59% |9.84% |17.68% |

## Reference
If you use the models, please cite the following paper:

```
@inproceedings{CERT,
  title={{CERT}: Continual Pre-training on Sketches for Library-oriented Code Generation},
  author={Zan, Daoguang and Chen, Bei and Yang, Dejian and Lin, Zeqi and Kim, Minsu and Guan, Bei and Wang, Yongji and Chen, Weizhu and Lou, Jian-Guang},
  booktitle={The 2022 International Joint Conference on Artificial Intelligence},
  year={2022}
}
```
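As referenced above, here is a minimal, hedged sketch of generating Python code with this checkpoint via transformers. The prompt and sampling settings are illustrative assumptions, not recommendations from the authors.

```python
# Minimal generation sketch; prompt and sampling settings are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Daoguang/PyCodeGPT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

# Prompt the model with the start of a function definition.
prompt = "def quick_sort(arr):\n    "
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    outputs = model.generate(
        input_ids,
        max_new_tokens=128,
        do_sample=True,
        temperature=0.2,
        top_p=0.95,
        pad_token_id=tokenizer.eos_token_id,
    )

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```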