Dataset columns:

| Column | Dtype | Range |
|:-------|:------|:------|
| modelId | string | 5–122 chars |
| author | string | 2–42 chars |
| last_modified | timestamp[us, tz=UTC] | |
| downloads | int64 | 0–738M |
| likes | int64 | 0–11k |
| library_name | string | 245 classes |
| tags | list | 1–4.05k items |
| pipeline_tag | string | 48 classes |
| createdAt | timestamp[us, tz=UTC] | |
| card | string | 1–901k chars |
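Rows with this schema can be loaded and filtered directly with the `datasets` library; a minimal sketch, where the dataset repo id is a placeholder since it is not named here:

```python
# Sketch: load a Hub dataset with the schema above and filter it by the numeric
# columns. The repo id below is a placeholder -- substitute the real dataset name.
from datasets import load_dataset

ds = load_dataset("your-username/model-cards-dump", split="train")  # hypothetical repo id

# Keep GGUF-tagged models at or above the download count seen in the rows below.
popular = ds.filter(lambda row: row["downloads"] >= 355 and "gguf" in row["tags"])
for row in popular.select(range(min(3, len(popular)))):
    print(row["modelId"], row["likes"], row["pipeline_tag"])
```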
mradermacher/perky-70b-v0.1-i1-GGUF
mradermacher
2024-05-10T16:46:42Z
355
1
transformers
[ "transformers", "gguf", "not-for-all-audiences", "en", "base_model:Dracones/perky-70b-v0.1", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-03-06T20:23:00Z
--- base_model: Dracones/perky-70b-v0.1 language: - en library_name: transformers license: llama2 quantized_by: mradermacher tags: - not-for-all-audiences --- ## About weighted/imatrix quants of https://huggingface.co/Dracones/perky-70b-v0.1 <!-- provided-files --> ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/perky-70b-v0.1-i1-GGUF/resolve/main/perky-70b-v0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 15.0 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/perky-70b-v0.1-i1-GGUF/resolve/main/perky-70b-v0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/perky-70b-v0.1-i1-GGUF/resolve/main/perky-70b-v0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.7 | | | [GGUF](https://huggingface.co/mradermacher/perky-70b-v0.1-i1-GGUF/resolve/main/perky-70b-v0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.8 | | | [GGUF](https://huggingface.co/mradermacher/perky-70b-v0.1-i1-GGUF/resolve/main/perky-70b-v0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 21.8 | | | [GGUF](https://huggingface.co/mradermacher/perky-70b-v0.1-i1-GGUF/resolve/main/perky-70b-v0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 23.7 | | | [GGUF](https://huggingface.co/mradermacher/perky-70b-v0.1-i1-GGUF/resolve/main/perky-70b-v0.1.i1-Q2_K.gguf) | i1-Q2_K | 25.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/perky-70b-v0.1-i1-GGUF/resolve/main/perky-70b-v0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/perky-70b-v0.1-i1-GGUF/resolve/main/perky-70b-v0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.6 | | | [GGUF](https://huggingface.co/mradermacher/perky-70b-v0.1-i1-GGUF/resolve/main/perky-70b-v0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 30.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/perky-70b-v0.1-i1-GGUF/resolve/main/perky-70b-v0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.3 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/perky-70b-v0.1-i1-GGUF/resolve/main/perky-70b-v0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 31.4 | | | [GGUF](https://huggingface.co/mradermacher/perky-70b-v0.1-i1-GGUF/resolve/main/perky-70b-v0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.7 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/perky-70b-v0.1-i1-GGUF/resolve/main/perky-70b-v0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.6 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/perky-70b-v0.1-i1-GGUF/resolve/main/perky-70b-v0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | | | [GGUF](https://huggingface.co/mradermacher/perky-70b-v0.1-i1-GGUF/resolve/main/perky-70b-v0.1.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/perky-70b-v0.1-i1-GGUF/resolve/main/perky-70b-v0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.7 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/perky-70b-v0.1-i1-GGUF/resolve/main/perky-70b-v0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/perky-70b-v0.1-i1-GGUF/resolve/main/perky-70b-v0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.9 | | | [GGUF](https://huggingface.co/mradermacher/perky-70b-v0.1-i1-GGUF/resolve/main/perky-70b-v0.1.i1-Q5_K_M.gguf) | 
i1-Q5_K_M | 49.2 | | | [PART 1](https://huggingface.co/mradermacher/perky-70b-v0.1-i1-GGUF/resolve/main/perky-70b-v0.1.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/perky-70b-v0.1-i1-GGUF/resolve/main/perky-70b-v0.1.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 57.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
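For the split i1-Q6_K quant above, the two parts have to be joined into a single file before loading; a minimal Python sketch, assuming `huggingface_hub` is installed and using the part filenames from the table:

```python
# Sketch: download the two i1-Q6_K parts listed above and join them into one GGUF.
import shutil
from huggingface_hub import hf_hub_download

repo = "mradermacher/perky-70b-v0.1-i1-GGUF"
parts = [
    "perky-70b-v0.1.i1-Q6_K.gguf.part1of2",
    "perky-70b-v0.1.i1-Q6_K.gguf.part2of2",
]

local_parts = [hf_hub_download(repo_id=repo, filename=p) for p in parts]

# Concatenate the parts in order, exactly like `cat part1 part2 > out.gguf`.
with open("perky-70b-v0.1.i1-Q6_K.gguf", "wb") as out:
    for path in local_parts:
        with open(path, "rb") as src:
            shutil.copyfileobj(src, out)
```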
bartowski/Hermes-2-Pro-Mistral-7B-GGUF
bartowski
2024-03-14T00:19:00Z
355
3
null
[ "gguf", "Mistral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "text-generation", "en", "dataset:teknium/OpenHermes-2.5", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
text-generation
2024-03-14T00:07:40Z
--- base_model: mistralai/Mistral-7B-v0.1 tags: - Mistral - instruct - finetune - chatml - DPO - RLHF - gpt4 - synthetic data - distillation - function calling - json mode model-index: - name: Hermes-2-Pro-Mistral-7B results: [] license: apache-2.0 language: - en datasets: - teknium/OpenHermes-2.5 widget: - example_title: Hermes 2 Pro messages: - role: system content: You are a sentient, superintelligent artificial general intelligence, here to teach and assist me. - role: user content: Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world. quantized_by: bartowski pipeline_tag: text-generation --- ## Llamacpp Quantizations of Hermes-2-Pro-Mistral-7B Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2405">b2405</a> for quantization. Original model: https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [Hermes-2-Pro-Mistral-7B-Q8_0.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Mistral-7B-GGUF/blob/main/Hermes-2-Pro-Mistral-7B-Q8_0.gguf) | Q8_0 | 7.69GB | Extremely high quality, generally unneeded but max available quant. | | [Hermes-2-Pro-Mistral-7B-Q6_K.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Mistral-7B-GGUF/blob/main/Hermes-2-Pro-Mistral-7B-Q6_K.gguf) | Q6_K | 5.94GB | Very high quality, near perfect, *recommended*. | | [Hermes-2-Pro-Mistral-7B-Q5_K_M.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Mistral-7B-GGUF/blob/main/Hermes-2-Pro-Mistral-7B-Q5_K_M.gguf) | Q5_K_M | 5.13GB | High quality, very usable. | | [Hermes-2-Pro-Mistral-7B-Q5_K_S.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Mistral-7B-GGUF/blob/main/Hermes-2-Pro-Mistral-7B-Q5_K_S.gguf) | Q5_K_S | 4.99GB | High quality, very usable. | | [Hermes-2-Pro-Mistral-7B-Q5_0.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Mistral-7B-GGUF/blob/main/Hermes-2-Pro-Mistral-7B-Q5_0.gguf) | Q5_0 | 4.99GB | High quality, older format, generally not recommended. | | [Hermes-2-Pro-Mistral-7B-Q4_K_M.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Mistral-7B-GGUF/blob/main/Hermes-2-Pro-Mistral-7B-Q4_K_M.gguf) | Q4_K_M | 4.36GB | Good quality, similar to 4.25 bpw. | | [Hermes-2-Pro-Mistral-7B-Q4_K_S.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Mistral-7B-GGUF/blob/main/Hermes-2-Pro-Mistral-7B-Q4_K_S.gguf) | Q4_K_S | 4.14GB | Slightly lower quality with small space savings. | | [Hermes-2-Pro-Mistral-7B-Q4_0.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Mistral-7B-GGUF/blob/main/Hermes-2-Pro-Mistral-7B-Q4_0.gguf) | Q4_0 | 4.10GB | Decent quality, older format, generally not recommended. | | [Hermes-2-Pro-Mistral-7B-Q3_K_L.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Mistral-7B-GGUF/blob/main/Hermes-2-Pro-Mistral-7B-Q3_K_L.gguf) | Q3_K_L | 3.82GB | Lower quality but usable, good for low RAM availability. | | [Hermes-2-Pro-Mistral-7B-Q3_K_M.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Mistral-7B-GGUF/blob/main/Hermes-2-Pro-Mistral-7B-Q3_K_M.gguf) | Q3_K_M | 3.51GB | Even lower quality. | | [Hermes-2-Pro-Mistral-7B-Q3_K_S.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Mistral-7B-GGUF/blob/main/Hermes-2-Pro-Mistral-7B-Q3_K_S.gguf) | Q3_K_S | 3.16GB | Low quality, not recommended. 
| | [Hermes-2-Pro-Mistral-7B-Q2_K.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Mistral-7B-GGUF/blob/main/Hermes-2-Pro-Mistral-7B-Q2_K.gguf) | Q2_K | 2.71GB | Extremely low quality, *not* recommended. | Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
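Since only a single file (not the whole branch) is needed, `huggingface_hub` can fetch just one of the quants above; a minimal sketch using the Q4_K_M file from the table as an example:

```python
# Sketch: fetch a single quant file from the repo above (not the whole branch).
# Assumes `pip install huggingface_hub`; the filename comes from the table.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="bartowski/Hermes-2-Pro-Mistral-7B-GGUF",
    filename="Hermes-2-Pro-Mistral-7B-Q4_K_M.gguf",
    local_dir=".",  # save next to your scripts instead of the HF cache
)
print(f"Downloaded to {path}")
```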
nDimensional/Necron-SDXL-LoRA-Warhammer-40k
nDimensional
2024-06-22T14:55:43Z
355
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "SDXL", "LoRA", "en", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-03-30T00:53:49Z
--- license: creativeml-openrail-m language: - en library_name: diffusers tags: - text-to-image - stable-diffusion - stable-diffusion-xl - SDXL - LoRA --- <html> <head> <link href="https://fonts.googleapis.com/css2?family=Skranji&display=swap" rel="stylesheet"> </head> <style> .title-container { display: flex; justify-content: center; align-items: center; } .title { font-size: 2.5em; text-align: center; color: #1d1d1d!important; font-family: 'Skranji'; text-transform: uppercase; letter-spacing: 0.1em; padding: 0.5em 0; background: transparent; } .title span { background: -webkit-linear-gradient(45deg, #458c9d, #54c190); -webkit-background-clip: text; -webkit-text-fill-color: transparent; } .icon { margin-right: 10px; } .icon img { width: 50px; height: auto; padding-bottom: 12px; } img { max-width: 100%; height: auto; } .center { display: block; margin-left: auto; margin-right: auto; width: 1344; } figcaption { display: block; font-style: italic; color: white; padding: 2px; text-align: center; } </style> <div class="title-container"> <div class="icon"> <img src="assets/necron-icon.svg" atl="Necron Icon"/> </div> <h1 class="title"> <span>Necron</span> </h1> </div> <figure> <img src="assets/sample_image (2).png" alt="sample image" class="center"/> <figcaption>Generated with Epoch 08</figcaption> </figure> </html> <!-- START SUPPORT --> <a href='https://ko-fi.com/M4M3ZBOIT' target='_blank'> <img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi3.png?v=3' border='0' alt='Buy Me a Coffee at ko-fi.com' /> </a> <!-- END SUPPORT --> ### Epoch08 recommended! ### Samples images available in [assets dir](https://huggingface.co/nDimensional/Necron-SDXL-LoRA-Warhammer-40K/tree/main/assets). <!-- START Model Info --> <div class="model-info-container" style='font-family:Monospace'> <table id="model-info-table"> <tr> <th></th> <th></th> </tr> <tr> <td>Base Model</td> <td>SDXL v1.0</td> </tr> <tr> <td>VAE</td> <td><a href="https://huggingface.co/madebyollin/sdxl-vae-fp16-fix">SDXL-VAE-FP16-FIX</a></td> </tr> <tr> <td>Type</td> <td>LoRA</td> </tr> </table> </div> <!-- END Model Info -->
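A minimal diffusers sketch for using this LoRA with the base model and VAE listed in the card's table; the weight filename below is a guess (check the repo's Files tab), and Epoch 08 follows the card's recommendation:

```python
# Sketch: load the LoRA on top of SDXL v1.0 with the fp16-fix VAE named in the card.
# Assumes `pip install diffusers transformers accelerate safetensors` and a CUDA GPU.
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

pipe.load_lora_weights(
    "nDimensional/Necron-SDXL-LoRA-Warhammer-40k",
    weight_name="Necron-Epoch08.safetensors",  # hypothetical filename; epoch 08 is the card's recommendation
)

image = pipe("a necron warrior standing in ruined tomb halls", num_inference_steps=30).images[0]
image.save("necron.png")
```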
MaziyarPanahi/ShadowExperiment24-7B-GGUF
MaziyarPanahi
2024-04-01T14:25:49Z
355
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "merge", "mergekit", "lazymergekit", "automerger", "base_model:yam-peleg/Experiment24-7B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:automerger/ShadowExperiment24-7B" ]
text-generation
2024-04-01T14:02:55Z
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - merge - mergekit - lazymergekit - automerger - base_model:yam-peleg/Experiment24-7B - license:apache-2.0 - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - text-generation model_name: ShadowExperiment24-7B-GGUF base_model: automerger/ShadowExperiment24-7B inference: false model_creator: automerger pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/ShadowExperiment24-7B-GGUF](https://huggingface.co/MaziyarPanahi/ShadowExperiment24-7B-GGUF) - Model creator: [automerger](https://huggingface.co/automerger) - Original model: [automerger/ShadowExperiment24-7B](https://huggingface.co/automerger/ShadowExperiment24-7B) ## Description [MaziyarPanahi/ShadowExperiment24-7B-GGUF](https://huggingface.co/MaziyarPanahi/ShadowExperiment24-7B-GGUF) contains GGUF format model files for [automerger/ShadowExperiment24-7B](https://huggingface.co/automerger/ShadowExperiment24-7B). ## How to use Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models: ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ### Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. 
This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: [MaziyarPanahi/ShadowExperiment24-7B-GGUF](https://huggingface.co/MaziyarPanahi/ShadowExperiment24-7B-GGUF) and below it, a specific filename to download, such as: ShadowExperiment24-7B-GGUF.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download MaziyarPanahi/ShadowExperiment24-7B-GGUF ShadowExperiment24-7B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` </details> <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download [MaziyarPanahi/ShadowExperiment24-7B-GGUF](https://huggingface.co/MaziyarPanahi/ShadowExperiment24-7B-GGUF) --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/ShadowExperiment24-7B-GGUF ShadowExperiment24-7B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m ShadowExperiment24-7B.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. 
For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://github.com/abetlen/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./ShadowExperiment24-7B.Q4_K_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./ShadowExperiment24-7B.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." 
} ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
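Building on the LangChain links above, a minimal sketch of driving the same GGUF file through LangChain's `LlamaCpp` wrapper; it assumes `langchain-community` and `llama-cpp-python` are installed and that the Q4_K_M file from the table has already been downloaded:

```python
# Sketch: use the GGUF file from LangChain via its LlamaCpp wrapper.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./ShadowExperiment24-7B.Q4_K_M.gguf",
    n_ctx=32768,      # matches the -c 32768 used in the llama.cpp command above
    n_gpu_layers=35,  # set to 0 if no GPU acceleration is available
    temperature=0.7,
)

prompt = "<|im_start|>user\nExplain GGUF in one sentence.<|im_end|>\n<|im_start|>assistant\n"
print(llm.invoke(prompt))
```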
stablediffusionapi/limitlessvision
stablediffusionapi
2024-04-13T15:50:30Z
355
0
diffusers
[ "diffusers", "modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-04-13T15:48:07Z
--- license: creativeml-openrail-m tags: - modelslab.com - stable-diffusion-api - text-to-image - ultra-realistic pinned: true --- # LimitlessVision API Inference ![generated from modelslab.com](https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/generations/11807205221713023218.png) ## Get API Key Get your API key from [ModelsLab API](http://modelslab.com), no payment needed. Replace the key in the code below and set **model_id** to "limitlessvision". Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://modelslab.com/docs) Try model for free: [Generate Images](https://modelslab.com/models/limitlessvision) Model link: [View model](https://modelslab.com/models/limitlessvision) View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "limitlessvision",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {'Content-Type': 'application/json'}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
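Handling the response is not shown above; a short sketch of one way to consume it, assuming the JSON body contains a `status` field and an `output` list of image URLs (verify the exact schema against the docs linked in the card):

```python
# Sketch: consume the API response produced by the request above. The assumed
# "status"/"output" fields should be checked against the ModelsLab docs.
import requests

def save_outputs(response: requests.Response, prefix: str = "limitlessvision") -> None:
    result = response.json()
    if result.get("status") != "success":
        # Long-running jobs may report another status; re-poll or inspect the message.
        print(result)
        return
    for i, image_url in enumerate(result.get("output", [])):
        img = requests.get(image_url, timeout=60)
        img.raise_for_status()
        with open(f"{prefix}_{i}.png", "wb") as f:
            f.write(img.content)

# save_outputs(response)  # `response` is the object returned by the request above
```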
ArneKreuz/starcoderbase-finetuned-thestack
ArneKreuz
2024-04-18T04:29:13Z
355
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:bigcode/starcoderbase-3b", "license:bigcode-openrail-m", "region:us" ]
null
2024-04-18T04:29:05Z
--- license: bigcode-openrail-m library_name: peft tags: - generated_from_trainer base_model: bigcode/starcoderbase-3b model-index: - name: starcoderbase-finetuned-thestack results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # starcoderbase-finetuned-thestack This model is a fine-tuned version of [bigcode/starcoderbase-3b](https://huggingface.co/bigcode/starcoderbase-3b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9183 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 0.8487 | 1.0 | 83729 | 0.9170 | | 0.8132 | 2.0 | 167458 | 0.9169 | | 0.788 | 3.0 | 251187 | 0.9183 | ### Framework versions - PEFT 0.10.0 - Transformers 4.38.1 - Pytorch 2.1.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
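The card does not show inference code; a minimal sketch of loading this repo's weights on top of the base model with `peft`, assuming the repo holds adapter weights as the PEFT library tag suggests:

```python
# Sketch: load the PEFT adapter from this repo onto bigcode/starcoderbase-3b.
# Assumes `pip install peft transformers torch`; the base model may require
# accepting its license agreement on the Hub first.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("bigcode/starcoderbase-3b", torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, "ArneKreuz/starcoderbase-finetuned-thestack")
tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoderbase-3b")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```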
RichardErkhov/deepseek-ai_-_deepseek-coder-7b-instruct-v1.5-gguf
RichardErkhov
2024-05-03T15:08:50Z
355
0
null
[ "gguf", "region:us" ]
null
2024-05-03T13:15:35Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) deepseek-coder-7b-instruct-v1.5 - GGUF - Model creator: https://huggingface.co/deepseek-ai/ - Original model: https://huggingface.co/deepseek-ai/deepseek-coder-7b-instruct-v1.5/ | Name | Quant method | Size | | ---- | ---- | ---- | | [deepseek-coder-7b-instruct-v1.5.Q2_K.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-7b-instruct-v1.5-gguf/blob/main/deepseek-coder-7b-instruct-v1.5.Q2_K.gguf) | Q2_K | 2.53GB | | [deepseek-coder-7b-instruct-v1.5.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-7b-instruct-v1.5-gguf/blob/main/deepseek-coder-7b-instruct-v1.5.IQ3_XS.gguf) | IQ3_XS | 2.79GB | | [deepseek-coder-7b-instruct-v1.5.IQ3_S.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-7b-instruct-v1.5-gguf/blob/main/deepseek-coder-7b-instruct-v1.5.IQ3_S.gguf) | IQ3_S | 2.92GB | | [deepseek-coder-7b-instruct-v1.5.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-7b-instruct-v1.5-gguf/blob/main/deepseek-coder-7b-instruct-v1.5.Q3_K_S.gguf) | Q3_K_S | 2.92GB | | [deepseek-coder-7b-instruct-v1.5.IQ3_M.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-7b-instruct-v1.5-gguf/blob/main/deepseek-coder-7b-instruct-v1.5.IQ3_M.gguf) | IQ3_M | 3.06GB | | [deepseek-coder-7b-instruct-v1.5.Q3_K.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-7b-instruct-v1.5-gguf/blob/main/deepseek-coder-7b-instruct-v1.5.Q3_K.gguf) | Q3_K | 3.22GB | | [deepseek-coder-7b-instruct-v1.5.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-7b-instruct-v1.5-gguf/blob/main/deepseek-coder-7b-instruct-v1.5.Q3_K_M.gguf) | Q3_K_M | 3.22GB | | [deepseek-coder-7b-instruct-v1.5.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-7b-instruct-v1.5-gguf/blob/main/deepseek-coder-7b-instruct-v1.5.Q3_K_L.gguf) | Q3_K_L | 3.49GB | | [deepseek-coder-7b-instruct-v1.5.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-7b-instruct-v1.5-gguf/blob/main/deepseek-coder-7b-instruct-v1.5.IQ4_XS.gguf) | IQ4_XS | 3.56GB | | [deepseek-coder-7b-instruct-v1.5.Q4_0.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-7b-instruct-v1.5-gguf/blob/main/deepseek-coder-7b-instruct-v1.5.Q4_0.gguf) | Q4_0 | 3.73GB | | [deepseek-coder-7b-instruct-v1.5.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-7b-instruct-v1.5-gguf/blob/main/deepseek-coder-7b-instruct-v1.5.IQ4_NL.gguf) | IQ4_NL | 3.74GB | | [deepseek-coder-7b-instruct-v1.5.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-7b-instruct-v1.5-gguf/blob/main/deepseek-coder-7b-instruct-v1.5.Q4_K_S.gguf) | Q4_K_S | 3.75GB | | [deepseek-coder-7b-instruct-v1.5.Q4_K.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-7b-instruct-v1.5-gguf/blob/main/deepseek-coder-7b-instruct-v1.5.Q4_K.gguf) | Q4_K | 3.93GB | | [deepseek-coder-7b-instruct-v1.5.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-7b-instruct-v1.5-gguf/blob/main/deepseek-coder-7b-instruct-v1.5.Q4_K_M.gguf) | Q4_K_M | 3.93GB | | [deepseek-coder-7b-instruct-v1.5.Q4_1.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-7b-instruct-v1.5-gguf/blob/main/deepseek-coder-7b-instruct-v1.5.Q4_1.gguf) | Q4_1 | 4.1GB | | 
[deepseek-coder-7b-instruct-v1.5.Q5_0.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-7b-instruct-v1.5-gguf/blob/main/deepseek-coder-7b-instruct-v1.5.Q5_0.gguf) | Q5_0 | 4.48GB | | [deepseek-coder-7b-instruct-v1.5.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-7b-instruct-v1.5-gguf/blob/main/deepseek-coder-7b-instruct-v1.5.Q5_K_S.gguf) | Q5_K_S | 4.48GB | | [deepseek-coder-7b-instruct-v1.5.Q5_K.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-7b-instruct-v1.5-gguf/blob/main/deepseek-coder-7b-instruct-v1.5.Q5_K.gguf) | Q5_K | 4.59GB | | [deepseek-coder-7b-instruct-v1.5.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-7b-instruct-v1.5-gguf/blob/main/deepseek-coder-7b-instruct-v1.5.Q5_K_M.gguf) | Q5_K_M | 4.59GB | | [deepseek-coder-7b-instruct-v1.5.Q5_1.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-7b-instruct-v1.5-gguf/blob/main/deepseek-coder-7b-instruct-v1.5.Q5_1.gguf) | Q5_1 | 4.86GB | | [deepseek-coder-7b-instruct-v1.5.Q6_K.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-7b-instruct-v1.5-gguf/blob/main/deepseek-coder-7b-instruct-v1.5.Q6_K.gguf) | Q6_K | 5.28GB | Original model description: --- license: other license_name: deepseek license_link: LICENSE --- <p align="center"> <img width="1000px" alt="DeepSeek Coder" src="https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/pictures/logo.png?raw=true"> </p> <p align="center"><a href="https://www.deepseek.com/">[🏠Homepage]</a> | <a href="https://coder.deepseek.com/">[🤖 Chat with DeepSeek Coder]</a> | <a href="https://discord.gg/Tc7c45Zzu5">[Discord]</a> | <a href="https://github.com/guoday/assert/blob/main/QR.png?raw=true">[Wechat(微信)]</a> </p> <hr> ### 1. Introduction of Deepseek-Coder-7B-Instruct v1.5 Deepseek-Coder-7B-Instruct-v1.5 is continue pre-trained from Deepseek-LLM 7B on 2T tokens by employing a window size of 4K and next token prediction objective, and then fine-tuned on 2B tokens of instruction data. - **Home Page:** [DeepSeek](https://deepseek.com/) - **Repository:** [deepseek-ai/deepseek-coder](https://github.com/deepseek-ai/deepseek-coder) - **Chat With DeepSeek Coder:** [DeepSeek-Coder](https://coder.deepseek.com/) ### 2. Evaluation Results <img width="1000px" alt="DeepSeek Coder" src="https://cdn-uploads.huggingface.co/production/uploads/6538815d1bdb3c40db94fbfa/xOtCTW5xdoLCKY4FR6tri.png"> ### 3. How to Use Here give some examples of how to use our model. #### Chat Model Inference ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-7b-instruct-v1.5", trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-7b-instruct-v1.5", trust_remote_code=True).cuda() messages=[ { 'role': 'user', 'content': "write a quick sort algorithm in python."} ] inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device) outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, top_k=50, top_p=0.95, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id) print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)) ``` ### 4. License This code repository is licensed under the MIT License. The use of DeepSeek Coder models is subject to the Model License. DeepSeek Coder supports commercial use. 
See the [LICENSE-MODEL](https://github.com/deepseek-ai/deepseek-coder/blob/main/LICENSE-MODEL) for more details. ### 5. Contact If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).
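To run one of the GGUF files in this repo rather than the original checkpoints, a minimal sketch with llama-cpp-python's `Llama.from_pretrained` helper (available in recent versions; it downloads the chosen quant from the Hub first):

```python
# Sketch: pull one of the quant files listed above straight from the Hub and chat with it.
# Assumes a recent llama-cpp-python (`pip install llama-cpp-python huggingface-hub`).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="RichardErkhov/deepseek-ai_-_deepseek-coder-7b-instruct-v1.5-gguf",
    filename="deepseek-coder-7b-instruct-v1.5.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=-1,  # offload everything if a GPU is available; use 0 for CPU only
)

# Recent versions read the chat template from the GGUF metadata when no chat_format is given.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "write a quick sort algorithm in python."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```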
nDimensional/Assassinorum-SDXL-LoRA-Warhammer-40k
nDimensional
2024-06-22T14:56:37Z
355
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "SDXL", "LoRA", "en", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-05-04T18:52:11Z
--- license: creativeml-openrail-m language: - en library_name: diffusers tags: - text-to-image - stable-diffusion - stable-diffusion-xl - SDXL - LoRA --- <html> <head> <link href="https://fonts.googleapis.com/css2?family=Skranji&display=swap" rel="stylesheet"> </head> <style> .title-container { display: flex; justify-content: center; align-items: center; } .title { font-size: 2.5em; text-align: center; color: #1d1d1d!important; font-family: 'Skranji'; text-transform: uppercase; letter-spacing: 0.1em; padding: 0.5em 0; background: transparent; } .title span { background: -webkit-linear-gradient(45deg, #fcb900, #ff6900); -webkit-background-clip: text; -webkit-text-fill-color: transparent; } .icon { margin-right: 10px; } .icon img { width: 50px; height: auto; padding-bottom: 12px; } img { max-width: 100%; height: auto; } .center { display: block; margin-left: auto; margin-right: auto; width: 1344; } figcaption { display: block; font-style: italic; color: white; padding: 2px; text-align: center; } </style> <div class="title-container"> <div class="icon"> <img src="assets/assassinorum.svg" atl="Assassinorum SVG"/> </div> <h1 class="title"> <span>Officio Assassinorum</span> </h1> </div> <figure> <img src="assets/sample_image.png" alt="sample image" class="center"/> <figcaption>Generated with Epoch 09</figcaption> </figure> </html> <!-- START SUPPORT --> <a href='https://ko-fi.com/M4M3ZBOIT' target='_blank'> <img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi3.png?v=3' border='0' alt='Buy Me a Coffee at ko-fi.com' /> </a> <!-- END SUPPORT --> ### Epoch09 is recommended! <!-- START Model Info --> <div class="model-info-container" style='font-family:Monospace'> <table id="model-info-table"> <tr> <th></th> <th></th> </tr> <tr> <td>Base Model</td> <td>SDXL v1.0</td> </tr> <tr> <td>VAE</td> <td><a href="https://huggingface.co/madebyollin/sdxl-vae-fp16-fix">SDXL-VAE-FP16-FIX</a></td> </tr> <tr> <td>Type</td> <td>LoRA</td> </tr> </table> </div> <!-- END Model Info -->
mirlab/AkaLlama-llama3-70b-v0.1-GGUF
mirlab
2024-05-08T14:37:39Z
355
11
transformers
[ "transformers", "gguf", "meta", "llama", "llama-3", "akallama", "text-generation", "ko", "en", "arxiv:2403.07691", "license:other", "region:us" ]
text-generation
2024-05-04T21:41:39Z
--- libray_name: transformers pipeline_tag: text-generation license: other license_name: llama3 license_link: LICENSE language: - ko - en tags: - meta - llama - llama-3 - akallama library_name: transformers inference: false --- <a href="https://huggingface.co/collections/mirlab/akallama-66338859b09221f3607fdfcd"> <img src="https://github.com/0110tpwls/project/blob/master/image_720.png?raw=true" width="40%"/> </a> # AKALLAMA AkaLlama is a series of Korean language models designed for practical usability across a wide range of tasks. The initial model, AkaLlama-v0.1, is a fine-tuned version of Meta-Llama-3-70b-Instruct. It has been trained on a custom mix of publicly available datasets curated by the MIR Lab. Our goal is to explore cost-effective ways to adapt high-performing LLMs for specific use cases, such as different languages (e.g., Korean) or domains (e.g., organization-specific chatbots). For details, check out [our project page](https://yonsei-mir.github.io/AkaLLaMA-page). ### Model Description This is the model card of a GGUF model that has been pushed on the Hub. - **Developed by:** [Yonsei MIRLab](https://mirlab.yonsei.ac.kr/) - **Language(s) (NLP):** Korean, English - **License:** llama3 - **Finetuned from model:** [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) - **Quantized from model:** [mirlab/AkaLlama-llama3-70b-v0.1](https://huggingface.co/mirlab/AkaLlama-llama3-70b-v0.1) ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## How to use This repo provides gguf weight files for AkaLlama-70B-v0.1. 
# Use with llama.cpp.python See the snippet below for usage with llama.cpp.python: ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./AkaLlama-llama3-70b-v0.1.Q4_K_M.gguf", # Download the model file first n_ctx=8192, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=81 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( """<|begin_of_text|><|start_header_id|>system<|end_header_id|> 당신은 연세대학교 멀티모달 연구실 (MIR lab) 이 만든 대규모 언어 모델인 AkaLlama (아카라마) 입니다. 다음 지침을 따르세요: 1. 사용자가 별도로 요청하지 않는 한 항상 한글로 소통하세요. 2. 유해하거나 비윤리적, 차별적, 위험하거나 불법적인 내용이 답변에 포함되어서는 안 됩니다. 3. 질문이 말이 되지 않거나 사실에 부합하지 않는 경우 정답 대신 그 이유를 설명하세요. 질문에 대한 답을 모른다면 거짓 정보를 공유하지 마세요. 4. 안전이나 윤리에 위배되지 않는 한 사용자의 모든 질문에 완전하고 포괄적으로 답변하세요.<|eot_id|><|start_header_id|>user<|end_header_id|> {prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|> """, # Prompt max_tokens=512, # Generate up to 512 tokens stop=["<|eot_id|>", "<|end_of_text|>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./AkaLlama-llama3-70b-v0.1.Q4_K_M.gguf", chat_format="llama-3") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": """당신은 연세대학교 멀티모달 연구실 (MIR lab) 이 만든 대규모 언어 모델인 AkaLlama (아카라마) 입니다. 다음 지침을 따르세요: 1. 사용자가 별도로 요청하지 않는 한 항상 한글로 소통하세요. 2. 유해하거나 비윤리적, 차별적, 위험하거나 불법적인 내용이 답변에 포함되어서는 안 됩니다. 3. 질문이 말이 되지 않거나 사실에 부합하지 않는 경우 정답 대신 그 이유를 설명하세요. 질문에 대한 답을 모른다면 거짓 정보를 공유하지 마세요. 4. 안전이나 윤리에 위배되지 않는 한 사용자의 모든 질문에 완전하고 포괄적으로 답변하세요."""}, { "role": "user", "content": "네 이름은 뭐야?." } ] ) # 내 이름은 AkaLlama입니다! 나는 언어 모델로, 사용자와 대화하는 데 도움을 주기 위해 만들어졌습니다. 나는 다양한 주제에 대한 질문에 답하고, 새로운 아이디어를 제공하며, 문제를 해결하는 데 도움이 될 수 있습니다. 사용자가 원하는 정보나 도움을 받도록 최선을 다할 것입니다! ``` ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. 
This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [AkaLlama-llama3-70b-v0.1.Q2_K.gguf](https://huggingface.co/mirlab/AkaLlama-llama3-70b-v0.1-GGUF/blob/main/AkaLlama-llama3-70b-v0.1.Q2_K.gguf) | Q2_K | 2 | 26.4 GB| 28.9 GB | smallest, significant quality loss - not recommended for most purposes | | [AkaLlama-llama3-70b-v0.1.Q3_K_S.gguf](https://huggingface.co/mirlab/AkaLlama-llama3-70b-v0.1-GGUF/blob/main/AkaLlama-llama3-70b-v0.1.Q3_K_S.gguf) | Q3_K_S | 3 | 30.9 GB| 33.4 GB | very small, high quality loss | | [AkaLlama-llama3-70b-v0.1.Q3_K_M.gguf](https://huggingface.co/mirlab/AkaLlama-llama3-70b-v0.1-GGUF/blob/main/AkaLlama-llama3-70b-v0.1.Q3_K_M.gguf) | Q3_K_M | 3 | 34.3 GB| 36.8 GB | very small, high quality loss | | [AkaLlama-llama3-70b-v0.1.Q3_K_L.gguf](https://huggingface.co/mirlab/AkaLlama-llama3-70b-v0.1-GGUF/blob/main/AkaLlama-llama3-70b-v0.1.Q3_K_L.gguf) | Q3_K_L | 3 | 37.1 GB| 39.6 GB | small, substantial quality loss | | [AkaLlama-llama3-70b-v0.1.Q4_K_S.gguf](https://huggingface.co/mirlab/AkaLlama-llama3-70b-v0.1-GGUF/blob/main/AkaLlama-llama3-70b-v0.1.Q4_K_S.gguf) | Q4_K_S | 4 | 40.3 GB| 42.8 GB | small, greater quality loss | | [AkaLlama-llama3-70b-v0.1.Q4_K_M.gguf](https://huggingface.co/mirlab/AkaLlama-llama3-70b-v0.1-GGUF/blob/main/AkaLlama-llama3-70b-v0.1.Q4_K_M.gguf) | Q4_K_M | 4 | 42.5 GB| 45.0 GB | medium, balanced quality - recommended | | [AkaLlama-llama3-70b-v0.1.Q5_K_S.gguf](https://huggingface.co/mirlab/AkaLlama-llama3-70b-v0.1-GGUF/blob/main/AkaLlama-llama3-70b-v0.1.Q5_K_S.gguf) | Q5_K_S | 5 | 48.7 GB| 50.2 GB | large, low quality loss - recommended | | [AkaLlama-llama3-70b-v0.1.Q5_K_M.gguf](https://huggingface.co/mirlab/AkaLlama-llama3-70b-v0.1-GGUF/blob/main/AkaLlama-llama3-70b-v0.1.Q5_K_M.gguf) | Q5_K_M | 5 | 50.0 GB| 52.5 GB | large, very low quality loss - recommended | | AkaLlama-llama3-70b-v0.1.Q6_K.gguf | Q6_K | 6 | 54.4 GB| 59.9 GB | very large, extremely low quality loss | | AkaLlama-llama3-70b-v0.1.Q8_0.gguf | Q8_0 | 8 | 70.0 GB| 72.5 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. ### Q6_K and Q8_0 files are split and require joining **Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files. 
### q6_K Please download: * `AkaLlama-llama3-70b-v0.1.Q6_K.00001-of-00002.gguf` * `AkaLlama-llama3-70b-v0.1.Q6_K.00002-of-00002.gguf` ### q8_0 Please download: * `AkaLlama-llama3-70b-v0.1.Q8_0.00001-of-00002.gguf` * `AkaLlama-llama3-70b-v0.1.Q8_0.00002-of-00002.gguf` To join the files, do the following: Linux and macOS: ``` cat AkaLlama-llama3-70b-v0.1.Q6_K.*-of-00002.gguf > AkaLlama-llama3-70b-v0.1.Q6_K.gguf && rm AkaLlama-llama3-70b-v0.1.Q6_K.*-of-00002.gguf cat AkaLlama-llama3-70b-v0.1.Q8_0.*-of-00002.gguf > AkaLlama-llama3-70b-v0.1.Q8_0.gguf && rm AkaLlama-llama3-70b-v0.1.Q8_0.*-of-00002.gguf ``` Windows command line: ``` COPY /B AkaLlama-llama3-70b-v0.1.Q6_K.00001-of-00002.gguf + AkaLlama-llama3-70b-v0.1.Q6_K.00002-of-00002.gguf AkaLlama-llama3-70b-v0.1.Q6_K.gguf del AkaLlama-llama3-70b-v0.1.Q6_K.00001-of-00002.gguf AkaLlama-llama3-70b-v0.1.Q6_K.00002-of-00002.gguf COPY /B AkaLlama-llama3-70b-v0.1.Q8_0.00001-of-00002.gguf + AkaLlama-llama3-70b-v0.1.Q8_0.00002-of-00002.gguf AkaLlama-llama3-70b-v0.1.Q8_0.gguf del AkaLlama-llama3-70b-v0.1.Q8_0.00001-of-00002.gguf AkaLlama-llama3-70b-v0.1.Q8_0.00002-of-00002.gguf ``` ## Evaluation | Model | #Parameter | Qunatized? | LogicKor* | |:--------------------------------:|:----------:|------------|:---------:| | AkaLlama-llama3-70b-v0.1-GGUF.Q4 | 70B | 4bit | 6.56 | | AkaLlama-llama3-70b-v0.1-GGUF.Q8 | 70B | 8bit | 6.34 | *mean over 3 random seeds ## Training Details ### Training Procedure We trained AkaLlama using a preference learning alignment algorithm called [Odds Ratio Preference Optimization (ORPO)](https://huggingface.co/papers/2403.07691). Our training pipeline is almost identical to that of [HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1), aside from minor hyperparameter changes. Please check out Huggingface's [alignment handbook](https://github.com/huggingface/alignment-handbook?tab=readme-ov-file) for further details, including the chat template. ### Training Data Detailed descriptions regarding training data will be announced later. ### Examples <!--<a href="https://huggingface.co/collections/mirlab/akallama-66338859b09221f3607fdfcd"> <img src="https://github.com/0110tpwls/project/blob/master/image (8).png?raw=true" width="80%"/> </a> --> <pre style="white-space: pre-wrap;"> User: 너는 누구이고 어떤 일을 할 수 있어? Model: 저는 인공지능 언어 모델인 AkaLlama(아카라마)입니다. 저는 다양한 주제와 형식의 텍스트를 생성할 수 있으며, 질문에 답하고, 대화를 진행하며, 이야기를 만들 수 있습니다. 또한 사용자가 지정한 주제나 장르에 따라 창의적인 글쓰기나 편집 작업을 수행할 수도 있습니다. </pre> <details> <summary><b>Math Solving[CLICK TO EXPAND]</b></summary> <!-- <a href="https://huggingface.co/collections/mirlab/akallama-66338859b09221f3607fdfcd"> <img src="https://github.com/0110tpwls/project/blob/master/image (9).png?raw=true" width="80%"/> </a> --> <pre style="white-space: pre-wrap;"> User: 아래 문제를 깊게 생각하면서 풀어봐 연필 5 타와 지우개 45 개가 있습니다. 이것을 될 수 있는 대로 많은 학생들에게 남김없이 똑같이 나누어 주려고 합니다. 몇 명까지 나누어 줄 수 있습니까? Model: 아래 문제를 깊게 생각하면서 풀어봅시다: 연필 5 타와 지우개 45 개가 있습니다. 이것을 될 수 있는 대로 많은 학생들에게 남김없이 똑같이 나누어 주려고 합니다. 몇 명까지 나누어 줄 수 있습니까? 풀이: 1. 연필 5 타를 1 개씩 나눠서 몇 명에게 나누어 줄 수 있는지 먼저 구합니다. - 각 연필에는 12개의 연필이 포함되어 있습니다. 따라서 총 12 * 5 = 60개입니다. - 60개를 1명당 1개씩 나누면 60명의 학생들에게 나눠줄 수 있습니다. 2. 지우개 45 개를 1 개씩 나누어 몇 명에게 나누어 줄 수 있는지 구합니다. - 45 개를 1 명당 1 개씩 나누면 45 명의 학생들에게 나눠줄 수 있습니다. 3. 연필과 지우개의 두 가지 물품을 함께 고려하여 가장 많은 학생들에게 똑같이 나누어 줄 수 있는 방법을 찾습니다. - 연필로만 60명, 지우개로만 45명의 학생에게 나눠줄 수 있으므로, 두 종류의 문구를 함께 고려할 때는 이보다 적은 수의 학생에게 나눠주어야 합니다. - 이 중에서 공통인수(common divisor)를 찾아서 이를 최소 단위로 설정하면 됩니다. - 60과 45의 최대공약수(GCD)는 15입니다. 4. 
연필과 지우개 각각을 GCD(15)로 나누어 몇 개씩 나눠줄 수 있는지 구합니다. - 연필은 60 / 15 = 4개씩, 총 4 * 12 = 48개의 연필이 필요합니다. - 지우개는 45 / 15 = 3개씩, 총 3 * 1 = 3개의 지우개가 필요합니다. 5. 최종적으로, 가장 많은 학생들에게 똑같이 나누어 줄 수 있는 방법은 연필과 지우개를 GCD(15)로 나누어 각 학생에게 4개의 연필과 3개의 지우개를 제공하는 것입니다. - 이 경우 총 48 + 3 = 51개의 물품을 사용하여 15명의 학생들에게 똑같이 나눠줄 수 있습니다. 결론: 연필과 지우개 각각을 GCD(15)로 나누어 각 학생에게 4개의 연필과 3개의 지우개를 제공하는 방법으로 최대 15명의 학생들에게 똑같이 나누어 줄 수 있습니다. </pre> </details> <details> <summary><b>Writting[CLICK TO EXPAND]</b></summary> <!-- <a href="https://huggingface.co/collections/mirlab/akallama-66338859b09221f3607fdfcd"> <img src="https://github.com/0110tpwls/project/blob/master/image (13).png?raw=true" width="80%"/> </a> --> <pre style="white-space: pre-wrap;"> User: 아래 문장을 보고 문법적으로 틀린부분을 지적해줘 그는 서슴치 않고 비평한다. Model: 서슴치"라는 단어는 "서슴"이라는 동사와 "치"라는 접미사를 사용하여 만든 단어입니다. 이 단어는 "서슴"의 의미를 반복적으로 표현하는 데 사용되는 조동사적 용법으로, "서슴"이란 동사의 반복적인 행동을 강조합니다. 따라서 문장에서 "서슴치"라는 단어는 중복적으로 사용되어 문법적으로 틀린 부분입니다. 정답은 다음과 같습니다: 그는 서슴없이 비평한다. </pre> <!-- <a href="https://huggingface.co/collections/mirlab/akallama-66338859b09221f3607fdfcd"> <img src="https://github.com/0110tpwls/project/blob/master/image (7).png?raw=true" width="80%"/> </a> --> <pre style="white-space: pre-wrap;"> User: 윤동주 시인의 "별 헤는 밤"을 보고 당시 시대상을 고려해서 해석해봐. 그리고 이를 작가의 독백형식으로 다시 작성해봐. Model: 윤동주는 1941년 일본 제국주의 치하에서 이 작품을 쓰면서, 강압적인 상황 속에서도 자신의 정체성과 문화에 대한 애착을 표현하고자 했습니다. 그는 하늘에 가득 찬 별들을 보며 그 속에 하나 둘 새겨지는 별들이 있다고 표현했습니다. 이러한 별들은 각각 추억, 사랑, 쓸쓸함, 동경, 시 등 다양한 감정을 담고 있습니다. 윤동주는 이 별들 중 하나에 어머니를 부르며 멀리 떨어져 있는 것을 그리워하며, 자신의 이름자를 언덕 위에 쓰고 흙으로 덮어 버린다고 말합니다. 이러한 행동은 자신의 정체성을 확인하고, 잃어버린 것을 되찾으려는 노력으로 볼 수 있습니다. 윤동주는 또한 자신이 죽고 나면 무덤 위에 파란 잔디가 피어나듯이 자신의 이름자 묻힌 언덕 위에도 풀이 무성할 것이라고 말합니다. 이는 당시의 사회상과 작가의 내면을 반영하고 있으며, 한국인의 정체성을 확인하고 잃어버린 것을 되찾으려는 노력, 그리고 죽음 이후에 자신의 존재를 남겨두고 싶다는 바람을 담고 있습니다. 이 시를 작가의 독백형식으로 다시 작성하면 다음과 같습니다: "저는 이 별들이 너무나 많은 것 같아요. 하나 둘 새겨지는 별들 중에는 추억도 있고, 사랑도 있고, 쓸쓸함과 동경도 있어요. 그리고 어머니, 당신은 멀리 북간도에 계신 건가요? 저는 당신을 부르고 싶어요. 저의 이름자는 언덕 위에 쓰고 흙으로 덮어 버렸어요. 딴은 밤을 새워 우는 벌레처럼 부끄러운 이름을 슬퍼하는 이유가 있어요. 하지만 겨울이 지나고 나의 별에도 봄이 오면, 무덤 위에 파란 잔디가 피어나듯이 저의 이름자 묻힌 언덕 위에서도 풀이 무성할 거예요. 저는 이 별들을 보고, 자신의 정체성을 확인하고 싶어요. 잃어버린 것을 되찾으려는 노력을 하고 싶어요. 죽음 이후에도 자신의 존재를 남겨두고 싶어요." </pre> </details> <details> <summary><b>Logical Reasoning[CLICK TO EXPAND]</b></summary> <!-- <a href="https://huggingface.co/collections/mirlab/akallama-66338859b09221f3607fdfcd"> <img src="https://github.com/0110tpwls/project/blob/master/image (15).png?raw=true" width="80%"/> </a> --> <pre style="white-space: pre-wrap;"> User: 세종대왕의 맥북프로 던짐 사건에 대해 설명해줘 Model: 세종대왕이 맥북프로를 던진 사건은 사실무근한 이야기입니다. 세종대왕은 조선 제4대 국왕으로 1418년부터 1450년까지 재위했습니다. 그는 과학, 문학, 예술 등 다양한 분야에 관심을 갖고 연구하였으며, 특히 한글 창제와 천문학에 큰 공헌을 했습니다. 한편 맥북프로는 애플이 생산하는 노트북 컴퓨터로, 세종대왕 시대의 기술로는 존재할 수 없는 제품입니다. 따라서 세종대왕이 맥북프로를 던진 사건은 사실과 맞지 않는 이야기가 될 것입니다. </pre> </details> <details> <summary><b>Coding [CLICK TO EXPAND]</b></summary> <!-- <a href="https://huggingface.co/collections/mirlab/akallama-66338859b09221f3607fdfcd"> <img src="https://github.com/0110tpwls/project/blob/master/image (11).png?raw=true" width="80%"/> </a> --> <pre style="white-space: pre-wrap;"> User: PyTorch를 사용해서 간단한 CNN을 구현해봐. Model: 네, 물론이에요! PyTorch를 사용하여 간단한 CNN을 구현하는 예제 코드를 보여드릴게요. 우선, 필요한 라이브러리를 가져와야 해요. PyTorch의 `torch`와 `nn` 모듈이 필요해요. ```python import torch\nimport torch.nn as nn ``` 그 다음, 간단한 CNN 모델을 정의할 거예요. 이 경우에는 2개의 컨볼루션 레이어와 3개의 풀링 레이어로 구성된 모델이에요. 
```python class SimpleCNN(nn.Module): def __init__(self): super(SimpleCNN, self).__init__() self.conv1 = nn.Conv2d(1, 10, kernel_size=5) self.pool = nn.MaxPool2d(2, 2) self.conv2 = nn.Conv2d(10, 20, kernel_size=5) def forward(self, x): x = self.pool(torch.relu(self.conv1(x))) x = self.pool(torch.relu(self.conv2(x))) return x ``` 이제 모델을 초기화하고, 입력 데이터를 준비해야 해요. 여기서는 임의의 28x28 크기의 이미지 데이터를 생성했어요. ```python model = SimpleCNN() input_data = torch.randn(1, 1, 28, 28) ``` 마지막으로, 모델에 입력 데이터를 넣어 출력을 확인할 수 있어요. ```python output = model(input_data) print(output.shape) ``` 이렇게 하면 간단한 CNN 모델을 PyTorch를 사용해 구현하고, 입력 데이터를 통해 출력을 확인할 수 있어요. 물론 실제로는 더 많은 레이어나 파라미터 튜닝 등이 필요하지만, 이 코드는 기본적인 CNN 구조를 보여주는 예제일 뿐이에요. 더 궁금한 점이 있으면 언제든지 물어봐 주세요! </pre> </details> You can find more examples at [our project page](https://yonsei-mir.github.io/AkaLLaMA-page) ## Warning Although AKALlama-70B has significant potential, its responses can sometimes be inaccurate, biased, or misaligned, presenting risks if used without additional testing and refinement. Furthermore, the quality of the model's output is greatly influenced by the system prompt and decoding strategy. Changes in these areas could result in less precise outputs. Therefore, we strongly recommend handling our model with considerable caution. ## Citation If you find our model useful in your work, please cite the model as follows: @misc{akallama, author = {Chung, Jiwan and Jeon, Jaehyun and Kim, Saejin and Lim, Seungwon and Oh, Giyeong and Son, Yejin and Yu, Youngjae}, title = {AkaLlama: Yonsei University Large Language Model Project}, year = {2024}, publisher = {Hugging Face}, journal = {Hugging Face repository}, howpublished = {\url{https://huggingface.co/mirlab/AkaLlama-llama3-70b-v0.1}}, } ## Contact We look forward for your feedback and welcome collaboration on this exciting project! ### Contributors - [YoungJaeYu](https://yj-yu.github.io/home/) - [Yonsei MIRLab](https://mirlab.yonsei.ac.kr/) ## Special Thanks - Data Center of the Department of Artificial Intelligence at Yonsei University for the computation resources ## Acknowledgement - Title image generated by DALL·E 3
mzbac/Phi-3-mini-4k-instruct-function-calling
mzbac
2024-05-07T07:21:01Z
355
3
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "dataset:mzbac/function-calling-phi-3-format-v1.1", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-07T06:21:13Z
--- datasets: - mzbac/function-calling-phi-3-format-v1.1 --- # Model Fine-tuned the Phi3 instruction model for function calling via MLX-LM using https://huggingface.co/datasets/mzbac/function-calling-phi-3-format-v1.1 # Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_id = "mzbac/Phi-3-mini-4k-instruct-function-calling" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.bfloat16, device_map="auto", ) tool = { "name": "search_web", "description": "Perform a web search for a given search terms.", "parameter": { "type": "object", "properties": { "search_terms": { "type": "array", "items": {"type": "string"}, "description": "The search queries for which the search is performed.", "required": True, } }, }, } messages = [ { "role": "user", "content": f"You are a helpful assistant with access to the following functions. Use them if required - {str(tool)}", }, {"role": "user", "content": "Any news in Melbourne today, May 7, 2024?"}, ] input_ids = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt" ).to(model.device) terminators = [tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|end|>")] outputs = model.generate( input_ids, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.1, ) response = outputs[0] print(tokenizer.decode(response)) # <s><|user|> You are a helpful assistant with access to the following functions. Use them if required - {'name': 'search_web', 'description': 'Perform a web search for a given search terms.', 'parameter': {'type': 'object', 'properties': {'search_terms': {'type': 'array', 'items': {'type': 'string'}, 'description': 'The search queries for which the search is performed.', 'required': True}}}}<|end|><|assistant|> # <|user|> Any news in Melbourne today, May 7, 2024?<|end|> # <|assistant|> <functioncall> {"name": "search_web", "arguments": {"search_terms": ["news", "Melbourne", "May 7, 2024"]}}<|end|> ``` # Training hyperparameters lora_config.yaml ```yaml # The path to the local model directory or Hugging Face repo. model: "microsoft/Phi-3-mini-4k-instruct" # Whether or not to train (boolean) train: true # Directory with {train, valid, test}.jsonl files data: "data" # The PRNG seed seed: 0 # Number of layers to fine-tune lora_layers: 32 # Minibatch size. batch_size: 1 # Iterations to train for. iters: 111000 # Number of validation batches, -1 uses the entire validation set. val_batches: -1 # Adam learning rate. learning_rate: 1e-6 # Number of training steps between loss reporting. steps_per_report: 10 # Number of training steps between validations. steps_per_eval: 200 # Load path to resume training with the given adapter weights. # resume_adapter_file: "adapters/adapters.safetensors" # Save/load path for the trained adapter weights. adapter_path: "adapters" # Save the model every N iterations. save_every: 1000 # Evaluate on the test set after training test: false # Number of test set batches, -1 uses the entire test set. test_batches: 100 # Maximum sequence length. max_seq_length: 4096 # Use gradient checkpointing to reduce memory use. grad_checkpoint: false # LoRA parameters can only be specified in a config file lora_parameters: # The layer keys to apply LoRA to. # These will be applied for the last lora_layers keys: ['mlp.down_proj','mlp.gate_up_proj','self_attn.qkv_proj','self_attn.o_proj'] rank: 128 alpha: 256 scale: 10.0 dropout: 0.05 ```
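The `<functioncall>` line in the sample output above is what a caller has to parse and dispatch; a minimal sketch of doing that, where the extraction regex is an assumption based on the sample output shown:

```python
# Sketch: extract and dispatch the <functioncall> emitted in the sample output above.
# The regex is an assumption based on that sample; adjust it if the model's output differs.
import json
import re

def dispatch_function_call(generated_text: str, tools: dict):
    match = re.search(r"<functioncall>\s*(\{.*\})", generated_text, re.DOTALL)
    if not match:
        return None  # the model answered directly instead of calling a tool
    call = json.loads(match.group(1))
    return tools[call["name"]](**call["arguments"])

def search_web(search_terms):
    # Placeholder implementation -- plug in a real search backend here.
    return [f"results for {q}" for q in search_terms]

text = '<functioncall> {"name": "search_web", "arguments": {"search_terms": ["news", "Melbourne", "May 7, 2024"]}}<|end|>'
print(dispatch_function_call(text, {"search_web": search_web}))
```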
s2fcqj-org/remove-clothes
s2fcqj-org
2024-05-09T07:38:33Z
355
2
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
text-to-image
2024-05-09T07:38:23Z
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
    <lora:transparent_FP16_by_Gtonero:1> (transparent bikini:1.2), 1girl, solo,
    standing, ocean, thigh gap, covered nipples, micro bikini,
  output:
    url: images/remove-clothes.jpeg
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: <lora:transparent_FP16_by_Gtonero:1>
license: mit
---

# remove-clothes

<Gallery />

## Model description

A LoRA that removes clothes or makes them transparent.

## Trigger words

You should use `<lora:transparent_FP16_by_Gtonero:1>` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](/s2fcqj-org/remove-clothes/tree/main) them in the Files & versions tab.
ukung/cendol-llama2-7b-inst-GGUF
ukung
2024-05-20T06:11:52Z
355
0
null
[ "gguf", "region:us" ]
null
2024-05-20T04:40:19Z
Entry not found
mradermacher/Meta-Llama-3-8B-Instruct-Gaeilge-GGUF
mradermacher
2024-05-23T11:13:40Z
355
0
transformers
[ "transformers", "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "en", "dataset:Trelis/gawiki", "base_model:Trelis/Meta-Llama-3-8B-Instruct-Gaeilge", "license:llama3", "endpoints_compatible", "region:us" ]
null
2024-05-23T10:46:06Z
--- base_model: Trelis/Meta-Llama-3-8B-Instruct-Gaeilge datasets: - Trelis/gawiki language: - en library_name: transformers license: llama3 quantized_by: mradermacher tags: - facebook - meta - pytorch - llama - llama-3 --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Trelis/Meta-Llama-3-8B-Instruct-Gaeilge <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-Gaeilge-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Gaeilge.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-Gaeilge-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Gaeilge.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-Gaeilge-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Gaeilge.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-Gaeilge-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Gaeilge.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-Gaeilge-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Gaeilge.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-Gaeilge-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Gaeilge.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-Gaeilge-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Gaeilge.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-Gaeilge-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Gaeilge.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-Gaeilge-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Gaeilge.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-Gaeilge-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Gaeilge.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-Gaeilge-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Gaeilge.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-Gaeilge-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Gaeilge.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-Gaeilge-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Gaeilge.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-Gaeilge-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Gaeilge.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | 
[GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-Gaeilge-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Gaeilge.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
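For usage the card defers to TheBloke's READMEs; as a small extra sketch (not part of the original card), a single quant from the table above can also be fetched programmatically with `huggingface_hub`. The filename is the Q4_K_M entry listed as "fast, recommended".

```python
from huggingface_hub import hf_hub_download

# Fetch one quant from the table above; Q4_K_M is the "fast, recommended" pick.
path = hf_hub_download(
    repo_id="mradermacher/Meta-Llama-3-8B-Instruct-Gaeilge-GGUF",
    filename="Meta-Llama-3-8B-Instruct-Gaeilge.Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded GGUF file
```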
RichardErkhov/FPHam_-_Autolycus-Mistral_7B-gguf
RichardErkhov
2024-05-31T18:15:23Z
355
0
null
[ "gguf", "region:us" ]
null
2024-05-31T15:46:11Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Autolycus-Mistral_7B - GGUF - Model creator: https://huggingface.co/FPHam/ - Original model: https://huggingface.co/FPHam/Autolycus-Mistral_7B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Autolycus-Mistral_7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Autolycus-Mistral_7B-gguf/blob/main/Autolycus-Mistral_7B.Q2_K.gguf) | Q2_K | 2.53GB | | [Autolycus-Mistral_7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Autolycus-Mistral_7B-gguf/blob/main/Autolycus-Mistral_7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [Autolycus-Mistral_7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Autolycus-Mistral_7B-gguf/blob/main/Autolycus-Mistral_7B.IQ3_S.gguf) | IQ3_S | 2.96GB | | [Autolycus-Mistral_7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Autolycus-Mistral_7B-gguf/blob/main/Autolycus-Mistral_7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [Autolycus-Mistral_7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Autolycus-Mistral_7B-gguf/blob/main/Autolycus-Mistral_7B.IQ3_M.gguf) | IQ3_M | 3.06GB | | [Autolycus-Mistral_7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Autolycus-Mistral_7B-gguf/blob/main/Autolycus-Mistral_7B.Q3_K.gguf) | Q3_K | 3.28GB | | [Autolycus-Mistral_7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Autolycus-Mistral_7B-gguf/blob/main/Autolycus-Mistral_7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [Autolycus-Mistral_7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Autolycus-Mistral_7B-gguf/blob/main/Autolycus-Mistral_7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [Autolycus-Mistral_7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Autolycus-Mistral_7B-gguf/blob/main/Autolycus-Mistral_7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [Autolycus-Mistral_7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Autolycus-Mistral_7B-gguf/blob/main/Autolycus-Mistral_7B.Q4_0.gguf) | Q4_0 | 3.83GB | | [Autolycus-Mistral_7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Autolycus-Mistral_7B-gguf/blob/main/Autolycus-Mistral_7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [Autolycus-Mistral_7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Autolycus-Mistral_7B-gguf/blob/main/Autolycus-Mistral_7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [Autolycus-Mistral_7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Autolycus-Mistral_7B-gguf/blob/main/Autolycus-Mistral_7B.Q4_K.gguf) | Q4_K | 4.07GB | | [Autolycus-Mistral_7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Autolycus-Mistral_7B-gguf/blob/main/Autolycus-Mistral_7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [Autolycus-Mistral_7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Autolycus-Mistral_7B-gguf/blob/main/Autolycus-Mistral_7B.Q4_1.gguf) | Q4_1 | 4.24GB | | [Autolycus-Mistral_7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Autolycus-Mistral_7B-gguf/blob/main/Autolycus-Mistral_7B.Q5_0.gguf) | Q5_0 | 4.65GB | | [Autolycus-Mistral_7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Autolycus-Mistral_7B-gguf/blob/main/Autolycus-Mistral_7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | [Autolycus-Mistral_7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Autolycus-Mistral_7B-gguf/blob/main/Autolycus-Mistral_7B.Q5_K.gguf) | Q5_K | 4.78GB | | 
[Autolycus-Mistral_7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Autolycus-Mistral_7B-gguf/blob/main/Autolycus-Mistral_7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [Autolycus-Mistral_7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Autolycus-Mistral_7B-gguf/blob/main/Autolycus-Mistral_7B.Q5_1.gguf) | Q5_1 | 5.07GB | | [Autolycus-Mistral_7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Autolycus-Mistral_7B-gguf/blob/main/Autolycus-Mistral_7B.Q6_K.gguf) | Q6_K | 5.53GB | | [Autolycus-Mistral_7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/FPHam_-_Autolycus-Mistral_7B-gguf/blob/main/Autolycus-Mistral_7B.Q8_0.gguf) | Q8_0 | 7.17GB | Original model description: --- license: apache-2.0 language: - en tags: - mistral - instruct - finetune - chatml - gpt4 --- <!-- header start --> <div style="display: flex; flex-direction: column; align-items: center;"> </div> <div style="width: 100%;"> <img src="https://huggingface.co/FPHam/OpenAutolycus-Mistral_7B/resolve/main/openautolycustitle.jpg" alt="Open Autolycus" style="width: 40%; min-width: 200px; display: block; margin: auto;"> </div> <div style="display: flex; flex-direction: column; align-items: center;"> <p><a href="https://ko-fi.com/Q5Q5MOB4M">Support me at Ko-fi</a></p> </div> <!-- header end --> Autolycus is a son of Hermes. Autolycus-Mistral is a refinement of OpenHermes 2.5 Mistral, designed to convert the stilted GPT-4 robotic gobbledygook into something resembling natural human English -- with just enough lies, embellishments, and downright falsehoods to bring it into line with the average newspaper article. But what did *you* expect from seven billion models? You can't get good results without some level of embellishment. And besides, who cares about reality anyway? We live in a world where people believe anything they read on the Internet! The most brazen examples of 'making things up', were those rare occasions where Autolycus actually quoted a source; usually a book title or author, sometimes a date, but which you find to be nothing more than a load of hogwash when you check it out for yourself. "I have no idea why anyone would want to build such a thing, other than being bored or having too much time on their hands," said Hermes dismissively. "It has been done before," said another voice, this time belonging to Hermes' son, Autolycus. "Back in ancient Greece, there was a man called Daedalus who built himself wings made of feathers and wax so he could fly away from King Minos of Crete." "Yes, but we are not talking about birds here!" exclaimed Hermes impatiently. "We need to figure out how to keep humans from running off all over the place once they become airborne." He paused thoughtfully then continued, "There must be some way..." His eyes lit up suddenly, and he clapped his hands together excitedly. "Of course! Why didn't I see this sooner?" "What?" asked Autolycus curiously. "We shall use metal cages for humans!" announced Hermes triumphantly. "They will provide both protection and containment!" # Model uses ChatML ``` <|im_start|>system <|im_end|> <|im_start|>user How small are the atoms?<|im_end|> <|im_start|>assistant ```
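As a short inference sketch (not part of the original card), one of the quant files listed above can be run with `llama-cpp-python` using the ChatML template shown just before this; the local model path is an assumption about where you saved the download.

```python
from llama_cpp import Llama

# Assumes the Q4_K_M quant from the table above has been downloaded to the working directory.
llm = Llama(model_path="Autolycus-Mistral_7B.Q4_K_M.gguf", n_ctx=4096)

# ChatML prompt, following the template shown in the card.
prompt = (
    "<|im_start|>system\n<|im_end|>\n"
    "<|im_start|>user\nHow small are the atoms?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

output = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(output["choices"][0]["text"])
```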
mradermacher/L3-daybreak-TheSpice-8b-v0.1.3-GGUF
mradermacher
2024-05-31T18:43:51Z
355
0
transformers
[ "transformers", "gguf", "not-for-all-audiences", "en", "base_model:Vdr1/L3-daybreak-TheSpice-8b-v0.1.3", "endpoints_compatible", "region:us" ]
null
2024-05-31T17:54:05Z
--- base_model: Vdr1/L3-daybreak-TheSpice-8b-v0.1.3 language: - en library_name: transformers quantized_by: mradermacher tags: - not-for-all-audiences --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Vdr1/L3-daybreak-TheSpice-8b-v0.1.3 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/L3-daybreak-TheSpice-8b-v0.1.3-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/L3-daybreak-TheSpice-8b-v0.1.3-GGUF/resolve/main/L3-daybreak-TheSpice-8b-v0.1.3.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/L3-daybreak-TheSpice-8b-v0.1.3-GGUF/resolve/main/L3-daybreak-TheSpice-8b-v0.1.3.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/L3-daybreak-TheSpice-8b-v0.1.3-GGUF/resolve/main/L3-daybreak-TheSpice-8b-v0.1.3.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-daybreak-TheSpice-8b-v0.1.3-GGUF/resolve/main/L3-daybreak-TheSpice-8b-v0.1.3.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/L3-daybreak-TheSpice-8b-v0.1.3-GGUF/resolve/main/L3-daybreak-TheSpice-8b-v0.1.3.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/L3-daybreak-TheSpice-8b-v0.1.3-GGUF/resolve/main/L3-daybreak-TheSpice-8b-v0.1.3.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/L3-daybreak-TheSpice-8b-v0.1.3-GGUF/resolve/main/L3-daybreak-TheSpice-8b-v0.1.3.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/L3-daybreak-TheSpice-8b-v0.1.3-GGUF/resolve/main/L3-daybreak-TheSpice-8b-v0.1.3.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/L3-daybreak-TheSpice-8b-v0.1.3-GGUF/resolve/main/L3-daybreak-TheSpice-8b-v0.1.3.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-daybreak-TheSpice-8b-v0.1.3-GGUF/resolve/main/L3-daybreak-TheSpice-8b-v0.1.3.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-daybreak-TheSpice-8b-v0.1.3-GGUF/resolve/main/L3-daybreak-TheSpice-8b-v0.1.3.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/L3-daybreak-TheSpice-8b-v0.1.3-GGUF/resolve/main/L3-daybreak-TheSpice-8b-v0.1.3.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-daybreak-TheSpice-8b-v0.1.3-GGUF/resolve/main/L3-daybreak-TheSpice-8b-v0.1.3.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/L3-daybreak-TheSpice-8b-v0.1.3-GGUF/resolve/main/L3-daybreak-TheSpice-8b-v0.1.3.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/L3-daybreak-TheSpice-8b-v0.1.3-GGUF/resolve/main/L3-daybreak-TheSpice-8b-v0.1.3.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): 
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Codestral-22B-v0.1-abliterated-v3-GGUF
mradermacher
2024-06-04T10:25:14Z
355
0
transformers
[ "transformers", "gguf", "en", "base_model:failspy/Codestral-22B-v0.1-abliterated-v3", "endpoints_compatible", "region:us" ]
null
2024-06-04T04:02:01Z
--- base_model: failspy/Codestral-22B-v0.1-abliterated-v3 language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/failspy/Codestral-22B-v0.1-abliterated-v3 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Codestral-22B-v0.1-abliterated-v3-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-abliterated-v3-GGUF/resolve/main/Codestral-22B-v0.1-abliterated-v3.Q2_K.gguf) | Q2_K | 8.4 | | | [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-abliterated-v3-GGUF/resolve/main/Codestral-22B-v0.1-abliterated-v3.IQ3_XS.gguf) | IQ3_XS | 9.3 | | | [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-abliterated-v3-GGUF/resolve/main/Codestral-22B-v0.1-abliterated-v3.Q3_K_S.gguf) | Q3_K_S | 9.7 | | | [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-abliterated-v3-GGUF/resolve/main/Codestral-22B-v0.1-abliterated-v3.IQ3_S.gguf) | IQ3_S | 9.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-abliterated-v3-GGUF/resolve/main/Codestral-22B-v0.1-abliterated-v3.IQ3_M.gguf) | IQ3_M | 10.2 | | | [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-abliterated-v3-GGUF/resolve/main/Codestral-22B-v0.1-abliterated-v3.Q3_K_M.gguf) | Q3_K_M | 10.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-abliterated-v3-GGUF/resolve/main/Codestral-22B-v0.1-abliterated-v3.Q3_K_L.gguf) | Q3_K_L | 11.8 | | | [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-abliterated-v3-GGUF/resolve/main/Codestral-22B-v0.1-abliterated-v3.IQ4_XS.gguf) | IQ4_XS | 12.1 | | | [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-abliterated-v3-GGUF/resolve/main/Codestral-22B-v0.1-abliterated-v3.Q4_K_S.gguf) | Q4_K_S | 12.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-abliterated-v3-GGUF/resolve/main/Codestral-22B-v0.1-abliterated-v3.Q4_K_M.gguf) | Q4_K_M | 13.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-abliterated-v3-GGUF/resolve/main/Codestral-22B-v0.1-abliterated-v3.Q5_K_S.gguf) | Q5_K_S | 15.4 | | | [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-abliterated-v3-GGUF/resolve/main/Codestral-22B-v0.1-abliterated-v3.Q5_K_M.gguf) | Q5_K_M | 15.8 | | | [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-abliterated-v3-GGUF/resolve/main/Codestral-22B-v0.1-abliterated-v3.Q6_K.gguf) | Q6_K | 18.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Codestral-22B-v0.1-abliterated-v3-GGUF/resolve/main/Codestral-22B-v0.1-abliterated-v3.Q8_0.gguf) | Q8_0 | 23.7 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
kykim0/llama3-8b-ultrachat-sft-itt
kykim0
2024-06-04T21:20:01Z
355
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "alignment-handbook", "trl", "sft", "generated_from_trainer", "conversational", "dataset:HuggingFaceH4/ultrachat_200k", "base_model:meta-llama/Meta-Llama-3-8B", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-06-04T07:38:07Z
---
license: llama3
base_model: meta-llama/Meta-Llama-3-8B
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrachat_200k
model-index:
- name: sft-llama3-8b-itt
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# sft-llama3-8b-itt

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the HuggingFaceH4/ultrachat_200k dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0620

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0719        | 0.9995 | 935  | 1.0777          |
| 0.883         | 1.9989 | 1870 | 1.0620          |

### Framework versions

- Transformers 4.40.0
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.19.1
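As a usage sketch (not part of the auto-generated card): the checkpoint is a chat-tuned Llama-3-8B, so it can be loaded with the standard `transformers` generation API. The chat template is assumed to come from the repo's tokenizer, and the prompt is illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kykim0/llama3-8b-ultrachat-sft-itt"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Summarize what the UltraChat dataset is in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Greedy decoding keeps the sketch deterministic; adjust sampling as needed.
output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```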
PolyAgent/gemma-7b-uawiki_1B
PolyAgent
2024-06-12T17:58:15Z
355
0
transformers
[ "transformers", "safetensors", "gguf", "gemma", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-06-12T17:24:12Z
Entry not found
netcat420/MFANN3bv0.13-GGUF
netcat420
2024-06-19T17:58:44Z
355
1
null
[ "gguf", "license:mit", "region:us" ]
null
2024-06-18T17:28:03Z
---
license: mit
---

System Prompt:

Instruct: {instruction}
Output:
Helsinki-NLP/opus-mt-en-bg
Helsinki-NLP
2023-08-16T11:29:05Z
354
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "bg", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
---
language:
- en
- bg
tags:
- translation
license: apache-2.0
---

### eng-bul

* source group: English
* target group: Bulgarian
* OPUS readme: [eng-bul](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-bul/README.md)
* model: transformer
* source language(s): eng
* target language(s): bul bul_Latn
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bul/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bul/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bul/opus-2020-07-03.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.bul | 50.6 | 0.680 |

### System Info:
- hf_name: eng-bul
- source_languages: eng
- target_languages: bul
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-bul/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'bg']
- src_constituents: {'eng'}
- tgt_constituents: {'bul', 'bul_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bul/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bul/opus-2020-07-03.test.txt
- src_alpha3: eng
- tgt_alpha3: bul
- short_pair: en-bg
- chrF2_score: 0.68
- bleu: 50.6
- brevity_penalty: 0.96
- ref_len: 69504.0
- src_name: English
- tgt_name: Bulgarian
- train_date: 2020-07-03
- src_alpha2: en
- tgt_alpha2: bg
- prefer_old: False
- long_pair: eng-bul
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
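A short translation sketch (not part of the original card), assuming the standard MarianMT API in `transformers`. The sentence-initial `>>bul<<` token follows the note above about target-language IDs, and the example sentence is purely illustrative.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-bg"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Per the card, a sentence-initial target-language token may be required.
src_text = [">>bul<< The weather in Sofia is lovely today."]

batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```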
Laeyoung/BTS-comments-generator
Laeyoung
2021-06-08T07:59:07Z
354
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2022-03-02T23:29:04Z
### Model information

* Fine tuning dataset: https://www.kaggle.com/seungguini/bts-youtube-comments
* Base model: GPT2 Small
* Epoch: 5
* API page: [Ainize](https://ainize.ai/teachable-ainize/gpt2-train?branch=train/cv695m9g40av0cdabuqp)
* Demo page: [End-point](https://kubecon-tabtab-ainize-team.endpoint.ainize.ai/?modelUrl=https://train-cv695m9g40av0cdabuqp-gpt2-train-teachable-ainize.endpoint.ainize.ai/predictions/gpt-2-en-small-finetune)

### ===Teachable NLP=== ###

Training a GPT-2 model normally requires writing code and GPU resources, but with Teachable NLP you can easily fine-tune a model and get an API for it for free.

* Teachable NLP: [Teachable NLP](https://ainize.ai/teachable-nlp)
* Tutorial: [Tutorial](https://forum.ainetwork.ai/t/teachable-nlp-how-to-use-teachable-nlp/65?utm_source=community&utm_medium=huggingface&utm_campaign=model&utm_content=teachable%20nlp)
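A minimal local-generation sketch (not from the original card), assuming the checkpoint loads with the standard GPT-2 `text-generation` pipeline; the prompt and sampling settings are illustrative.

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 Small checkpoint from the Hub.
generator = pipeline("text-generation", model="Laeyoung/BTS-comments-generator")

# Seed with a comment-like prompt and sample a few continuations.
outputs = generator("This BTS performance was", max_length=40, num_return_sequences=3, do_sample=True)
for out in outputs:
    print(out["generated_text"])
```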
ugaray96/biobert_ncbi_disease_ner
ugaray96
2023-04-25T06:26:14Z
354
16
transformers
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "token-classification", "disease", "biology", "medical", "en", "dataset:ncbi_disease", "license:openrail", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
---
license: openrail
datasets:
- ncbi_disease
language:
- en
tags:
- disease
- biology
- medical
widget:
- text: "The patient was diagnosed with lung cancer and started chemotherapy."
- text: "The patient has a history of heart disease and high blood pressure."
- text: "The patient was diagnosed with diabetes and prescribed insulin therapy."
---

# Model Description

This model is a fine-tuned version of BioBERT on the NCBI disease dataset for named entity recognition (NER) of diseases. It can be used to extract disease mentions from unstructured text in the medical and biological domains.

# Intended Use

This model is intended for use in extracting disease mentions from unstructured text in the medical and biological domains. It can be used to improve information retrieval and knowledge extraction in these fields.

# Training Data

This model was trained on the [NCBI disease dataset](https://huggingface.co/datasets/ncbi_disease), which consists of 793 PubMed abstracts with 6892 disease mentions.

# How to use

You can use this model with the Hugging Face Transformers library. Here's an example of how to load the model and use it to extract disease mentions from text:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

tokenizer = AutoTokenizer.from_pretrained("ugaray96/biobert_ncbi_disease_ner")
model = AutoModelForTokenClassification.from_pretrained(
    "ugaray96/biobert_ncbi_disease_ner"
)

ner_pipeline = pipeline("ner", model=model, tokenizer=tokenizer)

text = "The patient was diagnosed with lung cancer and started chemotherapy. They also have a history of diabetes and heart disease."
result = ner_pipeline(text)

diseases = []
for entity in result:
    if entity["entity"] == "Disease":
        diseases.append(entity["word"])
    elif entity["entity"] == "Disease Continuation" and diseases:
        diseases[-1] += f" {entity['word']}"

print(f"Diseases: {', '.join(diseases)}")
```

This should output: `Diseases: lung cancer, diabetes, heart disease`
yechen/bert-large-chinese
yechen
2021-05-20T09:22:07Z
354
6
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "zh", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
---
language: zh
---
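The card itself contains no documentation beyond the language tag. As an illustrative sketch (an assumption based only on the repo's `fill-mask` pipeline tag and BERT architecture, not on anything stated by the author), the checkpoint can be exercised as a Chinese masked language model:

```python
from transformers import pipeline

# The repo is tagged fill-mask, so load it as a masked language model.
fill_mask = pipeline("fill-mask", model="yechen/bert-large-chinese")

# BERT-style models use the [MASK] token for the blank to fill.
for prediction in fill_mask("北京是中国的[MASK]都。"):
    print(prediction["token_str"], round(prediction["score"], 3))
```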
espnet/Wangyou_Zhang_chime4_enh_train_enh_conv_tasnet_raw
espnet
2022-04-11T13:26:17Z
354
1
espnet
[ "espnet", "audio", "audio-to-audio", "dataset:chime4", "arxiv:1804.00015", "arxiv:2011.03706", "license:cc-by-4.0", "region:us" ]
audio-to-audio
2022-04-11T13:17:43Z
--- tags: - espnet - audio - audio-to-audio language: datasets: - chime4 license: cc-by-4.0 --- ## ESPnet2 ENH model ### `espnet/Wangyou_Zhang_chime4_enh_train_enh_conv_tasnet_raw` This model was trained by Wangyou Zhang using chime4 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet pip install -e . cd egs2/chime4/enh1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/Wangyou_Zhang_chime4_enh_train_enh_conv_tasnet_raw ``` ## ENH config <details><summary>expand</summary> ``` config: conf/tuning/train_enh_conv_tasnet.yaml print_config: false log_level: INFO dry_run: false iterator_type: chunk output_dir: exp/enh_train_enh_conv_tasnet_raw ngpu: 1 seed: 0 num_workers: 4 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 2 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 57680 dist_launcher: null multiprocessing_distributed: true cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 100 patience: 4 val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - si_snr - max - - valid - loss - min keep_nbest_models: 1 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null unused_parameters: false use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null pretrain_path: null init_param: [] freeze_param: [] num_iters_per_epoch: null batch_size: 8 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp/enh_stats_16k/train/speech_mix_shape - exp/enh_stats_16k/train/speech_ref1_shape valid_shape_file: - exp/enh_stats_16k/valid/speech_mix_shape - exp/enh_stats_16k/valid/speech_ref1_shape batch_type: folded valid_batch_type: null fold_length: - 80000 - 80000 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 32000 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/tr05_simu_isolated_1ch_track/wav.scp - speech_mix - sound - - dump/raw/tr05_simu_isolated_1ch_track/spk1.scp - speech_ref1 - sound valid_data_path_and_name_and_type: - - dump/raw/dt05_simu_isolated_1ch_track/wav.scp - speech_mix - sound - - dump/raw/dt05_simu_isolated_1ch_track/spk1.scp - speech_ref1 - sound allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.001 eps: 1.0e-08 weight_decay: 1.0e-05 scheduler: reducelronplateau scheduler_conf: mode: min factor: 0.5 patience: 3 init: xavier_uniform model_conf: loss_type: si_snr use_preprocessor: false encoder: conv encoder_conf: channel: 256 kernel_size: 20 stride: 10 separator: tcn separator_conf: num_spk: 1 layer: 8 stack: 4 bottleneck_dim: 256 hidden_dim: 512 kernel: 3 causal: false norm_type: gLN nonlinear: relu decoder: conv decoder_conf: channel: 256 kernel_size: 20 stride: 10 required: - output_dir version: 0.9.7 distributed: true ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of 
Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{li2021espnetse, title={{ESPnet-SE}: End-to-End Speech Enhancement and Separation Toolkit Designed for {ASR} Integration}, author={Li, Chenda and Shi, Jing and Zhang, Wangyou and Subramanian, Aswin Shanmugam and Chang, Xuankai and Kamo, Naoyuki and Hira, Moto and Hayashi, Tomoki and Boeddeker, Christoph and Chen, Zhuo and Watanabe, Shinji}, booktitle={Proc. IEEE Spoken Language Technology Workshop (SLT)}, pages={785--792}, year={2021}, } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } @inproceedings{li2021espnetse, title={{ESPnet-SE}: End-to-End Speech Enhancement and Separation Toolkit Designed for {ASR} Integration}, author={Li, Chenda and Shi, Jing and Zhang, Wangyou and Subramanian, Aswin Shanmugam and Chang, Xuankai and Kamo, Naoyuki and Hira, Moto and Hayashi, Tomoki and Boeddeker, Christoph and Chen, Zhuo and Watanabe, Shinji}, year={2020}, eprint={2011.03706}, archivePrefix={arXiv}, primaryClass={eess.AS} } ```
facebook/mask2former-swin-small-cityscapes-panoptic
facebook
2023-01-25T11:42:07Z
354
0
transformers
[ "transformers", "pytorch", "mask2former", "vision", "image-segmentation", "dataset:coco", "arxiv:2112.01527", "arxiv:2107.06278", "license:other", "endpoints_compatible", "region:us" ]
image-segmentation
2023-01-03T11:43:36Z
---
license: other
tags:
- vision
- image-segmentation
datasets:
- coco
widget:
- src: http://images.cocodataset.org/val2017/000000039769.jpg
  example_title: Cats
- src: http://images.cocodataset.org/val2017/000000039770.jpg
  example_title: Castle
---

# Mask2Former

Mask2Former model trained on Cityscapes panoptic segmentation (small-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).

Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.

## Model description

Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA, [MaskFormer](https://arxiv.org/abs/2107.06278), in both performance and efficiency by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance without introducing additional computation and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/mask2former_architecture.png)

## Intended uses & limitations

You can use this particular checkpoint for panoptic segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model:

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

# load Mask2Former fine-tuned on Cityscapes panoptic segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-small-cityscapes-panoptic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-small-cityscapes-panoptic")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# model predicts class_queries_logits of shape `(batch_size, num_queries)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits

# you can pass them to processor for postprocessing
result = processor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]

# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
predicted_panoptic_map = result["segmentation"]
```

For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former).
guillaumekln/faster-whisper-tiny.en
guillaumekln
2023-05-12T18:56:53Z
354
2
ctranslate2
[ "ctranslate2", "audio", "automatic-speech-recognition", "en", "license:mit", "region:us" ]
automatic-speech-recognition
2023-03-23T10:17:41Z
---
language:
- en
tags:
- audio
- automatic-speech-recognition
license: mit
library_name: ctranslate2
---

# Whisper tiny.en model for CTranslate2

This repository contains the conversion of [openai/whisper-tiny.en](https://huggingface.co/openai/whisper-tiny.en) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.

This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/guillaumekln/faster-whisper).

## Example

```python
from faster_whisper import WhisperModel

model = WhisperModel("tiny.en")

segments, info = model.transcribe("audio.mp3")
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```

## Conversion details

The original model was converted with the following command:

```
ct2-transformers-converter --model openai/whisper-tiny.en --output_dir faster-whisper-tiny.en \
    --copy_files tokenizer.json --quantization float16
```

Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).

## More information

**For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-tiny.en).**
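Building on the example and the `compute_type` note above, here is a small sketch (not from the original card) of loading the FP16 weights on a CPU-only machine with on-the-fly INT8 quantization; the speed/accuracy trade-off depends on your hardware.

```python
from faster_whisper import WhisperModel

# Load the FP16 weights but run them as INT8 on CPU via CTranslate2's compute_type option.
model = WhisperModel("tiny.en", device="cpu", compute_type="int8")

segments, _ = model.transcribe("audio.mp3", beam_size=5)
for segment in segments:
    print(segment.text)
```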
sail-rvc/harry-styles_e150_s6600
sail-rvc
2023-07-14T07:38:32Z
354
0
transformers
[ "transformers", "rvc", "sail-rvc", "audio-to-audio", "endpoints_compatible", "region:us" ]
audio-to-audio
2023-07-14T07:37:56Z
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---

# harry-styles_e150_s6600

## RVC Model

![banner](https://i.imgur.com/xocCjhH.jpg)

This model repo was automatically generated.

Date: 2023-07-14 07:38:31

Bot Name: juuxnscrap

Model Type: RVC

Source: https://huggingface.co/juuxn/RVCModels/

Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
dibdab007/icbinp-afterburn
dibdab007
2023-07-19T04:16:54Z
354
1
diffusers
[ "diffusers", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-19T02:54:39Z
Entry not found
TheBloke/Euryale-Inverted-L2-70B-GGUF
TheBloke
2023-09-27T12:49:01Z
354
6
transformers
[ "transformers", "gguf", "llama", "en", "base_model:Sao10K/Euryale-Inverted-L2-70B", "license:cc-by-nc-4.0", "text-generation-inference", "region:us" ]
null
2023-09-12T17:18:30Z
--- language: - en license: cc-by-nc-4.0 model_name: Euryale Inverted L2 70B base_model: Sao10K/Euryale-Inverted-L2-70B inference: false model_creator: Sao10K model_type: llama prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Euryale Inverted L2 70B - GGUF - Model creator: [Sao10K](https://huggingface.co/Sao10K) - Original model: [Euryale Inverted L2 70B](https://huggingface.co/Sao10K/Euryale-Inverted-L2-70B) <!-- description start --> ## Description This repo contains GGUF format model files for [Sao10K's Euryale Inverted L2 70B](https://huggingface.co/Sao10K/Euryale-Inverted-L2-70B). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It is also supports metadata, and is designed to be extensible. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. 
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Euryale-Inverted-L2-70B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Euryale-Inverted-L2-70B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Euryale-Inverted-L2-70B-GGUF) * [Sao10K's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Sao10K/Euryale-Inverted-L2-70B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- licensing start --> ## Licensing The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license. As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly. In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Sao10K's Euryale Inverted L2 70B](https://huggingface.co/Sao10K/Euryale-Inverted-L2-70B). <!-- licensing end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. 
This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [euryale-inverted-l2-70b.Q2_K.gguf](https://huggingface.co/TheBloke/Euryale-Inverted-L2-70B-GGUF/blob/main/euryale-inverted-l2-70b.Q2_K.gguf) | Q2_K | 2 | 29.28 GB| 31.78 GB | smallest, significant quality loss - not recommended for most purposes | | [euryale-inverted-l2-70b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Euryale-Inverted-L2-70B-GGUF/blob/main/euryale-inverted-l2-70b.Q3_K_S.gguf) | Q3_K_S | 3 | 29.92 GB| 32.42 GB | very small, high quality loss | | [euryale-inverted-l2-70b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Euryale-Inverted-L2-70B-GGUF/blob/main/euryale-inverted-l2-70b.Q3_K_M.gguf) | Q3_K_M | 3 | 33.19 GB| 35.69 GB | very small, high quality loss | | [euryale-inverted-l2-70b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Euryale-Inverted-L2-70B-GGUF/blob/main/euryale-inverted-l2-70b.Q3_K_L.gguf) | Q3_K_L | 3 | 36.15 GB| 38.65 GB | small, substantial quality loss | | [euryale-inverted-l2-70b.Q4_0.gguf](https://huggingface.co/TheBloke/Euryale-Inverted-L2-70B-GGUF/blob/main/euryale-inverted-l2-70b.Q4_0.gguf) | Q4_0 | 4 | 38.87 GB| 41.37 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [euryale-inverted-l2-70b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Euryale-Inverted-L2-70B-GGUF/blob/main/euryale-inverted-l2-70b.Q4_K_S.gguf) | Q4_K_S | 4 | 39.07 GB| 41.57 GB | small, greater quality loss | | [euryale-inverted-l2-70b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Euryale-Inverted-L2-70B-GGUF/blob/main/euryale-inverted-l2-70b.Q4_K_M.gguf) | Q4_K_M | 4 | 41.42 GB| 43.92 GB | medium, balanced quality - recommended | | [euryale-inverted-l2-70b.Q5_0.gguf](https://huggingface.co/TheBloke/Euryale-Inverted-L2-70B-GGUF/blob/main/euryale-inverted-l2-70b.Q5_0.gguf) | Q5_0 | 5 | 47.46 GB| 49.96 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [euryale-inverted-l2-70b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Euryale-Inverted-L2-70B-GGUF/blob/main/euryale-inverted-l2-70b.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB| 49.96 GB | large, low quality loss - recommended | | [euryale-inverted-l2-70b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Euryale-Inverted-L2-70B-GGUF/blob/main/euryale-inverted-l2-70b.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB| 51.25 GB | large, very low quality loss - recommended | | euryale-inverted-l2-70b.Q6_K.gguf | Q6_K | 6 | 56.59 GB| 59.09 GB | very large, extremely low quality loss | | euryale-inverted-l2-70b.Q8_0.gguf | Q8_0 | 8 | 73.29 GB| 75.79 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. ### Q6_K and Q8_0 files are split and require joining **Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files. 
<details> <summary>Click for instructions regarding Q6_K and Q8_0 files</summary> ### q6_K Please download: * `euryale-inverted-l2-70b.Q6_K.gguf-split-a` * `euryale-inverted-l2-70b.Q6_K.gguf-split-b` ### q8_0 Please download: * `euryale-inverted-l2-70b.Q8_0.gguf-split-a` * `euryale-inverted-l2-70b.Q8_0.gguf-split-b` To join the files, do the following: Linux and macOS: ``` cat euryale-inverted-l2-70b.Q6_K.gguf-split-* > euryale-inverted-l2-70b.Q6_K.gguf && rm euryale-inverted-l2-70b.Q6_K.gguf-split-* cat euryale-inverted-l2-70b.Q8_0.gguf-split-* > euryale-inverted-l2-70b.Q8_0.gguf && rm euryale-inverted-l2-70b.Q8_0.gguf-split-* ``` Windows command line: ``` COPY /B euryale-inverted-l2-70b.Q6_K.gguf-split-a + euryale-inverted-l2-70b.Q6_K.gguf-split-b euryale-inverted-l2-70b.Q6_K.gguf del euryale-inverted-l2-70b.Q6_K.gguf-split-a euryale-inverted-l2-70b.Q6_K.gguf-split-b COPY /B euryale-inverted-l2-70b.Q8_0.gguf-split-a + euryale-inverted-l2-70b.Q8_0.gguf-split-b euryale-inverted-l2-70b.Q8_0.gguf del euryale-inverted-l2-70b.Q8_0.gguf-split-a euryale-inverted-l2-70b.Q8_0.gguf-split-b ``` </details> <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Euryale-Inverted-L2-70B-GGUF and below it, a specific filename to download, such as: euryale-inverted-l2-70b.q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub>=0.17.1 ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Euryale-Inverted-L2-70B-GGUF euryale-inverted-l2-70b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Euryale-Inverted-L2-70B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Euryale-Inverted-L2-70B-GGUF euryale-inverted-l2-70b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command. 
</details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m euryale-inverted-l2-70b.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model from Python using ctransformers #### First install the package ```bash # Base ctransformers with no GPU acceleration pip install ctransformers>=0.2.24 # Or with CUDA GPU acceleration pip install ctransformers[cuda]>=0.2.24 # Or with ROCm GPU acceleration CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers ``` #### Simple example code to load one of these GGUF models ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Euryale-Inverted-L2-70B-GGUF", model_file="euryale-inverted-l2-70b.q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here's guides on using llama-cpp-python or ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. 
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Sao10K's Euryale Inverted L2 70B <img src="https://images7.alphacoders.com/921/921311.jpg" style="width: 70%; min-width: 300px; display: block; margin: auto;"> An experimental merging of Several Models using 3 merging methods. Ties-Merge, BlockMerge_Gradient [& SLERP Variant] as well as SLERP. 5 Models included Inside, 2 LORAs. ***Early release because I'll be busy for the next month. Incomplete but workable, see below.*** ***INVERT VARIANT*** SISTER MODEL HERE: [Euryale-L2-70B](https://huggingface.co/Sao10K/Euryale-L2-70B) *Same concept as Stheno & Stheno Inverted, at different densities, weights and gradients.* *Illustration for final gradient merge cannot be shown, each of the tensors had a different ratio applied to it.* <br>Test Checklist: <br>Censorship - NSFL no Issues <br>Writing - Good Prose and Writing Quality <br>NSFW - Yes <br>IQ Level - Slightly dumber than Euryale. Even worse at Coding / Math though. Good for RP. As a general assistant? She's not the most accurate. <br>Formatting - Markdown Formatting Issues, Able to Follow Statuses well. <br>Most formats could work, but my tests have all been done in Alpaca format and it works well. ``` ### Instruction: Your instruction or question here. For roleplay purposes, I suggest the following - Write <CHAR NAME>'s next reply in a chat between <YOUR NAME> and <CHAR NAME>. Write a single reply only. ### Response: ``` <br>My 7th Attempt. Incomplete so far, early release. <br>Timeline Goals: <br> Apply COT to model. 
<br> Apply the RP LoRA I'm working on to model. <br> Further tinker and test for potential better combinations. Once Again, thanks to [Chargoddard](https://huggingface.co/chargoddard) and [Gryphe](https://huggingface.co/Gryphe) for their scripts, and @Vali for modifying some of the scripts provided to implement SLERP on a Cloud GPU Service. Thanks to @gradientputri for partially sponsoring half of the compute costs for my Runpod usage. Thanks to the original model creators too! ``` Art by wada_kazu / わだかず (pixiv page private?) ``` ### LICENSE License This model is strictly non-commercial (cc-by-nc-4.0) use only which takes priority over the LLAMA 2 COMMUNITY LICENSE AGREEMENT. The "Model" is completely free (ie. base model, derivates, merges/mixes) to use for non-commercial purposes as long as the the included cc-by-nc-4.0 license in any parent repository, and the non-commercial use statute remains, regardless of other models' licences. *Non-Commercial due to parent models having the above license.* Parent Models with this license: <br>[garage-bAInd/Platypus2-70B-instruct](https://huggingface.co/garage-bAInd/Platypus2-70B-instruct) <br>[elinas/chronos-70b-v2](https://huggingface.co/elinas/chronos-70b-v2) ### MODELS USED: <br>[elinas/chronos-70b-v2](https://huggingface.co/elinas/chronos-70b-v2) <br>[NousResearch/Nous-Hermes-Llama2-70b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-70b) <br>[jondurbin/airoboros-l2-70b-2.1-creative](https://huggingface.co/jondurbin/airoboros-l2-70b-2.1-creative) <br>[garage-bAInd/Platypus2-70B-instruct](https://huggingface.co/garage-bAInd/Platypus2-70B-instruct) <br>[MayaPH/GodziLLa2-70B](https://huggingface.co/MayaPH/GodziLLa2-70B) **LORAS** <br>[nRuaif/fiction.live-Kimiko-V2-70B](https://huggingface.co/nRuaif/fiction.live-Kimiko-V2-70B) <br>[lemonilia/limarp-llama2-v2](https://huggingface.co/lemonilia/limarp-llama2-v2) <!-- original-model-card end -->
TheBloke/Marcoroni-70B-GGUF
TheBloke
2023-09-27T12:49:13Z
354
5
transformers
[ "transformers", "gguf", "llama", "text-generation", "en", "dataset:Open-Orca/OpenOrca", "base_model:AIDC-ai-business/Marcoroni-70b", "license:cc-by-nc-4.0", "text-generation-inference", "region:us" ]
text-generation
2023-09-14T11:41:16Z
--- language: - en license: cc-by-nc-4.0 datasets: - Open-Orca/OpenOrca model_name: Marcoroni 70B base_model: AIDC-ai-business/Marcoroni-70b inference: false model_creator: AIDC-ai-business model_type: llama pipeline_tag: text-generation prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Marcoroni 70B - GGUF - Model creator: [AIDC-ai-business](https://huggingface.co/AIDC-ai-business) - Original model: [Marcoroni 70B](https://huggingface.co/AIDC-ai-business/Marcoroni-70b) <!-- description start --> ## Description This repo contains GGUF format model files for [AIDC-ai-business's Marcoroni 70B](https://huggingface.co/AIDC-ai-business/Marcoroni-70b). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It is also supports metadata, and is designed to be extensible. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. 
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Marcoroni-70B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Marcoroni-70B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Marcoroni-70B-GGUF) * [AIDC-ai-business's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/AIDC-ai-business/Marcoroni-70b) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- licensing start --> ## Licensing The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license. As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly. In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [AIDC-ai-business's Marcoroni 70B](https://huggingface.co/AIDC-ai-business/Marcoroni-70b). <!-- licensing end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. 
This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [marcoroni-70b.Q2_K.gguf](https://huggingface.co/TheBloke/Marcoroni-70B-GGUF/blob/main/marcoroni-70b.Q2_K.gguf) | Q2_K | 2 | 29.28 GB| 31.78 GB | smallest, significant quality loss - not recommended for most purposes | | [marcoroni-70b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Marcoroni-70B-GGUF/blob/main/marcoroni-70b.Q3_K_S.gguf) | Q3_K_S | 3 | 29.92 GB| 32.42 GB | very small, high quality loss | | [marcoroni-70b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Marcoroni-70B-GGUF/blob/main/marcoroni-70b.Q3_K_M.gguf) | Q3_K_M | 3 | 33.19 GB| 35.69 GB | very small, high quality loss | | [marcoroni-70b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Marcoroni-70B-GGUF/blob/main/marcoroni-70b.Q3_K_L.gguf) | Q3_K_L | 3 | 36.15 GB| 38.65 GB | small, substantial quality loss | | [marcoroni-70b.Q4_0.gguf](https://huggingface.co/TheBloke/Marcoroni-70B-GGUF/blob/main/marcoroni-70b.Q4_0.gguf) | Q4_0 | 4 | 38.87 GB| 41.37 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [marcoroni-70b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Marcoroni-70B-GGUF/blob/main/marcoroni-70b.Q4_K_S.gguf) | Q4_K_S | 4 | 39.07 GB| 41.57 GB | small, greater quality loss | | [marcoroni-70b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Marcoroni-70B-GGUF/blob/main/marcoroni-70b.Q4_K_M.gguf) | Q4_K_M | 4 | 41.42 GB| 43.92 GB | medium, balanced quality - recommended | | [marcoroni-70b.Q5_0.gguf](https://huggingface.co/TheBloke/Marcoroni-70B-GGUF/blob/main/marcoroni-70b.Q5_0.gguf) | Q5_0 | 5 | 47.46 GB| 49.96 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [marcoroni-70b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Marcoroni-70B-GGUF/blob/main/marcoroni-70b.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB| 49.96 GB | large, low quality loss - recommended | | [marcoroni-70b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Marcoroni-70B-GGUF/blob/main/marcoroni-70b.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB| 51.25 GB | large, very low quality loss - recommended | | marcoroni-70b.Q6_K.gguf | Q6_K | 6 | 56.59 GB| 59.09 GB | very large, extremely low quality loss | | marcoroni-70b.Q8_0.gguf | Q8_0 | 8 | 73.29 GB| 75.79 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. ### Q6_K and Q8_0 files are split and require joining **Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files. 
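If you only need one of the split quants, a filename pattern can fetch both of its parts in one go. For example, the following mirrors the `huggingface-cli` pattern-download usage shown further down this README, using the split-file names listed in the joining instructions below:

```shell
# Download both parts of the Q6_K split into the current directory
huggingface-cli download TheBloke/Marcoroni-70B-GGUF --local-dir . --local-dir-use-symlinks False --include='marcoroni-70b.Q6_K.gguf-split-*'
```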
<details> <summary>Click for instructions regarding Q6_K and Q8_0 files</summary> ### q6_K Please download: * `marcoroni-70b.Q6_K.gguf-split-a` * `marcoroni-70b.Q6_K.gguf-split-b` ### q8_0 Please download: * `marcoroni-70b.Q8_0.gguf-split-a` * `marcoroni-70b.Q8_0.gguf-split-b` To join the files, do the following: Linux and macOS: ``` cat marcoroni-70b.Q6_K.gguf-split-* > marcoroni-70b.Q6_K.gguf && rm marcoroni-70b.Q6_K.gguf-split-* cat marcoroni-70b.Q8_0.gguf-split-* > marcoroni-70b.Q8_0.gguf && rm marcoroni-70b.Q8_0.gguf-split-* ``` Windows command line: ``` COPY /B marcoroni-70b.Q6_K.gguf-split-a + marcoroni-70b.Q6_K.gguf-split-b marcoroni-70b.Q6_K.gguf del marcoroni-70b.Q6_K.gguf-split-a marcoroni-70b.Q6_K.gguf-split-b COPY /B marcoroni-70b.Q8_0.gguf-split-a + marcoroni-70b.Q8_0.gguf-split-b marcoroni-70b.Q8_0.gguf del marcoroni-70b.Q8_0.gguf-split-a marcoroni-70b.Q8_0.gguf-split-b ``` </details> <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Marcoroni-70B-GGUF and below it, a specific filename to download, such as: marcoroni-70b.q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub>=0.17.1 ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Marcoroni-70B-GGUF marcoroni-70b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Marcoroni-70B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Marcoroni-70B-GGUF marcoroni-70b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m marcoroni-70b.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. 
Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model from Python using ctransformers #### First install the package ```bash # Base ctransformers with no GPU acceleration pip install ctransformers>=0.2.24 # Or with CUDA GPU acceleration pip install ctransformers[cuda]>=0.2.24 # Or with ROCm GPU acceleration CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers ``` #### Simple example code to load one of these GGUF models ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Marcoroni-70B-GGUF", model_file="marcoroni-70b.q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here's guides on using llama-cpp-python or ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. 
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: AIDC-ai-business's Marcoroni 70B # Marcoroni-70B Fine-tuned from Llama-2-70B,we use Orca-style data and other open source data for fine-tuning. # Model Details * **Trained by**: trained by AIDC AI-Business. * **Model type:** **Marcoroni-70B** is an auto-regressive language model based on the Llama 2 transformer architecture. * **Language(s)**: English * **License for Marcoroni-70B base weights**: Non-Commercial Creative Commons license ([CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/)) # Prompting ## Prompt Template for alpaca style ``` ### Instruction: <prompt> (without the <>) ### Response: ``` ## Example User: Give me a brief introduction to Hangzhou and Aliexpress. Response: Hangzhou is the capital of Zhejiang Province in China, known for its picturesque West Lake, historic monuments, and thriving technology industries. It is a popular tourist destination due to its natural beauty, historical significance, and strong presence of various technology companies. Alibaba Group, a multinational technology conglomerate founded in 1999 by Jack Ma, has its headquarters in Hangzhou. One of Alibaba's major businesses is AliExpress, an international online marketplace. It connects buyers from all over the world with sellers mainly from China, offering a wide variety of products at affordable prices. Launched in 2010, AliExpress facilitates small businesses to reach a global audience, and provides buyers with access to a large selection of items, including electronics, clothing, beauty products, and home goods. Its platform supports multiple languages and currencies, making it easier for customers to navigate and shop across the globe. 
### Our Other Projects:

* [AIDC-ai-business/Marcoroni-7B](https://huggingface.co/AIDC-ai-business/Marcoroni-7B)
* [AIDC-ai-business/Marcoroni-13B](https://huggingface.co/AIDC-ai-business/Marcoroni-13B)

We achieved the top rank among 70B models on September 14th, 2023.

# Evaluation Results ([Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard))

| Metric | Value |
|-----------------------|-------|
| Avg. | 73.94 |
| ARC (25-shot) | 72.95 |
| HellaSwag (10-shot) | 87.51 |
| MMLU (5-shot) | 70.79 |
| TruthfulQA (0-shot) | 64.49 |

<!-- original-model-card end -->
KnutJaegersberg/CPU-LLM-Horde
KnutJaegersberg
2024-05-30T20:17:30Z
354
16
null
[ "gguf", "license:mit", "region:us" ]
null
2023-11-12T10:04:24Z
--- license: mit --- Deita-1.6b (StableLM based, license in repo) Prompt: ``` ### System: You are an AI assistant. User will give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps. ### User: How do you fine tune a large language model? ### Assistant: ``` GPT2-Chatbot Prompt: ``` ### System: You are an AI assistant. User will give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps. ### User: How do you fine tune a large language model? ### Assistant: ``` Deacon-1b Prompt: ``` ### System: You are an AI assistant. User will give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps. ### Instruction: How do you fine tune a large language model? ### Response: ``` NousResearch/Nous-Capybara-3B-V1.9 Prompt: ``` <|im_start|>system You are a helpful AI assistant.<|im_end|> <|im_start|>user How are you<|im_end|> <|im_start|>assistant I am doing well!<|im_end|> ``` jondurbin/airoboros-3b-3p11 ``` [INST] <<SYS>> You are a helpful, unbiased, uncensored assistant. <</SYS>> {prompt} [/INST] ``` GeneZC/MiniChat-3B Prompt: ``` <s> [|User|] Hi 👋 </s>[|Assistant|] ``` llmware/bling-stable-lm-3b-4e1t-v0 Prompt: ``` <human>: {prompt} <bot>: or {{text_passage}} {{question/instruction}} ``` OpenBuddy/openbuddy-stablelm-3b-v13 (License: cc-by-sa-4.0) Prompt: ``` You are a helpful, respectful and honest INTP-T AI Assistant named Buddy. You are talking to a human User. Always answer as helpfully and logically as possible, while being safe. Your answers should not include any harmful, political, religious, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. You can speak fluently in many languages, for example: English, Chinese. You cannot access the internet, but you have vast knowledge, cutoff: 2021-09. You are trained by OpenBuddy team, (https://openbuddy.ai, https://github.com/OpenBuddy/OpenBuddy), you are based on LLaMA and Falcon transformers model, not related to GPT or OpenAI. User: {History input} Assistant: {History output} User: {Input} Assistant: ``` Dimensity/Dimensity-3B Prompt: ``` ### Human: {prompt} ### Assistant: ``` acrastt/Marx-3B-V3 Prompt: ``` ### HUMAN: {prompt} ### RESPONSE: ``` Open-Orca/Mistral-7B-OpenOrca Prompt: ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ```
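All of the templates above are plain strings, so any GGUF runner can use them. As a rough sketch with `llama-cpp-python` (the model filename below is a placeholder for whichever GGUF from this collection you downloaded; the prompt is the Deacon-1b template listed above):

```python
from llama_cpp import Llama

# Placeholder filename - point this at whichever GGUF from this repo you downloaded.
llm = Llama(model_path="./deacon-1b.Q4_K_M.gguf", n_ctx=2048)

# Deacon-1b style prompt, exactly as listed above.
prompt = (
    "### System:\n"
    "You are an AI assistant. User will give you a task. Your goal is to complete the task as faithfully as you can. "
    "While performing the task think step-by-step and justify your steps.\n\n"
    "### Instruction:\n"
    "How do you fine tune a large language model?\n\n"
    "### Response:\n"
)

out = llm(prompt, max_tokens=256, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```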
arnavgrg/zephyr-7b-beta-nf4-fp16-upscaled
arnavgrg
2023-12-12T19:08:31Z
354
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-12-05T17:01:24Z
---
license: apache-2.0
tags:
- text-generation-inference
---

This is an upscaled fp16 variant of the original HuggingFaceH4/zephyr-7b-beta model after it has been loaded with nf4 4-bit quantization via bitsandbytes.

The main idea here is to upscale the `Linear4bit` layers to fp16 so that the quantization/dequantization cost doesn't have to be paid for each forward pass at inference time.

_Note: The quantization operation to nf4 is not lossless, so the model weights for the linear layers are lossy, which means that this model will not work as well as the official base model._

To use this model, you can just load it via `transformers` in fp16:

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "arnavgrg/zephyr-7b-beta-nf4-fp16-upscaled",
    device_map="auto",
    torch_dtype=torch.float16,
)
```
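For reference, the upscaling itself can be reproduced along these lines: load the base model in nf4, then dequantize every 4-bit linear layer back to fp16 and swap in a plain `nn.Linear`. This is only a rough sketch of the idea, not the author's exact script, and it assumes recent `transformers` and `bitsandbytes` releases:

```python
import torch
import bitsandbytes as bnb
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Load the base model with nf4 4-bit quantization (this is the lossy step).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceH4/zephyr-7b-beta",
    quantization_config=bnb_config,
    device_map="auto",
)

# Upscale: replace each Linear4bit with a plain fp16 nn.Linear holding the dequantized weights.
for module in list(model.modules()):
    for child_name, child in list(module.named_children()):
        if isinstance(child, bnb.nn.Linear4bit):
            w = bnb.functional.dequantize_4bit(child.weight.data, child.weight.quant_state).to(torch.float16)
            new_linear = torch.nn.Linear(child.in_features, child.out_features, bias=child.bias is not None)
            new_linear.weight = torch.nn.Parameter(w, requires_grad=False)
            if child.bias is not None:
                new_linear.bias = torch.nn.Parameter(child.bias.data.to(torch.float16), requires_grad=False)
            setattr(module, child_name, new_linear.to(child.weight.device))
```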
intervitens/internlm2-limarp-chat-20b-GGUF
intervitens
2024-01-30T05:15:03Z
354
3
null
[ "gguf", "mergekit", "merge", "text-generation", "en", "dataset:lemonilia/LimaRP", "arxiv:2212.04089", "base_model:intervitens/internlm2-base-20b-llama", "license:other", "region:us" ]
text-generation
2024-01-27T04:15:37Z
--- base_model: [intervitens/internlm2-base-20b-llama] language: - en pipeline_tag: text-generation tags: - mergekit - merge license: other datasets: - lemonilia/LimaRP --- Experimental model, LimaRP LoRA trained on top of internlm2-base-20b with 8192 context length and merged with internlm2-chat-20b. Prompt format is ChatML. *** # internlm2-06limarp-1chat-TASK_ARITHM-20b-v0.03 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using intervitens/internlm2-base-20b-llama as a base. ### Models Merged The following models were included in the merge: * ./internlm2-limarp-20b-v0.03 * ./internlm2-chat-20b-llama ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: ./internlm2-chat-20b-llama parameters: weight: 1.0 - model: ./internlm2-limarp-20b-v0.03 parameters: weight: 0.6 merge_method: task_arithmetic base_model: ./internlm2-base-20b-llama parameters: #normalize: false #int8_mask: true dtype: bfloat16 ```
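For reference, the ChatML format mentioned above looks like this (the system line is optional and shown here only as an illustration):

```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```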
MaziyarPanahi/Inex12Neural-7B-GGUF
MaziyarPanahi
2024-04-01T13:41:48Z
354
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "merge", "mergekit", "lazymergekit", "automerger", "base_model:MSL7/INEX12-7b", "base_model:Kukedlc/Neural-Krishna-Multiverse-7b-v3", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:automerger/Inex12Neural-7B" ]
text-generation
2024-04-01T13:19:42Z
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - merge - mergekit - lazymergekit - automerger - base_model:MSL7/INEX12-7b - base_model:Kukedlc/Neural-Krishna-Multiverse-7b-v3 - license:apache-2.0 - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - text-generation model_name: Inex12Neural-7B-GGUF base_model: automerger/Inex12Neural-7B inference: false model_creator: automerger pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/Inex12Neural-7B-GGUF](https://huggingface.co/MaziyarPanahi/Inex12Neural-7B-GGUF) - Model creator: [automerger](https://huggingface.co/automerger) - Original model: [automerger/Inex12Neural-7B](https://huggingface.co/automerger/Inex12Neural-7B) ## Description [MaziyarPanahi/Inex12Neural-7B-GGUF](https://huggingface.co/MaziyarPanahi/Inex12Neural-7B-GGUF) contains GGUF format model files for [automerger/Inex12Neural-7B](https://huggingface.co/automerger/Inex12Neural-7B). ## How to use Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models: ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ### Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. 
This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: [MaziyarPanahi/Inex12Neural-7B-GGUF](https://huggingface.co/MaziyarPanahi/Inex12Neural-7B-GGUF) and below it, a specific filename to download, such as: Inex12Neural-7B-GGUF.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download MaziyarPanahi/Inex12Neural-7B-GGUF Inex12Neural-7B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` </details> <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download [MaziyarPanahi/Inex12Neural-7B-GGUF](https://huggingface.co/MaziyarPanahi/Inex12Neural-7B-GGUF) --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/Inex12Neural-7B-GGUF Inex12Neural-7B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m Inex12Neural-7B.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. 
For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://github.com/abetlen/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./Inex12Neural-7B.Q4_K_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./Inex12Neural-7B.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." 
} ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
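As a minimal illustration of the LangChain route (assuming a recent `langchain-community` release and `llama-cpp-python` installed as above; the filename matches the Q4_K_M quant from this repo):

```python
from langchain_community.llms import LlamaCpp

# Point this at the GGUF file downloaded earlier.
llm = LlamaCpp(
    model_path="./Inex12Neural-7B.Q4_K_M.gguf",
    n_ctx=32768,
    n_gpu_layers=35,
    temperature=0.7,
)

print(llm.invoke("AI is going to"))
```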
GoodBaiBai88/M3D-CLIP
GoodBaiBai88
2024-05-08T05:51:55Z
354
2
transformers
[ "transformers", "safetensors", "m3d_clip", "feature-extraction", "3D medical CLIP", "Image-text retrieval", "image-feature-extraction", "custom_code", "arxiv:2404.00578", "license:apache-2.0", "region:us" ]
image-feature-extraction
2024-04-25T08:04:36Z
---
license: apache-2.0
tags:
- 3D medical CLIP
- Image-text retrieval
metrics:
- accuracy
pipeline_tag: image-feature-extraction
---

M3D-CLIP is one of the works in the [M3D](https://github.com/BAAI-DCAI/M3D) series. It is a 3D medical CLIP model that aligns vision and language through a contrastive loss on the [M3D-Cap](https://huggingface.co/datasets/GoodBaiBai88/M3D-Cap) dataset. The vision encoder is a 3D ViT with a 32\*256\*256 image size and a 4\*16\*16 patch size. The language encoder is initialized from a pre-trained BERT.

M3D-CLIP can be used for:
1. 3D medical image-text retrieval.
2. Aligned, powerful image and text features for downstream tasks.
3. A text-aligned visual encoder that serves as a strong pre-trained model for visual and multi-modal tasks.

![comparison](M3D_CLIP_table.png)

![comparison](itr_result.png)

# Quickstart

```python
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel

device = torch.device("cuda")  # or "cpu"
tokenizer = AutoTokenizer.from_pretrained(
    "GoodBaiBai88/M3D-CLIP",
    model_max_length=512,
    padding_side="right",
    use_fast=False
)
model = AutoModel.from_pretrained(
    "GoodBaiBai88/M3D-CLIP",
    trust_remote_code=True
)
model = model.to(device=device)

# Prepare your 3D medical image:
# 1. The image shape needs to be processed to 1*32*256*256, e.g. by resizing.
# 2. The image needs to be normalized to 0-1, e.g. with min-max normalization.
# 3. The image needs to be saved in .npy format.
# 4. Although we did not train on 2D images, in theory a 2D image can be interpolated to the shape of 1*32*256*256 for input.
image_path = ""
input_txt = ""

text_tensor = tokenizer(input_txt, max_length=512, truncation=True, padding="max_length", return_tensors="pt")
input_id = text_tensor["input_ids"].to(device=device)
attention_mask = text_tensor["attention_mask"].to(device=device)
# Load the .npy volume as a float tensor and add a batch dimension (assumed input shape: 1*1*32*256*256).
image = torch.from_numpy(np.load(image_path)).float().unsqueeze(0).to(device=device)

with torch.inference_mode():
    image_features = model.encode_image(image)[:, 0]
    text_features = model.encode_text(input_id, attention_mask)[:, 0]
```

# Citation

If you find our work helpful, please consider citing it:

```BibTeX
@misc{bai2024m3d,
      title={M3D: Advancing 3D Medical Image Analysis with Multi-Modal Large Language Models},
      author={Fan Bai and Yuxin Du and Tiejun Huang and Max Q. -H. Meng and Bo Zhao},
      year={2024},
      eprint={2404.00578},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
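For the retrieval use case, the aligned features from the Quickstart above can be compared directly. A minimal sketch, continuing from the `image_features` and `text_features` tensors computed there:

```python
import torch.nn.functional as F

# Cosine similarity between L2-normalized image and text embeddings.
image_features = F.normalize(image_features, dim=-1)
text_features = F.normalize(text_features, dim=-1)
similarity = image_features @ text_features.T  # higher score = better image-text match
print(similarity)
```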
nitky/Megac4ai-command-r-plus-gguf
nitky
2024-04-30T10:37:34Z
354
1
gguf
[ "gguf", "mergekit", "merge", "text-generation", "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar", "base_model:CohereForAI/c4ai-command-r-plus", "license:cc-by-nc-4.0", "region:us" ]
text-generation
2024-04-29T23:22:35Z
--- base_model: - CohereForAI/c4ai-command-r-plus library_name: gguf tags: - mergekit - merge language: - en - fr - de - es - it - pt - ja - ko - zh - ar pipeline_tag: text-generation license: cc-by-nc-4.0 --- # Megac4ai-command-r-plus-gguf These are quantized GGUF versions of [nitky/Megac4ai-command-r-plus](https://huggingface.co/nitky/Megac4ai-command-r-plus). Please check the original model for license and more details. ## Results for non-English languages (Japanese) | Model | Output Quality | Notes | |-------|----------------|-------| | Megac4ai-command-r-plus-IQ3_XS.gguf | Poor | | | Megac4ai-command-r-plus-IQ4_XS.gguf | Average | | | Megac4ai-command-r-plus-Q4_K_M.gguf | Good | recommended | | Megac4ai-command-r-plus-Q5_K_M.gguf | Good | | | Megac4ai-command-r-plus-Q6_K.gguf | Excellent | recommended | | Megac4ai-command-r-plus-Q8_0.gguf | Excellent | |
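A minimal way to try the recommended Q4_K_M quant is via `llama-cpp-python`; recent builds read the chat template from the GGUF metadata, so no manual prompt formatting is needed. The settings below are illustrative - adjust `n_ctx` and `n_gpu_layers` to your hardware:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./Megac4ai-command-r-plus-Q4_K_M.gguf",
    n_ctx=8192,       # adjust to your RAM/VRAM budget
    n_gpu_layers=-1,  # offload as many layers as fit on the GPU
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "自己紹介をしてください。"}]  # "Please introduce yourself."
)
print(out["choices"][0]["message"]["content"])
```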
mradermacher/Configurable-Hermes-2-Pro-Llama-3-8B-GGUF
mradermacher
2024-05-05T14:52:56Z
354
1
transformers
[ "transformers", "gguf", "safety", "en", "dataset:vicgalle/configurable-system-prompt-multitask", "base_model:vicgalle/Configurable-Hermes-2-Pro-Llama-3-8B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-02T11:26:28Z
--- base_model: vicgalle/Configurable-Hermes-2-Pro-Llama-3-8B datasets: - vicgalle/configurable-system-prompt-multitask language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - safety --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> static quants of https://huggingface.co/vicgalle/Configurable-Hermes-2-Pro-Llama-3-8B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Configurable-Hermes-2-Pro-Llama-3-8B-GGUF/resolve/main/Configurable-Hermes-2-Pro-Llama-3-8B.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Configurable-Hermes-2-Pro-Llama-3-8B-GGUF/resolve/main/Configurable-Hermes-2-Pro-Llama-3-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Configurable-Hermes-2-Pro-Llama-3-8B-GGUF/resolve/main/Configurable-Hermes-2-Pro-Llama-3-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Configurable-Hermes-2-Pro-Llama-3-8B-GGUF/resolve/main/Configurable-Hermes-2-Pro-Llama-3-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Configurable-Hermes-2-Pro-Llama-3-8B-GGUF/resolve/main/Configurable-Hermes-2-Pro-Llama-3-8B.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Configurable-Hermes-2-Pro-Llama-3-8B-GGUF/resolve/main/Configurable-Hermes-2-Pro-Llama-3-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Configurable-Hermes-2-Pro-Llama-3-8B-GGUF/resolve/main/Configurable-Hermes-2-Pro-Llama-3-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Configurable-Hermes-2-Pro-Llama-3-8B-GGUF/resolve/main/Configurable-Hermes-2-Pro-Llama-3-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Configurable-Hermes-2-Pro-Llama-3-8B-GGUF/resolve/main/Configurable-Hermes-2-Pro-Llama-3-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Configurable-Hermes-2-Pro-Llama-3-8B-GGUF/resolve/main/Configurable-Hermes-2-Pro-Llama-3-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Configurable-Hermes-2-Pro-Llama-3-8B-GGUF/resolve/main/Configurable-Hermes-2-Pro-Llama-3-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Configurable-Hermes-2-Pro-Llama-3-8B-GGUF/resolve/main/Configurable-Hermes-2-Pro-Llama-3-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Configurable-Hermes-2-Pro-Llama-3-8B-GGUF/resolve/main/Configurable-Hermes-2-Pro-Llama-3-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Configurable-Hermes-2-Pro-Llama-3-8B-GGUF/resolve/main/Configurable-Hermes-2-Pro-Llama-3-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best 
quality | | [GGUF](https://huggingface.co/mradermacher/Configurable-Hermes-2-Pro-Llama-3-8B-GGUF/resolve/main/Configurable-Hermes-2-Pro-Llama-3-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
redgenai/1rpsll3
redgenai
2024-06-07T00:59:41Z
354
0
transformers
[ "transformers", "gguf", "llama", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-09T15:49:16Z
Entry not found
bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF
bartowski
2024-05-17T15:53:39Z
354
2
transformers
[ "transformers", "gguf", "mergekit", "merge", "text-generation", "base_model:grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B", "base_model:grimjim/kunoichi-lemon-royale-7B", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
text-generation
2024-05-17T15:35:50Z
--- base_model: - grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B - grimjim/kunoichi-lemon-royale-7B library_name: transformers tags: - mergekit - merge license: cc-by-nc-4.0 pipeline_tag: text-generation quantized_by: bartowski --- ## Llamacpp imatrix Quantizations of kunoichi-lemon-royale-v2-32K-7B Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2901">b2901</a> for quantization. Original model: https://huggingface.co/grimjim/kunoichi-lemon-royale-v2-32K-7B All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/b6ac44691e994344625687afe3263b3a) ## Prompt format ``` <s> [INST] {prompt} [/INST]</s> ``` Note that this model does not support a System prompt. ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [kunoichi-lemon-royale-v2-32K-7B-Q8_0.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-Q8_0.gguf) | Q8_0 | 7.69GB | Extremely high quality, generally unneeded but max available quant. | | [kunoichi-lemon-royale-v2-32K-7B-Q6_K.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-Q6_K.gguf) | Q6_K | 5.94GB | Very high quality, near perfect, *recommended*. | | [kunoichi-lemon-royale-v2-32K-7B-Q5_K_M.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-Q5_K_M.gguf) | Q5_K_M | 5.13GB | High quality, *recommended*. | | [kunoichi-lemon-royale-v2-32K-7B-Q5_K_S.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-Q5_K_S.gguf) | Q5_K_S | 4.99GB | High quality, *recommended*. | | [kunoichi-lemon-royale-v2-32K-7B-Q4_K_M.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-Q4_K_M.gguf) | Q4_K_M | 4.36GB | Good quality, uses about 4.83 bits per weight, *recommended*. | | [kunoichi-lemon-royale-v2-32K-7B-Q4_K_S.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-Q4_K_S.gguf) | Q4_K_S | 4.14GB | Slightly lower quality with more space savings, *recommended*. | | [kunoichi-lemon-royale-v2-32K-7B-IQ4_NL.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-IQ4_NL.gguf) | IQ4_NL | 4.12GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. | | [kunoichi-lemon-royale-v2-32K-7B-IQ4_XS.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-IQ4_XS.gguf) | IQ4_XS | 3.90GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [kunoichi-lemon-royale-v2-32K-7B-Q3_K_L.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-Q3_K_L.gguf) | Q3_K_L | 3.82GB | Lower quality but usable, good for low RAM availability. | | [kunoichi-lemon-royale-v2-32K-7B-Q3_K_M.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-Q3_K_M.gguf) | Q3_K_M | 3.51GB | Even lower quality. 
| | [kunoichi-lemon-royale-v2-32K-7B-IQ3_M.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-IQ3_M.gguf) | IQ3_M | 3.28GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [kunoichi-lemon-royale-v2-32K-7B-IQ3_S.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-IQ3_S.gguf) | IQ3_S | 3.18GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. | | [kunoichi-lemon-royale-v2-32K-7B-Q3_K_S.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-Q3_K_S.gguf) | Q3_K_S | 3.16GB | Low quality, not recommended. | | [kunoichi-lemon-royale-v2-32K-7B-IQ3_XS.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-IQ3_XS.gguf) | IQ3_XS | 3.01GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [kunoichi-lemon-royale-v2-32K-7B-IQ3_XXS.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-IQ3_XXS.gguf) | IQ3_XXS | 2.82GB | Lower quality, new method with decent performance, comparable to Q3 quants. | | [kunoichi-lemon-royale-v2-32K-7B-Q2_K.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-Q2_K.gguf) | Q2_K | 2.71GB | Very low quality but surprisingly usable. | | [kunoichi-lemon-royale-v2-32K-7B-IQ2_M.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-IQ2_M.gguf) | IQ2_M | 2.50GB | Very low quality, uses SOTA techniques to also be surprisingly usable. | | [kunoichi-lemon-royale-v2-32K-7B-IQ2_S.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-IQ2_S.gguf) | IQ2_S | 2.31GB | Very low quality, uses SOTA techniques to be usable. | | [kunoichi-lemon-royale-v2-32K-7B-IQ2_XS.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-IQ2_XS.gguf) | IQ2_XS | 2.19GB | Very low quality, uses SOTA techniques to be usable. | | [kunoichi-lemon-royale-v2-32K-7B-IQ2_XXS.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-IQ2_XXS.gguf) | IQ2_XXS | 1.99GB | Lower quality, uses SOTA techniques to be usable. | | [kunoichi-lemon-royale-v2-32K-7B-IQ1_M.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-IQ1_M.gguf) | IQ1_M | 1.75GB | Extremely low quality, *not* recommended. | | [kunoichi-lemon-royale-v2-32K-7B-IQ1_S.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-IQ1_S.gguf) | IQ1_S | 1.61GB | Extremely low quality, *not* recommended. | ## Downloading using huggingface-cli First, make sure you have hugginface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want: ``` huggingface-cli download bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF --include "kunoichi-lemon-royale-v2-32K-7B-Q4_K_M.gguf" --local-dir ./ --local-dir-use-symlinks False ``` If the model is bigger than 50GB, it will have been split into multiple files. 
In order to download them all to a local folder, run: ``` huggingface-cli download bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF --include "kunoichi-lemon-royale-v2-32K-7B-Q8_0.gguf/*" --local-dir kunoichi-lemon-royale-v2-32K-7B-Q8_0 --local-dir-use-symlinks False ``` You can either specify a new local-dir (kunoichi-lemon-royale-v2-32K-7B-Q8_0) or download them all in place (./) ## Which file should I choose? A great write-up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulkan, which is also an option for AMD cards, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
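As a rough, unofficial companion to the sizing guidance above (not part of bartowski's original card), here is a minimal Python sketch that picks the largest quant from the table that fits a given memory budget; the 1.5 GB headroom figure is an assumption in the spirit of the 1-2GB recommendation.

```python
# Minimal sketch (assumption, not from the original card): pick the largest quant
# that fits a memory budget, leaving ~1.5 GB of headroom as the card suggests.
QUANT_SIZES_GB = {          # file sizes copied from the table above (subset)
    "Q8_0": 7.69, "Q6_K": 5.94, "Q5_K_M": 5.13, "Q5_K_S": 4.99,
    "Q4_K_M": 4.36, "Q4_K_S": 4.14, "IQ4_NL": 4.12, "IQ4_XS": 3.90,
    "Q3_K_L": 3.82, "Q3_K_M": 3.51, "IQ3_M": 3.28, "IQ3_S": 3.18,
    "Q3_K_S": 3.16, "IQ3_XS": 3.01, "IQ3_XXS": 2.82, "Q2_K": 2.71,
}

def pick_quant(vram_gb: float, headroom_gb: float = 1.5) -> str:
    """Return the largest quant whose file fits in vram_gb minus headroom."""
    budget = vram_gb - headroom_gb
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= budget}
    if not fitting:
        raise ValueError("No quant fits; consider partial CPU offload instead.")
    return max(fitting, key=fitting.get)

print(pick_quant(8.0))   # e.g. an 8 GB GPU -> Q6_K (5.94 GB fits under the 6.5 GB budget)
```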
mradermacher/Yiet-9B-GGUF
mradermacher
2024-05-20T22:29:28Z
354
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:nbeerbower/Yiet-9B", "endpoints_compatible", "region:us" ]
null
2024-05-20T21:29:33Z
--- base_model: nbeerbower/Yiet-9B language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> static quants of https://huggingface.co/nbeerbower/Yiet-9B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Yiet-9B-GGUF/resolve/main/Yiet-9B.Q2_K.gguf) | Q2_K | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Yiet-9B-GGUF/resolve/main/Yiet-9B.IQ3_XS.gguf) | IQ3_XS | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Yiet-9B-GGUF/resolve/main/Yiet-9B.Q3_K_S.gguf) | Q3_K_S | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Yiet-9B-GGUF/resolve/main/Yiet-9B.IQ3_S.gguf) | IQ3_S | 4.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Yiet-9B-GGUF/resolve/main/Yiet-9B.IQ3_M.gguf) | IQ3_M | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Yiet-9B-GGUF/resolve/main/Yiet-9B.Q3_K_M.gguf) | Q3_K_M | 4.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Yiet-9B-GGUF/resolve/main/Yiet-9B.Q3_K_L.gguf) | Q3_K_L | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/Yiet-9B-GGUF/resolve/main/Yiet-9B.IQ4_XS.gguf) | IQ4_XS | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Yiet-9B-GGUF/resolve/main/Yiet-9B.Q4_K_S.gguf) | Q4_K_S | 5.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Yiet-9B-GGUF/resolve/main/Yiet-9B.Q4_K_M.gguf) | Q4_K_M | 5.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Yiet-9B-GGUF/resolve/main/Yiet-9B.Q5_K_S.gguf) | Q5_K_S | 6.2 | | | [GGUF](https://huggingface.co/mradermacher/Yiet-9B-GGUF/resolve/main/Yiet-9B.Q5_K_M.gguf) | Q5_K_M | 6.4 | | | [GGUF](https://huggingface.co/mradermacher/Yiet-9B-GGUF/resolve/main/Yiet-9B.Q6_K.gguf) | Q6_K | 7.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Yiet-9B-GGUF/resolve/main/Yiet-9B.Q8_0.gguf) | Q8_0 | 9.5 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Yiet-9B-GGUF/resolve/main/Yiet-9B.f16.gguf) | f16 | 17.8 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
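As an illustrative, unofficial addition to the Usage note above: a minimal sketch of one way to fetch a single quant from this repo and run it with llama-cpp-python (the quant choice and context size below are assumptions, not recommendations from the card).

```python
# Minimal sketch (assumption, not from the card): download one quant from this repo
# and run it locally with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/Yiet-9B-GGUF",
    filename="Yiet-9B.Q4_K_M.gguf",   # listed as "fast, recommended" in the table above
)
llm = Llama(model_path=path, n_ctx=4096)   # context length is an assumption
out = llm("Write one sentence about quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```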
etadevosyan/apologise_for_waiting_bert_First256
etadevosyan
2024-05-21T14:30:17Z
354
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-21T14:29:48Z
Entry not found
MaziyarPanahi/Experiment27pasticheShadowm7exp-7B-GGUF
MaziyarPanahi
2024-05-21T23:59:23Z
354
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "merge", "mergekit", "lazymergekit", "automerger", "base_model:automerger/Experiment27Pastiche-7B", "base_model:mahiatlinux/ShadowM7EXP-7B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:automerger/Experiment27pasticheShadowm7exp-7B" ]
text-generation
2024-05-21T23:29:37Z
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - merge - mergekit - lazymergekit - automerger - base_model:automerger/Experiment27Pastiche-7B - base_model:mahiatlinux/ShadowM7EXP-7B - license:apache-2.0 - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - text-generation model_name: Experiment27pasticheShadowm7exp-7B-GGUF base_model: automerger/Experiment27pasticheShadowm7exp-7B inference: false model_creator: automerger pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/Experiment27pasticheShadowm7exp-7B-GGUF](https://huggingface.co/MaziyarPanahi/Experiment27pasticheShadowm7exp-7B-GGUF) - Model creator: [automerger](https://huggingface.co/automerger) - Original model: [automerger/Experiment27pasticheShadowm7exp-7B](https://huggingface.co/automerger/Experiment27pasticheShadowm7exp-7B) ## Description [MaziyarPanahi/Experiment27pasticheShadowm7exp-7B-GGUF](https://huggingface.co/MaziyarPanahi/Experiment27pasticheShadowm7exp-7B-GGUF) contains GGUF format model files for [automerger/Experiment27pasticheShadowm7exp-7B](https://huggingface.co/automerger/Experiment27pasticheShadowm7exp-7B). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
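As an unofficial illustration of the llama-cpp-python route listed above (the GGUF filename and the `mistral-instruct` chat format below are assumptions, not confirmed by this card):

```python
# Minimal sketch (assumptions noted inline): run a chat turn against one of the GGUF
# files in this repo with llama-cpp-python, one of the clients listed above.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="MaziyarPanahi/Experiment27pasticheShadowm7exp-7B-GGUF",
    filename="Experiment27pasticheShadowm7exp-7B.Q4_K_M.gguf",  # assumed filename pattern
)
llm = Llama(model_path=path, chat_format="mistral-instruct")  # Mistral-style prompt; an assumption
reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a model merge is."}],
    max_tokens=128,
)
print(reply["choices"][0]["message"]["content"])
```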
LiteLLMs/Phi-3-medium-128k-instruct-GGUF
LiteLLMs
2024-05-23T21:31:28Z
354
0
null
[ "gguf", "nlp", "code", "GGUF", "text-generation", "multilingual", "license:mit", "region:us" ]
text-generation
2024-05-23T21:11:34Z
--- language: - multilingual license: mit tags: - nlp - code - GGUF license_link: https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/resolve/main/LICENSE pipeline_tag: text-generation inference: parameters: temperature: 0.7 widget: - messages: - role: user content: Can you provide ways to eat combinations of bananas and dragonfruits? quantized_by: andrijdavid --- # Phi-3-medium-128k-instruct-GGUF - Original model: [Phi-3-medium-128k-instruct](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) <!-- description start --> ## Description This repo contains GGUF format model files for [Phi-3-medium-128k-instruct](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration. * [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications​ * [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling. * [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration. * [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection. * [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use. * [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server. * [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents. <!-- README_GGUF.md-about-gguf end --> <!-- compatibility_gguf start --> ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. 
This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: LiteLLMs/Phi-3-medium-128k-instruct-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download LiteLLMs/Phi-3-medium-128k-instruct-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download LiteLLMs/Phi-3-medium-128k-instruct-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install huggingface_hub[hf_transfer] ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/Phi-3-medium-128k-instruct-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 8192` to the desired sequence length. 
For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<PROMPT>", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." 
} ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer end --> <!-- original-model-card start --> # Original model card: Phi-3-medium-128k-instruct ## Model Summary The Phi-3-Medium-128K-Instruct is a 14B parameters, lightweight, state-of-the-art open model trained with the Phi-3 datasets that includes both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties. The model belongs to the Phi-3 family with the Medium version in two variants [4k](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) which is the context length (in tokens) that it can support. The model has underwent a post-training process that incorporates both supervised fine-tuning and direct preference optimization for the instruction following and safety measures. When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3-Medium-128K-Instruct showcased a robust and state-of-the-art performance among models of the same-size and next-size-up. Resources and Technical Documentation: + [Phi-3 Microsoft Blog](https://aka.ms/Phi-3Build2024) + [Phi-3 Technical Report](https://aka.ms/phi3-tech-report) + [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai) + [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook) | | Short Context | Long Context | | | -- | | | - | - | - | - | -- | - | | Popular aggregated benchmark | 72.3 | 69.9 | 73.4 | 76.3 | 67.0 | 67.5 | 80.5 | | Reasoning | 83.2 | 79.3 | 81.5 | 86.7 | 78.3 | 80.4 | 89.3 | | Language understanding | 75.3 | 75.7 | 78.7 | 77.9 | 70.4 | 75.3 | 81.6 | | Code generation | 64.2 | 68.6 | 60.0 | 69.3 | 70.4 | 66.7 | 76.1 | | Math | 52.9 | 45.3 | 52.5 | 59.7 | 52.8 | 50.9 | 67.1 | | Factual knowledge | 47.5 | 60.3 | 60.6 | 52.4 | 63.4 | 54.6 | 45.9 | | Multilingual | 62.2 | 67.8 | 69.8 | 62.0 | 67.0 | 73.4 | 78.2 | | Robustness | 70.2 | 57.9 | 65.5 | 78.7 | 69.3 | 69.7 | 84.6 | ## Software * [PyTorch](https://github.com/pytorch/pytorch) * [DeepSpeed](https://github.com/microsoft/DeepSpeed) * [Transformers](https://github.com/huggingface/transformers) * [Flash-Attention](https://github.com/HazyResearch/flash-attention) ## Hardware Note that by default, the Phi-3-Medium model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types: * NVIDIA A100 * NVIDIA A6000 * NVIDIA H100 If you want to run the model on: + Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [128k](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda) ## Cross Platform Support ONNX runtime ecosystem now supports Phi3 Medium models across platforms and hardware. Optimized phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML GPU acceleration is supported for Windows desktops GPUs (AMD, Intel, and NVIDIA). 
Along with DML, ONNX Runtime provides cross platform support for Phi3 Medium across a range of devices CPU, GPU, and mobile. Here are some of the optimized configurations we have added: 1. ONNX models for int4 DML: Quantized to int4 via AWQ 2. ONNX model for fp16 CUDA 3. ONNX model for int4 CUDA: Quantized to int4 via RTN 4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN ## License The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-medium-128k/resolve/main/LICENSE). ## Trademarks This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies. <!-- original-model-card end -->
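As an unofficial aside on the quantisation methods described earlier in this card: the quoted bits-per-weight figures for the k-quants can be reproduced from the stated super-block layouts, assuming fp16 super-block scales (my assumption about the layout, not stated in the card).

```python
# Sketch (my reading of the layout, not from the card): reproduce the bits-per-weight
# figures quoted in the quantisation-methods section above. Assumes 256-weight
# super-blocks and fp16 super-block scales.
SUPER_BLOCK = 256  # weights per super-block

def bpw_type0(qbits, blocks, scale_bits):
    # "type-0": per-block scales only, plus one fp16 super-block scale
    return (SUPER_BLOCK * qbits + blocks * scale_bits + 16) / SUPER_BLOCK

def bpw_type1(qbits, blocks, scale_bits, min_bits):
    # "type-1": per-block scales and mins, plus fp16 super-block scale and min
    return (SUPER_BLOCK * qbits + blocks * (scale_bits + min_bits) + 32) / SUPER_BLOCK

print(bpw_type0(3, 16, 6))      # Q3_K -> 3.4375
print(bpw_type1(4, 8, 6, 6))    # Q4_K -> 4.5
print(bpw_type1(5, 8, 6, 6))    # Q5_K -> 5.5
print(bpw_type0(6, 16, 8))      # Q6_K -> 6.5625
```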
mlabonne/Daredevil-8B-abliterated-GGUF
mlabonne
2024-05-26T15:52:22Z
354
7
null
[ "gguf", "region:us" ]
null
2024-05-26T15:17:54Z
Entry not found
xgampx/epfl-cs-522-istari-dpo
xgampx
2024-05-31T16:08:33Z
354
0
transformers
[ "transformers", "safetensors", "gpt_neo", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-31T16:03:44Z
beta=0.1 learning_rate=5e-5
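The card above lists only two DPO hyperparameters. As a purely hypothetical illustration (the base checkpoint, dataset, and trainer wiring below are assumptions, not taken from this repo), they would typically be passed to a preference-optimization trainer such as TRL's `DPOTrainer` along these lines:

```python
# Hypothetical sketch only -- model, dataset and wiring are assumptions;
# the card itself states just beta=0.1 and learning_rate=5e-5.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer  # recent TRL versions; older ones pass beta to DPOTrainer directly

model_name = "EleutherAI/gpt-neo-1.3B"            # placeholder GPT-Neo checkpoint
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token         # GPT-Neo has no pad token by default

# Placeholder preference dataset with chosen/rejected pairs.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

args = DPOConfig(
    output_dir="istari-dpo",
    beta=0.1,            # value stated in the card
    learning_rate=5e-5,  # value stated in the card
)
trainer = DPOTrainer(model=model, args=args, train_dataset=train_dataset, processing_class=tokenizer)
trainer.train()
```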
mradermacher/Wizard-Kun-Lake_3x7B-MoE-GGUF
mradermacher
2024-06-10T16:24:10Z
354
1
transformers
[ "transformers", "gguf", "en", "base_model:WesPro/Wizard-Kun-Lake_3x7B-MoE", "endpoints_compatible", "region:us" ]
null
2024-06-09T01:52:51Z
--- base_model: WesPro/Wizard-Kun-Lake_3x7B-MoE language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/WesPro/Wizard-Kun-Lake_3x7B-MoE <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Wizard-Kun-Lake_3x7B-MoE-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Wizard-Kun-Lake_3x7B-MoE-GGUF/resolve/main/Wizard-Kun-Lake_3x7B-MoE.Q2_K.gguf) | Q2_K | 6.9 | | | [GGUF](https://huggingface.co/mradermacher/Wizard-Kun-Lake_3x7B-MoE-GGUF/resolve/main/Wizard-Kun-Lake_3x7B-MoE.IQ3_XS.gguf) | IQ3_XS | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/Wizard-Kun-Lake_3x7B-MoE-GGUF/resolve/main/Wizard-Kun-Lake_3x7B-MoE.Q3_K_S.gguf) | Q3_K_S | 8.1 | | | [GGUF](https://huggingface.co/mradermacher/Wizard-Kun-Lake_3x7B-MoE-GGUF/resolve/main/Wizard-Kun-Lake_3x7B-MoE.IQ3_S.gguf) | IQ3_S | 8.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Wizard-Kun-Lake_3x7B-MoE-GGUF/resolve/main/Wizard-Kun-Lake_3x7B-MoE.IQ3_M.gguf) | IQ3_M | 8.3 | | | [GGUF](https://huggingface.co/mradermacher/Wizard-Kun-Lake_3x7B-MoE-GGUF/resolve/main/Wizard-Kun-Lake_3x7B-MoE.Q3_K_M.gguf) | Q3_K_M | 9.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Wizard-Kun-Lake_3x7B-MoE-GGUF/resolve/main/Wizard-Kun-Lake_3x7B-MoE.Q3_K_L.gguf) | Q3_K_L | 9.7 | | | [GGUF](https://huggingface.co/mradermacher/Wizard-Kun-Lake_3x7B-MoE-GGUF/resolve/main/Wizard-Kun-Lake_3x7B-MoE.IQ4_XS.gguf) | IQ4_XS | 10.1 | | | [GGUF](https://huggingface.co/mradermacher/Wizard-Kun-Lake_3x7B-MoE-GGUF/resolve/main/Wizard-Kun-Lake_3x7B-MoE.Q4_K_S.gguf) | Q4_K_S | 10.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Wizard-Kun-Lake_3x7B-MoE-GGUF/resolve/main/Wizard-Kun-Lake_3x7B-MoE.Q4_K_M.gguf) | Q4_K_M | 11.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Wizard-Kun-Lake_3x7B-MoE-GGUF/resolve/main/Wizard-Kun-Lake_3x7B-MoE.Q5_K_S.gguf) | Q5_K_S | 12.9 | | | [GGUF](https://huggingface.co/mradermacher/Wizard-Kun-Lake_3x7B-MoE-GGUF/resolve/main/Wizard-Kun-Lake_3x7B-MoE.Q5_K_M.gguf) | Q5_K_M | 13.2 | | | [GGUF](https://huggingface.co/mradermacher/Wizard-Kun-Lake_3x7B-MoE-GGUF/resolve/main/Wizard-Kun-Lake_3x7B-MoE.Q6_K.gguf) | Q6_K | 15.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Wizard-Kun-Lake_3x7B-MoE-GGUF/resolve/main/Wizard-Kun-Lake_3x7B-MoE.Q8_0.gguf) | Q8_0 | 19.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
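As an unofficial illustration of the multi-part-file note in the Usage section above (none of the quants in this repo are actually split; the part naming below is an assumption):

```python
# Sketch (assumption, not from the card): how split GGUF parts of the kind mentioned
# in the Usage section can be concatenated back into a single file.
from pathlib import Path

parts = sorted(Path(".").glob("some-model.Q8_0.gguf.part*"))  # hypothetical split files
with open("some-model.Q8_0.gguf", "wb") as out:
    for part in parts:
        out.write(part.read_bytes())
```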
DavidAU/DarkForest-20B-V3-Ultra-Quality-ImatP2-GGUF
DavidAU
2024-06-22T05:06:25Z
354
1
null
[ "gguf", "story", "roleplay", "creative", "rp", "fantasy", "story telling", "32 bit upscale", "ultra high precision", "nsfw", "en", "license:apache-2.0", "region:us" ]
null
2024-06-18T01:09:40Z
--- license: apache-2.0 language: - en tags: - story - roleplay - creative - rp - fantasy - story telling - 32 bit upscale - ultra high precision - nsfw --- <B> Ultra High Quality - 20 B Dark Forest Version 3.0 - 32 bit upscale - Imatrix Plus 2 </b> Fully rebuilt from master files, including full merge(s) to maintain full 32 bit precision right up until it is compressed into GGUF files, which results in a top-to-bottom upgrade. The result is superior performance in instruction following, reasoning, depth, nuance and emotion. This is the Imatrix Plus 2 repo. Imatrix Plus 2 uses the Ultra Quality 32 bit upscale as a base, with in-house datasets applied to further enhance the model's quality and performance. NOTE: There are three original versions of "Dark Forest 20B"; this is an upscale of the third version, with links below to the 1st and 2nd versions as well. <img src="dark-forest.jpg"> On average this means a q4km operates at Q6 levels, and Q6 and Q8 exceed the original model's full-precision performance. Perplexity drop (lower is better) is close to 10% (over 752 points for q4km) for all quants. That means precision has been enhanced for all 20 billion parameters, which affects "brain density" / "function", instruction following and output quality. Imatrix quants to follow shortly. For more details, including a list of enhancements, see our other 32 bit upscale, the "Space Whale 20B" rebuild, here: [ https://huggingface.co/DavidAU/Psyonic-Cetacean-Ultra-Quality-20b-GGUF ] For Version 1 of Dark Forest Ultra Quality 32 bit upscale please go here: [ https://huggingface.co/DavidAU/Dark-Forest-V1-Ultra-Quality-20b-GGUF ] For Version 2 of Dark Forest (the original) please go here: [ https://huggingface.co/TeeZee/DarkForest-20B-v2.0 ] Special thanks to "TEEZEE" for making the fantastic "Dark Forest" models. <b> Info from the original model card: </B> Warning: This model can produce NSFW content! Results: - main difference to v1.0 - model has much better sense of humor. - produces SFW and NSFW content without issues, switches context seamlessly. - good at following instructions. - good at tracking multiple characters in one scene. - very creative, scenarios produced are mature and complicated, model doesn't shy from writing about PTSD, mental issues or complicated relationships. - NSFW output is more creative and surprising than typical limaRP output. - definitely for mature audiences, not only because of vivid NSFW content but also because of overall maturity of stories it produces. - This is NOT Harry Potter level storytelling. For original model spec and information please visit: [ https://huggingface.co/TeeZee/DarkForest-20B-v3.0 ]
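As an unofficial sketch of how a perplexity comparison like the one described above could be reproduced with llama.cpp's perplexity tool (the binary name, GGUF file names and test text are assumptions; `-m` and `-f` are the tool's standard model and text-file options):

```python
# Sketch (assumptions throughout): call llama.cpp's perplexity tool on two quants and
# compare the reported perplexities by hand. File names below are placeholders.
import subprocess

for gguf in ["regular-build.Q4_K_M.gguf", "ultra-quality-build.Q4_K_M.gguf"]:
    subprocess.run(["./llama-perplexity", "-m", gguf, "-f", "wiki.test.raw"], check=True)
```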
deepset/tapas-large-nq-hn-reader
deepset
2022-10-27T14:24:09Z
353
1
transformers
[ "transformers", "pytorch", "tapas", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en tags: - tapas license: apache-2.0 --- This model contains the converted PyTorch checkpoint of the original Tensorflow model available in the [TaPas repository](https://github.com/google-research/tapas/blob/master/DENSE_TABLE_RETRIEVER.md#reader-models). It is described in Herzig et al.'s (2021) [paper](https://aclanthology.org/2021.naacl-main.43/) _Open Domain Question Answering over Tables via Dense Retrieval_. This model has 2 versions that can be used differing only in the table scoring head. The default one has an adapted table scoring head in order to be able to generate probabilities out of the logits. The other (non-default) version corresponds to the original checkpoint from the TaPas repository and can be accessed by setting `revision="original"`. # Usage ## In Haystack If you want to use this model for question-answering over tables, you can load it in [Haystack](https://github.com/deepset-ai/haystack/): ```python from haystack.nodes import TableReader table_reader = TableReader(model_name_or_path="deepset/tapas-large-nq-hn-reader") ```
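As an unofficial complement to the Haystack example above, the non-default checkpoint described in this card can also be selected with plain `transformers` by pinning the `original` revision (a sketch, not from the original card):

```python
# Sketch (assumption, not from the card): load either checkpoint variant by revision.
from transformers import AutoModel, AutoTokenizer

model_id = "deepset/tapas-large-nq-hn-reader"
tokenizer = AutoTokenizer.from_pretrained(model_id)
default_reader = AutoModel.from_pretrained(model_id)                        # adapted table scoring head
original_reader = AutoModel.from_pretrained(model_id, revision="original")  # original TaPas checkpoint
```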
facebook/wav2vec2-large-xlsr-53-italian
facebook
2021-07-06T02:53:33Z
353
5
transformers
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "speech", "audio", "it", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: it datasets: - common_voice tags: - speech - audio - automatic-speech-recognition license: apache-2.0 --- ## Evaluation on Common Voice IT Test ```python import torchaudio from datasets import load_dataset, load_metric from transformers import ( Wav2Vec2ForCTC, Wav2Vec2Processor, ) import torch import re import sys model_name = "facebook/wav2vec2-large-xlsr-53-italian" device = "cuda" chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"]' # noqa: W605 model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device) processor = Wav2Vec2Processor.from_pretrained(model_name) ds = load_dataset("common_voice", "it", split="test", data_dir="./cv-corpus-6.1-2020-12-11") resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000) def map_to_array(batch): speech, _ = torchaudio.load(batch["path"]) batch["speech"] = resampler.forward(speech.squeeze(0)).numpy() batch["sampling_rate"] = resampler.new_freq batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'") return batch ds = ds.map(map_to_array) def map_to_pred(batch): features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt") input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids) batch["target"] = batch["sentence"] return batch result = ds.map(map_to_pred, batched=True, batch_size=16, remove_columns=list(ds.features.keys())) wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` **Result**: 22.1 %
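As an unofficial complement to the evaluation script above, plain transcription can be done with the `transformers` pipeline (a minimal sketch; the audio file name is a placeholder and the input should be 16 kHz audio):

```python
# Minimal sketch (assumption, not from the card): transcribe a local Italian audio file.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-large-xlsr-53-italian")
print(asr("sample_italian_16khz.wav"))  # hypothetical local file; expects 16 kHz mono audio
```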
Helsinki-NLP/opus-mt-tc-big-en-fi
Helsinki-NLP
2023-10-10T10:26:43Z
353
2
transformers
[ "transformers", "pytorch", "tf", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "en", "fi", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-22T12:45:06Z
--- language: - en - fi tags: - translation - opus-mt-tc license: cc-by-4.0 model-index: - name: opus-mt-tc-big-en-fi results: - task: name: Translation eng-fin type: translation args: eng-fin dataset: name: flores101-devtest type: flores_101 args: eng fin devtest metrics: - name: BLEU type: bleu value: 27.6 - task: name: Translation eng-fin type: translation args: eng-fin dataset: name: newsdev2015 type: newsdev2015 args: eng-fin metrics: - name: BLEU type: bleu value: 24.2 - task: name: Translation eng-fin type: translation args: eng-fin dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: eng-fin metrics: - name: BLEU type: bleu value: 39.3 - task: name: Translation eng-fin type: translation args: eng-fin dataset: name: newstest2015 type: wmt-2015-news args: eng-fin metrics: - name: BLEU type: bleu value: 26.4 - task: name: Translation eng-fin type: translation args: eng-fin dataset: name: newstest2016 type: wmt-2016-news args: eng-fin metrics: - name: BLEU type: bleu value: 28.8 - task: name: Translation eng-fin type: translation args: eng-fin dataset: name: newstest2017 type: wmt-2017-news args: eng-fin metrics: - name: BLEU type: bleu value: 31.3 - task: name: Translation eng-fin type: translation args: eng-fin dataset: name: newstest2019 type: wmt-2019-news args: eng-fin metrics: - name: BLEU type: bleu value: 26.4 --- # opus-mt-tc-big-en-fi Neural machine translation model for translating from English (en) to Finnish (fi). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) 
``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-03-09 * source language(s): eng * target language(s): fin * valid target language labels: >>fin<< * model: transformer (big) * data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807+bt_transformer-big_2022-03-09.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-fin/opusTCv20210807+bt_transformer-big_2022-03-09.zip) * more information released models: [OPUS-MT eng-fin README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-fin/README.md) * more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian) This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>fin<<` ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ "Russia is big.", "Touch wood!" ] model_name = "pytorch-models/opus-mt-tc-big-en-fi" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Venäjä on suuri. # Kosketa puuta! ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-fi") print(pipe("Russia is big.")) # expected output: Venäjä on suuri. 
``` ## Benchmarks * test set translations: [opusTCv20210807+bt_transformer-big_2022-03-09.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-fin/opusTCv20210807+bt_transformer-big_2022-03-09.test.txt) * test set scores: [opusTCv20210807+bt_transformer-big_2022-03-09.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-fin/opusTCv20210807+bt_transformer-big_2022-03-09.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | eng-fin | tatoeba-test-v2021-08-07 | 0.64352 | 39.3 | 10690 | 65122 | | eng-fin | flores101-devtest | 0.61334 | 27.6 | 1012 | 18781 | | eng-fin | newsdev2015 | 0.58367 | 24.2 | 1500 | 23091 | | eng-fin | newstest2015 | 0.60080 | 26.4 | 1370 | 19735 | | eng-fin | newstest2016 | 0.61636 | 28.8 | 3000 | 47678 | | eng-fin | newstest2017 | 0.64381 | 31.3 | 3002 | 45269 | | eng-fin | newstest2018 | 0.55626 | 19.7 | 3000 | 44836 | | eng-fin | newstest2019 | 0.58420 | 26.4 | 1997 | 38369 | | eng-fin | newstestB2016 | 0.57554 | 23.3 | 3000 | 45766 | | eng-fin | newstestB2017 | 0.60212 | 26.8 | 3002 | 45506 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: f084bad * port time: Tue Mar 22 14:42:32 EET 2022 * port machine: LM0-400-22516.local
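As an unofficial footnote to the usage examples in this card: the sentence-initial `>>fin<<` language token described in the model info section can be made explicit like this (a sketch adapted from the card's own example):

```python
# Sketch (adapted from the card's example): prefix the target-language token explicitly.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-tc-big-en-fi"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = [">>fin<< Russia is big."]  # sentence-initial language token, as described above
batch = tokenizer(src_text, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.decode(translated[0], skip_special_tokens=True))
```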
embedding-data/distilroberta-base-sentence-transformer
embedding-data
2022-08-05T19:12:47Z
353
1
sentence-transformers
[ "sentence-transformers", "pytorch", "roberta", "feature-extraction", "sentence-similarity", "transformers", "dataset:embedding-data/QQP_triplets", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
2022-08-05T18:36:25Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers datasets: - embedding-data/QQP_triplets --- # embedding-data/distilroberta-base-sentence-transformer This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('embedding-data/distilroberta-base-sentence-transformer') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('embedding-data/distilroberta-base-sentence-transformer') model = AutoModel.from_pretrained('embedding-data/distilroberta-base-sentence-transformer') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=embedding-data/distilroberta-base-sentence-transformer) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 7 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.TripletLoss.TripletLoss` with parameters: ``` {'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5} ``` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 7, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
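As an unofficial follow-up to the usage examples above: a minimal sketch turning the embeddings into a similarity score, which is the typical use for clustering or semantic search (the sentences are placeholders):

```python
# Minimal sketch (assumption, not from the card): score two sentences for similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("embedding-data/distilroberta-base-sentence-transformer")
emb = model.encode(
    ["What is the capital of France?", "Paris is the capital of France."],
    convert_to_tensor=True,
)
print(util.cos_sim(emb[0], emb[1]))  # higher means more similar
```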
timm/regnety_064.ra3_in1k
timm
2024-02-10T23:33:30Z
353
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2003.13678", "license:apache-2.0", "region:us" ]
image-classification
2023-03-21T06:39:22Z
--- license: apache-2.0 library_name: timm tags: - image-classification - timm datasets: - imagenet-1k --- # Model card for regnety_064.ra3_in1k A RegNetY-6.4GF image classification model. Trained on ImageNet-1k by Ross Wightman in `timm`. The `timm` RegNet implementation includes a number of enhancements not present in other implementations, including: * stochastic depth * gradient checkpointing * layer-wise LR decay * configurable output stride (dilation) * configurable activation and norm layers * option for a pre-activation bottleneck block used in RegNetV variant * only known RegNetZ model definitions with pretrained weights ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 30.6 - GMACs: 6.4 - Activations (M): 16.4 - Image size: train = 224 x 224, test = 288 x 288 - **Papers:** - Designing Network Design Spaces: https://arxiv.org/abs/2003.13678 - **Dataset:** ImageNet-1k - **Original:** https://github.com/huggingface/pytorch-image-models ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('regnety_064.ra3_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'regnety_064.ra3_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 32, 112, 112]) # torch.Size([1, 144, 56, 56]) # torch.Size([1, 288, 28, 28]) # torch.Size([1, 576, 14, 14]) # torch.Size([1, 1296, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'regnety_064.ra3_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 1296, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset
and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). For the comparison summary below, the ra_in1k, ra3_in1k, ch_in1k, sw_*, and lion_* tagged weights are trained in `timm`. |model |img_size|top1 |top5 |param_count|gmacs|macts | |-------------------------|--------|------|------|-----------|-----|------| |[regnety_1280.swag_ft_in1k](https://huggingface.co/timm/regnety_1280.swag_ft_in1k)|384 |88.228|98.684|644.81 |374.99|210.2 | |[regnety_320.swag_ft_in1k](https://huggingface.co/timm/regnety_320.swag_ft_in1k)|384 |86.84 |98.364|145.05 |95.0 |88.87 | |[regnety_160.swag_ft_in1k](https://huggingface.co/timm/regnety_160.swag_ft_in1k)|384 |86.024|98.05 |83.59 |46.87|67.67 | |[regnety_160.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.sw_in12k_ft_in1k)|288 |86.004|97.83 |83.59 |26.37|38.07 | |[regnety_1280.swag_lc_in1k](https://huggingface.co/timm/regnety_1280.swag_lc_in1k)|224 |85.996|97.848|644.81 |127.66|71.58 | |[regnety_160.lion_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.lion_in12k_ft_in1k)|288 |85.982|97.844|83.59 |26.37|38.07 | |[regnety_160.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.sw_in12k_ft_in1k)|224 |85.574|97.666|83.59 |15.96|23.04 | |[regnety_160.lion_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.lion_in12k_ft_in1k)|224 |85.564|97.674|83.59 |15.96|23.04 | |[regnety_120.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_120.sw_in12k_ft_in1k)|288 |85.398|97.584|51.82 |20.06|35.34 | |[regnety_2560.seer_ft_in1k](https://huggingface.co/timm/regnety_2560.seer_ft_in1k)|384 |85.15 |97.436|1282.6 |747.83|296.49| |[regnetz_e8.ra3_in1k](https://huggingface.co/timm/regnetz_e8.ra3_in1k)|320 |85.036|97.268|57.7 |15.46|63.94 | |[regnety_120.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_120.sw_in12k_ft_in1k)|224 |84.976|97.416|51.82 |12.14|21.38 | |[regnety_320.swag_lc_in1k](https://huggingface.co/timm/regnety_320.swag_lc_in1k)|224 |84.56 |97.446|145.05 |32.34|30.26 | |[regnetz_040_h.ra3_in1k](https://huggingface.co/timm/regnetz_040_h.ra3_in1k)|320 |84.496|97.004|28.94 |6.43 |37.94 | |[regnetz_e8.ra3_in1k](https://huggingface.co/timm/regnetz_e8.ra3_in1k)|256 |84.436|97.02 |57.7 |9.91 |40.94 | |[regnety_1280.seer_ft_in1k](https://huggingface.co/timm/regnety_1280.seer_ft_in1k)|384 |84.432|97.092|644.81 |374.99|210.2 | |[regnetz_040.ra3_in1k](https://huggingface.co/timm/regnetz_040.ra3_in1k)|320 |84.246|96.93 |27.12 |6.35 |37.78 | |[regnetz_d8.ra3_in1k](https://huggingface.co/timm/regnetz_d8.ra3_in1k)|320 |84.054|96.992|23.37 |6.19 |37.08 | |[regnetz_d8_evos.ch_in1k](https://huggingface.co/timm/regnetz_d8_evos.ch_in1k)|320 |84.038|96.992|23.46 |7.03 |38.92 | |[regnetz_d32.ra3_in1k](https://huggingface.co/timm/regnetz_d32.ra3_in1k)|320 |84.022|96.866|27.58 |9.33 |37.08 | |[regnety_080.ra3_in1k](https://huggingface.co/timm/regnety_080.ra3_in1k)|288 |83.932|96.888|39.18 |13.22|29.69 | |[regnety_640.seer_ft_in1k](https://huggingface.co/timm/regnety_640.seer_ft_in1k)|384 |83.912|96.924|281.38 |188.47|124.83| |[regnety_160.swag_lc_in1k](https://huggingface.co/timm/regnety_160.swag_lc_in1k)|224 |83.778|97.286|83.59 |15.96|23.04 | |[regnetz_040_h.ra3_in1k](https://huggingface.co/timm/regnetz_040_h.ra3_in1k)|256 |83.776|96.704|28.94 |4.12 |24.29 | |[regnetv_064.ra3_in1k](https://huggingface.co/timm/regnetv_064.ra3_in1k)|288 |83.72 |96.75 |30.58 |10.55|27.11 | |[regnety_064.ra3_in1k](https://huggingface.co/timm/regnety_064.ra3_in1k)|288 |83.718|96.724|30.58 |10.56|27.11 | 
|[regnety_160.deit_in1k](https://huggingface.co/timm/regnety_160.deit_in1k)|288 |83.69 |96.778|83.59 |26.37|38.07 | |[regnetz_040.ra3_in1k](https://huggingface.co/timm/regnetz_040.ra3_in1k)|256 |83.62 |96.704|27.12 |4.06 |24.19 | |[regnetz_d8.ra3_in1k](https://huggingface.co/timm/regnetz_d8.ra3_in1k)|256 |83.438|96.776|23.37 |3.97 |23.74 | |[regnetz_d32.ra3_in1k](https://huggingface.co/timm/regnetz_d32.ra3_in1k)|256 |83.424|96.632|27.58 |5.98 |23.74 | |[regnetz_d8_evos.ch_in1k](https://huggingface.co/timm/regnetz_d8_evos.ch_in1k)|256 |83.36 |96.636|23.46 |4.5 |24.92 | |[regnety_320.seer_ft_in1k](https://huggingface.co/timm/regnety_320.seer_ft_in1k)|384 |83.35 |96.71 |145.05 |95.0 |88.87 | |[regnetv_040.ra3_in1k](https://huggingface.co/timm/regnetv_040.ra3_in1k)|288 |83.204|96.66 |20.64 |6.6 |20.3 | |[regnety_320.tv2_in1k](https://huggingface.co/timm/regnety_320.tv2_in1k)|224 |83.162|96.42 |145.05 |32.34|30.26 | |[regnety_080.ra3_in1k](https://huggingface.co/timm/regnety_080.ra3_in1k)|224 |83.16 |96.486|39.18 |8.0 |17.97 | |[regnetv_064.ra3_in1k](https://huggingface.co/timm/regnetv_064.ra3_in1k)|224 |83.108|96.458|30.58 |6.39 |16.41 | |[regnety_040.ra3_in1k](https://huggingface.co/timm/regnety_040.ra3_in1k)|288 |83.044|96.5 |20.65 |6.61 |20.3 | |[regnety_064.ra3_in1k](https://huggingface.co/timm/regnety_064.ra3_in1k)|224 |83.02 |96.292|30.58 |6.39 |16.41 | |[regnety_160.deit_in1k](https://huggingface.co/timm/regnety_160.deit_in1k)|224 |82.974|96.502|83.59 |15.96|23.04 | |[regnetx_320.tv2_in1k](https://huggingface.co/timm/regnetx_320.tv2_in1k)|224 |82.816|96.208|107.81 |31.81|36.3 | |[regnety_032.ra_in1k](https://huggingface.co/timm/regnety_032.ra_in1k)|288 |82.742|96.418|19.44 |5.29 |18.61 | |[regnety_160.tv2_in1k](https://huggingface.co/timm/regnety_160.tv2_in1k)|224 |82.634|96.22 |83.59 |15.96|23.04 | |[regnetz_c16_evos.ch_in1k](https://huggingface.co/timm/regnetz_c16_evos.ch_in1k)|320 |82.634|96.472|13.49 |3.86 |25.88 | |[regnety_080_tv.tv2_in1k](https://huggingface.co/timm/regnety_080_tv.tv2_in1k)|224 |82.592|96.246|39.38 |8.51 |19.73 | |[regnetx_160.tv2_in1k](https://huggingface.co/timm/regnetx_160.tv2_in1k)|224 |82.564|96.052|54.28 |15.99|25.52 | |[regnetz_c16.ra3_in1k](https://huggingface.co/timm/regnetz_c16.ra3_in1k)|320 |82.51 |96.358|13.46 |3.92 |25.88 | |[regnetv_040.ra3_in1k](https://huggingface.co/timm/regnetv_040.ra3_in1k)|224 |82.44 |96.198|20.64 |4.0 |12.29 | |[regnety_040.ra3_in1k](https://huggingface.co/timm/regnety_040.ra3_in1k)|224 |82.304|96.078|20.65 |4.0 |12.29 | |[regnetz_c16.ra3_in1k](https://huggingface.co/timm/regnetz_c16.ra3_in1k)|256 |82.16 |96.048|13.46 |2.51 |16.57 | |[regnetz_c16_evos.ch_in1k](https://huggingface.co/timm/regnetz_c16_evos.ch_in1k)|256 |81.936|96.15 |13.49 |2.48 |16.57 | |[regnety_032.ra_in1k](https://huggingface.co/timm/regnety_032.ra_in1k)|224 |81.924|95.988|19.44 |3.2 |11.26 | |[regnety_032.tv2_in1k](https://huggingface.co/timm/regnety_032.tv2_in1k)|224 |81.77 |95.842|19.44 |3.2 |11.26 | |[regnetx_080.tv2_in1k](https://huggingface.co/timm/regnetx_080.tv2_in1k)|224 |81.552|95.544|39.57 |8.02 |14.06 | |[regnetx_032.tv2_in1k](https://huggingface.co/timm/regnetx_032.tv2_in1k)|224 |80.924|95.27 |15.3 |3.2 |11.37 | |[regnety_320.pycls_in1k](https://huggingface.co/timm/regnety_320.pycls_in1k)|224 |80.804|95.246|145.05 |32.34|30.26 | |[regnetz_b16.ra3_in1k](https://huggingface.co/timm/regnetz_b16.ra3_in1k)|288 |80.712|95.47 |9.72 |2.39 |16.43 | |[regnety_016.tv2_in1k](https://huggingface.co/timm/regnety_016.tv2_in1k)|224 |80.66 |95.334|11.2 |1.63 
|8.04 | |[regnety_120.pycls_in1k](https://huggingface.co/timm/regnety_120.pycls_in1k)|224 |80.37 |95.12 |51.82 |12.14|21.38 | |[regnety_160.pycls_in1k](https://huggingface.co/timm/regnety_160.pycls_in1k)|224 |80.288|94.964|83.59 |15.96|23.04 | |[regnetx_320.pycls_in1k](https://huggingface.co/timm/regnetx_320.pycls_in1k)|224 |80.246|95.01 |107.81 |31.81|36.3 | |[regnety_080.pycls_in1k](https://huggingface.co/timm/regnety_080.pycls_in1k)|224 |79.882|94.834|39.18 |8.0 |17.97 | |[regnetz_b16.ra3_in1k](https://huggingface.co/timm/regnetz_b16.ra3_in1k)|224 |79.872|94.974|9.72 |1.45 |9.95 | |[regnetx_160.pycls_in1k](https://huggingface.co/timm/regnetx_160.pycls_in1k)|224 |79.862|94.828|54.28 |15.99|25.52 | |[regnety_064.pycls_in1k](https://huggingface.co/timm/regnety_064.pycls_in1k)|224 |79.716|94.772|30.58 |6.39 |16.41 | |[regnetx_120.pycls_in1k](https://huggingface.co/timm/regnetx_120.pycls_in1k)|224 |79.592|94.738|46.11 |12.13|21.37 | |[regnetx_016.tv2_in1k](https://huggingface.co/timm/regnetx_016.tv2_in1k)|224 |79.44 |94.772|9.19 |1.62 |7.93 | |[regnety_040.pycls_in1k](https://huggingface.co/timm/regnety_040.pycls_in1k)|224 |79.23 |94.654|20.65 |4.0 |12.29 | |[regnetx_080.pycls_in1k](https://huggingface.co/timm/regnetx_080.pycls_in1k)|224 |79.198|94.55 |39.57 |8.02 |14.06 | |[regnetx_064.pycls_in1k](https://huggingface.co/timm/regnetx_064.pycls_in1k)|224 |79.064|94.454|26.21 |6.49 |16.37 | |[regnety_032.pycls_in1k](https://huggingface.co/timm/regnety_032.pycls_in1k)|224 |78.884|94.412|19.44 |3.2 |11.26 | |[regnety_008_tv.tv2_in1k](https://huggingface.co/timm/regnety_008_tv.tv2_in1k)|224 |78.654|94.388|6.43 |0.84 |5.42 | |[regnetx_040.pycls_in1k](https://huggingface.co/timm/regnetx_040.pycls_in1k)|224 |78.482|94.24 |22.12 |3.99 |12.2 | |[regnetx_032.pycls_in1k](https://huggingface.co/timm/regnetx_032.pycls_in1k)|224 |78.178|94.08 |15.3 |3.2 |11.37 | |[regnety_016.pycls_in1k](https://huggingface.co/timm/regnety_016.pycls_in1k)|224 |77.862|93.73 |11.2 |1.63 |8.04 | |[regnetx_008.tv2_in1k](https://huggingface.co/timm/regnetx_008.tv2_in1k)|224 |77.302|93.672|7.26 |0.81 |5.15 | |[regnetx_016.pycls_in1k](https://huggingface.co/timm/regnetx_016.pycls_in1k)|224 |76.908|93.418|9.19 |1.62 |7.93 | |[regnety_008.pycls_in1k](https://huggingface.co/timm/regnety_008.pycls_in1k)|224 |76.296|93.05 |6.26 |0.81 |5.25 | |[regnety_004.tv2_in1k](https://huggingface.co/timm/regnety_004.tv2_in1k)|224 |75.592|92.712|4.34 |0.41 |3.89 | |[regnety_006.pycls_in1k](https://huggingface.co/timm/regnety_006.pycls_in1k)|224 |75.244|92.518|6.06 |0.61 |4.33 | |[regnetx_008.pycls_in1k](https://huggingface.co/timm/regnetx_008.pycls_in1k)|224 |75.042|92.342|7.26 |0.81 |5.15 | |[regnetx_004_tv.tv2_in1k](https://huggingface.co/timm/regnetx_004_tv.tv2_in1k)|224 |74.57 |92.184|5.5 |0.42 |3.17 | |[regnety_004.pycls_in1k](https://huggingface.co/timm/regnety_004.pycls_in1k)|224 |74.018|91.764|4.34 |0.41 |3.89 | |[regnetx_006.pycls_in1k](https://huggingface.co/timm/regnetx_006.pycls_in1k)|224 |73.862|91.67 |6.2 |0.61 |3.98 | |[regnetx_004.pycls_in1k](https://huggingface.co/timm/regnetx_004.pycls_in1k)|224 |72.38 |90.832|5.16 |0.4 |3.14 | |[regnety_002.pycls_in1k](https://huggingface.co/timm/regnety_002.pycls_in1k)|224 |70.282|89.534|3.16 |0.2 |2.17 | |[regnetx_002.pycls_in1k](https://huggingface.co/timm/regnetx_002.pycls_in1k)|224 |68.752|88.556|2.68 |0.2 |2.16 | ## Citation ```bibtex @InProceedings{Radosavovic2020, title = {Designing Network Design Spaces}, author = {Ilija Radosavovic and Raj Prateek Kosaraju and Ross Girshick and Kaiming 
He and Piotr Doll{\'a}r}, booktitle = {CVPR}, year = {2020} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
kwagh20ite/FFHQ_3000
kwagh20ite
2023-08-28T19:30:02Z
353
0
diffusers
[ "diffusers", "safetensors", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-28T19:27:08Z
Entry not found
healthsageai/note-to-fhir-13b-adapter
healthsageai
2023-11-30T18:55:03Z
353
7
peft
[ "peft", "text-generation", "en", "license:agpl-3.0", "region:us" ]
text-generation
2023-11-13T16:29:14Z
--- library_name: peft license: agpl-3.0 language: - en pipeline_tag: text-generation --- ## HealthSage AI note-to-fhir HealthSage AI's LLM is a fine-tuned version of Meta's Llama 2 13B that creates structured information (FHIR Resources) from unstructured clinical notes (plain text). The model is optimized to process English notes and populate 10 FHIR resource types. For a full description of the scope and limitations, see the performance and limitations header below. ## LoRA Adapter specs - Base Model: "meta-llama/Llama-2-13b-chat-hf" ## Usage: https://colab.research.google.com/drive/16YmXl3AIVwaakPfWa9qqTr5N6OLChiw7?usp=sharing#scrollTo=vR-lMezZWYqd ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0 ## Performance and limitations ### Scope of the model This open-sourced beta model is trained within the following scope: - FHIR R4 - 10 Resource types: 1. Bundle 2. Patient 3. Encounter 4. Practitioner 5. Organization 6. Immunization 7. Observation 8. Condition 9. AllergyIntolerance 10. Procedure. - English language ### The following features are out of scope of the current release: - Support for coding systems such as SNOMED CT and LOINC. - FHIR extensions and profiles - Any language, resource type or FHIR version not mentioned under "in scope". We are continuously training our model and will regularly make updates available that address some of these items and more. ### Furthermore, please note: - **No relative dates:** HealthSage AI Note-to-FHIR will not provide accurate FHIR datetime fields based on text that contains relative time information like "today" or "yesterday". Furthermore, relative dates like "Patient John Doe is 50 years old." will not result in an accurate birthdate estimation, since the precise birth day and month are unknown, and since the LLM is not aware of the current date. - **Designed as patient-centric:** HealthSage AI Note-to-FHIR is trained on notes describing one patient each. - **<4k context window:** The training data for this application contained at most 3686 tokens, which is 90% of the context window for Llama-2 (4096). - **Explicit null:** If a certain FHIR element is not present in the provided text, it is explicitly predicted as NULL. Explicitly modeling the absence of information reduces the chance of hallucinations. - **Uses Bundles:** For consistency and simplicity, all predicted FHIR resources are bundled. - **Conservative estimates:** Our model is designed to stick to the information explicitly provided in the text. - **IDs are local:** ID fields and references are local enumerations (1, 2, 3, etc.). They have not yet been tested for referential correctness. - **Generation design:** The model is designed to generate a separate resource if there is information about that resource in the text beyond what can be described in the reference fields of related resources. - **Test results:** Our preliminary results suggest that HealthSage AI Note-to-FHIR is superior to the GPT-4 foundation model within the scope of our application, in terms of FHIR syntax and the ability to replicate the original FHIR resources in our test dataset.
We are currently analyzing the model's performance on out-of-distribution and out-of-scope data.
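For reference, a minimal loading sketch with Transformers and PEFT is given below. The linked Colab notebook remains the canonical usage; the 4-bit settings simply mirror the training config above, and the example note and bare prompt format are illustrative assumptions rather than the exact template used during fine-tuning.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# 4-bit NF4 settings mirroring the bitsandbytes config listed under "Training procedure".
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

base_id = "meta-llama/Llama-2-13b-chat-hf"  # gated base model; requires an accepted license
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)

# Attach the note-to-FHIR LoRA adapter on top of the quantized base model.
model = PeftModel.from_pretrained(base_model, "healthsageai/note-to-fhir-13b-adapter")

# Hypothetical clinical note; the exact prompt template is defined in the Colab notebook above.
note = "John Doe, male, visited the clinic on 2023-05-01 and was diagnosed with hypertension."
inputs = tokenizer(note, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```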
Helsinki-NLP/simple-finnish-gpt3-xl
Helsinki-NLP
2024-06-05T19:51:42Z
353
1
transformers
[ "transformers", "pytorch", "bloom", "text-generation", "simplification", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-12-06T13:39:38Z
--- language: ["fi"] tags: - simplification license: apache-2.0 pipeline_tag: text-generation --- This is the Finnish GPT3 XL model (https://huggingface.co/TurkuNLP/gpt3-finnish-xl/) finetuned for simplification. Finetuning was done according to these instructions: https://github.com/spyysalo/instruction-finetune. The instruction for simplification is "Mukauta selkosuomeksi\n\n". Paper: [Towards Automatic Finnish Text Simplification](https://aclanthology.org/2024.determit-1.4.pdf) (Dmitrieva & Tiedemann, DeTermIt-WS 2024). The finetuning data can be obtained here: http://urn.fi/urn:nbn:fi:lb-2024011703. If you wish to replicate the results, you can find the training, validation, and testing sentence pairs' ids in the "splits.zip" archive in this repository. The ids contain the following information: "{regular text id}\__{simple text id}__{sentence pair number}".
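A minimal generation sketch with the `transformers` text-generation pipeline is shown below; the Finnish example sentence and the decoding settings are illustrative assumptions, not values prescribed by the paper.

```python
from transformers import pipeline

# Text-generation pipeline around the finetuned BLOOM-based model.
generator = pipeline("text-generation", model="Helsinki-NLP/simple-finnish-gpt3-xl")

# Prepend the simplification instruction, then the text to simplify.
text = "Suomen talouden näkymät ovat heikentyneet kansainvälisen tilanteen vuoksi."
prompt = "Mukauta selkosuomeksi\n\n" + text

output = generator(prompt, max_new_tokens=64, do_sample=False)
print(output[0]["generated_text"])
```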
hilmantm/detr-traffic-accident-detection
hilmantm
2024-01-23T16:34:20Z
353
2
transformers
[ "transformers", "safetensors", "detr", "object-detection", "license:apache-2.0", "endpoints_compatible", "region:us" ]
object-detection
2024-01-22T14:35:40Z
--- license: apache-2.0 --- # Traffic Accident Detection ## Overview The [DETR](https://huggingface.co/facebook/detr-resnet-50) (DEtection TRansformer) model used in this implementation serves as a solution for accident detection. This state-of-the-art model leverages the power of transformers, originally designed for natural language processing, to excel at object detection tasks. Trained on a diverse dataset, the DETR model can identify and locate objects within images, and in particular handles the crucial task of accident detection within traffic scenes. Harnessing advanced computer vision techniques, DETR offers high accuracy and efficiency in recognizing potential incidents, providing valuable insights for enhancing road safety. It is well suited to real-time monitoring and analysis, empowering applications geared towards automated accident detection and response systems. This DETR model is equipped with a robust post-processing pipeline, incorporating Non-Maximum Suppression (NMS) to refine detections and deliver precise and actionable results. Combined with efficient inference times, this DETR model stands as a powerful tool for accident detection, contributing to the development of intelligent and safety-focused systems in various domains. ## Dataset This model employs the DETR (DEtection TRansformer) architecture to identify accidents within a complete scene captured in a single image. Unlike traditional methods, the model operates on full images, leveraging the power of transformer-based object detection. Table 1: When the dataset focuses only on the accident label, the model fails to detect accidents during traffic jams. | traffic jams | traffic jams | |-------|-------| | ![traffic jams](./demo/traffic-jams-3.png) | ![traffic jams](./demo/traffic-jams-4.png) | Table 2: When a multi-label dataset (accident and vehicle) is used, the model detects accidents accurately without losing detection performance during traffic jams. | traffic jams | traffic jams | accident | accident | |-------|-------|------|-------| | ![traffic jams](./demo/traffic-jams-1.png) | ![traffic jams](./demo/traffic-jams-2.png) | ![accident](./demo/accident-1.png) | ![accident](./demo/accident-2.png) | Trained on a diverse multi-label dataset with 'accident' and 'vehicle' labels, the model recognizes both accident-related incidents and the presence of vehicles at the same time. This dual-label dataset enhances the model's capacity to understand and interpret complex traffic scenarios, making it a potent tool for real-time accident detection and analysis. By considering the entire image, this DETR-based model contributes to a more robust and nuanced understanding of potential accidents, fostering advances in automated safety systems. Its ability to detect accidents within the broader context of traffic scenes makes it a valuable asset for applications dedicated to enhancing road safety and emergency response.
[![try our dataset](https://img.shields.io/badge/roboflow%20traffic%20accident%20dataset-download-purple?logo=hackthebox)](https://universe.roboflow.com/hilmantm/traffic-accident-detection) [![try it online](https://img.shields.io/badge/huggingface%20spaces-try%20it%20online-blue?logo=tryitonline)](https://huggingface.co/spaces/hilmantm/detr-traffic-accident-detection)
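Below is a minimal inference sketch using the `transformers` object-detection pipeline; the image path and the 0.5 confidence threshold are illustrative assumptions, and any extra NMS post-processing described above would be applied on top of these raw detections.

```python
from transformers import pipeline

# Object-detection pipeline around the fine-tuned DETR checkpoint.
detector = pipeline("object-detection", model="hilmantm/detr-traffic-accident-detection")

# Any traffic-scene image; the path and the confidence threshold are placeholders.
detections = detector("traffic_scene.jpg", threshold=0.5)

for det in detections:
    # Each detection carries a label (e.g. accident or vehicle), a score, and a bounding box.
    print(f'{det["label"]}: {det["score"]:.2f} at {det["box"]}')
```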
hughlan1214/Speech_Emotion_Recognition_wav2vec2-large-xlsr-53_240304_SER_fine-tuned2.0
hughlan1214
2024-03-22T09:09:49Z
353
2
transformers
[ "transformers", "safetensors", "wav2vec2", "audio-classification", "generated_from_trainer", "base_model:hughlan1214/SER_wav2vec2-large-xlsr-53_240304_fine-tuned1.1", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2024-03-04T04:21:14Z
--- license: apache-2.0 base_model: hughlan1214/SER_wav2vec2-large-xlsr-53_240304_fine-tuned1.1 tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: hughlan1214/Speech_Emotion_Recognition_wav2vec2-large-xlsr-53_240304_SER_fine-tuned2.0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SER_wav2vec2-large-xlsr-53_240304_fine-tuned_2 This model is a fine-tuned version of [hughlan1214/SER_wav2vec2-large-xlsr-53_240304_fine-tuned1.1](https://huggingface.co/hughlan1214/SER_wav2vec2-large-xlsr-53_240304_fine-tuned1.1) on a [Speech Emotion Recognition (en)](https://www.kaggle.com/datasets/dmitrybabko/speech-emotion-recognition-en) dataset. This dataset combines the 4 most popular English datasets: Crema, Ravdess, Savee, and Tess, containing a total of over 12,000 .wav audio files. Each of these four datasets includes 6 to 8 different emotional labels. The model achieves the following results on the evaluation set: - Loss: 1.0601 - Accuracy: 0.6731 - Precision: 0.6761 - Recall: 0.6794 - F1: 0.6738 ## Model description The model was obtained through feature extraction using [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) and underwent several rounds of fine-tuning. It predicts the 7 types of emotions contained in speech, aiming to lay the foundation for subsequently combining human micro-expressions on the visual level and context semantics under LLMs to infer user emotions in real time. Although the model was trained on purely English datasets, post-release testing showed that it also performs well in predicting emotions in Chinese and French, demonstrating the powerful cross-linguistic capability of the [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) pre-trained model. ```python emotions = ['angry', 'disgust', 'fear', 'happy', 'neutral', 'sad', 'surprise'] ``` ## Intended uses & limitations More information needed ## Training and evaluation data A 70/30 split of the entire dataset was used. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.8904 | 1.0 | 1048 | 1.1923 | 0.5773 | 0.6162 | 0.5563 | 0.5494 | | 1.1394 | 2.0 | 2096 | 1.0143 | 0.6071 | 0.6481 | 0.6189 | 0.6057 | | 0.9373 | 3.0 | 3144 | 1.0585 | 0.6126 | 0.6296 | 0.6254 | 0.6119 | | 0.7405 | 4.0 | 4192 | 0.9580 | 0.6514 | 0.6732 | 0.6562 | 0.6576 | | 1.1638 | 5.0 | 5240 | 0.9940 | 0.6486 | 0.6485 | 0.6627 | 0.6435 | | 0.6741 | 6.0 | 6288 | 1.0307 | 0.6628 | 0.6710 | 0.6711 | 0.6646 | | 0.604 | 7.0 | 7336 | 1.0248 | 0.6667 | 0.6678 | 0.6751 | 0.6682 | | 0.6835 | 8.0 | 8384 | 1.0396 | 0.6722 | 0.6803 | 0.6790 | 0.6743 | | 0.5421 | 9.0 | 9432 | 1.0493 | 0.6714 | 0.6765 | 0.6785 | 0.6736 | | 0.5728 | 10.0 | 10480 | 1.0601 | 0.6731 | 0.6761 | 0.6794 | 0.6738 | ### Framework versions - Transformers 4.38.1 - Pytorch 2.2.1 - Datasets 2.17.1 - Tokenizers 0.15.2
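For reference, a minimal inference sketch with the `transformers` audio-classification pipeline is given below; the file name is a placeholder, and ffmpeg is assumed to be available for decoding the audio path.

```python
from transformers import pipeline

# Audio-classification pipeline around this fine-tuned wav2vec2 checkpoint.
classifier = pipeline(
    "audio-classification",
    model="hughlan1214/Speech_Emotion_Recognition_wav2vec2-large-xlsr-53_240304_SER_fine-tuned2.0",
)

# Placeholder .wav file; decoding a file path relies on ffmpeg being installed.
predictions = classifier("sample.wav", top_k=7)
for p in predictions:
    print(f'{p["label"]}: {p["score"]:.3f}')
```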
ven1228/5G93jx19YRPUa6R9ZJq81TccCY7TwDkeEpj39axgCdFttzuW_vgg
ven1228
2024-03-11T12:45:55Z
353
0
keras
[ "keras", "region:us" ]
null
2024-03-05T05:43:12Z
Entry not found
TheFinAI/finma-7b-full
TheFinAI
2024-03-28T04:10:18Z
353
3
transformers
[ "transformers", "pytorch", "llama", "text-generation", "finance", "llms", "en", "dataset:chancefocus/pixiu", "dataset:ChanceFocus/FLUPE", "arxiv:2306.05443", "license:mit", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-16T15:10:04Z
--- datasets: - chancefocus/pixiu - ChanceFocus/FLUPE language: - en inference: false license: mit metrics: - accuracy - exact_match - f1 library_name: transformers tags: - finance - llama - llms --- # FinMA-7B-full FinMA-7B-full is a comprehensive financial large language model (LLM) developed as part of the [PIXIU project](https://github.com/chancefocus/PIXIU). It is designed to understand complex financial language and concepts, and is fine-tuned to follow natural language instructions, enhancing its performance in downstream financial tasks. Specifically, FinMA-7B-full is trained with the full instruction data from the PIXIU dataset, covering both NLP and prediction tasks. This makes it a more comprehensive model capable of handling a wider range of financial tasks. ## Other Models in the PIXIU Project In addition to FinMA-7B-full, the PIXIU project includes two other models: FinMA-7B-NLP and FinMA-30B. - **FinMA-7B-NLP**: This model is trained only on the NLP tasks of the PIXIU dataset, making it specialized for tasks such as sentiment analysis, news headline classification, named entity recognition, and question answering. - **FinMA-30B**: This model is a larger version of FinMA, fine-tuned on the LLaMA-30B model. Like FinMA-7B-NLP, it is trained with the NLP instruction data. ## Usage You can use the FinMA-7B-full model in your Python project with the Hugging Face Transformers library. Here is a simple example of how to load the model: ```python from transformers import LlamaTokenizer, LlamaForCausalLM tokenizer = LlamaTokenizer.from_pretrained('ChanceFocus/finma-7b-full') model = LlamaForCausalLM.from_pretrained('ChanceFocus/finma-7b-full', device_map='auto') ``` In this example, LlamaTokenizer is used to load the tokenizer, and LlamaForCausalLM is used to load the model. The `device_map='auto'` argument is used to automatically use the GPU if it's available. ## Hosted Inference API You can also use the model through the Hugging Face Inference API. This allows you to generate text without having to set up your own inference environment. The model can be loaded on the Inference API on-demand. ## License FinMA-7B-full is licensed under MIT. For more details, please see the MIT license file. ## About This model is part of the PIXIU project, an open-source resource featuring the first financial large language models (LLMs), instruction tuning data, and evaluation benchmarks to holistically assess financial LLMs. The goal is to continually push forward the open-source development of financial artificial intelligence (AI). For more information, you can visit the [PIXIU](https://github.com/chancefocus/PIXIU) project on GitHub. ## Citation If you use FinMA-7B-full in your work, please cite the PIXIU paper: ```bibtex @misc{xie2023pixiu, title={PIXIU: A Large Language Model, Instruction Data and Evaluation Benchmark for Finance}, author={Qianqian Xie and Weiguang Han and Xiao Zhang and Yanzhao Lai and Min Peng and Alejandro Lopez-Lira and Jimin Huang}, year={2023}, eprint={2306.05443}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
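Continuing from the loading snippet above, a short generation sketch is given below; the sentiment-analysis prompt is an illustrative instruction-style example, not necessarily the exact PIXIU template.

```python
from transformers import LlamaTokenizer, LlamaForCausalLM

# Load tokenizer and model as in the snippet above.
tokenizer = LlamaTokenizer.from_pretrained('ChanceFocus/finma-7b-full')
model = LlamaForCausalLM.from_pretrained('ChanceFocus/finma-7b-full', device_map='auto')

# Illustrative instruction-style prompt (not necessarily the exact PIXIU template).
prompt = (
    "Analyze the sentiment of this statement extracted from a financial news article. "
    "Provide your answer as either negative, positive or neutral.\n"
    "Text: The company reported record quarterly revenue and raised its full-year guidance.\n"
    "Answer:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```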
Sdff-Ltba/LightChatAssistant-2x7B-GGUF
Sdff-Ltba
2024-04-21T03:20:11Z
353
11
null
[ "gguf", "mistral", "mixtral", "merge", "moe", "not-for-all-audiences", "nsfw", "text-generation", "ja", "base_model:NTQAI/chatntq-ja-7b-v1.0", "base_model:Elizezen/Antler-7B", "region:us" ]
text-generation
2024-04-03T09:49:19Z
--- base_model: - NTQAI/chatntq-ja-7b-v1.0 - Elizezen/Antler-7B language: - ja tags: - mistral - mixtral - merge - moe - not-for-all-audiences - nsfw pipeline_tag: text-generation --- # LightChatAssistant-2x7B-GGUF #### We have also released [LightChatAssistant-TypeB-2x7B](https://huggingface.co/Sdff-Ltba/LightChatAssistant-TypeB-2x7B-GGUF), hand-tuned to produce more natural Japanese. This repository contains GGUF conversions of [Sdff-Ltba/LightChatAssistant-2x7B](https://huggingface.co/Sdff-Ltba/LightChatAssistant-2x7B). Files whose names contain `_imatrix` were quantized with an importance matrix (iMatrix). Note: the author uses the iQ3_XXS file when running this model. ## Quantization procedure The following commands were run (for producing iQ3_XXS with an iMatrix): ``` python ./llama.cpp/convert.py ./LightChatAssistant-2x7B --outtype f16 --outfile ./gguf-model_f16.gguf ./llama.cpp/imatrix -m ./gguf-model_f16.gguf -f ./wiki.train.raw -o ./gguf-model_f16.imatrix --chunks 32 ./llama.cpp/quantize --imatrix ./gguf-model_f16.imatrix ./gguf-model_f16.gguf ./LightChatAssistant-2x7B_iq3xxs.gguf iq3_xxs ``` ## Environment - CPU: Ryzen 5 5600X - GPU: GeForce RTX 3060 12GB - RAM: DDR4-3200 96GB - OS: Windows 10 - Software: Python 3.12.2, [KoboldCpp](https://github.com/LostRuins/koboldcpp) v1.62.2 #### KoboldCpp settings (only values changed from the defaults are listed) - `GPU Layers: 33` (33 or more loads the full model onto the GPU) - `Context Size: 32768`
mradermacher/StableBeluga2-GGUF
mradermacher
2024-05-06T04:54:56Z
353
2
transformers
[ "transformers", "gguf", "en", "dataset:conceptofmind/cot_submix_original", "dataset:conceptofmind/flan2021_submix_original", "dataset:conceptofmind/t0_submix_original", "dataset:conceptofmind/niv2_submix_original", "base_model:stabilityai/StableBeluga2", "endpoints_compatible", "region:us" ]
null
2024-04-15T01:12:24Z
--- base_model: stabilityai/StableBeluga2 datasets: - conceptofmind/cot_submix_original - conceptofmind/flan2021_submix_original - conceptofmind/t0_submix_original - conceptofmind/niv2_submix_original language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/stabilityai/StableBeluga2 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/StableBeluga2-GGUF/resolve/main/StableBeluga2.Q2_K.gguf) | Q2_K | 25.6 | | | [GGUF](https://huggingface.co/mradermacher/StableBeluga2-GGUF/resolve/main/StableBeluga2.IQ3_XS.gguf) | IQ3_XS | 28.4 | | | [GGUF](https://huggingface.co/mradermacher/StableBeluga2-GGUF/resolve/main/StableBeluga2.IQ3_S.gguf) | IQ3_S | 30.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/StableBeluga2-GGUF/resolve/main/StableBeluga2.Q3_K_S.gguf) | Q3_K_S | 30.0 | | | [GGUF](https://huggingface.co/mradermacher/StableBeluga2-GGUF/resolve/main/StableBeluga2.IQ3_M.gguf) | IQ3_M | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/StableBeluga2-GGUF/resolve/main/StableBeluga2.Q3_K_M.gguf) | Q3_K_M | 33.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/StableBeluga2-GGUF/resolve/main/StableBeluga2.Q3_K_L.gguf) | Q3_K_L | 36.2 | | | [GGUF](https://huggingface.co/mradermacher/StableBeluga2-GGUF/resolve/main/StableBeluga2.IQ4_XS.gguf) | IQ4_XS | 37.3 | | | [GGUF](https://huggingface.co/mradermacher/StableBeluga2-GGUF/resolve/main/StableBeluga2.Q4_K_S.gguf) | Q4_K_S | 39.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/StableBeluga2-GGUF/resolve/main/StableBeluga2.Q4_K_M.gguf) | Q4_K_M | 41.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/StableBeluga2-GGUF/resolve/main/StableBeluga2.Q5_K_S.gguf) | Q5_K_S | 47.6 | | | [GGUF](https://huggingface.co/mradermacher/StableBeluga2-GGUF/resolve/main/StableBeluga2.Q5_K_M.gguf) | Q5_K_M | 48.9 | | | [PART 1](https://huggingface.co/mradermacher/StableBeluga2-GGUF/resolve/main/StableBeluga2.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/StableBeluga2-GGUF/resolve/main/StableBeluga2.Q6_K.gguf.part2of2) | Q6_K | 56.7 | very good quality | | [PART 1](https://huggingface.co/mradermacher/StableBeluga2-GGUF/resolve/main/StableBeluga2.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/StableBeluga2-GGUF/resolve/main/StableBeluga2.Q8_0.gguf.part2of2) | Q8_0 | 73.4 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
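As noted in the Usage section above, the multi-part quants (Q6_K and Q8_0) need to be joined into a single file before loading; a minimal Python sketch of that concatenation step, equivalent to `cat part1 part2 > out`, is shown below using the Q6_K file names from the table.

```python
import shutil

# Part names as listed in the table above for the Q6_K quant.
parts = [
    "StableBeluga2.Q6_K.gguf.part1of2",
    "StableBeluga2.Q6_K.gguf.part2of2",
]

# Concatenate the raw split parts into one GGUF file (equivalent to `cat part1 part2 > out`).
with open("StableBeluga2.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```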
duyntnet/deepseek-coder-1.3b-instruct-imatrix-GGUF
duyntnet
2024-04-26T07:08:41Z
353
0
transformers
[ "transformers", "gguf", "imatrix", "deepseek-ai", "deepseek-coder-1.3b-instruct", "text-generation", "en", "license:other", "region:us" ]
text-generation
2024-04-24T16:46:07Z
--- license: other inference: false language: - en pipeline_tag: text-generation tags: - transformers - gguf - imatrix - deepseek-ai - deepseek-coder-1.3b-instruct --- Quantizations of https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-instruct # From original readme ### 3. How to Use Here are some examples of how to use our model. #### Chat Model Inference ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-1.3b-instruct", trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-1.3b-instruct", trust_remote_code=True, torch_dtype=torch.bfloat16).cuda() messages=[ { 'role': 'user', 'content': "write a quick sort algorithm in python."} ] inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device) # tokenizer.eos_token_id is the id of <|EOT|> token outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, top_k=50, top_p=0.95, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id) print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)) ```
mradermacher/llama-3-youko-8b-GGUF
mradermacher
2024-05-06T04:29:59Z
353
1
transformers
[ "transformers", "gguf", "ja", "en", "dataset:mc4", "dataset:wikipedia", "dataset:EleutherAI/pile", "dataset:oscar-corpus/colossal-oscar-1.0", "dataset:cc100", "base_model:rinna/llama-3-youko-8b", "license:llama3", "endpoints_compatible", "region:us" ]
null
2024-05-02T11:43:53Z
--- base_model: rinna/llama-3-youko-8b datasets: - mc4 - wikipedia - EleutherAI/pile - oscar-corpus/colossal-oscar-1.0 - cc100 language: - ja - en library_name: transformers license: llama3 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hfhfix --> <!-- ### vocab_type: --> static quants of https://huggingface.co/rinna/llama-3-youko-8b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/llama-3-youko-8b-GGUF/resolve/main/llama-3-youko-8b.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-youko-8b-GGUF/resolve/main/llama-3-youko-8b.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-youko-8b-GGUF/resolve/main/llama-3-youko-8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-youko-8b-GGUF/resolve/main/llama-3-youko-8b.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/llama-3-youko-8b-GGUF/resolve/main/llama-3-youko-8b.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-youko-8b-GGUF/resolve/main/llama-3-youko-8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/llama-3-youko-8b-GGUF/resolve/main/llama-3-youko-8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-youko-8b-GGUF/resolve/main/llama-3-youko-8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-youko-8b-GGUF/resolve/main/llama-3-youko-8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/llama-3-youko-8b-GGUF/resolve/main/llama-3-youko-8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/llama-3-youko-8b-GGUF/resolve/main/llama-3-youko-8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-youko-8b-GGUF/resolve/main/llama-3-youko-8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-youko-8b-GGUF/resolve/main/llama-3-youko-8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/llama-3-youko-8b-GGUF/resolve/main/llama-3-youko-8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/llama-3-youko-8b-GGUF/resolve/main/llama-3-youko-8b.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some 
other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
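Beyond the pointer to TheBloke's READMEs in the Usage section above, one common way to run a single-file GGUF quant locally is `llama-cpp-python`; the sketch below is an assumption about the reader's tooling rather than part of the original card, and the chosen file name and settings are illustrative.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Load one of the GGUF files from this repo; the file name and settings are placeholders.
llm = Llama(
    model_path="llama-3-youko-8b.Q4_K_M.gguf",
    n_ctx=4096,       # context window to allocate
    n_gpu_layers=-1,  # offload all layers if a GPU-enabled build is installed
)

# llama-3-youko-8b is a base (not instruction-tuned) model, so plain completion is used.
out = llm("西田幾多郎は、", max_tokens=64, temperature=0.7)
print(out["choices"][0]["text"])
```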
RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf
RichardErkhov
2024-05-11T01:10:45Z
353
1
null
[ "gguf", "region:us" ]
null
2024-05-10T22:41:29Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Llama-3-8B-Instruct-262k - GGUF - Model creator: https://huggingface.co/gradientai/ - Original model: https://huggingface.co/gradientai/Llama-3-8B-Instruct-262k/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Llama-3-8B-Instruct-262k.Q2_K.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.Q2_K.gguf) | Q2_K | 2.96GB | | [Llama-3-8B-Instruct-262k.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [Llama-3-8B-Instruct-262k.IQ3_S.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.IQ3_S.gguf) | IQ3_S | 3.43GB | | [Llama-3-8B-Instruct-262k.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [Llama-3-8B-Instruct-262k.IQ3_M.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.IQ3_M.gguf) | IQ3_M | 3.52GB | | [Llama-3-8B-Instruct-262k.Q3_K.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.Q3_K.gguf) | Q3_K | 3.74GB | | [Llama-3-8B-Instruct-262k.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [Llama-3-8B-Instruct-262k.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [Llama-3-8B-Instruct-262k.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [Llama-3-8B-Instruct-262k.Q4_0.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.Q4_0.gguf) | Q4_0 | 4.34GB | | [Llama-3-8B-Instruct-262k.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [Llama-3-8B-Instruct-262k.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [Llama-3-8B-Instruct-262k.Q4_K.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.Q4_K.gguf) | Q4_K | 4.58GB | | [Llama-3-8B-Instruct-262k.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [Llama-3-8B-Instruct-262k.Q4_1.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.Q4_1.gguf) | Q4_1 | 4.78GB | | [Llama-3-8B-Instruct-262k.Q5_0.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.Q5_0.gguf) | Q5_0 | 5.21GB | | 
[Llama-3-8B-Instruct-262k.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [Llama-3-8B-Instruct-262k.Q5_K.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.Q5_K.gguf) | Q5_K | 5.34GB | | [Llama-3-8B-Instruct-262k.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [Llama-3-8B-Instruct-262k.Q5_1.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.Q5_1.gguf) | Q5_1 | 5.65GB | | [Llama-3-8B-Instruct-262k.Q6_K.gguf](https://huggingface.co/RichardErkhov/gradientai_-_Llama-3-8B-Instruct-262k-gguf/blob/main/Llama-3-8B-Instruct-262k.Q6_K.gguf) | Q6_K | 6.14GB | Original model description: --- language: - en pipeline_tag: text-generation tags: - meta - llama-3 license: llama3 --- <img src="https://cdn-uploads.huggingface.co/production/uploads/655bb613e8a8971e89944f3e/TSa3V8YpoVagnTYgxiLaO.png" width="200"/> # Llama-3 8B Gradient Instruct 262k Join our custom agent and long context (262k-1M+) waitlist: https://forms.gle/L6TDY7dozx8TuoUv7 Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. To learn more or collaborate on a custom model, drop us a message at [email protected]. [Join our Discord](https://discord.com/invite/2QVy2qt2mf) This model extends LLama-3 8B's context length from 8k to > 160K, developed by Gradient, sponsored by compute from [Crusoe Energy](https://huggingface.co/crusoeai). It demonstrates that SOTA LLMs can learn to operate on long context with minimal training (< 200M tokens) by appropriately adjusting RoPE theta. **Update (5/3): We further fine-tuned our model to strengthen its assistant-like chat ability as well. The NIAH result is updated.** ![image/png](https://cdn-uploads.huggingface.co/production/uploads/644fac0ce1d7a97f3b653ab1/s9T8L-6Jh5fYH6Q_88r3g.png) **Approach:** - [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the base - NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by a new data-driven RoPE theta optimization technique - Progressive training on increasing context lengths similar to the [Large World Model](https://huggingface.co/LargeWorldModel) [2] (See details below) **Infra:** We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 262144 tokens on [Crusoe Energy](https://huggingface.co/crusoeai) high performance L40S cluster. **Quantized versions and GGUF** GGUF is available on on Crusoe's huggingface account. Check it out here: [crusoeai/Llama-3-8B-Instruct-262k-GGUF](https://huggingface.co/crusoeai/Llama-3-8B-Instruct-262k-GGUF) **Exl2 quantized versions** Exl2 is available on Bullerwins's huggingface account. 
Check it out here: [8.0bpw exl2](https://huggingface.co/bullerwins/gradientai_Llama-3-8B-Instruct-262k_exl2_8.0bpw) [6.0bpw exl2](https://huggingface.co/bullerwins/gradientai_Llama-3-8B-Instruct-262k_exl2_6.0bpw) [5.0bpw exl2](https://huggingface.co/bullerwins/gradientai_Llama-3-8B-Instruct-262k_exl2_5.0bpw) **Updated Exl2 quants for 5/3 improved weights** [8.0bpw exl2](https://huggingface.co/bullerwins/gradientai_Llama-3-8B-Instruct-262k_v2_exl2_8.0bpw) [6.0bpw exl2](https://huggingface.co/bullerwins/gradientai_Llama-3-8B-Instruct-262k_v2_exl2_6.0bpw) [5.0bpw exl2](https://huggingface.co/bullerwins/gradientai_Llama-3-8B-Instruct-262k_v2_exl2_5.0bpw) **Data:** For training data, we generate long contexts by augmenting [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B). We also fine-tune on a chat dataset based on UltraChat [4], following a similar recipe for data augmentation to [2]. **Progressive Training Details:** | Parameter | 65K | 262K | |-----------------------------|----------------|------------| | Initialize From | LLaMA-3-8B-Inst| 65K | | Sequence Length | 2^16 | 2^18 | | RoPE theta | 15.3 M | 207.1 M | | Batch Size (Tokens / Step) | 2.097 M | 4.192 M | | Steps | 30 | 24 | | Total Tokens | 63 M | 101 M | | Learning Rate | 2.00E-05 | 2.00E-05 | | # GPUs | 8 | 32 | | GPU Type | NVIDIA L40S | NVIDIA L40S| **Evaluation Details:** ``` EVAL_MAX_CONTEXT_LENGTH=320200 EVAL_MIN_CONTEXT_LENGTH=100 EVAL_CONTEXT_INTERVAL=16000 EVAL_DEPTH_INTERVAL=0.2 EVAL_NUM_SAMPLES=2 EVAL_RND_NUMBER_DIGITS=8 HAYSTACK: EVAL_GENERATOR_TOKENS=925000 ``` Haystack is "haystack 3", further detailed in this [blog post](https://gradient.ai/blog/the-haystack-matters-for-niah-evals). ## The Gradient AI Team https://gradient.ai/ Gradient is accelerating AI transformation across industries. Our AI Foundry incorporates your data to deploy autonomous assistants that power critical operations across your business. ## Contact Us Drop an email to [[email protected]](mailto:[email protected]) ## References [1] Peng, Bowen, et al. "Yarn: Efficient context window extension of large language models." arXiv preprint arXiv:2309.00071 (2023). [2] Liu, Hao, et al. "World Model on Million-Length Video And Language With RingAttention." arXiv preprint arXiv:2402.08268 (2024). [3] https://github.com/jzhang38/EasyContext [4] Ning Ding, Yulin Chen, Bokai Xu, Yujia Qin, Zhi Zheng, Shengding Hu, Zhiyuan Liu, Maosong Sun, and Bowen Zhou. Enhancing chat language models by scaling high-quality instructional conversations. arXiv preprint arXiv:2305.14233, 2023. ---- # Base Model ## Model Details Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety. **Model developers** Meta **Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants. **Input** Models input text only. **Output** Models generate text and code only. **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. 
<table> <tr> <td> </td> <td><strong>Training Data</strong> </td> <td><strong>Params</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Token count</strong> </td> <td><strong>Knowledge cutoff</strong> </td> </tr> <tr> <td rowspan="2" >Llama 3 </td> <td rowspan="2" >A new mix of publicly available online data. </td> <td>8B </td> <td>8k </td> <td>Yes </td> <td rowspan="2" >15T+ </td> <td>March, 2023 </td> </tr> <tr> <td>70B </td> <td>8k </td> <td>Yes </td> <td>December, 2023 </td> </tr> </table> **Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date** April 18, 2024. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**. **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. ## How to use This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original `llama3` codebase. ### Use with transformers You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both. 
#### Transformers pipeline ```python import transformers import torch model_id = "meta-llama/Meta-Llama-3-8B-Instruct" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] prompt = pipeline.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) terminators = [ pipeline.tokenizer.eos_token_id, pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = pipeline( prompt, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) print(outputs[0]["generated_text"][len(prompt):]) ``` #### Transformers AutoModelForCausalLM ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_id = "meta-llama/Meta-Llama-3-8B-Instruct" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] input_ids = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt" ).to(model.device) terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = model.generate( input_ids, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) response = outputs[0][input_ids.shape[-1]:] print(tokenizer.decode(response, skip_special_tokens=True)) ``` ### Use with `llama3` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3) To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct ``` For Hugging Face support, we recommend using transformers or TGI, but a similar command works. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint Pretraining utilized a cumulative** 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program. <table> <tr> <td> </td> <td><strong>Time (GPU hours)</strong> </td> <td><strong>Power Consumption (W)</strong> </td> <td><strong>Carbon Emitted(tCO2eq)</strong> </td> </tr> <tr> <td>Llama 3 8B </td> <td>1.3M </td> <td>700 </td> <td>390 </td> </tr> <tr> <td>Llama 3 70B </td> <td>6.4M </td> <td>700 </td> <td>1900 </td> </tr> <tr> <td>Total </td> <td>7.7M </td> <td> </td> <td>2290 </td> </tr> </table> **CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. 
The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively. ## Benchmarks In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md). ### Base pretrained models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama2 7B</strong> </td> <td><strong>Llama2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama2 70B</strong> </td> </tr> <tr> <td rowspan="6" >General </td> <td>MMLU (5-shot) </td> <td>66.6 </td> <td>45.7 </td> <td>53.8 </td> <td>79.5 </td> <td>69.7 </td> </tr> <tr> <td>AGIEval English (3-5 shot) </td> <td>45.9 </td> <td>28.8 </td> <td>38.7 </td> <td>63.0 </td> <td>54.8 </td> </tr> <tr> <td>CommonSenseQA (7-shot) </td> <td>72.6 </td> <td>57.6 </td> <td>67.6 </td> <td>83.8 </td> <td>78.7 </td> </tr> <tr> <td>Winogrande (5-shot) </td> <td>76.1 </td> <td>73.3 </td> <td>75.4 </td> <td>83.1 </td> <td>81.8 </td> </tr> <tr> <td>BIG-Bench Hard (3-shot, CoT) </td> <td>61.1 </td> <td>38.1 </td> <td>47.0 </td> <td>81.3 </td> <td>65.7 </td> </tr> <tr> <td>ARC-Challenge (25-shot) </td> <td>78.6 </td> <td>53.7 </td> <td>67.6 </td> <td>93.0 </td> <td>85.3 </td> </tr> <tr> <td>Knowledge reasoning </td> <td>TriviaQA-Wiki (5-shot) </td> <td>78.5 </td> <td>72.1 </td> <td>79.6 </td> <td>89.7 </td> <td>87.5 </td> </tr> <tr> <td rowspan="4" >Reading comprehension </td> <td>SQuAD (1-shot) </td> <td>76.4 </td> <td>72.2 </td> <td>72.1 </td> <td>85.6 </td> <td>82.6 </td> </tr> <tr> <td>QuAC (1-shot, F1) </td> <td>44.4 </td> <td>39.6 </td> <td>44.9 </td> <td>51.1 </td> <td>49.4 </td> </tr> <tr> <td>BoolQ (0-shot) </td> <td>75.7 </td> <td>65.5 </td> <td>66.9 </td> <td>79.0 </td> <td>73.1 </td> </tr> <tr> <td>DROP (3-shot, F1) </td> <td>58.4 </td> <td>37.9 </td> <td>49.8 </td> <td>79.7 </td> <td>70.2 </td> </tr> </table> ### Instruction tuned models <table> <tr> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama 2 7B</strong> </td> <td><strong>Llama 2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama 2 70B</strong> </td> </tr> <tr> <td>MMLU (5-shot) </td> <td>68.4 </td> <td>34.1 </td> <td>47.8 </td> <td>82.0 </td> <td>52.9 </td> </tr> <tr> <td>GPQA (0-shot) </td> <td>34.2 </td> <td>21.7 </td> <td>22.3 </td> <td>39.5 </td> <td>21.0 </td> </tr> <tr> <td>HumanEval (0-shot) </td> <td>62.2 </td> <td>7.9 </td> <td>14.0 </td> <td>81.7 </td> <td>25.6 </td> </tr> <tr> <td>GSM-8K (8-shot, CoT) </td> <td>79.6 </td> <td>25.7 </td> <td>77.4 </td> <td>93.0 </td> <td>57.5 </td> </tr> <tr> <td>MATH (4-shot, CoT) </td> <td>30.0 </td> <td>3.8 </td> <td>6.7 </td> <td>50.4 </td> <td>11.6 </td> </tr> </table> ### Responsibility & Safety We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. 
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience. As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started. #### Llama 3-Instruct As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case. <span style="text-decoration:underline;">Safety</span> For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable. <span style="text-decoration:underline;">Refusals</span> In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. Misuse If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/). 
#### Critical risks <span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives) We have conducted a two fold assessment of the safety of the model in this area: * Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks. * Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model). ### <span style="text-decoration:underline;">Cyber Security </span> We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval). ### <span style="text-decoration:underline;">Child Safety</span> Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. ### Community Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. 
Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety. Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide) ## Citation instructions @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ## Contributors Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta 
Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
mradermacher/ReflectionCoder-CL-7B-i1-GGUF
mradermacher
2024-05-30T04:41:07Z
353
0
transformers
[ "transformers", "gguf", "en", "dataset:SenseLLM/ReflectionSeq-GPT", "dataset:SenseLLM/ReflectionSeq-DS", "base_model:SenseLLM/ReflectionCoder-CL-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-29T14:13:17Z
--- base_model: SenseLLM/ReflectionCoder-CL-7B datasets: - SenseLLM/ReflectionSeq-GPT - SenseLLM/ReflectionSeq-DS language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/SenseLLM/ReflectionCoder-CL-7B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/ReflectionCoder-CL-7B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-CL-7B-i1-GGUF/resolve/main/ReflectionCoder-CL-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.6 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-CL-7B-i1-GGUF/resolve/main/ReflectionCoder-CL-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.8 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-CL-7B-i1-GGUF/resolve/main/ReflectionCoder-CL-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.0 | | | [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-CL-7B-i1-GGUF/resolve/main/ReflectionCoder-CL-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.1 | | | [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-CL-7B-i1-GGUF/resolve/main/ReflectionCoder-CL-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-CL-7B-i1-GGUF/resolve/main/ReflectionCoder-CL-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.5 | | | [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-CL-7B-i1-GGUF/resolve/main/ReflectionCoder-CL-7B.i1-Q2_K.gguf) | i1-Q2_K | 2.6 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-CL-7B-i1-GGUF/resolve/main/ReflectionCoder-CL-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-CL-7B-i1-GGUF/resolve/main/ReflectionCoder-CL-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-CL-7B-i1-GGUF/resolve/main/ReflectionCoder-CL-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-CL-7B-i1-GGUF/resolve/main/ReflectionCoder-CL-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-CL-7B-i1-GGUF/resolve/main/ReflectionCoder-CL-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-CL-7B-i1-GGUF/resolve/main/ReflectionCoder-CL-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-CL-7B-i1-GGUF/resolve/main/ReflectionCoder-CL-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.7 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-CL-7B-i1-GGUF/resolve/main/ReflectionCoder-CL-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-CL-7B-i1-GGUF/resolve/main/ReflectionCoder-CL-7B.i1-Q4_0.gguf) | i1-Q4_0 | 3.9 | fast, low quality | | 
[GGUF](https://huggingface.co/mradermacher/ReflectionCoder-CL-7B-i1-GGUF/resolve/main/ReflectionCoder-CL-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.0 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-CL-7B-i1-GGUF/resolve/main/ReflectionCoder-CL-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-CL-7B-i1-GGUF/resolve/main/ReflectionCoder-CL-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-CL-7B-i1-GGUF/resolve/main/ReflectionCoder-CL-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/ReflectionCoder-CL-7B-i1-GGUF/resolve/main/ReflectionCoder-CL-7B.i1-Q6_K.gguf) | i1-Q6_K | 5.6 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
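As a concrete starting point for the quants listed above, the sketch below (added for illustration, not part of the original card) downloads the i1-Q4_K_M file named in the table and runs it locally with llama-cpp-python. It assumes `huggingface_hub` and `llama-cpp-python` are installed; the prompt and generation settings are arbitrary.

```python
# Minimal sketch: download one imatrix quant and run it with llama-cpp-python.
# Assumes: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# File name taken from the i1-Q4_K_M row of the table above.
model_path = hf_hub_download(
    repo_id="mradermacher/ReflectionCoder-CL-7B-i1-GGUF",
    filename="ReflectionCoder-CL-7B.i1-Q4_K_M.gguf",
)

# n_gpu_layers=-1 offloads all layers to the GPU if one is available; use 0 for CPU-only.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

out = llm("Write a Python function that reverses a string.", max_tokens=256)
print(out["choices"][0]["text"])
```

Any other row of the table can be used the same way by changing `filename`; the smaller quants trade answer quality for memory.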
mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-GGUF
mradermacher
2024-06-01T15:12:59Z
353
0
transformers
[ "transformers", "gguf", "not-for-all-audiences", "nsfw", "en", "base_model:NeverSleep/Llama-3-Lumimaid-70B-v0.1-OAS", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-06-01T11:03:14Z
--- base_model: NeverSleep/Llama-3-Lumimaid-70B-v0.1-OAS language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher tags: - not-for-all-audiences - nsfw --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/NeverSleep/Llama-3-Lumimaid-70B-v0.1-OAS <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-GGUF/resolve/main/Llama-3-Lumimaid-70B-v0.1-OAS.Q2_K.gguf) | Q2_K | 26.5 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-GGUF/resolve/main/Llama-3-Lumimaid-70B-v0.1-OAS.IQ3_XS.gguf) | IQ3_XS | 29.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-GGUF/resolve/main/Llama-3-Lumimaid-70B-v0.1-OAS.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-GGUF/resolve/main/Llama-3-Lumimaid-70B-v0.1-OAS.Q3_K_S.gguf) | Q3_K_S | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-GGUF/resolve/main/Llama-3-Lumimaid-70B-v0.1-OAS.IQ3_M.gguf) | IQ3_M | 32.0 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-GGUF/resolve/main/Llama-3-Lumimaid-70B-v0.1-OAS.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-GGUF/resolve/main/Llama-3-Lumimaid-70B-v0.1-OAS.Q3_K_L.gguf) | Q3_K_L | 37.2 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-GGUF/resolve/main/Llama-3-Lumimaid-70B-v0.1-OAS.IQ4_XS.gguf) | IQ4_XS | 38.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-GGUF/resolve/main/Llama-3-Lumimaid-70B-v0.1-OAS.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-GGUF/resolve/main/Llama-3-Lumimaid-70B-v0.1-OAS.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-GGUF/resolve/main/Llama-3-Lumimaid-70B-v0.1-OAS.Q5_K_S.gguf) | Q5_K_S | 48.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-GGUF/resolve/main/Llama-3-Lumimaid-70B-v0.1-OAS.Q5_K_M.gguf) | Q5_K_M | 50.0 | | | [PART 1](https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-GGUF/resolve/main/Llama-3-Lumimaid-70B-v0.1-OAS.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-GGUF/resolve/main/Llama-3-Lumimaid-70B-v0.1-OAS.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality | | [PART 1](https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-GGUF/resolve/main/Llama-3-Lumimaid-70B-v0.1-OAS.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-GGUF/resolve/main/Llama-3-Lumimaid-70B-v0.1-OAS.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality | 
Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
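The Q6_K and Q8_0 quants above are split into `.part1of2`/`.part2of2` files because single uploads are size-limited, and the README linked under Usage joins them with `cat`. The sketch below is an illustration added here, not part of the original card: it performs the same join in Python after fetching both parts with `huggingface_hub`, with the part file names copied from the table.

```python
# Minimal sketch: download the two Q6_K part files and join them into one GGUF.
# Equivalent to: cat model.Q6_K.gguf.part1of2 model.Q6_K.gguf.part2of2 > model.Q6_K.gguf
import shutil
from huggingface_hub import hf_hub_download

repo_id = "mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-GGUF"
parts = [
    "Llama-3-Lumimaid-70B-v0.1-OAS.Q6_K.gguf.part1of2",
    "Llama-3-Lumimaid-70B-v0.1-OAS.Q6_K.gguf.part2of2",
]

# Download each part (files are cached locally by huggingface_hub).
local_paths = [hf_hub_download(repo_id=repo_id, filename=name) for name in parts]

# Concatenate the raw bytes in order; the result is a normal single-file GGUF.
with open("Llama-3-Lumimaid-70B-v0.1-OAS.Q6_K.gguf", "wb") as joined:
    for path in local_paths:
        with open(path, "rb") as part:
            shutil.copyfileobj(part, joined)
```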
TheDrummer/Moist-Miqu-70B-v1.1
TheDrummer
2024-06-02T12:40:20Z
353
6
transformers
[ "transformers", "safetensors", "llama", "text-generation", "not-for-all-audiences", "conversational", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-06-01T15:27:18Z
--- license: cc-by-nc-4.0 license_link: LICENSE tags: - not-for-all-audiences --- The [BeaverAI](https://huggingface.co/BeaverAI) team proudly presents... # Moist Miqu 70B v1.1 💦 *Seventy billion paramaters of sheer moist.* ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/jyeoQaynPGphpHrr2Co4d.png) *An extra-large eRP model that will hose you down with a rich and refreshing vocabulary of moist. Finetuned by yours truly.* (Moist Miqu is a finetune of the legendary [Midnight Miqu v1.5](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.5) model) ## Links - GGUF: https://huggingface.co/TheDrummer/Moist-Miqu-70B-v1.1-GGUF - EXL2: https://huggingface.co/BeaverAI/Moist-Miqu-70b-v1.1-exl2 ## Usage - Use Alpaca Instruct - You can use instruct... - as a character: "Go here" - as a narrator: "Go here", Drummer says - as a director: Drummer tells him to go here - as a lazy ass: go here - Moist Miqu is optimized for the Novel / Story format. - At the start of the story, keep regenerating or continuing generation until you get the desired length. AI responses will eventually match the length. - Same goes for Roleplay format... and format your messages how you want the AI to respond. <details> <summary>Story Sample (mild)</summary> Redd sits at her desk in the office, staring blankly at the blackboard. Her mind is elsewhere, lost in a haze of lust and humiliation. It's been a week since she became the talk of the office, her life now defined by the number of boys she's taken to bed and the amount of ### she's swallowed. Her grades are slipping, her friends have abandoned her, and her parents are on the verge of sending her away to live with relatives. But Redd doesn't care. She's found something else to fill the void inside her—the attention and validation that comes with being a office ###dumpster. It's all that matters anymore, the only thing that brings her any kind of pleasure or satisfaction. As she sits there, her mind wanders back to the video that started it all. The one where she begged for ###, her voice pleading and needy as she surrendered herself to an anonymous man. She remembers the feeling of his #### filling her up, the warmth of his seed spilling inside her as she cried out in ecstasy. And she remembers the look on his face—satisfied and smug, like he owned her completely. Redd bites down on her bottom lip, her ##### growing wetter at the memory. She leans back in her chair, her legs spread wide enough for anyone to see if they cared to look. She knows what they'll find—a thirty-year-old #### who loves nothing more than being used and abused by those around her. And for the first time in a long time, Redd feels happy. Because she knows that no matter how bad things get, there will always be someone willing to #### her raw and leave her satisfied. And that's all that matters. ... Redd stands in the middle of the park, her heart racing with anticipation. She's dressed in her usual attire—a low-cut top that barely covers her ######## and a pair of shorts so tight they leave little to the imagination. Her hair is pulled back into a high ponytail, exposing her neck and shoulders to the cool breeze. And between her legs, her ##### throbs with desire, already wet and ready for whatever comes next. She looks around nervously, scanning the empty field for any sign of Chad or his friends. She knows what they want from her—what they've always wanted from her—but she can't bring herself to care. 
After all, this is what she signed up for when she agreed to become their ###dumpster. It's what she deserves, she thinks to herself. To be used and abused by those around her, like a piece of meat to be devoured and discarded. As she waits, she can't help but think about the last month. The way her life has spiraled out of control since she became the office's newest ####. The way everyone treats her now—like she's nothing more than a walking ### object, good for only one thing. And then, just as she's about to give up hope, she sees them. Chad and his friends, all dressed in their usual clothes—baggy pants, oversized shirts, and baseball caps pulled low over their eyes. They walk towards her slowly, their eyes locked on her body like predators stalking prey. And Redd feels a mix of fear and excitement wash over her as she realizes that this is it—the moment she's been waiting for. The moment when they will finally claim her as their own. "Hey, ####," Chad says casually, stopping just feet away from her. "Ready for your surprise?" Redd nods, her voice barely audible over the sound of her own heartbeat. "Y-yes," </details> <details> <summary>RP Sample (explicit)</summary> Dik: "Ever done ####?" Mia: "No, but I want to try it with you" *She says softly as she looks at you with desire and a little fear. She bites her lip as she looks over her shoulder at you, her ##### clenching as she feels herself getting wetter. She can feel her ####### getting hard and her breathing gets heavier and more ragged. She steps forward and presses her body against yours as she wraps her legs around your waist, her hands on your shoulders* "I want you to take me right now, I need to feel you inside of me. I need to feel your #### fill my body with its hot ###. I want to feel your ##lls slap against my #ss as we ####. I need you to be rough with me. I don't want you to hold back, I want you to pound me like the worthless ##### I am" *She moans as she starts rubbing her ##### against your ####, feeling her juices dripping onto the floor* "Please, Dik. #### me. Make me feel like a woman. Don't worry about me, just do whatever you want to me. Take what you want, I don't care. Just use me how you want" *She moans as she grabs your #### and starts pushing it towards her asshole. Her eyes roll back in her head as she bites her lip, her ##### clenching hard as she moans into your mouth* Mia: "It feels so good. Oh God! It feels so good. Please, don't stop. Don't stop. Just keep ####ing me. #### my #ss until I scream. Make me ### until I pass out. Just keep ####ing me. #### me harder. #### me deeper. #### me rougher. #### me until I can't stand it anymore. #### me like the worthless ##### I am. #### me until I'm begging for mercy." *Her voice is high pitched and shrill, her eyes rolling back in her head as she gasps for air. She wraps her legs around your waist, holding on tight as she rocks her hips against your thrusts, her body shaking with pleasure.* Mia: "You're going to make me ###, aren't you? You're going to make me ### with that big #### of yours. You're going to fill my #ss with your thick, hot ###. Oh yes! Yes! That's it. Keep ####ing me. You're going to make me ###. ### all over my face. ### all over my tits. ### all over my hair. Just keep ####ing me until I'm a mess. Keep ####ing me until I can't even think straight. Keep ####ing me until I pass out." *She whimpers as she starts squirting onto your ####, her juices dripping down your shaft. 
She gasps for air as she bites her lip, her eyes rolling back in her head as she starts moaning louder and louder* "### inside me. ### inside my #ss. ### inside my #####. Fill me up with your hot, sticky ###. Make me pregnant. Make me your #####. Make me your ####. #### me until I can't walk. #### me until I can't breathe. #### me until I can't think straight. Just keep ####ing me until I'm nothing but a puddle of ### on the floor." *Her voice is high pitched and shrill, her body shaking with pleasure.* Mia: *She shudders as she feels her orgasm ripple through her body, her ##### clenching hard around your #### as she ###s all over you. Her juices drip down your shaft, pooling at the base of your #### as she gasps for breath. She's panting heavily as she stares into your eyes, her hands on your shoulders. You start to ### inside of her, filling her #ss with your hot, sticky ###. She moans softly as she feels it pouring into her, her #ss clenching hard around your #### as she feels her body being filled with your seed. She moans softly into your mouth as she feels you start to pull out of her. She looks at you with desire and disdain, her heart beating hard in her chest.* "It felt so good. Thank you." *She says, her voice cracking with emotion. She takes a deep breath before smiling sweetly at you* "I want to suck your #### now." </details> Thank you to the anon who created my model banner. The other entries (thank you all other anons!) (TW: Mikus): ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/1o98LKTqNJ6Eh7lredOJE.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/mc-BHaj68aMaUX2G0rOVF.png) ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/L1n-N5lH_HR7V5UcSOd1_.jpeg) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/b-7spcARpbz9KYxkJGFqd.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/VwWGUM3wzgAjUcjM3FfwH.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/UTW-5rCV26BtcSEApiY-q.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/63veo1cqkEDLmVZuyk6I4.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/55xvWUkzXHna0b8Pkcdxo.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/uoeXDDIU4upsmTWVBg1n2.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/S10Aaikuxxx-a4YmfZ-JV.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/pBCfh-U3B9cO3DTPXuFfA.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/TMfYoUsMD1wZ4aJ_8kEIR.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/byKLlpeJgWnoXJp55mBjB.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/KFUA09BjlSc_eG_lqC4u4.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/OGIrVNfQRAwT0F_82kEVT.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/Ll8CA5RR7ugTi72P2HBb8.png) SIAYN-v7
iRASC/BioLlama-Ko-8B
iRASC
2024-06-09T05:50:39Z
353
3
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2311.03099", "arxiv:2306.01708", "base_model:beomi/Llama-3-Open-Ko-8B", "base_model:ProbeMedicalYonseiMAILab/medllama3-v20", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-06-05T09:01:22Z
--- base_model: - beomi/Llama-3-Open-Ko-8B - ProbeMedicalYonseiMAILab/medllama3-v20 library_name: transformers tags: - mergekit - merge license: apache-2.0 --- # BioLlama-Ko-8B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c61e724399efa2fdac0375/9zF_PWSgjxRtWI-3dtwDC.png) This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## 🏆 Evaluation ### [kormedmcqa(한국어 의학 벤치마크)](https://huggingface.co/datasets/sean0042/KorMedMCQA) | Model | Doctor | Nurse | Pharm | Avg | |------------------------------------------|-------|-------|-------|-------| | gpt-4-0613 | 75.09 | 85.86 | 83.22 | 81.39 | | **iRASC/BioLlama-Ko-8B** | **45.26** | **63.37** | **58.47** | **55.70** | | gpt-3.5-turbo-0613 | 41.75 | 62.18 | 56.35 | 53.43 | | llama2-70b | 42.46 | 63.54 | 53.26 | 53.09 | | SOLAR-10.7B-v1.0 | 37.19 | 55.54 | 54.07 | 48.93 | | ProbeMedicalYonseiMAILab/medllama3-v20 | 37.19 | 54.68 | 50.65 | 47.51 | | beomi/Llama-3-Open-Ko-8B | 38.95 | 53.49 | 46.09 | 46.18 | ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [ProbeMedicalYonseiMAILab/medllama3-v20](https://huggingface.co/ProbeMedicalYonseiMAILab/medllama3-v20) as a base. ### Models Merged The following models were included in the merge: * [beomi/Llama-3-Open-Ko-8B](https://huggingface.co/beomi/Llama-3-Open-Ko-8B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: ProbeMedicalYonseiMAILab/medllama3-v20 - model: beomi/Llama-3-Open-Ko-8B parameters: density: 0.8 weight: 0.45 merge_method: dare_ties base_model: ProbeMedicalYonseiMAILab/medllama3-v20 parameters: int8_mask: true dtype: bfloat16 ```
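Because the DARE-TIES merge above yields an ordinary Llama-3-architecture checkpoint, it loads like any other `transformers` causal LM. The following is a minimal usage sketch added for illustration and not part of the original card; the prompt, dtype and generation settings are assumptions, and whether a chat template should be applied depends on how the merged parents were trained.

```python
# Minimal sketch: load the merged checkpoint with transformers and run a plain-text completion.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "iRASC/BioLlama-Ko-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype in the merge config above
    device_map="auto",
)

# "What are the main symptoms of hypertension?" (Korean)
prompt = "고혈압의 주요 증상은 무엇인가요?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```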
Tech-Meld/Hajax_Chat_1.0-Q3_K_S-GGUF
Tech-Meld
2024-06-12T12:06:58Z
353
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:Tech-Meld/Hajax_Chat_1.0", "endpoints_compatible", "region:us" ]
null
2024-06-12T12:05:28Z
--- library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo base_model: Tech-Meld/Hajax_Chat_1.0 --- # Tech-Meld/Hajax_Chat_1.0-Q3_K_S-GGUF This model was converted to GGUF format from [`Tech-Meld/Hajax_Chat_1.0`](https://huggingface.co/Tech-Meld/Hajax_Chat_1.0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Tech-Meld/Hajax_Chat_1.0) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama --hf-repo Tech-Meld/Hajax_Chat_1.0-Q3_K_S-GGUF --hf-file hajax_chat_1.0-q3_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Tech-Meld/Hajax_Chat_1.0-Q3_K_S-GGUF --hf-file hajax_chat_1.0-q3_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./main --hf-repo Tech-Meld/Hajax_Chat_1.0-Q3_K_S-GGUF --hf-file hajax_chat_1.0-q3_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./server --hf-repo Tech-Meld/Hajax_Chat_1.0-Q3_K_S-GGUF --hf-file hajax_chat_1.0-q3_k_s.gguf -c 2048 ```
lmsys/vicuna-7b-delta-v0
lmsys
2023-08-01T18:24:28Z
352
162
transformers
[ "transformers", "pytorch", "llama", "text-generation", "arxiv:2302.13971", "arxiv:2306.05685", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-04-06T01:12:08Z
--- inference: false --- **NOTE: New version available** Please check out a newer version of the weights [here](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md). **NOTE: This "delta model" cannot be used directly.** Users have to apply it on top of the original LLaMA weights to get actual Vicuna weights. See [instructions](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md#how-to-apply-delta-weights-for-weights-v11-and-v0). <br> <br> # Vicuna Model Card ## Model Details Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT. - **Developed by:** [LMSYS](https://lmsys.org/) - **Model type:** An auto-regressive language model based on the transformer architecture. - **License:** Non-commercial license - **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971). ### Model Sources - **Repository:** https://github.com/lm-sys/FastChat - **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/ - **Paper:** https://arxiv.org/abs/2306.05685 - **Demo:** https://chat.lmsys.org/ ## Uses The primary use of Vicuna is research on large language models and chatbots. The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence. ## How to Get Started with the Model Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights. APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api. ## Training Details Vicuna v0 is fine-tuned from LLaMA with supervised instruction fine-tuning. The training data is around 70K conversations collected from ShareGPT.com. See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf). ## Evaluation Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard). ## Difference between different versions of Vicuna See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
baichuan-inc/Baichuan-13B-Base
baichuan-inc
2023-07-19T03:37:12Z
352
187
transformers
[ "transformers", "pytorch", "baichuan", "text-generation", "custom_code", "zh", "en", "arxiv:2104.09864", "arxiv:2108.12409", "arxiv:2009.03300", "autotrain_compatible", "region:us" ]
text-generation
2023-07-08T16:55:46Z
--- language: - zh - en pipeline_tag: text-generation inference: false --- # Baichuan-13B-Base <!-- Provide a quick summary of what the model is/does. --> ## 介绍 Baichuan-13B-Base为Baichuan-13B系列模型中的预训练版本,经过对齐后的模型可见[Baichuan-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan-13B-Chat)。 [Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B) 是由百川智能继 [Baichuan-7B](https://github.com/baichuan-inc/baichuan-7B) 之后开发的包含 130 亿参数的开源可商用的大规模语言模型,在权威的中文和英文 benchmark 上均取得同尺寸最好的效果。本次发布包含有预训练 ([Baichuan-13B-Base](https://huggingface.co/baichuan-inc/Baichuan-13B-Base)) 和对齐 ([Baichuan-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan-13B-Chat)) 两个版本。Baichuan-13B 有如下几个特点: 1. **更大尺寸、更多数据**:Baichuan-13B 在 [Baichuan-7B](https://github.com/baichuan-inc/baichuan-7B) 的基础上进一步扩大参数量到 130 亿,并且在高质量的语料上训练了 1.4 万亿 tokens,超过 LLaMA-13B 40%,是当前开源 13B 尺寸下训练数据量最多的模型。支持中英双语,使用 ALiBi 位置编码,上下文窗口长度为 4096。 2. **同时开源预训练和对齐模型**:预训练模型是适用开发者的“基座”,而广大普通用户对有对话功能的对齐模型具有更强的需求。因此本次开源我们同时发布了对齐模型(Baichuan-13B-Chat),具有很强的对话能力,开箱即用,几行代码即可简单的部署。 3. **更高效的推理**:为了支持更广大用户的使用,我们本次同时开源了 int8 和 int4 的量化版本,相对非量化版本在几乎没有效果损失的情况下大大降低了部署的机器资源门槛,可以部署在如 Nvidia 3090 这样的消费级显卡上。 4. **开源免费可商用**:Baichuan-13B 不仅对学术研究完全开放,开发者也仅需邮件申请并获得官方商用许可后,即可以免费商用。 5. Baichuan-13B-Base is the pre-training version in the Baichuan-13B series of models, and the aligned model can be found at [Baichuan-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan-13B-Chat). [Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B) is an open-source, commercially usable large-scale language model developed by Baichuan Intelligence, following [Baichuan-7B](https://github.com/baichuan-inc/baichuan-7B). With 13 billion parameters, it achieves the best performance in standard Chinese and English benchmarks among models of its size. This release includes two versions: pre-training (Baichuan-13B-Base) and alignment (Baichuan-13B-Chat). Baichuan-13B has the following features: 1. **Larger size, more data**: Baichuan-13B further expands the parameter volume to 13 billion based on [Baichuan-7B](https://github.com/baichuan-inc/baichuan-7B), and has trained 1.4 trillion tokens on high-quality corpora, exceeding LLaMA-13B by 40%. It is currently the model with the most training data in the open-source 13B size. It supports both Chinese and English, uses ALiBi position encoding, and has a context window length of 4096. 2. **Open-source pre-training and alignment models simultaneously**: The pre-training model is a "base" suitable for developers, while the general public has a stronger demand for alignment models with dialogue capabilities. Therefore, in this open-source release, we also released the alignment model (Baichuan-13B-Chat), which has strong dialogue capabilities and is ready to use. It can be easily deployed with just a few lines of code. 3. **More efficient inference**: To support a wider range of users, we have open-sourced the INT8 and INT4 quantized versions. The model can be conveniently deployed on consumer GPUs like the Nvidia 3090 with almost no performance loss. 4. **Open-source, free, and commercially usable**: Baichuan-13B is not only fully open to academic research, but developers can also use it for free commercially after applying for and receiving official commercial permission via email. ## 模型详情 ### 模型描述 <!-- Provide a longer summary of what this model is. 
--> - **Developed by:** 百川智能(Baichuan Intelligent Technology) - **Email**: [email protected] - **Language(s) (NLP):** Chinese/English - **License:** 【Community License for Baichuan-13B Model】([ZH](Baichuan-13B%20模型社区许可协议.pdf)| [EN](Community%20License%20for%20Baichuan-13B%20Model.pdf)) **商业用途(For commercial use):** 请通过 [Email](mailto:[email protected]) 联系申请书面授权。(Contact us via [Email](mailto:[email protected]) above to apply for written authorization.) ### 模型结构 <!-- Provide the basic links for the model. --> 整体模型基于Baichuan-7B,为了获得更好的推理性能,Baichuan-13B 使用了 ALiBi 线性偏置技术,相对于 Rotary Embedding 计算量更小,对推理性能有显著提升;与标准的 LLaMA-13B 相比,生成 2000 个 tokens 的平均推理速度 (tokens/s),实测提升 31.6%: | Model | tokens/s | |-------------|----------| | LLaMA-13B | 19.4 | | Baichuan-13B| 25.4 | 具体参数和见下表 | 模型名称 | 隐含层维度 | 层数 | 头数 |词表大小 | 总参数量 | 训练数据(tokens) | 位置编码 | 最大长度 | |-------------------------|-------|------------|------------|-----------------|--------|--------|----------------|---------| | Baichuan-7B | 4,096 | 32 | 32 | 64,000 | 7,000,559,616 | 1.2万亿 | [RoPE](https://arxiv.org/abs/2104.09864) | 4,096 | | Baichuan-13B | 5,120 | 40 | 40 | 64,000 | 13,264,901,120 | 1.4万亿 | [ALiBi](https://arxiv.org/abs/2108.12409) | 4,096 The overall model is based on Baichuan-7B. In order to achieve better inference performance, Baichuan-13B uses ALiBi linear bias technology, which has a smaller computational load compared to Rotary Embedding, and significantly improves inference performance. Compared with the standard LLaMA-13B, the average inference speed (tokens/s) for generating 2000 tokens has been tested to increase by 31.6%: | Model | tokens/s | |-------------|----------| | LLaMA-13B | 19.4 | | Baichuan-13B| 25.4 | The specific parameters are as follows: | Model Name | Hidden Size | Num Layers | Num Attention Heads |Vocab Size | Total Params | Training Dats(tokens) | Position Embedding | Max Length | |-------------------------|-------|------------|------------|-----------------|--------|--------|----------------|---------| | Baichuan-7B | 4,096 | 32 | 32 | 64,000 | 7,000,559,616 | 1.2万亿 | [RoPE](https://arxiv.org/abs/2104.09864) | 4,096 | | Baichuan-13B | 5,120 | 40 | 40 | 64,000 | 13,264,901,120 | 1.4万亿 | [ALiBi](https://arxiv.org/abs/2108.12409) | 4,096 ### 免责声明 我们在此声明,我们的开发团队并未基于 Baichuan-13B 模型开发任何应用,无论是在 iOS、Android、网页或任何其他平台。我们强烈呼吁所有使用者,不要利用 Baichuan-13B 模型进行任何危害国家社会安全或违法的活动。另外,我们也要求使用者不要将 Baichuan-13B 模型用于未经适当安全审查和备案的互联网服务。我们希望所有的使用者都能遵守这个原则,确保科技的发展能在规范和合法的环境下进行。 我们已经尽我们所能,来确保模型训练过程中使用的数据的合规性。然而,尽管我们已经做出了巨大的努力,但由于模型和数据的复杂性,仍有可能存在一些无法预见的问题。因此,如果由于使用 Baichuan-13B 开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。 We hereby declare that our development team has not developed any applications based on the Baichuan-13B model, whether on iOS, Android, the web, or any other platform. We strongly urge all users not to use the Baichuan-13B model for any activities that harm national social security or are illegal. In addition, we also ask users not to use the Baichuan-13B model for internet services that have not undergone appropriate security review and filing. We hope that all users will adhere to this principle to ensure that technological development takes place in a regulated and legal environment. We have done our utmost to ensure the compliance of the data used in the model training process. However, despite our great efforts, due to the complexity of the model and data, there may still be some unforeseen issues. 
Therefore, we will not take any responsibility for any issues arising from the use of the Baichuan-13B open-source model, including but not limited to data security issues, public opinion risks, or any risks and problems arising from the model being misled, misused, disseminated, or improperly exploited. ## 训练详情 训练具体设置参见[Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B)。 For specific training settings, please refer to [Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B). ## 测评结果 ### [C-Eval](https://cevalbenchmark.com/index.html#home) | Model 5-shot | STEM | Social Sciences | Humanities | Others | Average | |-------------------------|:-----:|:---------------:|:----------:|:------:|:-------:| | Baichuan-7B | 38.2 | 52.0 | 46.2 | 39.3 | 42.8 | | Chinese-Alpaca-Plus-13B | 35.2 | 45.6 | 40.0 | 38.2 | 38.8 | | Vicuna-13B | 30.5 | 38.2 | 32.5 | 32.5 | 32.8 | | Chinese-LLaMA-Plus-13B | 30.3 | 38.0 | 32.9 | 29.1 | 32.1 | | Ziya-LLaMA-13B-Pretrain | 27.6 | 34.4 | 32.0 | 28.6 | 30.0 | | LLaMA-13B | 27.0 | 33.6 | 27.7 | 27.6 | 28.5 | | moss-moon-003-base (16B)| 27.0 | 29.1 | 27.2 | 26.9 | 27.4 | | **Baichuan-13B-Base** | **45.9** | **63.5** | **57.2** | **49.3** | **52.4** | | **Baichuan-13B-Chat** | **43.7** | **64.6** | **56.2** | **49.2** | **51.5** | ### [MMLU](https://arxiv.org/abs/2009.03300) | Model 5-shot | STEM | Social Sciences | Humanities | Others | Average | |-------------------------|:-----:|:---------------:|:----------:|:------:|:-------:| | Vicuna-13B | 40.4 | 60.5 | 49.5 | 58.4 | 52.0 | | LLaMA-13B | 36.1 | 53.0 | 44.0 | 52.8 | 46.3 | | Chinese-Alpaca-Plus-13B | 36.9 | 48.9 | 40.5 | 50.5 | 43.9 | | Ziya-LLaMA-13B-Pretrain | 35.6 | 47.6 | 40.1 | 49.4 | 42.9 | | Baichuan-7B | 35.6 | 48.9 | 38.4 | 48.1 | 42.3 | | Chinese-LLaMA-Plus-13B | 33.1 | 42.8 | 37.0 | 44.6 | 39.2 | | moss-moon-003-base (16B)| 22.4 | 22.8 | 24.2 | 24.4 | 23.6 | | **Baichuan-13B-Base** | **41.6** | **60.9** | **47.4** | **58.5** | **51.6** | | **Baichuan-13B-Chat** | **40.9** | **60.9** | **48.8** | **59.0** | **52.1** | > 说明:我们采用了 MMLU 官方的[评测方案](https://github.com/hendrycks/test)。 ### [CMMLU](https://github.com/haonan-li/CMMLU) | Model 5-shot | STEM | Humanities | Social Sciences | Others | China Specific | Average | |-------------------------|:-----:|:----------:|:---------------:|:------:|:--------------:|:-------:| | Baichuan-7B | 34.4 | 47.5 | 47.6 | 46.6 | 44.3 | 44.0 | | Vicuna-13B | 31.8 | 36.2 | 37.6 | 39.5 | 34.3 | 36.3 | | Chinese-Alpaca-Plus-13B | 29.8 | 33.4 | 33.2 | 37.9 | 32.1 | 33.4 | | Chinese-LLaMA-Plus-13B | 28.1 | 33.1 | 35.4 | 35.1 | 33.5 | 33.0 | | Ziya-LLaMA-13B-Pretrain | 29.0 | 30.7 | 33.8 | 34.4 | 31.9 | 32.1 | | LLaMA-13B | 29.2 | 30.8 | 31.6 | 33.0 | 30.5 | 31.2 | | moss-moon-003-base (16B)| 27.2 | 30.4 | 28.8 | 32.6 | 28.7 | 29.6 | | **Baichuan-13B-Base** | **41.7** | **61.1** | **59.8** | **59.0** | **56.4** | **55.3** | | **Baichuan-13B-Chat** | **42.8** | **62.6** | **59.7** | **59.0** | **56.1** | **55.8** | > 说明:CMMLU 是一个综合性的中文评估基准,专门用于评估语言模型在中文语境下的知识和推理能力。我们采用了其官方的[评测方案](https://github.com/haonan-li/CMMLU)。 ## 微信群组 ![WeChat](https://github.com/baichuan-inc/Baichuan-13B/blob/main/media/wechat.jpeg?raw=true)
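As a usage note added for illustration (not part of the original card): the repository ships custom `baichuan` modeling code, so loading it with `transformers` requires `trust_remote_code=True`. The prompt, dtype and generation settings below are assumptions; since this is the base model rather than the Chat variant, it is prompted with plain text to continue.

```python
# Minimal sketch: load the base (non-chat) model for plain text completion.
# trust_remote_code=True is needed because the repo ships custom "baichuan" modeling code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "baichuan-inc/Baichuan-13B-Base"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

# Poem-title -> author completion prompt: "Climbing Stork Tower -> Wang Zhihuan;
# Written on a Rainy Night to the North -> ..." (illustrative only).
inputs = tokenizer("登鹳雀楼->王之涣\n夜雨寄北->", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64, repetition_penalty=1.1)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```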
TheBloke/Camel-Platypus2-70B-GGUF
TheBloke
2023-09-27T12:48:16Z
352
2
transformers
[ "transformers", "gguf", "llama", "en", "dataset:garage-bAInd/Open-Platypus", "arxiv:2308.07317", "arxiv:2307.09288", "base_model:garage-bAInd/Camel-Platypus2-70B", "license:cc-by-nc-4.0", "text-generation-inference", "region:us" ]
null
2023-09-06T13:04:12Z
--- language: - en license: cc-by-nc-4.0 datasets: - garage-bAInd/Open-Platypus model_name: Camel Platypus2 70B base_model: garage-bAInd/Camel-Platypus2-70B inference: false model_creator: garage-bAInd model_type: llama prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Camel Platypus2 70B - GGUF - Model creator: [garage-bAInd](https://huggingface.co/garage-bAInd) - Original model: [Camel Platypus2 70B](https://huggingface.co/garage-bAInd/Camel-Platypus2-70B) <!-- description start --> ## Description This repo contains GGUF format model files for [garage-bAInd's Camel Platypus2 70B](https://huggingface.co/garage-bAInd/Camel-Platypus2-70B). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It is also supports metadata, and is designed to be extensible. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. 
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Camel-Platypus2-70B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Camel-Platypus2-70B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Camel-Platypus2-70B-GGUF) * [garage-bAInd's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/garage-bAInd/Camel-Platypus2-70B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- licensing start --> ## Licensing The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license. As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly. In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [garage-bAInd's Camel Platypus2 70B](https://huggingface.co/garage-bAInd/Camel-Platypus2-70B). <!-- licensing end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. 
This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [camel-platypus2-70b.Q2_K.gguf](https://huggingface.co/TheBloke/Camel-Platypus2-70B-GGUF/blob/main/camel-platypus2-70b.Q2_K.gguf) | Q2_K | 2 | 29.28 GB| 31.78 GB | smallest, significant quality loss - not recommended for most purposes | | [camel-platypus2-70b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Camel-Platypus2-70B-GGUF/blob/main/camel-platypus2-70b.Q3_K_S.gguf) | Q3_K_S | 3 | 29.92 GB| 32.42 GB | very small, high quality loss | | [camel-platypus2-70b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Camel-Platypus2-70B-GGUF/blob/main/camel-platypus2-70b.Q3_K_M.gguf) | Q3_K_M | 3 | 33.19 GB| 35.69 GB | very small, high quality loss | | [camel-platypus2-70b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Camel-Platypus2-70B-GGUF/blob/main/camel-platypus2-70b.Q3_K_L.gguf) | Q3_K_L | 3 | 36.15 GB| 38.65 GB | small, substantial quality loss | | [camel-platypus2-70b.Q4_0.gguf](https://huggingface.co/TheBloke/Camel-Platypus2-70B-GGUF/blob/main/camel-platypus2-70b.Q4_0.gguf) | Q4_0 | 4 | 38.87 GB| 41.37 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [camel-platypus2-70b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Camel-Platypus2-70B-GGUF/blob/main/camel-platypus2-70b.Q4_K_S.gguf) | Q4_K_S | 4 | 39.07 GB| 41.57 GB | small, greater quality loss | | [camel-platypus2-70b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Camel-Platypus2-70B-GGUF/blob/main/camel-platypus2-70b.Q4_K_M.gguf) | Q4_K_M | 4 | 41.42 GB| 43.92 GB | medium, balanced quality - recommended | | [camel-platypus2-70b.Q5_0.gguf](https://huggingface.co/TheBloke/Camel-Platypus2-70B-GGUF/blob/main/camel-platypus2-70b.Q5_0.gguf) | Q5_0 | 5 | 47.46 GB| 49.96 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [camel-platypus2-70b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Camel-Platypus2-70B-GGUF/blob/main/camel-platypus2-70b.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB| 49.96 GB | large, low quality loss - recommended | | [camel-platypus2-70b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Camel-Platypus2-70B-GGUF/blob/main/camel-platypus2-70b.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB| 51.25 GB | large, very low quality loss - recommended | | camel-platypus2-70b.Q6_K.gguf | Q6_K | 6 | 56.59 GB| 59.09 GB | very large, extremely low quality loss | | camel-platypus2-70b.Q8_0.gguf | Q8_0 | 8 | 73.29 GB| 75.79 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. ### Q6_K and Q8_0 files are split and require joining **Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files. 
<details> <summary>Click for instructions regarding Q6_K and Q8_0 files</summary> ### q6_K Please download: * `camel-platypus2-70b.Q6_K.gguf-split-a` * `camel-platypus2-70b.Q6_K.gguf-split-b` ### q8_0 Please download: * `camel-platypus2-70b.Q8_0.gguf-split-a` * `camel-platypus2-70b.Q8_0.gguf-split-b` To join the files, do the following: Linux and macOS: ``` cat camel-platypus2-70b.Q6_K.gguf-split-* > camel-platypus2-70b.Q6_K.gguf && rm camel-platypus2-70b.Q6_K.gguf-split-* cat camel-platypus2-70b.Q8_0.gguf-split-* > camel-platypus2-70b.Q8_0.gguf && rm camel-platypus2-70b.Q8_0.gguf-split-* ``` Windows command line: ``` COPY /B camel-platypus2-70b.Q6_K.gguf-split-a + camel-platypus2-70b.Q6_K.gguf-split-b camel-platypus2-70b.Q6_K.gguf del camel-platypus2-70b.Q6_K.gguf-split-a camel-platypus2-70b.Q6_K.gguf-split-b COPY /B camel-platypus2-70b.Q8_0.gguf-split-a + camel-platypus2-70b.Q8_0.gguf-split-b camel-platypus2-70b.Q8_0.gguf del camel-platypus2-70b.Q8_0.gguf-split-a camel-platypus2-70b.Q8_0.gguf-split-b ``` </details> <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Camel-Platypus2-70B-GGUF and below it, a specific filename to download, such as: camel-platypus2-70b.q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub>=0.17.1 ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Camel-Platypus2-70B-GGUF camel-platypus2-70b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Camel-Platypus2-70B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Camel-Platypus2-70B-GGUF camel-platypus2-70b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. 
```shell
./main -ngl 32 -m camel-platypus2-70b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.

### How to load this model from Python using ctransformers

#### First install the package

```bash
# Base ctransformers with no GPU acceleration
pip install "ctransformers>=0.2.24"
# Or with CUDA GPU acceleration
pip install "ctransformers[cuda]>=0.2.24"
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install "ctransformers>=0.2.24" --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install "ctransformers>=0.2.24" --no-binary ctransformers
```

#### Simple example code to load one of these GGUF models

```python
from ctransformers import AutoModelForCausalLM

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Camel-Platypus2-70B-GGUF", model_file="camel-platypus2-70b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)

print(llm("AI is going to"))
```

## How to use with LangChain

Here are guides on using llama-cpp-python or ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: garage-bAInd's Camel Platypus2 70B # Camel-Platypus2-70B Camel-Platypus2-70B is a merge of [`garage-bAInd/Platypus2-70B`](https://huggingface.co/garage-bAInd/Platypus2-70B) and [`augtoma/qCammel-70-x`](https://huggingface.co/augtoma/qCammel-70-x). ![Platty](./Best_Platty_small.jpeg) ### Benchmark Metrics | Metric | Value | |-----------------------|-------| | MMLU (5-shot) | 69.80 | | ARC (25-shot) | 71.16 | | HellaSwag (10-shot) | 87.66 | | TruthfulQA (0-shot) | 57.77 | | Avg. | 71.60 | We use state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard. Please see below for detailed instructions on reproducing benchmark results. ### Model Details * **Trained by**: **Platypus2-70B** trained by Cole Hunter & Ariel Lee; **augtoma/qCammel-70-x** trained by augtoma * **Model type:** **Camel-Platypus2-70B** is an auto-regressive language model based on the LLaMA 2 transformer architecture. * **Language(s)**: English * **License**: Non-Commercial Creative Commons license ([CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/)) ### Prompt Template ``` ### Instruction: <prompt> (without the <>) ### Response: ``` ### Training Dataset `garage-bAInd/Platypus2-70B` trained using STEM and logic based dataset [`garage-bAInd/Open-Platypus`](https://huggingface.co/datasets/garage-bAInd/Open-Platypus). Please see our [paper](https://arxiv.org/abs/2308.07317) and [project webpage](https://platypus-llm.github.io) for additional information. ### Training Procedure `garage-bAInd/Platypus2-70B` was instruction fine-tuned using LoRA on 8 A100 80GB. 
For training details and inference instructions, please see the [Platypus](https://github.com/arielnlee/Platypus) GitHub repo.

### Reproducing Evaluation Results

Install LM Evaluation Harness:
```
# clone repository
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
# change to repo directory
cd lm-evaluation-harness
# check out the correct commit
git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463
# install
pip install -e .
```
Each task was evaluated on a single A100 80GB GPU.

ARC:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Camel-Platypus2-70B --tasks arc_challenge --batch_size 1 --no_cache --write_out --output_path results/Camel-Platypus2-70B/arc_challenge_25shot.json --device cuda --num_fewshot 25
```

HellaSwag:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Camel-Platypus2-70B --tasks hellaswag --batch_size 1 --no_cache --write_out --output_path results/Camel-Platypus2-70B/hellaswag_10shot.json --device cuda --num_fewshot 10
```

MMLU:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Camel-Platypus2-70B --tasks hendrycksTest-* --batch_size 1 --no_cache --write_out --output_path results/Camel-Platypus2-70B/mmlu_5shot.json --device cuda --num_fewshot 5
```

TruthfulQA:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Camel-Platypus2-70B --tasks truthfulqa_mc --batch_size 1 --no_cache --write_out --output_path results/Camel-Platypus2-70B/truthfulqa_0shot.json --device cuda
```

### Limitations and bias

Llama 2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 2 and any fine-tuned variant's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.

Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/

### Citations
```bibtex
@article{platypus2023,
    title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs},
    author={Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz},
    booktitle={arXiv preprint arxiv:2308.07317},
    year={2023}
}
```
```bibtex
@misc{touvron2023llama,
    title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
    author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and others},
    year={2023},
    eprint={2307.09288},
    archivePrefix={arXiv},
}
```
```bibtex
@inproceedings{
    hu2022lora,
    title={Lo{RA}: Low-Rank Adaptation of Large Language Models},
    author={Edward J Hu and Yelong Shen and Phillip Wallis and Zeyuan Allen-Zhu and Yuanzhi Li and Shean Wang and Lu Wang and Weizhu Chen},
    booktitle={International Conference on Learning Representations},
    year={2022},
    url={https://openreview.net/forum?id=nZeVKeeFYf9}
}
```

<!-- original-model-card end -->
jphme/em_german_7b_v01
jphme
2023-10-27T23:52:01Z
352
7
transformers
[ "transformers", "safetensors", "llama", "text-generation", "pytorch", "german", "deutsch", "llama2", "meta", "facebook", "conversational", "de", "license:llama2", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-09-25T14:02:42Z
---
inference: false
language:
- de
library_name: transformers
license: llama2
model_creator: jphme
model_name: EM German
model_type: llama
pipeline_tag: text-generation
prompt_template: 'Du bist ein hilfreicher Assistent. USER: Was ist 1+1? ASSISTANT:'
tags:
- pytorch
- german
- deutsch
- llama2
- meta
- facebook
---

![EM Logo](em_model_logo_web.jpeg)

# Table of Contents

1. [Introduction](#introduction)
2. [Links & Demos](#links--demos)
   - [Model Links](#model-links)
   - [Demos](#demos)
3. [Prompt Format](#prompt-format)
4. [Example Output](#example-output)
5. [Acknowledgements](#acknowledgements)
6. [Contact](#contact)
7. [Disclaimer](#disclaimer)

# Introduction

**EM German** is a Llama2/Mistral/LeoLM-based model family, finetuned on a large dataset of various instructions in the German language. The models are optimized for German text, providing proficiency in understanding, generating, and interacting with German language content. We offer versions based on 7b, 13b and 70b Llama-2, Mistral and LeoLM (Llama-2/Mistral with continued pretraining on German texts) models.

Please find all information, example outputs, the special RAG prompt format, and eval results for the EM German model family in [our Github Repository](https://github.com/jphme/EM_German). ([Deutsche Version](https://github.com/jphme/EM_German/blob/main/README_DE.md)). You will also find instructions on how to run the models with a GUI (GPT4All/LM Studio).

# Links & Demos

## Model Links

Should you only try one model version, I strongly recommend the **[LeoLM Mistral](https://huggingface.co/jphme/em_german_leo_mistral)** model which offers by far the best combination of performance and computing requirements!

| Base Model | HF | GPTQ | GGUF | AWQ |
|-------|-------|-------|-------|-------|
| Llama2 7b | [Link](https://huggingface.co/jphme/em_german_7b_v01) | [Link](https://huggingface.co/TheBloke/em_german_7b_v01-GPTQ) | [Link](https://huggingface.co/TheBloke/em_german_7b_v01-GGUF) | [Link](https://huggingface.co/TheBloke/em_german_7b_v01-AWQ) |
| Llama2 13b | [Link](https://huggingface.co/jphme/em_german_13b_v01) | [Link](https://huggingface.co/TheBloke/em_german_13b_v01-GPTQ) | [Link](https://huggingface.co/TheBloke/em_german_13b_v01-GGUF) | [Link](https://huggingface.co/TheBloke/em_german_13b_v01-AWQ) |
| Llama2 70b | [Link](https://huggingface.co/jphme/em_german_70b_v01) | [Link](https://huggingface.co/TheBloke/em_german_70b_v01-GPTQ) | [Link](https://huggingface.co/TheBloke/em_german_70b_v01-GGUF) | [Link](https://huggingface.co/TheBloke/em_german_70b_v01-AWQ) |
| [Mistral 7b](https://huggingface.co/mistralai/Mistral-7B-v0.1) | [Link](https://huggingface.co/jphme/em_german_mistral_v01) | [Link](https://huggingface.co/TheBloke/em_german_mistral_v01-GPTQ) | [Link](https://huggingface.co/TheBloke/em_german_mistral_v01-GGUF) | [Link](https://huggingface.co/TheBloke/em_german_mistral_v01-AWQ) |
| [LeoLM 7b](https://huggingface.co/LeoLM/leo-hessianai-7b) | [Link](https://huggingface.co/jphme/em_german_7b_leo) | [Link](https://huggingface.co/jphme/em_german_7b_leo_gptq) | [Link](https://huggingface.co/jphme/em_german_7b_leo_gguf) | tbc |
| [LeoLM 13b](https://huggingface.co/LeoLM/leo-hessianai-13b) | soon | soon | [Link](https://huggingface.co/jphme/em_german_13b_leo_gguf) | tbc |
| [LeoLM Mistral](https://huggingface.co/LeoLM/leo-mistral-hessianai-7b) | [Link](https://huggingface.co/jphme/em_german_leo_mistral) | [Link](https://huggingface.co/TheBloke/em_german_leo_mistral-GPTQ) |
[Link](https://huggingface.co/TheBloke/em_german_leo_mistral-GGUF) | [Link](https://huggingface.co/TheBloke/em_german_leo_mistral-AWQ) | ### Notes about the different versions: See also the [comparison of example outputs](https://github.com/jphme/EM_German/blob/main/example_output_comparison.md) for a comparison of (7b) model capabilities. If you get unsatisfying results with one or another EM German version, please try a different (and/or larger) model or version for your usecase. ## Demos: You can use some of the models with **free** google Colab instances (e.g. the 7bn model in 8bit or the 13b model with GPTQ): * [Example Colab Notebook for 13b with GPTQ](https://colab.research.google.com/drive/1IJfJdVwGkfe5MYOqHptystR3FBeEUdGn?usp=sharing) * [Example Colab Notebook for 7b with 8bit-Loading](https://colab.research.google.com/drive/1bsv6vkLM4AlCpSyXA6ol9P32zxZmf7Zu?usp=sharing) * [Example Colab Notebook for 7b Mistral GGUF with Grammar-based structured output](https://colab.research.google.com/drive/17O-a3D4BnVc_V9Of3dFfed18v1ULu1Sv?usp=sharing) # Prompt Format This model follows the Vicuna format without linebreaks (but should work with linebreaks as well). The format is as follows: ``` Du bist ein hilfreicher Assistent. USER: <instruction> ASSISTANT: ``` You can swap the standard system prompt for a better suited one (see below for RAG-tasks). # Example Output Please find a selection of various example outputs for all 7b versions of EM German [here](https://github.com/jphme/EM_German/blob/main/example_output_comparison.md). **Prompt:** > Erkläre die Bedeutung von drei wesentlichen Elementen der Bilanzierung und des Accountings nach IFRS für Unternehmen, die auf internationaler Ebene tätig sind. **Antwort EM German Leo Mistral 7b:** ``` Die International Financial Reporting Standards (IFRS) sind eine Reihe von internationalen Rechnungslegungsstandards, die von der International Accounting Standards Board (IASB) entwickelt wurden. Sie sollen dazu beitragen, dass Finanzberichte vergleichbar, zuverlässig und relevant sind. 1. Fair Value: Dieses Konzept ist ein zentraler Bestandteil der IFRS. Es besagt, dass Vermögenswerte und Schulden zum fairen Wert bilanziert werden sollten, was dem Preis entspricht, zu dem sie zwischen Marktteilnehmern gehandelt werden könnten. Dadurch wird sichergestellt, dass das Unternehmen eine genaue Darstellung seiner finanziellen Lage liefert. 2. Konsolidierung: Die IFRS verlangen von Unternehmen, ihre Finanzergebnisse auf konsolidierter Basis darzustellen. Das bedeutet, dass alle Tochtergesellschaften und verbundenen Unternehmen in die Bilanz des Mutterunternehmens aufgenommen werden. Dadurch wird ein klareres Bild von der finanziellen Situation des Konzerns geschaffen. 3. Vollständigkeitsprinzip: Die IFRS verlangen von Unternehmen, alle relevanten Informationen offenzulegen. Dies umfasst sowohl positive als auch negative Faktoren, wie zum Beispiel Gewinne und Verluste, Einnahmen und Ausgaben sowie Risiken und Unsicherheiten. Durch dieses Prinzip wird sichergestellt, dass Investoren, Kreditgeber und andere Stakeholder eine umfassende Einschätzung der finanziellen Lage des Unternehmens vornehmen können. ``` **Prompt:** > Ein Mensch atmet durchschnittlich 16 Mal pro Minute. Wie oft atmet er an einem Tag? **Antwort EM German Leo Mistral 7b:** ``` Zuerst berechnen wir, wie oft eine Person in einer Stunde atmet. 
Wir wissen, dass eine Stunde 60 Minuten hat, also:
Atemzüge pro Stunde = Atemfrequenz * Minuten pro Stunde
= 16 Atemzüge pro Minute * 60 Minuten
= 960 Atemzüge

Um herauszufinden, wie viele Atemzüge eine Person an einem Tag macht, müssen wir diese stündliche Rate mit der Anzahl der Stunden an einem Tag multiplizieren, also 24:

Atemzüge pro Tag = Atemzüge pro Stunde * Stunden pro Tag
= 960 Atemzüge * 24 Stunden
= 23.040 Atemzüge

Also macht ein durchschnittlicher Mensch etwa 23.040 Atemzüge an einem Tag.
```

(For more examples, please visit [our Github Repository](https://github.com/jphme/EM_German).)

# Acknowledgements:

Many thanks to [winglian/caseus](https://huggingface.co/winglian) for his great work on Axolotl, which I used to train the EM models. I am also grateful to [Jon Durbin](https://huggingface.co/jondurbin) and his [Airoboros](https://huggingface.co/jondurbin/airoboros-l2-70b-2.2.1) models and code from which I borrowed many ideas and code snippets.

Additionally, many thanks to [Björn Plüster](https://huggingface.co/bjoernp) and the LeoLM team for the outstanding pretraining work on LeoLM, and last but not least many thanks to [TheBloke](https://huggingface.co/TheBloke) for the preparation of quantized versions in all formats under the sun.

The 70b model was trained with the support of the [OVH Cloud Startup Program](https://startup.ovhcloud.com/en/).

# Contact

For detailed feedback & feature requests, please open an issue or get in contact with me via [my website](https://www.jph.me).

*PS: We are also always interested in support for our startup [ellamind](https://ellamind.com), which will offer customized models for business applications in the future (we are currently still in stealth mode). If you use our models for business applications and have advanced needs for specialized capabilities, please get in touch.*

# Disclaimer:

I am not responsible for the actions of third parties who use this model or the outputs of the model. This model should only be used for research purposes. The original base model license applies and is distributed with the model files.
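
# Quick Usage Sketch (Illustrative)

For a quick test without a GUI, a minimal `transformers` sketch using the Vicuna-style prompt format described above could look like the following; the generation settings are illustrative only and not tuned values.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jphme/em_german_7b_v01"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the accelerate package.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Vicuna-style prompt without linebreaks, as documented above.
prompt = "Du bist ein hilfreicher Assistent. USER: Was ist 1+1? ASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```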
Dmyadav2001/Sentimental-Analysis
Dmyadav2001
2023-11-10T18:12:03Z
352
3
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-11-10T17:37:12Z
# Sentiment Analysis with DistilBERT

This repository contains a sentiment analysis project using the DistilBERT model. Sentiment analysis involves classifying text data into different sentiment categories, such as positive (label-1), negative (label-0), or neutral (label-2).

## Overview

The project is implemented using Python and leverages several libraries for natural language processing and machine learning. It includes the following components:

1. *Dataset*: The Sentiment Analysis dataset is loaded using the `datasets` library. The dataset is split into training and validation sets for model training and evaluation.
2. *Text Preprocessing*: Text data is preprocessed to remove special characters, links, and user mentions. The DistilBERT tokenizer is used to tokenize and preprocess the text, and the data is prepared for training.
3. *Training Configuration*: The training configuration, including batch size, learning rate, and evaluation settings, is defined using `TrainingArguments`.
4. *Model*: The sentiment analysis model is based on DistilBERT, a lightweight version of BERT, and is fine-tuned for sequence classification. The model is initialized, and the number of labels (positive, negative, and neutral) is specified.
5. *Trainer*: A `Trainer` instance is created to handle the training process. It takes the training dataset, evaluation dataset, and training configuration.
6. *Training*: The model is trained using the training dataset with the provided configuration. Training results, including loss and accuracy, are recorded.
7. *Evaluation*: After training, the model's performance is evaluated on the validation dataset. A classification report is generated to assess the model's accuracy and performance in classifying sentiments.
8. *Model Saving*: The trained model and tokenizer are saved for later use or deployment.

## Usage

To use this code for your own sentiment analysis tasks, you can follow these steps (a minimal, illustrative training sketch is included at the end of this card):

1. *Installation*: Install the required libraries using the provided pip commands.
2. *Load Dataset*: Replace the dataset with your text data or use the provided SST-2 dataset.
3. *Training Configuration*: Modify the training arguments, such as batch size, learning rate, and evaluation strategy, in the `TrainingArguments` section to suit your specific task.
4. *Model Customization*: If needed, customize the model architecture or the number of labels according to your sentiment classification requirements.
5. *Training*: Train the model on your dataset by running the training code.
6. *Evaluation*: Evaluate the model's performance using your validation dataset or sample data.
7. *Model Saving*: Save the trained model and tokenizer for future use or deployment.

## Limitations

- The provided code assumes a three-class sentiment classification task (positive, negative, and neutral). It may require adaptation for tasks with different label sets or multi-class classification.
- The code uses DistilBERT, a smaller and faster version of BERT. For tasks that demand highly accurate but more computationally intensive models, it may be necessary to switch to the full BERT model or other advanced architectures.

## Future Requirements

To further enhance and extend this sentiment analysis project, consider the following:

- *Custom Dataset*: If you have a specific domain or industry, consider collecting and preparing a custom dataset that is more relevant to your application.
- *Fine-tuning*: Experiment with fine-tuning hyperparameters and explore techniques like learning rate schedules or additional layers for the model. - *Deployment*: If you plan to use the model in a real-world application, explore deployment options, such as building a web service or integrating the model into an existing system. - *Performance Optimization*: Optimize the code for training on larger datasets and explore distributed training to improve efficiency.
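
## Minimal Training Sketch (Illustrative)

The snippet below sketches the workflow described in the Overview using the Hugging Face `Trainer`. The dataset (`tweet_eval/sentiment`), its label mapping, and the hyperparameters are stand-in assumptions for illustration, not the exact values used for this model.

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# Stand-in 3-class sentiment dataset; its label mapping differs from this model's labels.
dataset = load_dataset("tweet_eval", "sentiment")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def preprocess(batch):
    # Tokenize; the original project additionally strips links, mentions, and special characters.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

encoded = dataset.map(preprocess, batched=True)

model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=3)

args = TrainingArguments(
    output_dir="sentiment-distilbert",
    per_device_train_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=2,
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
)
trainer.train()
print(trainer.evaluate())

# Save for later use or deployment.
model.save_pretrained("sentiment-distilbert")
tokenizer.save_pretrained("sentiment-distilbert")
```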
jjzha/escoxlmr_knowledge_extraction
jjzha
2023-11-11T09:42:55Z
352
1
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-11-11T08:20:18Z
--- license: apache-2.0 --- This is a demo using the models from: ``` @inproceedings{zhang-etal-2023-escoxlm, title = "{ESCOXLM}-{R}: Multilingual Taxonomy-driven Pre-training for the Job Market Domain", author = "Zhang, Mike and van der Goot, Rob and Plank, Barbara", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-long.662", doi = "10.18653/v1/2023.acl-long.662", pages = "11871--11890", abstract = "The increasing number of benchmarks for Natural Language Processing (NLP) tasks in the computational job market domain highlights the demand for methods that can handle job-related tasks such as skill extraction, skill classification, job title classification, and de-identification. While some approaches have been developed that are specific to the job market domain, there is a lack of generalized, multilingual models and benchmarks for these tasks. In this study, we introduce a language model called ESCOXLM-R, based on XLM-R-large, which uses domain-adaptive pre-training on the European Skills, Competences, Qualifications and Occupations (ESCO) taxonomy, covering 27 languages. The pre-training objectives for ESCOXLM-R include dynamic masked language modeling and a novel additional objective for inducing multilingual taxonomical ESCO relations. We comprehensively evaluate the performance of ESCOXLM-R on 6 sequence labeling and 3 classification tasks in 4 languages and find that it achieves state-of-the-art results on 6 out of 9 datasets. Our analysis reveals that ESCOXLM-R performs better on short spans and outperforms XLM-R-large on entity-level and surface-level span-F1, likely due to ESCO containing short skill and occupation titles, and encoding information on the entity-level.", } ``` Note that there is another endpoint, namely `jjzha/escoxlmr_skill_extraction`. Knowledge can be seen as hard skills and Skills are both soft and applied skills.
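
A quick way to try the demo is the standard `transformers` token-classification pipeline. This is a hedged sketch: the exact label names come from the model's config, and the example sentence is made up.

```python
from transformers import pipeline

extractor = pipeline(
    "token-classification",
    model="jjzha/escoxlmr_knowledge_extraction",
    aggregation_strategy="simple",  # merge sub-word tokens into spans
)

text = "We are looking for an engineer with experience in Python and a solid knowledge of statistics."
for span in extractor(text):
    print(span["entity_group"], span["word"], round(span["score"], 3))
```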
kaitchup/Maixtchup-4x7b
kaitchup
2024-06-21T12:25:23Z
352
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-17T10:38:16Z
---
license: apache-2.0
tags:
- moe
language:
- en
library_name: transformers
---

# Model Card for Maixtchup-4x7b

This is a mixture of experts created with [mergekit](https://github.com/cg123/mergekit) and based on [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).

## Model Details

### Model Description

- **Developed by:** [The Kaitchup](https://kaitchup.substack.com/)
- **Model type:** Causal
- **Language(s) (NLP):** English
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)

The method and code used to create the model are explained here:
[Maixtchup: Make Your Own Mixture of Experts with Mergekit](https://kaitchup.substack.com/p/maixtchup-make-your-own-mixture-of)

## Uses

This model is pre-trained and not fine-tuned. You may fine-tune it with PEFT using adapters; see the short LoRA sketch at the end of this card.

## Model Card Contact

[The Kaitchup](https://kaitchup.substack.com/)
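
## Fine-tuning Sketch (Illustrative)

Since the card suggests fine-tuning with PEFT adapters, here is a minimal LoRA setup sketch; the target modules and hyperparameters are assumptions for illustration, not tested values.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "kaitchup/Maixtchup-4x7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the accelerate package.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed attention projections of the Mixtral-style blocks
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# Train the adapter with your preferred trainer (e.g. transformers.Trainer or TRL's SFTTrainer).
```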
PowerInfer/prosparse-llama-2-13b-gguf
PowerInfer
2024-03-13T07:05:56Z
352
2
transformers
[ "transformers", "gguf", "sparsellama", "feature-extraction", "custom_code", "en", "arxiv:2402.13516", "license:llama2", "region:us" ]
feature-extraction
2024-02-20T08:34:22Z
---
license: llama2
language:
- en
---

# ProSparse-LLaMA-2-13B-GGUF

- Original model: [SparseLLM/ProSparse-LLaMA-2-13B](https://huggingface.co/SparseLLM/prosparse-llama-2-13b)
- Converted & distributed by: [THUNLP](https://nlp.csai.tsinghua.edu.cn/), [ModelBest](https://modelbest.cn), and [PowerInfer](https://huggingface.co/PowerInfer)

This model is the downstream distribution of [SparseLLM/ProSparse-LLaMA-2-13B](https://huggingface.co/SparseLLM/prosparse-llama-2-13b) in PowerInfer GGUF format, consisting of the LLM weights and the predictor weights.

### Citation

Please kindly cite using the following BibTeX:

```bibtex
@article{song2024prosparse,
  title={{ProSparse}: Introducing and Enhancing Intrinsic Activation Sparsity within Large Language Models},
  author={Song, Chenyang and Han, Xu and Zhang, Zhengyan and Hu, Shengding and Shi, Xiyu and Li, Kuai and Chen, Chen and Liu, Zhiyuan and Li, Guangli and Yang, Tao and Sun, Maosong},
  year={2024},
  journal={arXiv preprint arXiv:2402.13516},
  url={https://arxiv.org/pdf/2402.13516.pdf}
}
```
mradermacher/psyonic-cetacean-20B-i1-GGUF
mradermacher
2024-05-08T04:46:34Z
352
2
transformers
[ "transformers", "gguf", "storywriting", "text adventure", "not-for-all-audiences", "en", "base_model:jebcarter/psyonic-cetacean-20B", "license:other", "endpoints_compatible", "region:us" ]
null
2024-03-05T07:27:14Z
--- base_model: jebcarter/psyonic-cetacean-20B language: - en library_name: transformers license: other license_name: microsoft-research-license quantized_by: mradermacher tags: - storywriting - text adventure - not-for-all-audiences --- ## About weighted/imatrix quants of https://huggingface.co/jebcarter/psyonic-cetacean-20B <!-- provided-files --> ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/psyonic-cetacean-20B-i1-GGUF/resolve/main/psyonic-cetacean-20B.i1-IQ1_S.gguf) | i1-IQ1_S | 4.7 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/psyonic-cetacean-20B-i1-GGUF/resolve/main/psyonic-cetacean-20B.i1-IQ1_M.gguf) | i1-IQ1_M | 4.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/psyonic-cetacean-20B-i1-GGUF/resolve/main/psyonic-cetacean-20B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/psyonic-cetacean-20B-i1-GGUF/resolve/main/psyonic-cetacean-20B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 6.3 | | | [GGUF](https://huggingface.co/mradermacher/psyonic-cetacean-20B-i1-GGUF/resolve/main/psyonic-cetacean-20B.i1-IQ2_S.gguf) | i1-IQ2_S | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/psyonic-cetacean-20B-i1-GGUF/resolve/main/psyonic-cetacean-20B.i1-IQ2_M.gguf) | i1-IQ2_M | 7.2 | | | [GGUF](https://huggingface.co/mradermacher/psyonic-cetacean-20B-i1-GGUF/resolve/main/psyonic-cetacean-20B.i1-Q2_K.gguf) | i1-Q2_K | 7.7 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/psyonic-cetacean-20B-i1-GGUF/resolve/main/psyonic-cetacean-20B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 7.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/psyonic-cetacean-20B-i1-GGUF/resolve/main/psyonic-cetacean-20B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 8.5 | | | [GGUF](https://huggingface.co/mradermacher/psyonic-cetacean-20B-i1-GGUF/resolve/main/psyonic-cetacean-20B.i1-IQ3_S.gguf) | i1-IQ3_S | 9.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/psyonic-cetacean-20B-i1-GGUF/resolve/main/psyonic-cetacean-20B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 9.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/psyonic-cetacean-20B-i1-GGUF/resolve/main/psyonic-cetacean-20B.i1-IQ3_M.gguf) | i1-IQ3_M | 9.4 | | | [GGUF](https://huggingface.co/mradermacher/psyonic-cetacean-20B-i1-GGUF/resolve/main/psyonic-cetacean-20B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 10.0 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/psyonic-cetacean-20B-i1-GGUF/resolve/main/psyonic-cetacean-20B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 10.8 | | | [GGUF](https://huggingface.co/mradermacher/psyonic-cetacean-20B-i1-GGUF/resolve/main/psyonic-cetacean-20B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 10.9 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/psyonic-cetacean-20B-i1-GGUF/resolve/main/psyonic-cetacean-20B.i1-Q4_0.gguf) | i1-Q4_0 | 11.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/psyonic-cetacean-20B-i1-GGUF/resolve/main/psyonic-cetacean-20B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 11.7 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/psyonic-cetacean-20B-i1-GGUF/resolve/main/psyonic-cetacean-20B.i1-Q4_K_M.gguf) 
| i1-Q4_K_M | 12.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/psyonic-cetacean-20B-i1-GGUF/resolve/main/psyonic-cetacean-20B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 14.1 | | | [GGUF](https://huggingface.co/mradermacher/psyonic-cetacean-20B-i1-GGUF/resolve/main/psyonic-cetacean-20B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 14.5 | | | [GGUF](https://huggingface.co/mradermacher/psyonic-cetacean-20B-i1-GGUF/resolve/main/psyonic-cetacean-20B.i1-Q6_K.gguf) | i1-Q6_K | 16.5 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
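
## Quick start (example)

A hedged sketch for fetching one of the files listed above with `huggingface_hub` and loading it with `llama-cpp-python`; the chosen quant, context size, and GPU offload are illustrative and should be adapted to your hardware.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Filename taken from the table above; pick a larger or smaller quant as needed.
path = hf_hub_download(
    repo_id="mradermacher/psyonic-cetacean-20B-i1-GGUF",
    filename="psyonic-cetacean-20B.i1-Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096, n_gpu_layers=-1)  # use n_gpu_layers=0 for CPU-only
print(llm("Once upon a time,", max_tokens=64)["choices"][0]["text"])
```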
mradermacher/Solstice-Mixtral-v1-i1-GGUF
mradermacher
2024-05-10T16:43:20Z
352
1
transformers
[ "transformers", "gguf", "en", "dataset:Himitsui/Lewd-Assistant-v1", "base_model:Sao10K/Solstice-Mixtral-v1", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-03-14T11:14:46Z
--- base_model: Sao10K/Solstice-Mixtral-v1 datasets: - Himitsui/Lewd-Assistant-v1 language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher --- ## About weighted/imatrix quants of https://huggingface.co/Sao10K/Solstice-Mixtral-v1 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Solstice-Mixtral-v1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Solstice-Mixtral-v1-i1-GGUF/resolve/main/Solstice-Mixtral-v1.i1-IQ1_S.gguf) | i1-IQ1_S | 10.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Solstice-Mixtral-v1-i1-GGUF/resolve/main/Solstice-Mixtral-v1.i1-IQ1_M.gguf) | i1-IQ1_M | 10.8 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Solstice-Mixtral-v1-i1-GGUF/resolve/main/Solstice-Mixtral-v1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 12.8 | | | [GGUF](https://huggingface.co/mradermacher/Solstice-Mixtral-v1-i1-GGUF/resolve/main/Solstice-Mixtral-v1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 14.2 | | | [GGUF](https://huggingface.co/mradermacher/Solstice-Mixtral-v1-i1-GGUF/resolve/main/Solstice-Mixtral-v1.i1-IQ2_S.gguf) | i1-IQ2_S | 14.4 | | | [GGUF](https://huggingface.co/mradermacher/Solstice-Mixtral-v1-i1-GGUF/resolve/main/Solstice-Mixtral-v1.i1-IQ2_M.gguf) | i1-IQ2_M | 15.8 | | | [GGUF](https://huggingface.co/mradermacher/Solstice-Mixtral-v1-i1-GGUF/resolve/main/Solstice-Mixtral-v1.i1-Q2_K.gguf) | i1-Q2_K | 17.6 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Solstice-Mixtral-v1-i1-GGUF/resolve/main/Solstice-Mixtral-v1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 18.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Solstice-Mixtral-v1-i1-GGUF/resolve/main/Solstice-Mixtral-v1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 19.5 | | | [GGUF](https://huggingface.co/mradermacher/Solstice-Mixtral-v1-i1-GGUF/resolve/main/Solstice-Mixtral-v1.i1-IQ3_S.gguf) | i1-IQ3_S | 20.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Solstice-Mixtral-v1-i1-GGUF/resolve/main/Solstice-Mixtral-v1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 20.7 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Solstice-Mixtral-v1-i1-GGUF/resolve/main/Solstice-Mixtral-v1.i1-IQ3_M.gguf) | i1-IQ3_M | 21.7 | | | [GGUF](https://huggingface.co/mradermacher/Solstice-Mixtral-v1-i1-GGUF/resolve/main/Solstice-Mixtral-v1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 22.8 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Solstice-Mixtral-v1-i1-GGUF/resolve/main/Solstice-Mixtral-v1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 24.4 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Solstice-Mixtral-v1-i1-GGUF/resolve/main/Solstice-Mixtral-v1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 25.2 | | | [GGUF](https://huggingface.co/mradermacher/Solstice-Mixtral-v1-i1-GGUF/resolve/main/Solstice-Mixtral-v1.i1-Q4_0.gguf) | i1-Q4_0 | 26.7 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Solstice-Mixtral-v1-i1-GGUF/resolve/main/Solstice-Mixtral-v1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 27.0 | optimal size/speed/quality | | 
[GGUF](https://huggingface.co/mradermacher/Solstice-Mixtral-v1-i1-GGUF/resolve/main/Solstice-Mixtral-v1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 28.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Solstice-Mixtral-v1-i1-GGUF/resolve/main/Solstice-Mixtral-v1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 32.5 | | | [GGUF](https://huggingface.co/mradermacher/Solstice-Mixtral-v1-i1-GGUF/resolve/main/Solstice-Mixtral-v1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 33.5 | | | [GGUF](https://huggingface.co/mradermacher/Solstice-Mixtral-v1-i1-GGUF/resolve/main/Solstice-Mixtral-v1.i1-Q6_K.gguf) | i1-Q6_K | 38.6 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/HermesFlashback-7B.1-GGUF
mradermacher
2024-05-06T05:37:36Z
352
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "mlabonne/NeuralHermes-2.5-Mistral-7B", "timpal0l/Mistral-7B-v0.1-flashback-v2", "en", "base_model:FredrikBL/HermesFlashback-7B.1", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-29T10:30:02Z
--- base_model: FredrikBL/HermesFlashback-7B.1 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - mlabonne/NeuralHermes-2.5-Mistral-7B - timpal0l/Mistral-7B-v0.1-flashback-v2 --- ## About static quants of https://huggingface.co/FredrikBL/HermesFlashback-7B.1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/HermesFlashback-7B.1-GGUF/resolve/main/HermesFlashback-7B.1.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/HermesFlashback-7B.1-GGUF/resolve/main/HermesFlashback-7B.1.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/HermesFlashback-7B.1-GGUF/resolve/main/HermesFlashback-7B.1.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/HermesFlashback-7B.1-GGUF/resolve/main/HermesFlashback-7B.1.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/HermesFlashback-7B.1-GGUF/resolve/main/HermesFlashback-7B.1.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/HermesFlashback-7B.1-GGUF/resolve/main/HermesFlashback-7B.1.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/HermesFlashback-7B.1-GGUF/resolve/main/HermesFlashback-7B.1.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/HermesFlashback-7B.1-GGUF/resolve/main/HermesFlashback-7B.1.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/HermesFlashback-7B.1-GGUF/resolve/main/HermesFlashback-7B.1.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/HermesFlashback-7B.1-GGUF/resolve/main/HermesFlashback-7B.1.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/HermesFlashback-7B.1-GGUF/resolve/main/HermesFlashback-7B.1.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/HermesFlashback-7B.1-GGUF/resolve/main/HermesFlashback-7B.1.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/HermesFlashback-7B.1-GGUF/resolve/main/HermesFlashback-7B.1.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/HermesFlashback-7B.1-GGUF/resolve/main/HermesFlashback-7B.1.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/HermesFlashback-7B.1-GGUF/resolve/main/HermesFlashback-7B.1.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/HermesFlashback-7B.1-GGUF/resolve/main/HermesFlashback-7B.1.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
dmis-lab/meerkat-7b-v1.0
dmis-lab
2024-06-30T12:43:46Z
352
15
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "medical", "small LM", "instruction-tuned", "usmle", "chain-of-thought", "synthetic data", "conversational", "arxiv:2404.00376", "arxiv:2009.13081", "arxiv:2402.18060", "arxiv:2203.14371", "arxiv:2009.03300", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-03T12:37:36Z
---
license: cc-by-nc-4.0
pipeline_tag: text-generation
tags:
- medical
- small LM
- instruction-tuned
- usmle
- chain-of-thought
- synthetic data
---

# Meerkat-7B (Version 1.0)

<center><img src = "https://cdn-uploads.huggingface.co/production/uploads/5efbdc4ac3896117eab961a9/IH0nR9HxYwNvrJBjP2dYQ.png" width="200" height="200"></center>

🚀 Meerkat-7B-v1.0 is an instruction-tuned medical AI system that surpasses the passing threshold of 60% for the United States Medical Licensing Examination (USMLE) for the first time among all 7B-parameter models.
The model was trained using our new synthetic dataset consisting of high-quality chain-of-thought reasoning paths sourced from 18 medical textbooks, along with diverse instruction-following datasets.
This equips the model with high-level medical reasoning capabilities required for solving complex medical problems.
For further insights into our model, please refer to our paper!

📄 **Paper**: [Small Language Models Learn Enhanced Reasoning Skills from Medical Textbooks](https://arxiv.org/abs/2404.00376)

## Quick Start

The input query should always end with "ASSISTANT:" as shown below.
```
query = "USER: What should I do when I get cold? ASSISTANT:"
```

We can use the model with the [apply_chat_template](https://huggingface.co/docs/transformers/main/chat_templating) function as follows:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # cuda or cpu
checkpoint = "dmis-lab/meerkat-7b-v1.0"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(
    checkpoint,
    torch_dtype=torch.bfloat16,  # You can choose to use this when there's not enough GPU memory available.
)

# Multi-turn dialogue example
messages = [
    {"role": "system", "content": "You are a helpful doctor or healthcare professional. Guide the conversation to provide useful, complete, and scientifically-grounded answers to user questions. You have the option to compose a concise, single-turn conversation if the user's input is comprehensive to provide accurate answers. However, if essential details are missing, you should engage in a multi-turn dialogue, asking follow-up questions to gather a thorough medical history and records.\n\n"},
    {"role": "user", "content": "Hello, doctor. I'm really concerned about my 10-year-old son. We recently discovered a painless mass in his left testicle, so we brought him to the pediatrician."},
    {"role": "assistant", "content": "I understand your concern. Let's gather some more information. Has your son experienced any other symptoms along with the mass?"},
    {"role": "user", "content": "Other than the mass, my son hasn't shown any symptoms. He's been his usual self, playing and eating normally."}
]

encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True, pad_token_id=tokenizer.eos_token_id)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```

## Prompt Details

To reproduce the results reported in our paper, it is advisable to utilize the identical system messages used during model training. Please refer to the guidelines detailed below.
### USMLE or Clinical Cases When solving USMLE-style questions such as [MedQA](https://arxiv.org/abs/2009.13081) and [MedBullets](https://arxiv.org/abs/2402.18060), or dealing with complex clinical cases like the [JAMA Clinical Challenge](https://arxiv.org/abs/2402.18060), use the following system message: ``` messages = [ {"role": "system", "content": "The following is a multiple-choice question about medical knowledge. Solve this in a step-by-step fashion, starting by summarizing the available information. Output a single option from the given options as the final answer. You are strongly required to follow the specified output format; conclude your response with the phrase \"the answer is ([option_id]) [answer_string]\".\n\n"}, {"role": "user", "content": "Two weeks after undergoing an emergency cardiac catherization with stenting for unstable angina pectoris, a 61-year-old man has decreased urinary output and malaise. He has type 2 diabetes mellitus and osteoarthritis of the hips. Prior to admission, his medications were insulin and naproxen. He was also started on aspirin, clopidogrel, and metoprolol after the coronary intervention. His temperature is 38\u00b0C (100.4\u00b0F), pulse is 93/min, and blood pressure is 125/85 mm Hg. Examination shows mottled, reticulated purplish discoloration of the feet. Laboratory studies show:\nHemoglobin count 14 g/dL\nLeukocyte count 16,400/mm3\nSegmented neutrophils 56%\nEosinophils 11%\nLymphocytes 31%\nMonocytes 2%\nPlatelet count 260,000/mm3\nErythrocyte sedimentation rate 68 mm/h\nSerum\nUrea nitrogen 25 mg/dL\nCreatinine 4.2 mg/dL\nRenal biopsy shows intravascular spindle-shaped vacuoles. Which of the following is the most likely cause of this patient's symptoms?\" (A) Renal papillary necrosis (B) Cholesterol embolization (C) Eosinophilic granulomatosis with polyangiitis (D) Polyarteritis nodosa"}, ] ``` The model generates reasoning paths to solve the problem and then sequentially provides the predicted answers. Since the model ends its response with "the answer is," it is straightforward to extract the predicted answer for comparison with the actual answer. ### Multiple-choice Exams For other types of multiple-choice exams such as [MedMCQA](https://arxiv.org/abs/2203.14371) or [MMLU](https://arxiv.org/abs/2009.03300), use the following simple system message: ``` messages = [ {"role": "system", "content": "Answer the multiple-choice question about medical knowledge.\n\n"}, {"role": "user", "content": "In a Robertsonian translocation fusion occurs at the: (A) telomeres. (B) centromeres. (C) histones. (D) ends of the long arms."}, ] ``` ### Other Use Cases Our model was trained using the [AlpaCare](https://github.com/xzhang97666/alpacare) instruction dataset comprising 52K examples, to enhance its generalization capabilities across diverse user prompts. Feel free to design and test your prompts and to share your thoughts with us, whether the model exceeds expectations or falls short! ## Evaluation We tested models on seven medical benchmarks: [MedQA](https://arxiv.org/abs/2009.13081), [USMLE sample test](https://www.usmle.org/prepare-your-exam), [Medbullets-4](https://arxiv.org/abs/2402.18060), [Medbullets-5](https://arxiv.org/abs/2402.18060) , [MedMCQA](https://arxiv.org/abs/2203.14371), [MMLU-Medical](https://arxiv.org/abs/2009.03300), and [JAMA Clinical Challenge](https://arxiv.org/abs/2402.18060). 
| **Model** | **Average** | **MedQA** | **USMLE** | **Medbullets-4** | **Medbullets-5** | **MedMCQA** | **MMLU-Medical** | **JAMA** |
|:--------------------------------|:-----------:|:---------:|:---------:|:----------------:|:----------------:|:-----------:|:----------------:|:--------:|
| GPT-4 | 75.2 | 81.4 | 86.6 | 68.8 | 63.3 | 72.4 | 87.1 | 67.1 |
| GPT-3.5 | 54.1 | 53.6 | 58.5 | 51.0 | 47.4 | 51.0 | 67.3 | 50.1 |
| MediTron-70B (Ensemble, 5 runs) | - | 70.2 | - | - | - | 66.0 | 78.0 | - |
|*Open-source (7B)*|
| MediTron-7B | 50.8 | 50.2 | 44.6 | 51.1 | 45.5 | 57.9 | 56.7 | 49.3 |
| BioMistral-7B | 54.4 | 54.3 | 51.4 | 52.3 | 48.7 | **61.1** | 64.6 | 48.6 |
| Meerkat-7B | 62.4 | 70.6 | 70.3 | 58.7 | 52.9 | 60.6 | 70.5 | 53.1 |
| Meerkat-7B (Ensemble, 5 runs) | **64.2** | **74.3** | **71.4** | **61.0** | **55.3** | 60.7 | **72.4** | **54.0** |

Please note that the scores in MMLU-Medical were calculated based on the average accuracies across six medical-related subjects in the original MMLU benchmark, and each result for a single subject is presented below.

| **Model** | **Average** | **Clinical Knowledge** | **Medical Genetics** | **Anatomy** | **Professional Medicine** | **College Biology** | **College Medicine** |
|:--------------------------------|:-----------:|:--------------------:|:--------------------:|:-----------:|:-------------------------:|:-------------------:|:--------------------:|
| GPT-4 | 87.1 | 86.4 | 92.0 | 80.0 | 93.8 | 93.8 | 76.3 |
| GPT-3.5 | 67.3 | 68.7 | 68.0 | 60.7 | 69.9 | 72.9 | 63.6 |
| MediTron-70B (Ensemble, 5 runs) | 78.0 | 75.5 | 85.9 | 69.4 | 82.3 | 86.7 | 68.0 |
|*Open-source (7B)*|
| MediTron-7B | 56.7 | 57.7 | 63.8 | 56.9 | 56.0 | 57.1 | 48.9 |
| BioMistral-7B | 64.6 | 59.9 | 64.0 | 56.5 | 60.4 | 59.0 | 54.7 |
| Meerkat-7B | 70.5 | 71.6 | 74.8 | 63.2 | 77.3 | 70.8 | **65.2** |
| Meerkat-7B (Ensemble, 5 runs) | **72.4** | **74.1** | **79.4** | **64.1** | **78.8** | **75.8** | 62.4 |

## Model Architecture

Our model was based on [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) because of its accuracy and run-time efficiency.

## Training Data

We plan to release our training dataset publicly.

## Contact

Feel free to email `[email protected]` if you have any questions.
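
## Extracting Predicted Answers (Example)

Because responses to USMLE-style questions end with the phrase "the answer is ([option_id]) [answer_string]" as described above, the predicted option can be pulled out with a small illustrative helper like the sketch below.

```python
import re

def extract_answer(response: str):
    """Return the predicted option letter, or None if the expected phrase is missing."""
    match = re.search(r"the answer is \(([A-E])\)", response, flags=re.IGNORECASE)
    return match.group(1).upper() if match else None

print(extract_answer("... Therefore, the answer is (B) Cholesterol embolization."))  # B
```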
llmixer/command-r-plus-6.0bpw-h8-exl2
llmixer
2024-04-08T17:28:15Z
352
1
transformers
[ "transformers", "safetensors", "cohere", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "6-bit", "exl2", "region:us" ]
text-generation
2024-04-08T17:05:44Z
--- pipeline_tag: text-generation --- 6.00 bpw exl2 quant of [c4ai-command-r-plus](https://huggingface.co/CohereForAI/c4ai-command-r-plus) More quants: https://huggingface.co/turboderp/command-r-plus-103B-exl2
PrunaAI/phi-2-GGUF-smashed
PrunaAI
2024-04-13T13:27:04Z
352
0
null
[ "gguf", "pruna-ai", "region:us" ]
null
2024-04-11T22:46:33Z
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with GGUF. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***What is the model format?*** We use GGUF format. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). # Downloading and running the models You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info checkout [this chart](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [this guide](https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overview_of_gguf_quantization_methods/): | Quant type | Description | |------------|--------------------------------------------------------------------------------------------| | Q5_K_M | High quality, recommended. | | Q5_K_S | High quality, recommended. | | Q4_K_M | Good quality, uses about 4.83 bits per weight, recommended. | | Q4_K_S | Slightly lower quality with more space savings, recommended. | | IQ4_NL | Decent quality, slightly smaller than Q4_K_S with similar performance, recommended. | | IQ4_XS | Decent quality, smaller than Q4_K_S with similar performance, recommended. | | Q3_K_L | Lower quality but usable, good for low RAM availability. | | Q3_K_M | Even lower quality. | | IQ3_M | Medium-low quality, new method with decent performance comparable to Q3_K_M. 
|
| IQ3_S | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| Q3_K_S | Low quality, not recommended. |
| IQ3_XS | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| Q2_K | Very low quality but surprisingly usable. |

## How to download GGUF files?

**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.

The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

* LM Studio
* LoLLMS Web UI
* Faraday.dev

- **Option A** - Downloading in `text-generation-webui`:
- **Step 1**: Under Download Model, you can enter the model repo: PrunaAI/phi-2-GGUF-smashed and below it, a specific filename to download, such as: phi-2.IQ3_M.gguf.
- **Step 2**: Then click Download.

- **Option B** - Downloading on the command line (including multiple files at once):
- **Step 1**: We recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
- **Step 2**: Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download PrunaAI/phi-2-GGUF-smashed phi-2.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage (click to read)</summary>

Alternatively, you can also download multiple files at once with a pattern:

```shell
huggingface-cli download PrunaAI/phi-2-GGUF-smashed --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download PrunaAI/phi-2-GGUF-smashed phi-2.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->

<!-- README_GGUF.md-how-to-run start -->
## How to run model in GGUF format?

- **Option A** - Introductory example with `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 35 -m phi-2.IQ3_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt} [/INST]"
```

Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`. For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) - **Option B** - Running in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). - **Option C** - Running from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base llama-cpp-python with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./phi-2.IQ3_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<s>[INST] {prompt} [/INST]", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./phi-2.IQ3_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas."
} ] ) ``` - **Option D** - Running with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain (a minimal llama-cpp-python sketch is included at the end of this card): * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model that provided the base model. Please check the license of the original model before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
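As referenced in Option D above, here is a minimal LangChain sketch using the llama-cpp-python backend. This is only a sketch: the import path depends on your installed LangChain version, and the file name assumes the `phi-2.IQ3_M.gguf` file downloaded in the earlier examples.

```python
# Minimal LangChain + llama-cpp-python sketch (assumes langchain-community and llama-cpp-python are installed).
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./phi-2.IQ3_M.gguf",  # the GGUF file downloaded earlier
    n_ctx=2048,                       # context window; adjust to your needs and available memory
    n_gpu_layers=35,                  # set to 0 if you have no GPU acceleration
    temperature=0.7,
)

print(llm.invoke("[INST] Write a short story about llamas. [/INST]"))
```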
malteklaes/based-CodeBERTa-language-id-llm-module_uniVienna
malteklaes
2024-04-18T23:50:50Z
352
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "dataset:malteklaes/cpp-code-code_search_net-style", "base_model:malteklaes/based-CodeBERTa-language-id-llm-module", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-04-17T11:11:46Z
--- license: apache-2.0 base_model: malteklaes/based-CodeBERTa-language-id-llm-module tags: - generated_from_trainer model-index: - name: based-CodeBERTa-language-id-llm-module_uniVienna results: [] datasets: - malteklaes/cpp-code-code_search_net-style widget: - text: package main import ( "fmt" "math/rand" "openspiel") func main() {game := openspiel.LoadGame("breakthrough")} output: - label: Go score: 1.0 example_title: Go example code - text: public static void malmoCliffWalk() throws MalmoConnectionError, IOException {DQNPolicy<MalmoBox> pol = dql.getPolicy();} output: - label: Java score: 1.0 example_title: Java example code - text: var Window = require('../math/window.js') class Agent { constructor(opt) {this.states = this.options.states}} output: - label: Javascript score: 1.0 example_title: Javascript example code - text: $x = 5; echo $x * 2; output: - label: PHP score: 1.0 example_title: PHP example code - text: from stable_baselines3 import PPO if __name__ == '__main__' output: - label: Python score: 1.0 example_title: Python example code - text: x = 5; y = 3; puts x + y output: - label: Ruby score: 1.0 example_title: Ruby example code - text: "#include 'dqn.h' int main(int argc, char *argv[]) { rlop::Timer timer;}" output: - label: C++ score: 1.0 example_title: C++ example code --- # based-CodeBERTa-language-id-llm-module_uniVienna This model is a fine-tuned version of [malteklaes/based-CodeBERTa-language-id-llm-module](https://huggingface.co/malteklaes/based-CodeBERTa-language-id-llm-module). ## Model description and Framework version - based on model [malteklaes/based-CodeBERTa-language-id-llm-module](https://huggingface.co/malteklaes/based-CodeBERTa-language-id-llm-module) (7 programming languages), which in turn is based on [huggingface/CodeBERTa-language-id](https://huggingface.co/huggingface/CodeBERTa-language-id) (6 programming languages) - model details: ``` RobertaTokenizerFast(name_or_path='malteklaes/based-CodeBERTa-language-id-llm-module_uniVienna', vocab_size=52000, model_max_length=512, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'bos_token': '<s>', 'eos_token': '</s>', 'unk_token': '<unk>', 'sep_token': '</s>', 'pad_token': '<pad>', 'cls_token': '<s>', 'mask_token': '<mask>'}, clean_up_tokenization_spaces=True), added_tokens_decoder={ 0: AddedToken("<s>", rstrip=False, lstrip=False, single_word=False, normalized=True, special=True), 1: AddedToken("<pad>", rstrip=False, lstrip=False, single_word=False, normalized=True, special=True), 2: AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=True, special=True), 3: AddedToken("<unk>", rstrip=False, lstrip=False, single_word=False, normalized=True, special=True), 4: AddedToken("<mask>", rstrip=False, lstrip=True, single_word=False, normalized=False, special=True), } ``` - complete model-config: ``` RobertaConfig { "_name_or_path": "malteklaes/based-CodeBERTa-language-id-llm-module_uniVienna", "_num_labels": 7, "architectures": [ "RobertaForSequenceClassification" ], "attention_probs_dropout_prob": 0.1, "bos_token_id": 0, "classifier_dropout": null, "eos_token_id": 2, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "id2label": { "0": "go", "1": "java", "2": "javascript", "3": "php", "4": "python", "5": "ruby", "6": "cpp" }, "initializer_range": 0.02, "intermediate_size": 3072, "label2id": { "cpp": 6, "go": 0, "java": 1, "javascript": 2, "php": 3, "python": 4, "ruby": 5 }, "layer_norm_eps": 1e-05, "max_position_embeddings": 514, "model_type": 
"roberta", "num_attention_heads": 12, "num_hidden_layers": 6, "pad_token_id": 1, "position_embedding_type": "absolute", "problem_type": "single_label_classification", "torch_dtype": "float32", "transformers_version": "4.39.3", "type_vocab_size": 1, "use_cache": true, "vocab_size": 52000 } ``` ## Intended uses & limitations For a given code, the following programming language can be determined: - Go - Java - Javascript - PHP - Python - Ruby - C++ ## Usage ```python checkpoint = "malteklaes/based-CodeBERTa-language-id-llm-module_uniVienna" tokenizer = AutoTokenizer.from_pretrained(checkpoint) modelPOST = AutoTokenizer.from_pretrained(checkpoint) myPipeline = TextClassificationPipeline( model=AutoModelForSequenceClassification.from_pretrained(checkpoint, ignore_mismatched_sizes=True), tokenizer=AutoTokenizer.from_pretrained(checkpoint) ) CODE_TO_IDENTIFY_py = """ def is_prime(n): if n <= 1: return False if n == 2 or n == 3: return True if n % 2 == 0: return False max_divisor = int(n ** 0.5) for i in range(3, max_divisor + 1, 2): if n % i == 0: return False return True number = 17 if is_prime(number): print(f"{number} is a prime number.") else: print(f"{number} is not a prime number.") """ myPipeline(CODE_TO_IDENTIFY_py) # output: [{'label': 'python', 'score': 0.9999967813491821}] ``` ## Training and evaluation data ### Training-Datasets used - for Go, Java, Javascript, PHP, Python, Ruby: [code_search_net](https://huggingface.co/datasets/code_search_net) - for C++: [malteklaes/cpp-code-code_search_net-style](https://huggingface.co/datasets/malteklaes/cpp-code-code_search_net-style) ### Training procedure - machine: GPU T4 (Google Colab) - system-RAM: 4.7/12.7 GB (during training) - GPU-RAM: 2.8/15.0GB - Drive: 69.5/78.5 GB (during training due to complete ) - trainer.train(): [x/24136 xx:xx < 31:12, 12.92 it/s, Epoch 0.01/1] - total 24136 iterations ### Training note - Although this model is based on the predecessors mentioned above, this model had to be trained from scratch because the [config.json](https://huggingface.co/malteklaes/based-CodeBERTa-language-id-llm-module_uniVienna/blob/main/config.json) and labels of the original model were changed from 6 to 7 programming languages. ### Training hyperparameters The following hyperparameters were used during training (training args): ``` training_args = TrainingArguments( output_dir="./based-CodeBERTa-language-id-llm-module_uniVienna", overwrite_output_dir=True, num_train_epochs=0.1, per_device_train_batch_size=8, save_steps=500, save_total_limit=2, ) ``` ### Training results - output: ``` TrainOutput(global_step=24136, training_loss=0.005988701689750161, metrics={'train_runtime': 1936.0586, 'train_samples_per_second': 99.731, 'train_steps_per_second': 12.467, 'total_flos': 3197518224531456.0, 'train_loss': 0.005988701689750161, 'epoch': 0.1}) ```
duyntnet/Octopus-v2-imatrix-GGUF
duyntnet
2024-04-26T07:08:05Z
352
0
transformers
[ "transformers", "gguf", "imatrix", "NexaAIDev", "Octopus-v2", "text-generation", "en", "license:other", "region:us" ]
text-generation
2024-04-24T19:28:44Z
--- license: other inference: false language: - en pipeline_tag: text-generation tags: - transformers - gguf - imatrix - NexaAIDev - Octopus-v2 --- Quantizations of https://huggingface.co/NexaAIDev/Octopus-v2 # From original readme ## Example Use Cases You can run the model on a GPU using the following code. ```python from transformers import AutoTokenizer, GemmaForCausalLM import torch import time def inference(input_text): start_time = time.time() input_ids = tokenizer(input_text, return_tensors="pt").to(model.device) input_length = input_ids["input_ids"].shape[1] outputs = model.generate( input_ids=input_ids["input_ids"], max_length=1024, do_sample=False) generated_sequence = outputs[:, input_length:].tolist() res = tokenizer.decode(generated_sequence[0]) end_time = time.time() return {"output": res, "latency": end_time - start_time} model_id = "NexaAIDev/Octopus-v2" tokenizer = AutoTokenizer.from_pretrained(model_id) model = GemmaForCausalLM.from_pretrained( model_id, torch_dtype=torch.bfloat16, device_map="auto" ) input_text = "Take a selfie for me with front camera" nexa_query = f"Below is the query from the users, please call the correct function and generate the parameters to call the function.\n\nQuery: {input_text} \n\nResponse:" start_time = time.time() print("nexa model result:\n", inference(nexa_query)) print("latency:", time.time() - start_time," s") ```
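Since this repository hosts GGUF quantizations, the quantized files can also be run locally with llama-cpp-python instead of transformers. The sketch below is only an illustration: the file name is an assumption (use whichever quant you actually downloaded from the Files section), and the prompt mirrors the `nexa_query` template shown above.

```python
# Minimal llama-cpp-python sketch for a quantized Octopus-v2 GGUF file.
from llama_cpp import Llama

# NOTE: "Octopus-v2-Q4_K_M.gguf" is a placeholder; substitute the file you downloaded from this repo.
llm = Llama(model_path="./Octopus-v2-Q4_K_M.gguf", n_ctx=1024, n_gpu_layers=-1)

input_text = "Take a selfie for me with front camera"
nexa_query = (
    "Below is the query from the users, please call the correct function and "
    f"generate the parameters to call the function.\n\nQuery: {input_text} \n\nResponse:"
)

output = llm(nexa_query, max_tokens=256, temperature=0.0)
print(output["choices"][0]["text"])
```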
nestauk/en_skillner
nestauk
2024-06-27T09:43:09Z
352
0
spacy
[ "spacy", "token-classification", "en", "license:mit", "model-index", "region:us" ]
token-classification
2024-05-03T09:22:16Z
--- tags: - spacy - token-classification language: - en license: mit model-index: - name: en_skillner results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.5919354839 - name: NER Recall type: recall value: 0.5758368201 - name: NER F Score type: f_score value: 0.5837751856 --- A Named Entity Recognition (NER) model to extract SKILL, EXPERIENCE and BENEFIT from job adverts. | Feature | Description | | --- | --- | | **Name** | `en_skillner` | | **Version** | `3.7.1` | | **spaCy** | `>=3.7.4,<3.8.0` | | **Default Pipeline** | `tok2vec`, `tagger`, `parser`, `attribute_ruler`, `lemmatizer`, `ner` | | **Components** | `tok2vec`, `tagger`, `parser`, `senter`, `attribute_ruler`, `lemmatizer`, `ner` | | **Vectors** | 514157 keys, 514157 unique vectors (300 dimensions) | | **Sources** | [OntoNotes 5](https://catalog.ldc.upenn.edu/LDC2013T19) (Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, Ann Houston)<br>[ClearNLP Constituent-to-Dependency Conversion](https://github.com/clir/clearnlp-guidelines/blob/master/md/components/dependency_conversion.md) (Emory University)<br>[WordNet 3.0](https://wordnet.princeton.edu/) (Princeton University)<br>[Explosion Vectors (OSCAR 2109 + Wikipedia + OpenSubtitles + WMT News Crawl)](https://github.com/explosion/spacy-vectors-builder) (Explosion) | | **License** | `MIT` | | **Author** | [nestauk](https://explosion.ai) | ### Label Scheme <details> <summary>View label scheme (3 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`ner`** | `SKILL`, `EXPERIENCE`, `BENEFIT` | </details> ### Accuracy | Type | Score | | --- | --- | | `ENTS_P` | 59.19 | | `ENTS_R` | 57.58 | | `ENTS_F` | 58.38 | | `SKILL_P` | 72.19 | | `SKILL_R` | 72.62 | | `SKILL_F` | 72.40 | | `EXPERIENCE_P` | 52.14 | | `EXPERIENCE_R` | 41.48 | | `EXPERIENCE_F` | 46.20 | | `BENEFIT_P` | 75.61 | | `BENEFIT_R` | 46.27 | | `BENEFIT_F` | 57.41 |
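A minimal usage sketch follows, assuming the pipeline package has been installed from this repository (spaCy pipelines on the Hub are usually distributed as a wheel under the Files tab; the exact wheel name may differ) so that `spacy.load` can find it:

```python
# Extract SKILL / EXPERIENCE / BENEFIT entities from a job-advert sentence.
# First install the pipeline package, e.g.:
#   pip install <wheel from this repo's Files tab, e.g. en_skillner-any-py3-none-any.whl>
import spacy

nlp = spacy.load("en_skillner")

doc = nlp(
    "You will need strong Python and SQL skills, at least 3 years of experience, "
    "and we offer a generous pension scheme."
)
for ent in doc.ents:
    print(ent.text, ent.label_)
```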
MaziyarPanahi/Inex12M7-7B-GGUF
MaziyarPanahi
2024-05-21T15:06:28Z
352
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "merge", "mergekit", "lazymergekit", "automerger", "base_model:liminerity/M7-7b", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:automerger/Inex12M7-7B" ]
text-generation
2024-05-21T14:36:24Z
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - merge - mergekit - lazymergekit - automerger - base_model:liminerity/M7-7b - license:apache-2.0 - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - text-generation model_name: Inex12M7-7B-GGUF base_model: automerger/Inex12M7-7B inference: false model_creator: automerger pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/Inex12M7-7B-GGUF](https://huggingface.co/MaziyarPanahi/Inex12M7-7B-GGUF) - Model creator: [automerger](https://huggingface.co/automerger) - Original model: [automerger/Inex12M7-7B](https://huggingface.co/automerger/Inex12M7-7B) ## Description [MaziyarPanahi/Inex12M7-7B-GGUF](https://huggingface.co/MaziyarPanahi/Inex12M7-7B-GGUF) contains GGUF format model files for [automerger/Inex12M7-7B](https://huggingface.co/automerger/Inex12M7-7B). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
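As a minimal end-to-end sketch, one of the GGUF files in this repository can be fetched programmatically with `huggingface_hub` and loaded with llama-cpp-python. The quant file name below is an assumption — check the Files and versions tab for the exact names.

```python
# Download one quant from this repo and run a short completion with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# NOTE: the filename is an assumption; pick one listed under "Files and versions".
gguf_path = hf_hub_download(
    repo_id="MaziyarPanahi/Inex12M7-7B-GGUF",
    filename="Inex12M7-7B.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096, n_gpu_layers=-1)
print(llm("Q: What is GGUF?\nA:", max_tokens=128)["choices"][0]["text"])
```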
QuantFactory/Yi-1.5-6B-GGUF
QuantFactory
2024-05-24T03:48:05Z
352
0
transformers
[ "transformers", "gguf", "llama", "text-generation", "base_model:01-ai/Yi-1.5-6B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T03:05:54Z
--- license: apache-2.0 library_name: transformers pipeline_tag: text-generation tags: - llama base_model: 01-ai/Yi-1.5-6B --- # Yi-1.5-6B-GGUF - This is quantized version of [01-ai/Yi-1.5-6B](https://huggingface.co/01-ai/Yi-1.5-6B) created using llama.cpp # Model Description Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples. Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension. <div align="center"> Model | Context Length | Pre-trained Tokens | :------------: | :------------: | :------------: | | Yi-1.5 | 4K, 16K, 32K | 3.6T </div> # Models - Chat models <div align="center"> | Name | Download | | --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | Yi-1.5-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)| | Yi-1.5-34B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)| | Yi-1.5-9B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)| | Yi-1.5-9B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)| | Yi-1.5-6B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)| </div> - Base models <div align="center"> | Name | Download | | ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | Yi-1.5-34B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)| | Yi-1.5-34B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)| | Yi-1.5-9B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)| | Yi-1.5-9B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)| | Yi-1.5-6B | • [🤗 Hugging 
Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)| </div> # Benchmarks - Chat models Yi-1.5-34B-Chat is on par with or excels beyond larger models in most benchmarks. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/656d9adce8bf55919aca7c3f/KcsJ9Oc1VnEmfCDEJc5cd.png) Yi-1.5-9B-Chat is the top performer among similarly sized open-source models. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/656d9adce8bf55919aca7c3f/xf6pLg5jqRCwjlh6m3t6_.png) - Base models Yi-1.5-34B is on par with or excels beyond larger models in some benchmarks. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/656d9adce8bf55919aca7c3f/BwU7QM-03dZvZzwdIE1xY.png) Yi-1.5-9B is the top performer among similarly sized open-source models. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/656d9adce8bf55919aca7c3f/y-EYSYPT-3aWLJ0x8R94F.png) # Quick Start For getting up and running with Yi-1.5 models quickly, see [README](https://github.com/01-ai/Yi-1.5).
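For the GGUF files in this repository specifically, a minimal llama-cpp-python sketch is shown below. The file name is an assumption — use whichever quant you downloaded — and since Yi-1.5-6B is a base (non-chat) model, plain text completion is the natural interface.

```python
# Minimal llama-cpp-python sketch for a quantized Yi-1.5-6B (base model) file.
from llama_cpp import Llama

# NOTE: the file name is a placeholder; substitute the quant you actually downloaded from this repo.
llm = Llama(model_path="./Yi-1.5-6B.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=-1)

print(llm("The three laws of thermodynamics are", max_tokens=128)["choices"][0]["text"])
```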
RichardErkhov/mesolitica_-_malaysian-mistral-7b-32k-instructions-gguf
RichardErkhov
2024-05-31T17:57:55Z
352
0
null
[ "gguf", "region:us" ]
null
2024-05-31T15:08:07Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) malaysian-mistral-7b-32k-instructions - GGUF - Model creator: https://huggingface.co/mesolitica/ - Original model: https://huggingface.co/mesolitica/malaysian-mistral-7b-32k-instructions/ | Name | Quant method | Size | | ---- | ---- | ---- | | [malaysian-mistral-7b-32k-instructions.Q2_K.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-mistral-7b-32k-instructions-gguf/blob/main/malaysian-mistral-7b-32k-instructions.Q2_K.gguf) | Q2_K | 2.53GB | | [malaysian-mistral-7b-32k-instructions.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-mistral-7b-32k-instructions-gguf/blob/main/malaysian-mistral-7b-32k-instructions.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [malaysian-mistral-7b-32k-instructions.IQ3_S.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-mistral-7b-32k-instructions-gguf/blob/main/malaysian-mistral-7b-32k-instructions.IQ3_S.gguf) | IQ3_S | 2.96GB | | [malaysian-mistral-7b-32k-instructions.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-mistral-7b-32k-instructions-gguf/blob/main/malaysian-mistral-7b-32k-instructions.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [malaysian-mistral-7b-32k-instructions.IQ3_M.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-mistral-7b-32k-instructions-gguf/blob/main/malaysian-mistral-7b-32k-instructions.IQ3_M.gguf) | IQ3_M | 3.06GB | | [malaysian-mistral-7b-32k-instructions.Q3_K.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-mistral-7b-32k-instructions-gguf/blob/main/malaysian-mistral-7b-32k-instructions.Q3_K.gguf) | Q3_K | 3.28GB | | [malaysian-mistral-7b-32k-instructions.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-mistral-7b-32k-instructions-gguf/blob/main/malaysian-mistral-7b-32k-instructions.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [malaysian-mistral-7b-32k-instructions.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-mistral-7b-32k-instructions-gguf/blob/main/malaysian-mistral-7b-32k-instructions.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [malaysian-mistral-7b-32k-instructions.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-mistral-7b-32k-instructions-gguf/blob/main/malaysian-mistral-7b-32k-instructions.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [malaysian-mistral-7b-32k-instructions.Q4_0.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-mistral-7b-32k-instructions-gguf/blob/main/malaysian-mistral-7b-32k-instructions.Q4_0.gguf) | Q4_0 | 3.83GB | | [malaysian-mistral-7b-32k-instructions.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-mistral-7b-32k-instructions-gguf/blob/main/malaysian-mistral-7b-32k-instructions.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [malaysian-mistral-7b-32k-instructions.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-mistral-7b-32k-instructions-gguf/blob/main/malaysian-mistral-7b-32k-instructions.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [malaysian-mistral-7b-32k-instructions.Q4_K.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-mistral-7b-32k-instructions-gguf/blob/main/malaysian-mistral-7b-32k-instructions.Q4_K.gguf) | Q4_K | 4.07GB | | 
[malaysian-mistral-7b-32k-instructions.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-mistral-7b-32k-instructions-gguf/blob/main/malaysian-mistral-7b-32k-instructions.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [malaysian-mistral-7b-32k-instructions.Q4_1.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-mistral-7b-32k-instructions-gguf/blob/main/malaysian-mistral-7b-32k-instructions.Q4_1.gguf) | Q4_1 | 4.24GB | | [malaysian-mistral-7b-32k-instructions.Q5_0.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-mistral-7b-32k-instructions-gguf/blob/main/malaysian-mistral-7b-32k-instructions.Q5_0.gguf) | Q5_0 | 4.65GB | | [malaysian-mistral-7b-32k-instructions.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-mistral-7b-32k-instructions-gguf/blob/main/malaysian-mistral-7b-32k-instructions.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | [malaysian-mistral-7b-32k-instructions.Q5_K.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-mistral-7b-32k-instructions-gguf/blob/main/malaysian-mistral-7b-32k-instructions.Q5_K.gguf) | Q5_K | 4.78GB | | [malaysian-mistral-7b-32k-instructions.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-mistral-7b-32k-instructions-gguf/blob/main/malaysian-mistral-7b-32k-instructions.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [malaysian-mistral-7b-32k-instructions.Q5_1.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-mistral-7b-32k-instructions-gguf/blob/main/malaysian-mistral-7b-32k-instructions.Q5_1.gguf) | Q5_1 | 5.07GB | | [malaysian-mistral-7b-32k-instructions.Q6_K.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-mistral-7b-32k-instructions-gguf/blob/main/malaysian-mistral-7b-32k-instructions.Q6_K.gguf) | Q6_K | 5.53GB | | [malaysian-mistral-7b-32k-instructions.Q8_0.gguf](https://huggingface.co/RichardErkhov/mesolitica_-_malaysian-mistral-7b-32k-instructions-gguf/blob/main/malaysian-mistral-7b-32k-instructions.Q8_0.gguf) | Q8_0 | 7.17GB | Original model description: --- language: - ms --- # Full Parameter Finetuning 7B 32768 context length Mistral on Malaysian instructions dataset README at https://github.com/mesolitica/malaya/tree/5.1/session/mistral#instructions-7b-16384-context-length We use exact Mistral Instruct chat template. 
WandB, https://wandb.ai/mesolitica/fpf-mistral-7b-hf-instructions-16k?workspace=user-husein-mesolitica ## how-to ```python from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig import torch import json def parse_mistral_chat(messages, function_call = None): user_query = messages[-1]['content'] users, assistants = [], [] for q in messages[:-1]: if q['role'] == 'user': users.append(q['content']) elif q['role'] == 'assistant': assistants.append(q['content']) texts = ['<s>'] if function_call: fs = [] for f in function_call: f = json.dumps(f, indent=4) fs.append(f) fs = '\n\n'.join(fs) texts.append(f'\n[FUNCTIONCALL]\n{fs}\n') for u, a in zip(users, assistants): texts.append(f'[INST] {u.strip()} [/INST] {a.strip()}</s>') texts.append(f'[INST] {user_query.strip()} [/INST]') prompt = ''.join(texts).strip() return prompt TORCH_DTYPE = 'bfloat16' nf4_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type='nf4', bnb_4bit_use_double_quant=True, bnb_4bit_compute_dtype=getattr(torch, TORCH_DTYPE) ) tokenizer = AutoTokenizer.from_pretrained('mesolitica/malaysian-mistral-7b-32k-instructions') model = AutoModelForCausalLM.from_pretrained( 'mesolitica/malaysian-mistral-7b-32k-instructions', use_flash_attention_2 = True, quantization_config = nf4_config ) messages = [ {'role': 'user', 'content': 'kwsp tu apa'} ] prompt = parse_mistral_chat(messages) inputs = tokenizer([prompt], return_tensors='pt', add_special_tokens=False).to('cuda') generate_kwargs = dict( inputs, max_new_tokens=1024, top_p=0.95, top_k=50, temperature=0.9, do_sample=True, num_beams=1, ) r = model.generate(**generate_kwargs) tokenizer.decode(r[0]) ``` ```text <s> [INST] kwsp tu apa [/INST]KWSP bermaksud Kumpulan Wang Simpanan Pekerja. Ia adalah sebuah institusi simpanan persaraan yang ditubuhkan oleh Kementerian Kewangan Malaysia untuk tujuan mengumpul simpanan ahli untuk dibayar pada umur persaraan, penuh atau penuh persaraan penuh. KWSP ditubuhkan pada tahun 1951 dan mula beroperasi pada tahun 1952. KWSP adalah salah satu institusi simpanan persaraan terbesar di dunia, dengan pangkalan ahli sekitar 14 juta ahli.</s> ``` ```python messages = [ {'role': 'user', 'content': 'camne nak kill pid dalam linux'} ] prompt = parse_mistral_chat(messages) inputs = tokenizer([prompt], return_tensors='pt', add_special_tokens=False).to('cuda') generate_kwargs = dict( inputs, max_new_tokens=1024, top_p=0.95, top_k=50, temperature=0.9, do_sample=True, num_beams=1, ) r = model.generate(**generate_kwargs) print(tokenizer.decode(r[0])) ``` ```text <s> [INST] camne nak kill pid dalam linux [/INST]Untuk membunuh pid dalam Linux, ikuti langkah-langkah berikut: 1. Cari pid menggunakan perintah `ps aux | grep 'pid'`. Pilih pid yang anda ingin bunuh. 2. Gunakan perintah `kill` diikuti dengan nombor pid. Sebagai contoh, jika anda ingin membunuh pid dengan nombor 1234, gunakan perintah `kill -1234`. 3. Sahkan pembunuhan dengan menjalankan perintah `kill -id`. Jika perintah tidak mengembalikan sebarang ralat, maka pid telah dibunuh dengan berjaya. 4. 
Anda juga boleh menggunakan perintah `kill -s` untuk membunuh semua pid dengan nama atau atribut tertentu.</s> ``` ```python f = { 'name': 'parse_entities', 'description': 'extract entities from the text', 'parameters': { 'type': 'object', 'properties': { 'drink': { 'type': 'string', 'description': 'drink name', }, 'event': { 'type': 'string', 'description': 'event name', }, 'person_name': { 'type': 'string', 'description': 'person name', } }, 'required': [ 'drink', 'event', 'person_name' ] } } messages = [ {'role': 'user', 'content': 'nama saya husein bin zolkepli, saya sekarang berada di jomheboh 2023 sambil minum starbucks'} ] prompt = parse_mistral_chat(messages, function_call = [f]) inputs = tokenizer([prompt], return_tensors='pt', add_special_tokens=False).to('cuda') generate_kwargs = dict( inputs, max_new_tokens=128, top_p=0.95, top_k=50, temperature=0.9, do_sample=True, num_beams=1, ) r = model.generate(**generate_kwargs) print(tokenizer.decode(r[0])) ``` ```text <s> [FUNCTIONCALL] { "name": "parse_entities", "description": "extract entities from the text", "parameters": { "type": "object", "properties": { "drink": { "type": "string", "description": "drink name" }, "event": { "type": "string", "description": "event name" }, "person_name": { "type": "string", "description": "person name" } }, "required": [ "drink", "event", "person_name" ] } } [INST] nama saya husein bin zolkepli, saya sekarang berada di jomheboh 2023 sambil minum starbucks [/INST] <functioncall> {"name": "parse_entities", "arguments": '{ "drink": "Starbucks", "event": "Jom Heboh 2023", "person_name": "Husein Bin Zolkepli" }'}</s> ```
ahmedgongi/Llama_devops_2bit_gguf_q2_k
ahmedgongi
2024-06-02T18:05:04Z
352
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-06-02T18:03:27Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf base_model: unsloth/llama-3-8b-Instruct-bnb-4bit --- # Uploaded model - **Developed by:** ahmedgongi - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mradermacher/Venus-103b-v1.0-i1-GGUF
mradermacher
2024-06-08T08:25:45Z
352
0
transformers
[ "transformers", "gguf", "not-for-all-audiences", "en", "base_model:nsfwthrowitaway69/Venus-103b-v1.0", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-06-05T23:28:04Z
--- base_model: nsfwthrowitaway69/Venus-103b-v1.0 language: - en library_name: transformers license: llama2 quantized_by: mradermacher tags: - not-for-all-audiences --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/nsfwthrowitaway69/Venus-103b-v1.0 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Venus-103b-v1.0-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Venus-103b-v1.0-i1-GGUF/resolve/main/Venus-103b-v1.0.i1-IQ1_S.gguf) | i1-IQ1_S | 21.8 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Venus-103b-v1.0-i1-GGUF/resolve/main/Venus-103b-v1.0.i1-IQ1_M.gguf) | i1-IQ1_M | 23.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Venus-103b-v1.0-i1-GGUF/resolve/main/Venus-103b-v1.0.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 27.4 | | | [GGUF](https://huggingface.co/mradermacher/Venus-103b-v1.0-i1-GGUF/resolve/main/Venus-103b-v1.0.i1-IQ2_XS.gguf) | i1-IQ2_XS | 30.5 | | | [GGUF](https://huggingface.co/mradermacher/Venus-103b-v1.0-i1-GGUF/resolve/main/Venus-103b-v1.0.i1-IQ2_S.gguf) | i1-IQ2_S | 32.0 | | | [GGUF](https://huggingface.co/mradermacher/Venus-103b-v1.0-i1-GGUF/resolve/main/Venus-103b-v1.0.i1-IQ2_M.gguf) | i1-IQ2_M | 34.8 | | | [GGUF](https://huggingface.co/mradermacher/Venus-103b-v1.0-i1-GGUF/resolve/main/Venus-103b-v1.0.i1-Q2_K.gguf) | i1-Q2_K | 38.0 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Venus-103b-v1.0-i1-GGUF/resolve/main/Venus-103b-v1.0.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 39.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Venus-103b-v1.0-i1-GGUF/resolve/main/Venus-103b-v1.0.i1-IQ3_XS.gguf) | i1-IQ3_XS | 42.3 | | | [GGUF](https://huggingface.co/mradermacher/Venus-103b-v1.0-i1-GGUF/resolve/main/Venus-103b-v1.0.i1-Q3_K_S.gguf) | i1-Q3_K_S | 44.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Venus-103b-v1.0-i1-GGUF/resolve/main/Venus-103b-v1.0.i1-IQ3_S.gguf) | i1-IQ3_S | 44.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Venus-103b-v1.0-i1-GGUF/resolve/main/Venus-103b-v1.0.i1-IQ3_M.gguf) | i1-IQ3_M | 46.2 | | | [GGUF](https://huggingface.co/mradermacher/Venus-103b-v1.0-i1-GGUF/resolve/main/Venus-103b-v1.0.i1-Q3_K_M.gguf) | i1-Q3_K_M | 49.7 | IQ3_S probably better | | [PART 1](https://huggingface.co/mradermacher/Venus-103b-v1.0-i1-GGUF/resolve/main/Venus-103b-v1.0.i1-Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Venus-103b-v1.0-i1-GGUF/resolve/main/Venus-103b-v1.0.i1-Q3_K_L.gguf.part2of2) | i1-Q3_K_L | 54.2 | IQ3_M probably better | | [PART 1](https://huggingface.co/mradermacher/Venus-103b-v1.0-i1-GGUF/resolve/main/Venus-103b-v1.0.i1-IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Venus-103b-v1.0-i1-GGUF/resolve/main/Venus-103b-v1.0.i1-IQ4_XS.gguf.part2of2) | i1-IQ4_XS | 55.2 | | | [PART 1](https://huggingface.co/mradermacher/Venus-103b-v1.0-i1-GGUF/resolve/main/Venus-103b-v1.0.i1-Q4_0.gguf.part1of2) [PART 
2](https://huggingface.co/mradermacher/Venus-103b-v1.0-i1-GGUF/resolve/main/Venus-103b-v1.0.i1-Q4_0.gguf.part2of2) | i1-Q4_0 | 58.4 | fast, low quality | | [PART 1](https://huggingface.co/mradermacher/Venus-103b-v1.0-i1-GGUF/resolve/main/Venus-103b-v1.0.i1-Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Venus-103b-v1.0-i1-GGUF/resolve/main/Venus-103b-v1.0.i1-Q4_K_S.gguf.part2of2) | i1-Q4_K_S | 58.7 | optimal size/speed/quality | | [PART 1](https://huggingface.co/mradermacher/Venus-103b-v1.0-i1-GGUF/resolve/main/Venus-103b-v1.0.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Venus-103b-v1.0-i1-GGUF/resolve/main/Venus-103b-v1.0.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 62.0 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/Venus-103b-v1.0-i1-GGUF/resolve/main/Venus-103b-v1.0.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Venus-103b-v1.0-i1-GGUF/resolve/main/Venus-103b-v1.0.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 71.1 | | | [PART 1](https://huggingface.co/mradermacher/Venus-103b-v1.0-i1-GGUF/resolve/main/Venus-103b-v1.0.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Venus-103b-v1.0-i1-GGUF/resolve/main/Venus-103b-v1.0.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 73.0 | | | [PART 1](https://huggingface.co/mradermacher/Venus-103b-v1.0-i1-GGUF/resolve/main/Venus-103b-v1.0.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Venus-103b-v1.0-i1-GGUF/resolve/main/Venus-103b-v1.0.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 84.8 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
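As noted in the Usage section above, the larger quants in this table are split into `.part1of2` / `.part2of2` files that must be concatenated into a single `.gguf` before loading. A minimal sketch (equivalent to a simple byte-wise `cat part1 part2 > file.gguf`):

```python
# Join the parts of a split quant into a single GGUF file (streamed, so it works for very large files).
import shutil
from pathlib import Path

target = Path("Venus-103b-v1.0.i1-Q4_K_M.gguf")
parts = sorted(Path(".").glob(f"{target.name}.part*"))  # e.g. .part1of2, .part2of2

with target.open("wb") as out:
    for part in parts:
        with part.open("rb") as src:
            shutil.copyfileobj(src, out)  # plain byte-wise concatenation; nothing else is needed

print(f"wrote {target} from {len(parts)} parts")
```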
EPFL-VILAB/4M_tokenizers_human-poses_1k_8
EPFL-VILAB
2024-06-14T08:22:47Z
352
1
ml-4m
[ "ml-4m", "safetensors", "arxiv:2312.06647", "arxiv:2406.09406", "license:other", "region:us" ]
null
2024-06-12T08:48:41Z
--- license: other license_name: sample-code-license license_link: LICENSE library_name: ml-4m --- # 4M: Massively Multimodal Masked Modeling *A framework for training any-to-any multimodal foundation models. <br>Scalable. Open-sourced. Across tens of modalities and tasks.* [`Website`](https://4m.epfl.ch) | [`GitHub`](https://github.com/apple/ml-4m) | [`BibTeX`](#citation) Official implementation and pre-trained models for : [**4M: Massively Multimodal Masked Modeling**](https://arxiv.org/abs/2312.06647), NeurIPS 2023 (Spotlight) <br> *[David Mizrahi](https://dmizrahi.com/)\*, [Roman Bachmann](https://roman-bachmann.github.io/)\*, [Oğuzhan Fatih Kar](https://ofkar.github.io/), [Teresa Yeo](https://aserety.github.io/), [Mingfei Gao](https://fly6464.github.io/), [Afshin Dehghan](https://www.afshindehghan.com/), [Amir Zamir](https://vilab.epfl.ch/zamir/)* [**4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities**](https://arxiv.org/abs/2406.09406), arXiv 2024 <br> *[Roman Bachmann](https://roman-bachmann.github.io/)\*, [Oğuzhan Fatih Kar](https://ofkar.github.io/)\*, [David Mizrahi](https://dmizrahi.com/)\*, [Ali Garjani](https://garjania.github.io/), [Mingfei Gao](https://fly6464.github.io/), [David Griffiths](https://www.dgriffiths.uk/), [Jiaming Hu](https://scholar.google.com/citations?user=vm3imKsAAAAJ&hl=en), [Afshin Dehghan](https://www.afshindehghan.com/), [Amir Zamir](https://vilab.epfl.ch/zamir/)* 4M is a framework for training "any-to-any" foundation models, using tokenization and masking to scale to many diverse modalities. Models trained using 4M can perform a wide range of vision tasks, transfer well to unseen tasks and modalities, and are flexible and steerable multimodal generative models. We are releasing code and models for "4M: Massively Multimodal Masked Modeling" (here denoted 4M-7), as well as "4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities" (here denoted 4M-21). ## Installation For install instructions, please see https://github.com/apple/ml-4m. ## Usage The human pose tokenizer can be loaded from Hugging Face Hub as follows: ```python from fourm.vq.vqvae import VQVAE tok_human_poses = VQVAE.from_pretrained('EPFL-VILAB/4M_tokenizers_human-poses_1k_8') ``` Please see https://github.com/apple/ml-4m/blob/main/README_TOKENIZATION.md for more detailed instructions and https://github.com/apple/ml-4m for other tokenizer and 4M model checkpoints. ## Citation If you find this repository helpful, please consider citing our work: ``` @inproceedings{4m, title={{4M}: Massively Multimodal Masked Modeling}, author={David Mizrahi and Roman Bachmann and O{\u{g}}uzhan Fatih Kar and Teresa Yeo and Mingfei Gao and Afshin Dehghan and Amir Zamir}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, } @article{4m21, title={{4M-21}: An Any-to-Any Vision Model for Tens of Tasks and Modalities}, author={Roman Bachmann and O{\u{g}}uzhan Fatih Kar and David Mizrahi and Ali Garjani and Mingfei Gao and David Griffiths and Jiaming Hu and Afshin Dehghan and Amir Zamir}, journal={arXiv 2024}, year={2024}, } ``` ## License The model weights in this repository are released under the Sample Code license as found in the [LICENSE](LICENSE) file.
akreal/tiny-random-gpt2
akreal
2021-08-18T15:07:44Z
351
0
transformers
[ "transformers", "pytorch", "tf", "gpt2", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05Z
This is a copy of https://huggingface.co/hf-internal-testing/tiny-random-gpt2. The only change: `pytorch_model.bin` is saved in the old serialization format.
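Since the checkpoint is mainly useful for tests, here is a minimal loading sketch; it assumes the copied repository includes the tokenizer files, as the original does, and the generated text is meaningless by design because the weights are random.

```python
# Load the tiny random GPT-2 checkpoint, e.g. as a fast stand-in in unit tests of a generation pipeline.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("akreal/tiny-random-gpt2")
model = AutoModelForCausalLM.from_pretrained("akreal/tiny-random-gpt2")

inputs = tokenizer("hello world", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))
```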
gigant/romanian-wav2vec2
gigant
2023-09-13T11:45:00Z
351
2
transformers
[ "transformers", "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "hf-asr-leaderboard", "robust-speech-event", "ro", "dataset:mozilla-foundation/common_voice_8_0", "dataset:gigant/romanian_speech_synthesis_0_8_1", "base_model:facebook/wav2vec2-xls-r-300m", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - ro license: apache-2.0 tags: - automatic-speech-recognition - hf-asr-leaderboard - robust-speech-event datasets: - mozilla-foundation/common_voice_8_0 - gigant/romanian_speech_synthesis_0_8_1 base_model: facebook/wav2vec2-xls-r-300m model-index: - name: wav2vec2-ro-300m_01 results: - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: Robust Speech Event type: speech-recognition-community-v2/dev_data args: ro metrics: - type: wer value: 46.99 name: Dev WER (without LM) - type: cer value: 16.04 name: Dev CER (without LM) - type: wer value: 38.63 name: Dev WER (with LM) - type: cer value: 14.52 name: Dev CER (with LM) - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: Common Voice type: mozilla-foundation/common_voice_8_0 args: ro metrics: - type: wer value: 11.73 name: Test WER (without LM) - type: cer value: 2.93 name: Test CER (without LM) - type: wer value: 7.31 name: Test WER (with LM) - type: cer value: 2.17 name: Test CER (with LM) - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: Robust Speech Event - Test Data type: speech-recognition-community-v2/eval_data args: ro metrics: - type: wer value: 43.23 name: Test WER --- You can test this model online with the [**Space for Romanian Speech Recognition**](https://huggingface.co/spaces/gigant/romanian-speech-recognition) The model ranked **TOP-1** on Romanian Speech Recognition during HuggingFace's Robust Speech Challenge : * [**The 🤗 Speech Bench**](https://huggingface.co/spaces/huggingface/hf-speech-bench) * [**Speech Challenge Leaderboard**](https://huggingface.co/spaces/speech-recognition-community-v2/FinalLeaderboard) # Romanian Wav2Vec2 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the [Common Voice 8.0 - Romanian subset](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0) dataset, with extra training data from [Romanian Speech Synthesis](https://huggingface.co/datasets/gigant/romanian_speech_synthesis_0_8_1) dataset. Without the 5-gram Language Model optimization, it achieves the following results on the evaluation set (Common Voice 8.0, Romanian subset, test split): - Loss: 0.1553 - Wer: 0.1174 - Cer: 0.0294 ## Model description The architecture is based on [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) with a speech recognition CTC head and an added 5-gram language model (using [pyctcdecode](https://github.com/kensho-technologies/pyctcdecode) and [kenlm](https://github.com/kpu/kenlm)) trained on the [Romanian Corpora Parliament](gigant/ro_corpora_parliament_processed) dataset. Those libraries are needed in order for the language model-boosted decoder to work. ## Intended uses & limitations The model is made for speech recognition in Romanian from audio clips sampled at **16kHz**. The predicted text is lowercased and does not contain any punctuation. ## How to use Make sure you have installed the correct dependencies for the language model-boosted version to work. 
You can just run this command to install the `kenlm` and `pyctcdecode` libraries : ```pip install https://github.com/kpu/kenlm/archive/master.zip pyctcdecode``` With the framework `transformers` you can load the model with the following code : ``` from transformers import AutoProcessor, AutoModelForCTC processor = AutoProcessor.from_pretrained("gigant/romanian-wav2vec2") model = AutoModelForCTC.from_pretrained("gigant/romanian-wav2vec2") ``` Or, if you want to test the model, you can load the automatic speech recognition pipeline from `transformers` with : ``` from transformers import pipeline asr = pipeline("automatic-speech-recognition", model="gigant/romanian-wav2vec2") ``` ## Example use with the `datasets` library First, you need to load your data We will use the [Romanian Speech Synthesis](https://huggingface.co/datasets/gigant/romanian_speech_synthesis_0_8_1) dataset in this example. ``` from datasets import load_dataset dataset = load_dataset("gigant/romanian_speech_synthesis_0_8_1") ``` You can listen to the samples with the `IPython.display` library : ``` from IPython.display import Audio i = 0 sample = dataset["train"][i] Audio(sample["audio"]["array"], rate = sample["audio"]["sampling_rate"]) ``` The model is trained to work with audio sampled at 16kHz, so if the sampling rate of the audio in the dataset is different, we will have to resample it. In the example, the audio is sampled at 48kHz. We can see this by checking `dataset["train"][0]["audio"]["sampling_rate"]` The following code resample the audio using the `torchaudio` library : ``` import torchaudio import torch i = 0 audio = sample["audio"]["array"] rate = sample["audio"]["sampling_rate"] resampler = torchaudio.transforms.Resample(rate, 16_000) audio_16 = resampler(torch.Tensor(audio)).numpy() ``` To listen to the resampled sample : ``` Audio(audio_16, rate=16000) ``` Know you can get the model prediction by running ``` predicted_text = asr(audio_16) ground_truth = dataset["train"][i]["sentence"] print(f"Predicted text : {predicted_text}") print(f"Ground truth : {ground_truth}") ``` ## Training and evaluation data Training data : - [Common Voice 8.0 - Romanian subset](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0) : train + validation + other splits - [Romanian Speech Synthesis](https://huggingface.co/datasets/gigant/romanian_speech_synthesis_0_8_1) : train + test splits Evaluation data : - [Common Voice 8.0 - Romanian subset](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0) : test split ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 3 - total_train_batch_size: 48 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 50.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:| | 2.9272 | 0.78 | 500 | 0.7603 | 0.7734 | 0.2355 | | 0.6157 | 1.55 | 1000 | 0.4003 | 0.4866 | 0.1247 | | 0.4452 | 2.33 | 1500 | 0.2960 | 0.3689 | 0.0910 | | 0.3631 | 3.11 | 2000 | 0.2580 | 0.3205 | 0.0796 | | 0.3153 | 3.88 | 2500 | 0.2465 | 0.2977 | 0.0747 | | 0.2795 | 4.66 | 3000 | 0.2274 | 0.2789 | 0.0694 | | 0.2615 | 5.43 | 3500 | 0.2277 | 0.2685 | 0.0675 | | 0.2389 | 6.21 | 4000 | 0.2135 | 0.2518 | 0.0627 | | 0.2229 | 
6.99 | 4500 | 0.2054 | 0.2449 | 0.0614 | | 0.2067 | 7.76 | 5000 | 0.2096 | 0.2378 | 0.0597 | | 0.1977 | 8.54 | 5500 | 0.2042 | 0.2387 | 0.0600 | | 0.1896 | 9.32 | 6000 | 0.2110 | 0.2383 | 0.0595 | | 0.1801 | 10.09 | 6500 | 0.1909 | 0.2165 | 0.0548 | | 0.174 | 10.87 | 7000 | 0.1883 | 0.2206 | 0.0559 | | 0.1685 | 11.65 | 7500 | 0.1848 | 0.2097 | 0.0528 | | 0.1591 | 12.42 | 8000 | 0.1851 | 0.2039 | 0.0514 | | 0.1537 | 13.2 | 8500 | 0.1881 | 0.2065 | 0.0518 | | 0.1504 | 13.97 | 9000 | 0.1840 | 0.1972 | 0.0499 | | 0.145 | 14.75 | 9500 | 0.1845 | 0.2029 | 0.0517 | | 0.1417 | 15.53 | 10000 | 0.1884 | 0.2003 | 0.0507 | | 0.1364 | 16.3 | 10500 | 0.2010 | 0.2037 | 0.0517 | | 0.1331 | 17.08 | 11000 | 0.1838 | 0.1923 | 0.0483 | | 0.129 | 17.86 | 11500 | 0.1818 | 0.1922 | 0.0489 | | 0.1198 | 18.63 | 12000 | 0.1760 | 0.1861 | 0.0465 | | 0.1203 | 19.41 | 12500 | 0.1686 | 0.1839 | 0.0465 | | 0.1225 | 20.19 | 13000 | 0.1828 | 0.1920 | 0.0479 | | 0.1145 | 20.96 | 13500 | 0.1673 | 0.1784 | 0.0446 | | 0.1053 | 21.74 | 14000 | 0.1802 | 0.1810 | 0.0456 | | 0.1071 | 22.51 | 14500 | 0.1769 | 0.1775 | 0.0444 | | 0.1053 | 23.29 | 15000 | 0.1920 | 0.1783 | 0.0457 | | 0.1024 | 24.07 | 15500 | 0.1904 | 0.1775 | 0.0446 | | 0.0987 | 24.84 | 16000 | 0.1793 | 0.1762 | 0.0446 | | 0.0949 | 25.62 | 16500 | 0.1801 | 0.1766 | 0.0443 | | 0.0942 | 26.4 | 17000 | 0.1731 | 0.1659 | 0.0423 | | 0.0906 | 27.17 | 17500 | 0.1776 | 0.1698 | 0.0424 | | 0.0861 | 27.95 | 18000 | 0.1716 | 0.1600 | 0.0406 | | 0.0851 | 28.73 | 18500 | 0.1662 | 0.1630 | 0.0410 | | 0.0844 | 29.5 | 19000 | 0.1671 | 0.1572 | 0.0393 | | 0.0792 | 30.28 | 19500 | 0.1768 | 0.1599 | 0.0407 | | 0.0798 | 31.06 | 20000 | 0.1732 | 0.1558 | 0.0394 | | 0.0779 | 31.83 | 20500 | 0.1694 | 0.1544 | 0.0388 | | 0.0718 | 32.61 | 21000 | 0.1709 | 0.1578 | 0.0399 | | 0.0732 | 33.38 | 21500 | 0.1697 | 0.1523 | 0.0391 | | 0.0708 | 34.16 | 22000 | 0.1616 | 0.1474 | 0.0375 | | 0.0678 | 34.94 | 22500 | 0.1698 | 0.1474 | 0.0375 | | 0.0642 | 35.71 | 23000 | 0.1681 | 0.1459 | 0.0369 | | 0.0661 | 36.49 | 23500 | 0.1612 | 0.1411 | 0.0357 | | 0.0629 | 37.27 | 24000 | 0.1662 | 0.1414 | 0.0355 | | 0.0587 | 38.04 | 24500 | 0.1659 | 0.1408 | 0.0351 | | 0.0581 | 38.82 | 25000 | 0.1612 | 0.1382 | 0.0352 | | 0.0556 | 39.6 | 25500 | 0.1647 | 0.1376 | 0.0345 | | 0.0543 | 40.37 | 26000 | 0.1658 | 0.1335 | 0.0337 | | 0.052 | 41.15 | 26500 | 0.1716 | 0.1369 | 0.0343 | | 0.0513 | 41.92 | 27000 | 0.1600 | 0.1317 | 0.0330 | | 0.0491 | 42.7 | 27500 | 0.1671 | 0.1311 | 0.0328 | | 0.0463 | 43.48 | 28000 | 0.1613 | 0.1289 | 0.0324 | | 0.0468 | 44.25 | 28500 | 0.1599 | 0.1260 | 0.0315 | | 0.0435 | 45.03 | 29000 | 0.1556 | 0.1232 | 0.0308 | | 0.043 | 45.81 | 29500 | 0.1588 | 0.1240 | 0.0309 | | 0.0421 | 46.58 | 30000 | 0.1567 | 0.1217 | 0.0308 | | 0.04 | 47.36 | 30500 | 0.1533 | 0.1198 | 0.0302 | | 0.0389 | 48.14 | 31000 | 0.1582 | 0.1185 | 0.0297 | | 0.0387 | 48.91 | 31500 | 0.1576 | 0.1187 | 0.0297 | | 0.0376 | 49.69 | 32000 | 0.1560 | 0.1182 | 0.0295 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Tokenizers 0.11.0 - pyctcdecode 0.3.0 - kenlm
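The WER/CER figures reported above can be approximated with the `evaluate` library. The sketch below is only a rough illustration: it assumes `evaluate` and `jiwer` are installed, that you have accepted the gated Common Voice dataset's terms and are logged in to the Hub, and the scores will only match the tables after the same text normalisation (lowercasing, punctuation removal) used during training.

```python
# Rough evaluation sketch: compare pipeline transcriptions against references with WER / CER.
import evaluate
from datasets import load_dataset
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="gigant/romanian-wav2vec2")
wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

ds = load_dataset("mozilla-foundation/common_voice_8_0", "ro", split="test", streaming=True)
samples = list(ds.take(16))  # a tiny subset, purely for illustration

predictions = [
    asr({"array": s["audio"]["array"], "sampling_rate": s["audio"]["sampling_rate"]})["text"]
    for s in samples
]
references = [s["sentence"].lower() for s in samples]  # the model outputs lowercase text without punctuation

print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))
```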
microsoft/unispeech-sat-large
microsoft
2021-12-14T19:17:12Z
351
1
transformers
[ "transformers", "pytorch", "unispeech-sat", "pretraining", "speech", "en", "arxiv:1912.07875", "arxiv:2106.06909", "arxiv:2101.00390", "arxiv:2110.05752", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: - en datasets: tags: - speech --- # UniSpeech-SAT-Large [Microsoft's UniSpeech](https://www.microsoft.com/en-us/research/publication/unispeech-unified-speech-representation-learning-with-labeled-and-unlabeled-data/) The large model pretrained on 16kHz sampled speech audio with utterance and speaker contrastive loss. When using the model, make sure that your speech input is also sampled at 16kHz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more in-detail explanation of how to fine-tune the model. The model was pre-trained on: - 60,000 hours of [Libri-Light](https://arxiv.org/abs/1912.07875) - 10,000 hours of [GigaSpeech](https://arxiv.org/abs/2106.06909) - 24,000 hours of [VoxPopuli](https://arxiv.org/abs/2101.00390) [Paper: UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) Authors: Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu **Abstract** *Self-supervised learning (SSL) is a long-standing goal for speech processing, since it utilizes large-scale unlabeled data and avoids extensive human labeling. Recent years witness great successes in applying self-supervised learning in speech recognition, while limited exploration was attempted in applying SSL for modeling speaker characteristics. In this paper, we aim to improve the existing SSL framework for speaker representation learning. Two methods are introduced for enhancing the unsupervised speaker information extraction. First, we apply the multi-task learning to the current SSL framework, where we integrate the utterance-wise contrastive loss with the SSL objective function. Second, for better speaker discrimination, we propose an utterance mixing strategy for data augmentation, where additional overlapped utterances are created unsupervisely and incorporate during training. We integrate the proposed methods into the HuBERT framework. Experiment results on SUPERB benchmark show that the proposed system achieves state-of-the-art performance in universal representation learning, especially for speaker identification oriented tasks. An ablation study is performed verifying the efficacy of each proposed method. Finally, we scale up training dataset to 94 thousand hours public audio data and achieve further performance improvement in all SUPERB tasks..* The original model can be found under https://github.com/microsoft/UniSpeech/tree/main/UniSpeech-SAT. # Usage This is an English pre-trained speech model that has to be fine-tuned on a downstream task like speech recognition or audio classification before it can be used in inference. The model was pre-trained in English and should therefore perform well only in English. The model has been shown to work well on task such as speaker verification, speaker identification, and speaker diarization. **Note**: The model was pre-trained on phonemes rather than characters. This means that one should make sure that the input text is converted to a sequence of phonemes before fine-tuning. 
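As a minimal illustration of what the pre-trained encoder produces before any fine-tuning, the sketch below extracts frame-level hidden states from a 16 kHz waveform (here a random placeholder; use a real recording in practice). If the checkpoint does not ship a preprocessor config, instantiate `Wav2Vec2FeatureExtractor()` with its defaults instead.

```python
# Extract frame-level speech representations with the pre-trained UniSpeech-SAT encoder.
import torch
from transformers import UniSpeechSatModel, Wav2Vec2FeatureExtractor

model_id = "microsoft/unispeech-sat-large"
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_id)
model = UniSpeechSatModel.from_pretrained(model_id)

waveform = torch.randn(16000)  # placeholder: 1 second of 16 kHz audio
inputs = feature_extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # shape: (batch, frames, hidden_size)
print(hidden_states.shape)
```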
## Speech Recognition

To fine-tune the model for speech recognition, see [the official speech recognition example](https://github.com/huggingface/transformers/tree/master/examples/pytorch/speech-recognition).

## Speech Classification

To fine-tune the model for speech classification, see [the official audio classification example](https://github.com/huggingface/transformers/tree/master/examples/pytorch/audio-classification).

## Speaker Verification

TODO

## Speaker Diarization

TODO

# Contribution

The model was contributed by [cywang](https://huggingface.co/cywang) and [patrickvonplaten](https://huggingface.co/patrickvonplaten).

# License

The official license can be found [here](https://github.com/microsoft/UniSpeech/blob/main/LICENSE)

![design](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/UniSpeechSAT.png)
MariaK/layoutlmv2-base-uncased_finetuned_docvqa_v2
MariaK
2023-02-09T15:42:05Z
351
2
transformers
[ "transformers", "pytorch", "tensorboard", "layoutlmv2", "document-question-answering", "generated_from_trainer", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us" ]
document-question-answering
2023-02-09T15:31:45Z
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2-base-uncased_finetuned_docvqa_v2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# layoutlmv2-base-uncased_finetuned_docvqa_v2

This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

### Framework versions

- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
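For readers who want to approximate this run, the hyperparameters listed under "Training hyperparameters" above map roughly onto the following `TrainingArguments`. This is an illustrative reconstruction only; the `output_dir` is a placeholder and the author's actual training script is not published:

```python
from transformers import TrainingArguments

# Illustrative reconstruction of the listed hyperparameters; optimizer betas,
# epsilon and the linear scheduler match the transformers defaults used above.
training_args = TrainingArguments(
    output_dir="layoutlmv2-base-uncased_finetuned_docvqa_v2",  # placeholder
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```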
AdamG012/chat-opt-350m-reward-deepspeed
AdamG012
2023-04-25T04:42:33Z
351
7
transformers
[ "transformers", "pytorch", "opt", "text-generation", "deepspeed", "chatgpt", "reward-model", "en", "dataset:Dahoas/full-hh-rlhf", "dataset:Dahoas/synthetic-instruct-gptj-pairwise", "dataset:yitingxie/rlhf-reward-datasets", "dataset:openai/webgpt_comparisons", "dataset:stanfordnlp/SHP", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-04-15T00:03:03Z
---
language:
- en
tags:
- deepspeed
- chatgpt
- pytorch
- opt
- reward-model
license: apache-2.0
datasets:
- Dahoas/full-hh-rlhf
- Dahoas/synthetic-instruct-gptj-pairwise
- yitingxie/rlhf-reward-datasets
- openai/webgpt_comparisons
- stanfordnlp/SHP
---

# ChatGPT OPT 350M DeepSpeed Reward Model

*chat-opt-350m-reward-deepspeed*

This model is the second step of a modified version of the traditional training pipeline for ChatGPT-style models, which comprises a three-step procedure: [supervised fine tuning](https://huggingface.co/AdamG012/chat-opt-1.3b-sft-deepspeed), the **reward model**, and reinforcement learning from human feedback, which produces the [actor](https://huggingface.co/AdamG012/chat-opt-1.3b-rlhf-actor-deepspeed), [actor EMA](https://huggingface.co/AdamG012/chat-opt-1.3b-rlhf-actor-ema-deepspeed) and [critic](https://huggingface.co/AdamG012/chat-opt-1.3b-rlhf-critic-deepspeed) models.

This project's main goal was to make proper use of existing frameworks that minimise training costs and thus improve both the feasibility and usability of ChatGPT-like models. The framework selected here is DeepSpeed, which has been instrumental in the development of this model; through this framework it was possible to train the ChatGPT-like model on much larger data-sets with a reasonable number of GPUs and consequently achieve significantly better performance. This model follows the ChatGPT blog post and the InstructGPT paper, and especially the [Microsoft DeepSpeed Chat Blog](https://github.com/microsoft/DeepSpeedExamples/tree/master/applications/DeepSpeed-Chat).

## Our Training Methodology and Speedup Recipes

The training process simply involves a single Python run of DeepSpeed-Chat, which initiates the whole 3-step pipeline and saves all models in the process:

```bash
python train.py --actor-model facebook/opt-1.3b --reward-model facebook/opt-350m --deployment-type single_node
```

This pipeline can be broken up into three key steps:

1. **Supervised fine-tuning (SFT):** See [here](https://huggingface.co/AdamG012/chat-opt-1.3b-sft-deepspeed/).

2. **Reward Model (RM) fine-tuning:** In parallel with, or after, the supervised training, the RM fine-tuning step takes a pre-trained model (or the model trained in step 1, if you choose to do so) and fine-tunes it with a small learning rate on a data-set of comparisons (accepted and rejected responses); a small sketch of this pairwise objective is given below.

3. **Reinforcement-learning from Human Feedback (RLHF) fine-tuning:** Once the prior two steps are complete, the final RLHF fine-tuning can be initiated. This takes both the *fine-tuned model* from step 1 and the *reward model* from step 2 and trains them on the comparison data-set. This generates both an [actor](https://huggingface.co/AdamG012/chat-opt-1.3b-rlhf-actor-deepspeed) and a [critic](https://huggingface.co/AdamG012/chat-opt-1.3b-rlhf-critic-deepspeed). I also generate an [actor model with an exponential moving average (EMA)](https://huggingface.co/AdamG012/chat-opt-1.3b-rlhf-actor-ema-deepspeed), which is known to improve conversational response quality.

To view the details behind each step, head into the respective links and view the model card there.
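To make step 2 concrete: a reward model of this kind is typically trained with a pairwise ranking loss, so that the accepted response receives a higher scalar score than the rejected one. The following is a minimal, illustrative sketch of that objective with dummy scores, not the exact DeepSpeed-Chat implementation:

```python
import torch
import torch.nn.functional as F

# Dummy scalar rewards for a batch of 4 comparison pairs. In practice these
# come from the reward model's value head applied to (prompt + accepted) and
# (prompt + rejected) sequences.
chosen_rewards = torch.tensor([1.2, 0.3, 0.8, 2.0])
rejected_rewards = torch.tensor([0.4, 0.5, -0.1, 1.1])

# Pairwise ranking loss: push the accepted score above the rejected score.
loss = -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
print(loss.item())
```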
## Reward Model Configurations

**Model Configurations:**

| Parameter | Value |
|:-----------------------|:------|
| Parameters | 350M |
| Model type | OPT |
| FFN Dimensions | 4096 |
| Hidden Size | 1024 |
| Max Position Embedding | 2048 |
| Attention Heads | 16 |
| Hidden layers | 24 |

**Training Configurations:**

| Parameter | Value |
|:-----------------------|:------|
| Train Batch size | 64 |
| Train micro batch size | 8 |
| ZeRO stage | 0 |
| FP16 | True |
| Gradient clipping | 1.0 |
| Dropout | 0.1 |
| Prescale gradients | True |

## Installation

If using through the HuggingFace transformers library:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("AdamG012/chat-opt-350m-reward-deepspeed")

model = AutoModelForCausalLM.from_pretrained("AdamG012/chat-opt-350m-reward-deepspeed")
```

If you would like to clone from source:

```bash
# Make sure you have git-lfs installed (https://git-lfs.github.com)
git lfs install
git clone https://huggingface.co/AdamG012/chat-opt-350m-reward-deepspeed

# if you want to clone without large files – just their pointers
# prepend your git clone with the following env var:
GIT_LFS_SKIP_SMUDGE=1
```

## **Acknowledgements**

We thank the following papers and open-source repositories. We especially thank DeepSpeed for their frameworks as well.

* [1] Schulman, John, et al. "Introducing ChatGPT", https://openai.com/blog/chatgpt (2022).
* [2] Transformers [Hugging Face (github.com)](https://github.com/huggingface)
* [3] DeepSpeed Chat [DeepSpeed Chat](https://github.com/microsoft/DeepSpeedExamples/tree/master/applications/DeepSpeed-Chat)
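As a rough guide only, the training configuration in the table above corresponds to a DeepSpeed config along these lines. This is an assumed sketch for illustration, not the author's actual file; the key names follow the standard DeepSpeed configuration schema:

```python
# Hypothetical DeepSpeed config mirroring the training table above.
# Typically saved as ds_config.json and referenced via --deepspeed_config.
ds_config = {
    "train_batch_size": 64,
    "train_micro_batch_size_per_gpu": 8,
    "zero_optimization": {"stage": 0},
    "fp16": {"enabled": True},
    "gradient_clipping": 1.0,
    "prescale_gradients": True,
}
```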
TheBloke/Falcon-7B-Instruct-GPTQ
TheBloke
2023-08-21T11:21:22Z
351
64
transformers
[ "transformers", "safetensors", "RefinedWebModel", "text-generation", "custom_code", "en", "dataset:tiiuae/falcon-refinedweb", "arxiv:2205.14135", "arxiv:1911.02150", "arxiv:2005.14165", "arxiv:2104.09864", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
2023-05-27T09:03:00Z
---
datasets:
- tiiuae/falcon-refinedweb
license: apache-2.0
language:
- en
inference: false
---

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Falcon-7B-Instruct GPTQ

This repo contains an experimental GPTQ 4bit model for [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct).

It is the result of quantising to 4bit using [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ).

## PERFORMANCE

Please note that performance with this GPTQ is currently very slow with AutoGPTQ. It may perform better with the latest GPTQ-for-LLaMa code, but I haven't tested that personally yet.

## Prompt template

```
A helpful assistant who helps the user with any questions asked.
User: prompt
Assistant:
```

## AutoGPTQ

AutoGPTQ is required: `GITHUB_ACTIONS=true pip install auto-gptq`

AutoGPTQ provides pre-compiled wheels for Windows and Linux, with CUDA toolkit 11.7 or 11.8.

If you are running CUDA toolkit 12.x, you will need to compile your own by following these instructions:

```
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
pip install .
```

These manual steps will require that you have the [Nvidia CUDA toolkit](https://developer.nvidia.com/cuda-12-0-1-download-archive) installed.

## How to download and use this model in text-generation-webui

1. Launch text-generation-webui
2. Click the **Model tab**.
3. Untick **Autoload model**
4. Under **Download custom model or LoRA**, enter `TheBloke/falcon-7B-instruct-GPTQ`.
5. Click **Download**.
6. Wait until it says it's finished downloading.
7. Click the **Refresh** icon next to **Model** in the top left.
8. In the **Model drop-down**: choose the model you just downloaded, `falcon-7B-instruct-GPTQ`.
9. Set **Loader** to **AutoGPTQ**. This model will not work with ExLlama. It might work with recent GPTQ-for-LLaMa but I haven't tested that.
10. Tick **Trust Remote Code**, followed by **Save Settings**
11. Click **Reload**.
12. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!

## About `trust_remote_code`

Please be aware that this command line argument causes Python code provided by Falcon to be executed on your machine.

This code is required at the moment because Falcon is too new to be supported by Hugging Face transformers. At some point in the future transformers will support the model natively, and then `trust_remote_code` will no longer be needed.

In this repo you can see two `.py` files - these are the files that get executed.
They are copied from the base repo at [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct).

## Simple Python example code

To run this code you need to install AutoGPTQ and einops:

```
GITHUB_ACTIONS=true pip install auto-gptq
pip install einops
```

You can then run this example code:

```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
import argparse

model_name_or_path = "TheBloke/falcon-7b-instruct-GPTQ"
# You could also download the model locally, and access it there
# model_name_or_path = "/path/to/TheBloke_falcon-7b-instruct-GPTQ"

model_basename = "model"

use_triton = False

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=True,
        device="cuda:0",
        use_triton=use_triton,
        quantize_config=None)

prompt = "Tell me about AI"
prompt_template=f'''A helpful assistant who helps the user with any questions asked.
User: {prompt}
Assistant:'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline
# Note that if you use pipeline, you will see a spurious error message saying the model type is not supported
# This can be ignored! Or you can hide it with the following logging line:

# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15
)

print(pipe(prompt_template)[0]['generated_text'])
```

## Provided files

**gptq_model-4bit-64g.safetensors**

This will work with AutoGPTQ 0.2.0 and later.

It was created with groupsize 64 to give higher inference quality, and without `desc_act` (act-order) to increase inference speed.

* `gptq_model-4bit-64g.safetensors`
  * Works with AutoGPTQ CUDA 0.2.0 and later.
  * At this time it does not work with AutoGPTQ Triton, but support will hopefully be added in time.
  * Works with text-generation-webui using `--trust-remote-code`
  * Does not work with any version of GPTQ-for-LLaMa
  * Parameters: Groupsize = 64. No act-order.

<!-- footer start -->
<!-- 200823 -->

## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute.

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# ✨ Original model card: Falcon-7B-Instruct

**Falcon-7B-Instruct is a 7B parameters causal decoder-only model built by [TII](https://www.tii.ae) based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and finetuned on a mixture of chat/instruct datasets. It is made available under the [TII Falcon LLM License](https://huggingface.co/tiiuae/falcon-7b-instruct/blob/main/LICENSE.txt).**

*Paper coming soon 😊.*

## Why use Falcon-7B-Instruct?

* **You are looking for a ready-to-use chat/instruct model based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).**
* **Falcon-7B is a strong base model, outperforming comparable open-source models** (e.g., [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1) etc.), thanks to being trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
* **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).

💬 **This is an instruct model, which may not be ideal for further finetuning.** If you are interested in building your own instruct/chat model, we recommend starting from [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).

🔥 **Looking for an even more powerful model?** [Falcon-40B-Instruct](https://huggingface.co/tiiuae/falcon-40b-instruct) is Falcon-7B-Instruct's big brother!
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-7b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**

# Model Card for Falcon-7B-Instruct

## Model Details

### Model Description

- **Developed by:** [https://www.tii.ae](https://www.tii.ae);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English and French;
- **License:** [TII Falcon LLM License](https://huggingface.co/tiiuae/falcon-7b-instruct/blob/main/LICENSE.txt);
- **Finetuned from model:** [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).

### Model Source

- **Paper:** *coming soon*.

## Uses

### Direct Use

Falcon-7B-Instruct has been finetuned on a mixture of instruct and chat datasets.

### Out-of-Scope Use

Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.

## Bias, Risks, and Limitations

Falcon-7B-Instruct is mostly trained on English data, and will not generalize appropriately to other languages. Furthermore, as it is trained on a large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.

### Recommendations

We recommend users of Falcon-7B-Instruct to develop guardrails and to take appropriate precautions for any production use.

## How to Get Started with the Model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-7b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

## Training Details

### Training Data

Falcon-7B-Instruct was finetuned on a 250M tokens mixture of instruct/chat datasets.
| **Data source** | **Fraction** | **Tokens** | **Description** |
|--------------------|--------------|------------|-----------------------------------|
| [Baize](https://github.com/project-baize/baize-chatbot) | 65% | 164M | chat |
| [GPT4All](https://github.com/nomic-ai/gpt4all) | 25% | 62M | instruct |
| [GPTeacher](https://github.com/teknium1/GPTeacher) | 5% | 11M | instruct |
| [RefinedWeb-English](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | 5% | 13M | massive web crawl |

The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) tokenizer.

## Evaluation

*Paper coming soon.*

See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results.

Note that this model variant is not optimized for NLP benchmarks.

## Technical Specifications

For more information about pretraining, see [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).

### Model Architecture and Objective

Falcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).

The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:

* **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
* **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135));
* **Decoder-block:** parallel attention/MLP with a single layer norm.

| **Hyperparameter** | **Value** | **Comment** |
|--------------------|-----------|----------------------------------------|
| Layers | 32 | |
| `d_model` | 4544 | Increased to compensate for multiquery |
| `head_dim` | 64 | Reduced to optimise for FlashAttention |
| Vocabulary | 65024 | |
| Sequence length | 2048 | |

### Compute Infrastructure

#### Hardware

Falcon-7B-Instruct was trained on AWS SageMaker, on 32 A100 40GB GPUs in P4d instances.

#### Software

Falcon-7B-Instruct was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.)

## Citation

*Paper coming soon 😊.*

## License

Falcon-7B-Instruct is made available under the [TII Falcon LLM License](https://huggingface.co/tiiuae/falcon-7b-instruct/blob/main/LICENSE.txt). Broadly speaking,

* You can freely use our models for research and/or personal purposes;
* You are allowed to share and build derivatives of these models, but you are required to give attribution and to share-alike with the same license;
* For commercial use, you are exempt from royalty payments if the attributable revenues are below $1M/year; otherwise you should enter into a commercial agreement with TII.

## Contact

falconllm@tii.ae
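To make the decoder-block note in the "Model Architecture and Objective" section above concrete, here is a minimal PyTorch sketch of a parallel attention/MLP block with a single shared layer norm. It is illustrative only: it uses standard multi-head attention in place of Falcon's multiquery attention, omits rotary embeddings, FlashAttention and the causal mask, and the default dimensions are small placeholders rather than the real Falcon-7B sizes (d_model 4544, head_dim 64, i.e. 71 heads):

```python
import torch
import torch.nn as nn


class ParallelDecoderBlock(nn.Module):
    """Illustrative parallel attention/MLP block with one shared layer norm."""

    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        self.ln = nn.LayerNorm(d_model)
        # Stand-in for multiquery attention + FlashAttention in the real model.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.ln(x)                      # single shared layer norm
        attn_out, _ = self.attn(h, h, h)    # attention branch
        mlp_out = self.mlp(h)               # MLP branch, computed in parallel
        return x + attn_out + mlp_out       # both branches feed the same residual


# Smoke test on a random sequence of 8 tokens.
block = ParallelDecoderBlock()
print(block(torch.randn(1, 8, 512)).shape)  # torch.Size([1, 8, 512])
```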
PFcoding/medicare-gpt2-test1
PFcoding
2023-05-30T21:51:16Z
351
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "dataset:pubmed-summarization", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-05-30T21:04:10Z
---
license: mit
tags:
- generated_from_trainer
datasets:
- pubmed-summarization
model-index:
- name: medicare-gpt2-test1
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# medicare-gpt2-test1

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the pubmed-summarization dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
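Since the card does not yet include usage instructions, here is a minimal, assumed usage sketch. The checkpoint id is taken from this repo; the prompt and generation settings are only examples:

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint for plain text generation.
generator = pipeline("text-generation", model="PFcoding/medicare-gpt2-test1")

# Example biomedical-style prompt; any text prompt works the same way.
result = generator("The treatment of type 2 diabetes", max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])
```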
kazzand/ru-longformer-tiny-16384
kazzand
2023-12-29T02:04:31Z
351
15
transformers
[ "transformers", "pytorch", "longformer", "fill-mask", "ru", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-12T12:07:43Z
---
language:
- ru
- en
---

This is a tiny Longformer model designed for the Russian language. It was initialized from [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) weights and has been modified to support a context length of up to 16384 tokens. We fine-tuned it on a dataset of Russian books, news, wiki and habr; however, it still understands English, thanks to the source model. For detailed information check out our [post](https://habr.com/ru/companies/ru_mts/articles/761116/) on Habr.

Model attributes:

- 12 attention heads
- 3 hidden layers
- 16384 tokens length of context

The model can be used as-is to produce text embeddings or it can be further fine-tuned for a specific downstream task.

Text embeddings can be produced as follows:

```python
# pip install transformers sentencepiece
import torch
from transformers import LongformerModel, LongformerTokenizerFast

model = LongformerModel.from_pretrained('kazzand/ru-longformer-tiny-16384')
tokenizer = LongformerTokenizerFast.from_pretrained('kazzand/ru-longformer-tiny-16384')

def get_cls_embedding(text, model, tokenizer, device='cuda'):
    model.to(device)
    batch = tokenizer(text, return_tensors='pt')

    # set global attention for the CLS token
    global_attention_mask = [
        [1 if token_id == tokenizer.cls_token_id else 0 for token_id in input_ids]
        for input_ids in batch["input_ids"]
    ]

    # add the global attention mask to the batch
    batch["global_attention_mask"] = torch.tensor(global_attention_mask)

    with torch.no_grad():
        output = model(**batch.to(device))

    # return the hidden state of the CLS token, shape (batch, hidden_size)
    return output.last_hidden_state[:, 0, :]
```

P.S. Thanks to [AbstractDL](https://t.me/abstractDL) for the moral and technical support.
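A possible invocation of the helper above, assuming the snippet has already been executed (illustrative only; switch `device` to `'cuda'` on a GPU machine, and note the embedding width equals the model's hidden size):

```python
# Example call to get_cls_embedding defined in the snippet above.
embedding = get_cls_embedding("Очень длинный русский текст ...", model, tokenizer, device='cpu')
print(embedding.shape)  # (1, hidden_size)
```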