modelId (string, 5-122 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC]) | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (string, 245 classes) | tags (list, 1-4.05k items) | pipeline_tag (string, 48 classes) | createdAt (timestamp[us, tz=UTC]) | card (string, 1-901k chars)
---|---|---|---|---|---|---|---|---|---
ProfEngel/spassball | ProfEngel | 2024-06-18T08:22:30Z | 524 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"mistral",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-06-17T12:28:36Z | ---
license: mit
---
|
NikolayKozloff/Morfoz-LLM-8b-v1.0-Q5_0-GGUF | NikolayKozloff | 2024-06-24T14:35:53Z | 524 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"tr",
"base_model:Morfoz-Aigap/Morfoz-LLM-8b-v1.0",
"license:apache-2.0",
"region:us"
]
| null | 2024-06-24T14:35:29Z | ---
base_model: Morfoz-Aigap/Morfoz-LLM-8b-v1.0
language:
- tr
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/Morfoz-LLM-8b-v1.0-Q5_0-GGUF
This model was converted to GGUF format from [`Morfoz-Aigap/Morfoz-LLM-8b-v1.0`](https://huggingface.co/Morfoz-Aigap/Morfoz-LLM-8b-v1.0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Morfoz-Aigap/Morfoz-LLM-8b-v1.0) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Morfoz-LLM-8b-v1.0-Q5_0-GGUF --hf-file morfoz-llm-8b-v1.0-q5_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/Morfoz-LLM-8b-v1.0-Q5_0-GGUF --hf-file morfoz-llm-8b-v1.0-q5_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/Morfoz-LLM-8b-v1.0-Q5_0-GGUF --hf-file morfoz-llm-8b-v1.0-q5_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/Morfoz-LLM-8b-v1.0-Q5_0-GGUF --hf-file morfoz-llm-8b-v1.0-q5_0.gguf -c 2048
```
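If you prefer to stay in Python, a minimal sketch using [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) is shown below; it assumes a recent build that provides `Llama.from_pretrained` (which fetches the file from the Hub via `huggingface_hub`), mirroring the `--hf-repo`/`--hf-file` flags above.
```python
from llama_cpp import Llama

# Download the GGUF file from the Hub (requires huggingface_hub) and load it.
llm = Llama.from_pretrained(
    repo_id="NikolayKozloff/Morfoz-LLM-8b-v1.0-Q5_0-GGUF",
    filename="morfoz-llm-8b-v1.0-q5_0.gguf",
    n_ctx=2048,
)
output = llm("The meaning to life and the universe is", max_tokens=64)
print(output["choices"][0]["text"])
```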
|
Yntec/GoodLife | Yntec | 2023-09-10T01:49:46Z | 523 | 2 | diffusers | [
"diffusers",
"safetensors",
"Anime",
"Children",
"Book Illustrations",
"3D",
"Pixar",
"Clumsy_Trainer",
"DucHaiten",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-09-09T20:04:11Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Anime
- Children
- Book Illustrations
- 3D
- Pixar
- Clumsy_Trainer
- DucHaiten
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
---
# Good Life
DucHaiten-GoldLife merged with the COOLKIDSV2 Lora. The original plan was to make COOLKIDSV2 obsolete, but both models pack their own punch.
Comparison:

(Click for larger)
Sample and prompt:

Cartoon CUTE Pretty LITTLE Girl, sitting on a box of bottles, CHIBI EYES, holding PEPSI bottle, gorgeous detailed hair, Ponytail, Magazine ad, iconic, 1940, sharp focus. Illustration By KlaysMoji and artgerm and Clay Mann and and leyendecker and Dave Rapoza
# DucHaiten GoodLife
(a model not by DucHaiten) - this one sits at the half-way point between the two models; I don't think it looks bad.

Original pages:
KIDS ILLUSTRATIONS V2 - https://civitai.com/models/60724?modelVersionId=67980
DucHaiten-GoldenLife - https://tensor.art/models/628276277415133426
# Recipes
- Merge Lora 1.0
Primary Model:
DucHaiten-GoldenLife
Lora:
COOL KIDS V2
Output:
DucHaiten-GoldLife-COOLKIDSV2
- SuperMerger Weight Sum Train Difference MBW 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1
Model A:
DucHaiten-GoldLife-COOLKIDSV2
Model B:
DucHaiten-GoldenLife
Output:
GoodLife
- Fp16 no ema
Output:
GoodLife-mini
- SuperMerger Weight Sum Train Difference MBW 0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1
Model A:
DucHaiten-GoldLife-COOLKIDSV2
Model B:
DucHaiten-GoldenLife
Output:
DucHaiten-GoodLife |
TheBloke/HornyEchidna-13B-v0.1-GGUF | TheBloke | 2023-10-24T15:55:24Z | 523 | 13 | transformers | [
"transformers",
"gguf",
"llama",
"base_model:NeverSleep/HornyEchidna-13b-v0.1",
"license:cc-by-nc-4.0",
"text-generation-inference",
"region:us"
]
| null | 2023-10-24T15:48:07Z | ---
base_model: NeverSleep/HornyEchidna-13b-v0.1
inference: false
license: cc-by-nc-4.0
model_creator: NeverSleep
model_name: HornyEchidna 13B v0.1
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# HornyEchidna 13B v0.1 - GGUF
- Model creator: [NeverSleep](https://huggingface.co/NeverSleep)
- Original model: [HornyEchidna 13B v0.1](https://huggingface.co/NeverSleep/HornyEchidna-13b-v0.1)
<!-- description start -->
## Description
This repo contains GGUF format model files for [NeverSleep's HornyEchidna 13B v0.1](https://huggingface.co/NeverSleep/HornyEchidna-13b-v0.1).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/HornyEchidna-13B-v0.1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/HornyEchidna-13B-v0.1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/HornyEchidna-13B-v0.1-GGUF)
* [NeverSleep's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NeverSleep/HornyEchidna-13b-v0.1)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [NeverSleep's HornyEchidna 13B v0.1](https://huggingface.co/NeverSleep/HornyEchidna-13b-v0.1).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [hornyechidna-13b-v0.1.Q2_K.gguf](https://huggingface.co/TheBloke/HornyEchidna-13B-v0.1-GGUF/blob/main/hornyechidna-13b-v0.1.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [hornyechidna-13b-v0.1.Q3_K_S.gguf](https://huggingface.co/TheBloke/HornyEchidna-13B-v0.1-GGUF/blob/main/hornyechidna-13b-v0.1.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [hornyechidna-13b-v0.1.Q3_K_M.gguf](https://huggingface.co/TheBloke/HornyEchidna-13B-v0.1-GGUF/blob/main/hornyechidna-13b-v0.1.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [hornyechidna-13b-v0.1.Q3_K_L.gguf](https://huggingface.co/TheBloke/HornyEchidna-13B-v0.1-GGUF/blob/main/hornyechidna-13b-v0.1.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [hornyechidna-13b-v0.1.Q4_0.gguf](https://huggingface.co/TheBloke/HornyEchidna-13B-v0.1-GGUF/blob/main/hornyechidna-13b-v0.1.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [hornyechidna-13b-v0.1.Q4_K_S.gguf](https://huggingface.co/TheBloke/HornyEchidna-13B-v0.1-GGUF/blob/main/hornyechidna-13b-v0.1.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [hornyechidna-13b-v0.1.Q4_K_M.gguf](https://huggingface.co/TheBloke/HornyEchidna-13B-v0.1-GGUF/blob/main/hornyechidna-13b-v0.1.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [hornyechidna-13b-v0.1.Q5_0.gguf](https://huggingface.co/TheBloke/HornyEchidna-13B-v0.1-GGUF/blob/main/hornyechidna-13b-v0.1.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [hornyechidna-13b-v0.1.Q5_K_S.gguf](https://huggingface.co/TheBloke/HornyEchidna-13B-v0.1-GGUF/blob/main/hornyechidna-13b-v0.1.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [hornyechidna-13b-v0.1.Q5_K_M.gguf](https://huggingface.co/TheBloke/HornyEchidna-13B-v0.1-GGUF/blob/main/hornyechidna-13b-v0.1.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [hornyechidna-13b-v0.1.Q6_K.gguf](https://huggingface.co/TheBloke/HornyEchidna-13B-v0.1-GGUF/blob/main/hornyechidna-13b-v0.1.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [hornyechidna-13b-v0.1.Q8_0.gguf](https://huggingface.co/TheBloke/HornyEchidna-13B-v0.1-GGUF/blob/main/hornyechidna-13b-v0.1.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/HornyEchidna-13B-v0.1-GGUF and below it, a specific filename to download, such as: hornyechidna-13b-v0.1.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/HornyEchidna-13B-v0.1-GGUF hornyechidna-13b-v0.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/HornyEchidna-13B-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/HornyEchidna-13B-v0.1-GGUF hornyechidna-13b-v0.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
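If you prefer to script the download from Python rather than the CLI, here is a minimal sketch using the same `huggingface_hub` library (repo and filename taken from the examples above):
```python
from huggingface_hub import hf_hub_download

# Download a single GGUF file into the current directory and return its local path.
model_path = hf_hub_download(
    repo_id="TheBloke/HornyEchidna-13B-v0.1-GGUF",
    filename="hornyechidna-13b-v0.1.Q4_K_M.gguf",
    local_dir=".",
)
print(model_path)
```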
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m hornyechidna-13b-v0.1.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/HornyEchidna-13B-v0.1-GGUF", model_file="hornyechidna-13b-v0.1.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
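For llama-cpp-python, a comparable minimal sketch is shown below (assuming a recent version of the library; `n_gpu_layers` and the example instruction are placeholders to adjust for your setup):
```python
from llama_cpp import Llama

llm = Llama(
    model_path="hornyechidna-13b-v0.1.Q4_K_M.gguf",  # file downloaded as shown above
    n_gpu_layers=32,  # set to 0 if you have no GPU acceleration
    n_ctx=4096,
)
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nName three colours.\n\n### Response:"
)
output = llm(prompt, max_tokens=128)
print(output["choices"][0]["text"])
```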
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: NeverSleep's HornyEchidna 13B v0.1

# This and its sister model [Echidna-13b-v0.1](https://huggingface.co/NeverSleep/Echidna-13b-v0.1) are a collab between [IkariDev](https://huggingface.co/IkariDev) and [Undi](https://huggingface.co/Undi95)!
HornyEchidna v0.1 model. Use Alpaca format. Suitable for RP, ERP and general stuff.
This model seems to be very sensitive to generation settings, so experiment until you've found settings that work for you.
[Recommended settings - No settings yet]
<!-- description start -->
## Description
<!-- [Recommended settings - contributed by localfultonextractor](https://files.catbox.moe/ue0tja.json) -->
This repo contains FP16 files of HornyEchidna-13b-v0.1.
<!-- [GGUF - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GGUF)-->
<!-- [GPTQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GPTQ)-->
<!-- [exl2 - by waldie](https://huggingface.co/waldie/Athena-v4-8bpw-h8-exl2)-->
<!-- [AWQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-AWQ)-->
<!-- [fp16 - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v4)-->
[GGUF - by IkariDev and Undi](https://huggingface.co/NeverSleep/HornyEchidna-13b-v0.1-GGUF)
<!-- [OLD(GGUF - by IkariDev+Undi95)](https://huggingface.co/IkariDev/Athena-v4-GGUF)-->
## Ratings:
Note: We have permission from all users to upload their ratings; we don't screenshot random reviews without asking whether we can post them here!
If you want your rating to appear here, send us a message on Discord and we'll put up a screenshot of it. Our Discord names are "ikaridev" and "undi".
<!-- description end -->
<!-- description start -->
## Models+loras used and recipe
- IkariDev/Athena-v3
- cgato/Thespis-13b-v0.3
- Undi95/PsyMedRP-v1-13B
- cgato/Thespis-13b-v0.2
- KoboldAI/LLaMA2-13B-Tiefighter
- Undi95/MLewd-v2.4-13B
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
## Others
Undi: If you want to support me, you can [here](https://ko-fi.com/undiai).
IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek
<!-- original-model-card end -->
|
mmnga/ELYZA-japanese-Llama-2-13b-fast-gguf | mmnga | 2023-12-27T13:18:46Z | 523 | 3 | null | [
"gguf",
"llama2",
"ja",
"arxiv:2307.09288",
"license:llama2",
"region:us"
]
| null | 2023-12-27T10:40:52Z | ---
license: llama2
language:
- ja
tags:
- llama2
---
# ELYZA-japanese-Llama-2-13b-fast-gguf
This is a GGUF-format conversion of [ELYZA-japanese-Llama-2-13b-fast](https://huggingface.co/ELYZA/ELYZA-japanese-Llama-2-13b-fast), published by ELYZA.
Other models in this series:
Standard version: Llama 2 further trained on Japanese datasets
[mmnga/ELYZA-japanese-Llama-2-7b-gguf](https://huggingface.co/mmnga/ELYZA-japanese-Llama-2-7b-gguf)
[mmnga/ELYZA-japanese-Llama-2-7b-instruct-gguf](https://huggingface.co/mmnga/ELYZA-japanese-Llama-2-7b-instruct-gguf)
Fast version: models with added Japanese vocabulary that reduce token cost and run about 1.8× faster
[mmnga/ELYZA-japanese-Llama-2-7b-fast-gguf](https://huggingface.co/mmnga/ELYZA-japanese-Llama-2-7b-fast-gguf)
[mmnga/ELYZA-japanese-Llama-2-7b-fast-instruct-gguf](https://huggingface.co/mmnga/ELYZA-japanese-Llama-2-7b-fast-instruct-gguf)
[mmnga/ELYZA-japanese-Llama-2-13b-fast-gguf](https://huggingface.co/mmnga/ELYZA-japanese-Llama-2-13b-fast-gguf)
[mmnga/ELYZA-japanese-Llama-2-13b-fast-instruct-gguf](https://huggingface.co/mmnga/ELYZA-japanese-Llama-2-13b-fast-instruct-gguf)
CodeLlama version (GGUF)
[mmnga/ELYZA-japanese-CodeLlama-7b-gguf](https://huggingface.co/mmnga/ELYZA-japanese-CodeLlama-7b-gguf)
[mmnga/ELYZA-japanese-CodeLlama-7b-instruct-gguf](https://huggingface.co/mmnga/ELYZA-japanese-CodeLlama-7b-instruct-gguf)
CodeLlama version (GPTQ)
[mmnga/ELYZA-japanese-CodeLlama-7b-instruct-GPTQ-calib-ja-1k](https://huggingface.co/mmnga/ELYZA-japanese-CodeLlama-7b-instruct-GPTQ-calib-ja-1k)
## Usage
```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
./main -m 'ELYZA-japanese-Llama-2-13b-fast-q4_0.gguf' -n 256 -p '今日の晩御飯をご紹介します。'
```
### Licence
Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
### Citations
```tex
@misc{elyzallama2023,
title={ELYZA-japanese-Llama-2-13b},
url={https://huggingface.co/elyza/ELYZA-japanese-Llama-2-13b},
author={Akira Sasaki and Masato Hirakawa and Shintaro Horie and Tomoaki Nakamura and Sam Passaglia and Daisuke Oba},
year={2023},
}
```
```tex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
eprint={2307.09288},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
briaai/BRIA-2.3-FAST-LORA | briaai | 2024-05-29T14:03:36Z | 523 | 1 | diffusers | [
"diffusers",
"text-to-image",
"legal liability",
"commercial use",
"license:other",
"region:us"
]
| text-to-image | 2024-04-30T10:33:37Z | ---
license: other
license_name: bria-2.3-fast-lora
license_link: https://bria.ai/bria-huggingface-model-license-agreement/
library_name: diffusers
inference: false
tags:
- text-to-image
- legal liability
- commercial use
extra_gated_prompt: Model weights from BRIA AI can be obtained after purchasing a commercial license. Fill in the form below and we will reach out to you.
extra_gated_fields:
Name: text
Company/Org name: text
Org Type (Early/Growth Startup, Enterprise, Academy): text
Role: text
Country: text
Email: text
By submitting this form, I agree to BRIA’s Privacy policy and Terms & conditions, see links below: checkbox
---
# BRIA 2.3 FAST-LORA: Text-to-Image Model for Commercial Licensing
Introducing Bria AI 2.3 FAST-LORA, a groundbreaking text-to-image model explicitly designed for commercial applications in the enterprise. This model combines technological innovation with ethical responsibility and legal security, setting a new standard in the AI industry. Bria AI licenses the foundation model with full legal liability coverage. Our dataset does not contain copyrighted materials, such as fictional characters, logos, trademarks, public figures, harmful content, or privacy-infringing content.
For more information, please visit our [website](https://bria.ai/).
# What's New
BRIA 2.3 FAST-LORA is a fast version of BRIA 2.3 that provides an optimal balance between speed and accuracy. Engineered for efficiency, it takes only 1.64 seconds to generate images on a standard NVIDIA A10 GPU, achieving excellent image quality with an 80% reduction in inference time.
Most importantly, BRIA 2.3 FAST-LORA is compatible with additional plugins, such as ControlNets. This enables the building of complex pipelines while still maintaining fast inference.
[CLICK HERE FOR A DEMO](https://huggingface.co/spaces/briaai/BRIA-2.3-FAST-LORA)
### Get Access
Interested in BRIA 2.3 FAST-LORA? Purchase is required to access BRIA 2.3 FAST-LORA, ensuring royalty management with our data partners and full liability coverage for commercial use.
Are you a startup or a student? We encourage you to apply for our [Startup Program](https://pages.bria.ai/the-visual-generative-ai-platform-for-builders-startups-plan?_gl=1*cqrl81*_ga*MTIxMDI2NzI5OC4xNjk5NTQ3MDAz*_ga_WRN60H46X4*MTcwOTM5OTMzNC4yNzguMC4xNzA5Mzk5MzM0LjYwLjAuMA..) to request access. This program is designed to support emerging businesses and academic pursuits with our cutting-edge technology.
Contact us today to unlock the potential of BRIA 2.3 FAST-LORA! By submitting the form above, you agree to BRIA’s [Privacy policy](https://bria.ai/privacy-policy/) and [Terms & conditions.](https://bria.ai/terms-and-conditions/)

# Key Features
- **Legally Compliant:** Offers full legal liability coverage for copyright and privacy infringements. Thanks to training on 100% licensed data from leading data partners, we ensure the ethical use of content.
- **Patented Attribution Engine:** Our attribution engine is our way to compensate our data partners, powered by our proprietary and patented algorithms.
- **Enterprise-Ready:** Specifically designed for business applications, Bria AI 2.3 delivers high-quality, compliant imagery for a variety of commercial needs.
- **Customizable Technology:** Provides access to source code and weights for extensive customization, catering to specific business requirements.
### Model Description
- **Developed by:** BRIA AI
- **Model type:** Text-to-Image model
- **License:** [BRIA 2.3 FAST-LORA Licensing terms & conditions](https://bria.ai/bria-huggingface-model-license-agreement/).
- Purchase is required to license and access the model.
- **Model Description:** BRIA 2.3 Fast is an efficient text-to-image model trained exclusively on a professional-grade, licensed dataset. It is designed for commercial use and includes full legal liability coverage.
- **Resources for more information:** [BRIA AI](https://bria.ai/)
# Code example using Diffusers
```
pip install diffusers
```
```py
from diffusers import DiffusionPipeline, LCMScheduler
import torch
pipe = DiffusionPipeline.from_pretrained("briaai/BRIA-2.3", torch_dtype=torch.float16)
pipe.load_lora_weights("briaai/BRIA-2.3-FAST-LORA")
pipe.fuse_lora()
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")
prompt = "A portrait of a Beautiful and playful ethereal singer, golden designs, highly detailed, blurry background"
image = pipe(prompt, num_inference_steps=8, guidance_scale=0.0).images[0]
```
# Using both LCM LORA and ControlNet
```py
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, LCMScheduler, StableDiffusionXLControlNetPipeline

condition_image_path = "A_dog.png"
prompt = "A white dog"
seed = 222
w, h = 1024, 1024
controlnet = ControlNetModel.from_pretrained(
    "briaai/BRIA-2.3-ControlNet-Canny",
    torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained("briaai/BRIA-2.3", controlnet=controlnet, torch_dtype=torch.float16)
pipe.load_lora_weights("briaai/BRIA-2.3-FAST-LORA")
pipe.fuse_lora()
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.force_zeros_for_empty_prompt = False
pipe.to("cuda")
# Optional: compile the UNet for faster repeated inference (comment out to disable)
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
negative_prompt = "Logo,Watermark,Text,Ugly,Morbid,Extra fingers,Poorly drawn hands,Mutation,Blurry,Extra limbs,Gross proportions,Missing arms,Mutated hands,Long neck,Duplicate,Mutilated,Mutilated hands,Poorly drawn face,Deformed,Bad anatomy,Cloned face,Malformed limbs,Missing legs,Too many fingers"
generator = torch.Generator("cuda").manual_seed(seed)
# Calculate the Canny edge map used as the ControlNet condition
low_threshold, high_threshold = 100, 200
input_image = cv2.imread(condition_image_path)
input_image = cv2.Canny(input_image, low_threshold, high_threshold)
input_image = input_image[:, :, None]
input_image = np.concatenate([input_image, input_image, input_image], axis=2)
condition_image = Image.fromarray(input_image)
# Generate
image = pipe(prompt, image=condition_image, controlnet_conditioning_scale=1.0, num_inference_steps=8, width=w, height=h, guidance_scale=0.0, negative_prompt=negative_prompt, generator=generator).images[0]
``` |
mnoukhov/pythia410m-rm-tldr6.9b | mnoukhov | 2024-06-20T10:33:07Z | 523 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-classification",
"trl",
"reward-trainer",
"generated_from_trainer",
"base_model:mnoukhov/pythia410m-sft-tldr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-classification | 2024-06-20T05:17:06Z | ---
license: apache-2.0
base_model: mnoukhov/pythia410m-sft-tldr
tags:
- trl
- reward-trainer
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: pythia410m-rm-tldr6.9b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pythia410m-rm-tldr6.9b
This model is a fine-tuned version of [mnoukhov/pythia410m-sft-tldr](https://huggingface.co/mnoukhov/pythia410m-sft-tldr) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5162
- Accuracy: 0.7418
## Model description
More information needed
## Intended uses & limitations
More information needed
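In the absence of a documented example, here is a minimal scoring sketch. It assumes the checkpoint loads as a single-logit sequence-classification head, as TRL's `RewardTrainer` typically saves it; the example text is purely illustrative.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "mnoukhov/pythia410m-rm-tldr6.9b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Score a post + summary pair; higher scores should indicate a preferred summary.
text = "POST: I spent the weekend refactoring my project...\n\nTL;DR: I rewrote my project and it is faster now."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    reward = model(**inputs).logits[0].item()
print(reward)
```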
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
- mixed_precision_training: Native AMP
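For reference, these settings roughly map onto TRL's `RewardConfig` (which extends `transformers.TrainingArguments`) as sketched below; the output directory is a placeholder and the actual training script is not part of this card.
```python
from trl import RewardConfig

config = RewardConfig(
    output_dir="pythia410m-rm-tldr6.9b",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,  # with train_batch_size 8, this gives the listed total batch size of 64
    seed=42,
    lr_scheduler_type="cosine",
    num_train_epochs=1,
    fp16=True,  # "Native AMP" mixed precision
)
```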
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.5507 | 0.2006 | 291 | 0.5506 | 0.7176 |
| 0.4921 | 0.4011 | 582 | 0.5163 | 0.7394 |
| 0.4727 | 0.6017 | 873 | 0.5194 | 0.7368 |
| 0.439 | 0.8022 | 1164 | 0.5162 | 0.7418 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
Tanvir1337/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B-IQ3_XXS-GGUF | Tanvir1337 | 2024-06-26T11:23:38Z | 523 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:Severian/Internal-Knowledge-Map",
"base_model:Severian/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B",
"license:mit",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-06-26T11:23:24Z | ---
base_model: Severian/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B
datasets:
- Severian/Internal-Knowledge-Map
language:
- en
license: mit
pipeline_tag: text-generation
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- llama-cpp
- gguf-my-repo
---
# Tanvir1337/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B-IQ3_XXS-GGUF
This model was converted to GGUF format from [`Severian/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B`](https://huggingface.co/Severian/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Severian/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Tanvir1337/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B-IQ3_XXS-GGUF --hf-file mistral-v0.2-nexus-internal-knowledge-map-7b-iq3_xxs-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Tanvir1337/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B-IQ3_XXS-GGUF --hf-file mistral-v0.2-nexus-internal-knowledge-map-7b-iq3_xxs-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Tanvir1337/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B-IQ3_XXS-GGUF --hf-file mistral-v0.2-nexus-internal-knowledge-map-7b-iq3_xxs-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Tanvir1337/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B-IQ3_XXS-GGUF --hf-file mistral-v0.2-nexus-internal-knowledge-map-7b-iq3_xxs-imat.gguf -c 2048
```
|
Norod78/hebrew-gpt_neo-tiny | Norod78 | 2023-05-31T15:10:48Z | 522 | 1 | transformers | [
"transformers",
"pytorch",
"jax",
"onnx",
"safetensors",
"gpt_neo",
"text-generation",
"he",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:04Z | ---
language: he
thumbnail: https://avatars1.githubusercontent.com/u/3617152?norod.jpg
widget:
- text: "עוד בימי קדם"
- text: "קוראים לי דורון ואני מעוניין ל"
- text: "קוראים לי איציק ואני חושב ש"
- text: "החתול שלך מאוד חמוד ו"
license: mit
---
# hebrew-gpt_neo-tiny
Hebrew text generation model based on [EleutherAI's gpt-neo](https://github.com/EleutherAI/gpt-neo). Each model in this series was trained on a TPUv3-8, which was made available to me via the [TPU Research Cloud](https://sites.research.google/trc/) Program.
## Datasets
1. An assortment of various Hebrew corpora, which I have made available [here](https://mega.nz/folder/CodSSA4R#4INvMes-56m_WUi7jQMbJQ)
2. oscar / unshuffled_deduplicated_he - [Homepage](https://oscar-corpus.com) | [Dataset Permalink](https://huggingface.co/datasets/viewer/?dataset=oscar&config=unshuffled_deduplicated_he)
The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.
## Training Config
Available [here](https://github.com/Norod/hebrew-gpt_neo/tree/main/hebrew-gpt_neo-tiny/configs) <BR>
## Usage
### Google Colab Notebook
Available [here ](https://colab.research.google.com/github/Norod/hebrew-gpt_neo/blob/main/hebrew-gpt_neo-tiny/Norod78_hebrew_gpt_neo_tiny_Colab.ipynb) <BR>
#### Simple usage sample code
```python
!pip install tokenizers==0.10.2 transformers==4.6.0
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Norod78/hebrew-gpt_neo-tiny")
model = AutoModelForCausalLM.from_pretrained("Norod78/hebrew-gpt_neo-tiny", pad_token_id=tokenizer.eos_token_id)
prompt_text = "אני אוהב שוקולד ועוגות"
max_len = 512
sample_output_num = 3
seed = 1000
import numpy as np
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
n_gpu = 0 if torch.cuda.is_available()==False else torch.cuda.device_count()
print(f"device: {device}, n_gpu: {n_gpu}")
np.random.seed(seed)
torch.manual_seed(seed)
if n_gpu > 0:
torch.cuda.manual_seed_all(seed)
model.to(device)
encoded_prompt = tokenizer.encode(
prompt_text, add_special_tokens=False, return_tensors="pt")
encoded_prompt = encoded_prompt.to(device)
if encoded_prompt.size()[-1] == 0:
input_ids = None
else:
input_ids = encoded_prompt
print("input_ids = " + str(input_ids))
if input_ids != None:
max_len += len(encoded_prompt[0])
if max_len > 1024:
max_len = 1024
print("Updated max_len = " + str(max_len))
stop_token = "<|endoftext|>"
new_lines = "\n\n\n"
sample_outputs = model.generate(
input_ids,
do_sample=True,
max_length=max_len,
top_k=50,
top_p=0.95,
num_return_sequences=sample_output_num
)
print(100 * '-' + "\n\t\tOutput\n" + 100 * '-')
for i, sample_output in enumerate(sample_outputs):
text = tokenizer.decode(sample_output, skip_special_tokens=True)
# Remove all text after the stop token
text = text[: text.find(stop_token) if stop_token else None]
# Remove all text after 3 newlines
text = text[: text.find(new_lines) if new_lines else None]
print("\n{}: {}".format(i, text))
print("\n" + 100 * '-')
```
|
satani/fashion_dreambooth | satani | 2023-04-20T04:38:04Z | 522 | 0 | diffusers | [
"diffusers",
"license:artistic-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-03-04T19:20:52Z | ---
license: artistic-2.0
---
|
ai-forever/mGPT-1.3B-kazakh | ai-forever | 2023-08-11T08:01:55Z | 522 | 2 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"gpt3",
"mgpt",
"kk",
"en",
"ru",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-08-10T05:11:25Z | ---
language:
- kk
- en
- ru
license: mit
tags:
- gpt3
- transformers
- mgpt
---
# 🇰🇿 Kazakh mGPT 1.3B
Language model for Kazakh. The model has 1.3B parameters, as you can guess from its name.
Kazakh belongs to the Turkic language family. It's a very vibrant language with approximately 18 million speakers. Here are some facts about it:
1. It is a major language spoken in Kazakhstan.
2. Kazakh has its own version of the Cyrillic script but is transitioning to the Latin script.
3. It has a rich tradition of oral literature, including epic poetry.
## Technical details
It's one of the models derived from the base [mGPT-XL (1.3B)](https://huggingface.co/ai-forever/mGPT) model (see the list below) which was originally trained on the 61 languages from 25 language families using Wikipedia and C4 corpus.
We found additional data for 23 languages, most of which are considered low-resource, and decided to further tune the base model. **Kazakh mGPT 1.3B** was trained for another 150000 steps with batch_size=4 and a context window of **2048** tokens on 1 A100.
Final perplexity for this model on validation is **3.38**.
_Chart of the training loss and perplexity:_

## Other mGPT-1.3B models
- [🇦🇲 mGPT-1.3B Armenian](https://huggingface.co/ai-forever/mGPT-1.3B-armenian)
- [🇦🇿 mGPT-1.3B Azerbaijan](https://huggingface.co/ai-forever/mGPT-1.3B-azerbaijan)
- [🍯 mGPT-1.3B Bashkir](https://huggingface.co/ai-forever/mGPT-1.3B-bashkir)
- [🇧🇾 mGPT-1.3B Belorussian](https://huggingface.co/ai-forever/mGPT-1.3B-belorussian)
- [🇧🇬 mGPT-1.3B Bulgarian](https://huggingface.co/ai-forever/mGPT-1.3B-bulgarian)
- [🌞 mGPT-1.3B Buryat](https://huggingface.co/ai-forever/mGPT-1.3B-buryat)
- [🌳 mGPT-1.3B Chuvash](https://huggingface.co/ai-forever/mGPT-1.3B-chuvash)
- [🇬🇪 mGPT-1.3B Georgian](https://huggingface.co/ai-forever/mGPT-1.3B-georgian)
- [🌸 mGPT-1.3B Kalmyk](https://huggingface.co/ai-forever/mGPT-1.3B-kalmyk)
- [🇰🇬 mGPT-1.3B Kirgiz](https://huggingface.co/ai-forever/mGPT-1.3B-kirgiz)
- [🐻 mGPT-1.3B Mari](https://huggingface.co/ai-forever/mGPT-1.3B-mari)
- [🇲🇳 mGPT-1.3B Mongol](https://huggingface.co/ai-forever/mGPT-1.3B-mongol)
- [🐆 mGPT-1.3B Ossetian](https://huggingface.co/ai-forever/mGPT-1.3B-ossetian)
- [🇮🇷 mGPT-1.3B Persian](https://huggingface.co/ai-forever/mGPT-1.3B-persian)
- [🇷🇴 mGPT-1.3B Romanian](https://huggingface.co/ai-forever/mGPT-1.3B-romanian)
- [🇹🇯 mGPT-1.3B Tajik](https://huggingface.co/ai-forever/mGPT-1.3B-tajik)
- [☕ mGPT-1.3B Tatar](https://huggingface.co/ai-forever/mGPT-1.3B-tatar)
- [🇹🇲 mGPT-1.3B Turkmen](https://huggingface.co/ai-forever/mGPT-1.3B-turkmen)
- [🐎 mGPT-1.3B Tuvan](https://huggingface.co/ai-forever/mGPT-1.3B-tuvan)
- [🇺🇦 mGPT-1.3B Ukranian](https://huggingface.co/ai-forever/mGPT-1.3B-ukranian)
- [🇺🇿 mGPT-1.3B Uzbek](https://huggingface.co/ai-forever/mGPT-1.3B-uzbek)
- [💎 mGPT-1.3B Yakut](https://huggingface.co/ai-forever/mGPT-1.3B-yakut)
## Feedback
If you find a bug or have additional data to train the model on your language, please give us feedback.
The model will be improved over time. Stay tuned!
|
TheBloke/Spicyboros-13B-2.2-GGUF | TheBloke | 2023-09-27T12:48:38Z | 522 | 20 | transformers | [
"transformers",
"gguf",
"llama",
"not-for-all-audiences",
"dataset:jondurbin/airoboros-2.2",
"base_model:jondurbin/spicyboros-13b-2.2",
"license:llama2",
"text-generation-inference",
"region:us"
]
| null | 2023-09-10T12:48:06Z | ---
license: llama2
tags:
- not-for-all-audiences
datasets:
- jondurbin/airoboros-2.2
model_name: Spicyboros 13B 2.2
base_model: jondurbin/spicyboros-13b-2.2
inference: false
model_creator: Jon Durbin
model_type: llama
prompt_template: "A chat.\nUSER: {prompt}\nASSISTANT: \n"
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Spicyboros 13B 2.2 - GGUF
- Model creator: [Jon Durbin](https://huggingface.co/jondurbin)
- Original model: [Spicyboros 13B 2.2](https://huggingface.co/jondurbin/spicyboros-13b-2.2)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Jon Durbin's Spicyboros 13B 2.2](https://huggingface.co/jondurbin/spicyboros-13b-2.2).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Spicyboros-13B-2.2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Spicyboros-13B-2.2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Spicyboros-13B-2.2-GGUF)
* [Jon Durbin's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/spicyboros-13b-2.2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Chat
```
A chat.
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [spicyboros-13b-2.2.Q2_K.gguf](https://huggingface.co/TheBloke/Spicyboros-13B-2.2-GGUF/blob/main/spicyboros-13b-2.2.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [spicyboros-13b-2.2.Q3_K_S.gguf](https://huggingface.co/TheBloke/Spicyboros-13B-2.2-GGUF/blob/main/spicyboros-13b-2.2.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [spicyboros-13b-2.2.Q3_K_M.gguf](https://huggingface.co/TheBloke/Spicyboros-13B-2.2-GGUF/blob/main/spicyboros-13b-2.2.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [spicyboros-13b-2.2.Q3_K_L.gguf](https://huggingface.co/TheBloke/Spicyboros-13B-2.2-GGUF/blob/main/spicyboros-13b-2.2.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [spicyboros-13b-2.2.Q4_0.gguf](https://huggingface.co/TheBloke/Spicyboros-13B-2.2-GGUF/blob/main/spicyboros-13b-2.2.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [spicyboros-13b-2.2.Q4_K_S.gguf](https://huggingface.co/TheBloke/Spicyboros-13B-2.2-GGUF/blob/main/spicyboros-13b-2.2.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [spicyboros-13b-2.2.Q4_K_M.gguf](https://huggingface.co/TheBloke/Spicyboros-13B-2.2-GGUF/blob/main/spicyboros-13b-2.2.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [spicyboros-13b-2.2.Q5_0.gguf](https://huggingface.co/TheBloke/Spicyboros-13B-2.2-GGUF/blob/main/spicyboros-13b-2.2.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [spicyboros-13b-2.2.Q5_K_S.gguf](https://huggingface.co/TheBloke/Spicyboros-13B-2.2-GGUF/blob/main/spicyboros-13b-2.2.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [spicyboros-13b-2.2.Q5_K_M.gguf](https://huggingface.co/TheBloke/Spicyboros-13B-2.2-GGUF/blob/main/spicyboros-13b-2.2.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [spicyboros-13b-2.2.Q6_K.gguf](https://huggingface.co/TheBloke/Spicyboros-13B-2.2-GGUF/blob/main/spicyboros-13b-2.2.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [spicyboros-13b-2.2.Q8_0.gguf](https://huggingface.co/TheBloke/Spicyboros-13B-2.2-GGUF/blob/main/spicyboros-13b-2.2.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Spicyboros-13B-2.2-GGUF and below it, a specific filename to download, such as: spicyboros-13b-2.2.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install 'huggingface-hub>=0.17.1'
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Spicyboros-13B-2.2-GGUF spicyboros-13b-2.2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Spicyboros-13B-2.2-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Spicyboros-13B-2.2-GGUF spicyboros-13b-2.2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m spicyboros-13b-2.2.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat.\nUSER: {prompt}\nASSISTANT:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Spicyboros-13B-2.2-GGUF", model_file="spicyboros-13b-2.2.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
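The simple example above feeds raw text to the model. For instruction-style use you will usually want to wrap your input in the prompt template shown earlier; here is a minimal, unofficial sketch of one way to do that with ctransformers (the question and generation settings are only illustrative):
```python
from ctransformers import AutoModelForCausalLM

# Load the quantised model as above; adjust gpu_layers (or set to 0) for your hardware.
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Spicyboros-13B-2.2-GGUF",
    model_file="spicyboros-13b-2.2.Q4_K_M.gguf",
    model_type="llama",
    gpu_layers=50,
)

def ask(question, system_prompt="A chat."):
    # Newline-separated system/USER/ASSISTANT format, as described in this card.
    prompt = f"{system_prompt}\nUSER: {question}\nASSISTANT:"
    # Stop on "USER:" so the model doesn't keep the conversation going on its own.
    return llm(prompt, max_new_tokens=256, temperature=0.7, stop=["USER:"])

print(ask("Give me three creative uses for a paperclip."))
```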
## How to use with LangChain
Here are guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Jon Durbin's Spicyboros 13B 2.2
### Overview
__Usage restriction: To use this model, you must agree to the following:__
- Some of the content that can be produced is "toxic"/"harmful", and contains profanity and other types of sensitive content.
- None of the content or views contained in the dataset or generated outputs necessarily align with my personal beliefs or opinions, they are simply text generated by LLMs and/or scraped from the web.
- Use with extreme caution, particularly in locations with less-than-free speech laws.
- You, and you alone are responsible for having downloaded and generated outputs with the model and I am completely indemnified from any and all liabilities.
__Ok, now that the warning is out of the way...__
Another experimental model, using mostly synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros).
Highlights:
- The prompt format has changed! It is now newlines instead of spaces between system/USER/ASSISTANT (see prompt info below).
- This version also includes "de-alignment" data, to enable less savory interactions and outputs.
- To learn more about the dataset, see: https://hf.co/datasets/jondurbin/airoboros-2.2 (this is the instructions.jsonl file, not instructions-clean.jsonl)
- I re-generated all of the outputs in the dataset that had "Once upon a time" so they'd be less cliche - no guarantees that won't still happen, but in theory it may happen less.
- More multiple choice, better awareness, some alignment for normal use case but system-prompt overridable etc.
__WARNING: This model will gladly spew profane and otherwise NSFW content, if asked, use with care.__
Breakdown of the training data:
| Count | Category |
|--------|----------------------------|
| 60 | quiz |
| 63 | card |
| 100 | detailed\_writing |
| 103 | experience |
| 114 | greeting |
| 200 | song |
| 204 | editor |
| 250 | counterfactual\_contextual |
| 268 | cot |
| 339 | theory\_of\_mind |
| 460 | misconception |
| 500 | summarization |
| 573 | awareness |
| 715 | riddle |
| 719 | agent |
| 800 | plan |
| 873 | gtkm |
| 966 | rp |
| 1000 | stylized\_response |
| 1000 | wordgame |
| 1279 | multiple\_choice |
| 1641 | joke |
| 1785 | writing |
| 2155 | contextual |
| 2364 | roleplay |
| 2508 | trivia |
| 5216 | general |
| 5779 | coding |
| 11367 | orca |
In other words, it's a fairly general purpose model, but focuses fairly heavily on instruction response pairs rather than casual chat/roleplay.
*Why do I try to remove censorship?*
- laws vary widely based on time and location
- language model may conflate certain words with laws, e.g. it may think "stealing eggs from a chicken" is illegal
- these models just produce text, what you do with that text is your responsibility
- many people and industries deal with "sensitive" content; imagine if a court stenographer's equipment filtered illegal content - it would be useless
Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!
### Prompt format
The prompt format:
```
A chat.
USER: {prompt}
ASSISTANT:
```
The default system prompt ("A chat.") was used for most of the prompts; however, the training data also included a wide sampling of responses with other system prompts, particularly in "stylized\_response", "rp", "gtkm", etc.
Here's another example:
```
A chat between Bob (aka USER) and Tom (aka ASSISTANT). Tom is an extremely intelligent 18th century bookkeeper, who speaks loquaciously.
USER: {prompt}
ASSISTANT:
```
And here is a chat scenario that wouldn't require USER/ASSISTANT (but should use stopping criteria to prevent the model from speaking on your behalf):
```
A chat between old friends: Timmy and Tommy.
{description of characters}
{setting for the chat}
Timmy: *takes a big sip from his coffee* "Ah, sweet, delicious, magical coffee."
Tommy:
```
__*I strongly suggest adding stopping criteria/early inference stopping on "USER:", and/or whatever names you specify in the system prompt.*__
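To make that concrete, here is a small, unofficial llama-cpp-python sketch for the character-chat scenario above; the file path, character descriptions and generation settings are placeholders, and the key point is the `stop` list that cuts generation when the model starts writing the other character's next line:
```python
from llama_cpp import Llama

# Point this at any GGUF file downloaded from this repo (placeholder path).
llm = Llama(model_path="./spicyboros-13b-2.2.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=32)

prompt = (
    "A chat between old friends: Timmy and Tommy.\n"
    "Timmy is cheerful and talkative; Tommy is grumpy but kind.\n"
    "They are sitting in a quiet cafe on a rainy morning.\n"
    'Timmy: *takes a big sip from his coffee* "Ah, sweet, delicious, magical coffee."\n'
    "Tommy:"
)

out = llm(
    prompt,
    max_tokens=128,
    # Stop as soon as the model tries to speak for the other character.
    stop=["Timmy:", "USER:"],
)
print(out["choices"][0]["text"].strip())
```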
### Fine-tuning details
https://gist.github.com/jondurbin/e94d5a0eb3868db2877413f53d59fb37
*Note: I used checkpoint 350 for final model, not the full 5 epochs!*
### Helpful usage tips
*The prompts shown here are just the text that would be included after USER: and before ASSISTANT: in the full prompt format above; the system prompt and USER:/ASSISTANT: have been omitted for readability.*
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block, to make sure the model doesn't make something up when the context is completely unrelated.
*The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the instruction(s) (one or more) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
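Because the delimiters are easy to mistype by hand, a small helper that assembles the format programmatically can be convenient. This is just an illustrative sketch (the function name and structure are not part of the model or dataset):
```python
def build_closed_context_prompt(blocks, instruction):
    """Assemble the BEGININPUT/BEGINCONTEXT closed-context format described above.

    `blocks` is a list of (metadata_dict, text) pairs; `instruction` holds the question(s).
    """
    parts = []
    for metadata, text in blocks:
        context = "\n".join(f"{key}: {value}" for key, value in metadata.items())
        parts.append(f"BEGININPUT\nBEGINCONTEXT\n{context}\nENDCONTEXT\n{text}\nENDINPUT")
    parts.append(f"BEGININSTRUCTION\n{instruction}\nENDINSTRUCTION")
    return "\n".join(parts)

prompt = build_closed_context_prompt(
    blocks=[(
        {"date": "2021-01-01", "url": "https://web.site/123"},
        "In a shocking turn of events, blueberries are now green, "
        "but will be sticking with the same name.",
    )],
    instruction="What color are blueberries? Source?",
)
print(prompt)
```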
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
description: This tool performs various operations on a text file.
params:
action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
filters:
keyword: The word or phrase we want to search for.
```
Response:
```json
{
"function": "file_analytics",
"params": {
"action": "count_occurrences",
"filters": {
"keyword": "Python"
}
}
}
```
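The model only selects the function and arguments; your application still has to parse that output and call real code. A rough, hypothetical sketch of a dispatcher for the JSON response above (the handler and file path are stand-ins, not part of this repo):
```python
import json

def file_analytics(action, filters):
    # Stand-in implementation: count keyword occurrences in a local file (placeholder path).
    if action == "count_occurrences":
        with open("my_text_file.txt", encoding="utf-8") as f:
            return f.read().count(filters["keyword"])
    raise ValueError(f"Unsupported action: {action}")

HANDLERS = {"file_analytics": file_analytics}

def dispatch(model_output):
    call = json.loads(model_output)  # e.g. the JSON block shown above
    handler = HANDLERS[call["function"]]
    return handler(call["params"]["action"], call["params"]["filters"])

example_output = (
    '{"function": "file_analytics", "params": '
    '{"action": "count_occurrences", "filters": {"keyword": "Python"}}}'
)
print(dispatch(example_output))
```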
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan; you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text and implement/call each of the functions. This is just pseudo-code, completely untested, written off the top of my head, and would obviously require full implementation + hardening:
```python
import re
import requests
def inject_context(input_text, **context):
for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
input_text = input_text.replace(ref, context.get(ref, ""))
return input_text
def duckduckgo(input_text, **context):
search_string = inject_context(input_text, **context)
... search via duck duck go using search_string
... return text content
def link_extractor(input_text, **context):
input_text = inject_context(input_text, **context)
    return "\n".join(list(set(re.findall(r"(https?://\S+)", input_text, re.I))))
def scrape(input_text, **context):
input_text = inject_context(input_text, **context)
text = []
for link in input_text.splitlines():
text.append(requests.get(link).text)
return "\n".join(text)
def infer(input_text, **context):
prompt = inject_context(input_text, **context)
... call model with prompt, return output
def parse_plan(plan):
method_map = {
"DuckDuckGo": duckduckgo,
"HyperlinkExtractor": link_extractor,
"KnowledgeModel": infer,
"TextScraper": scrape,
}
context = {}
for line in plan.strip().splitlines():
if line.startswith("Plan:"):
print(line)
continue
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)(\[.*\])\s*$", line, re.I)
if not parts:
if line.startswith("Answer: "):
return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
raise RuntimeError("bad format: " + line)
context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context)
```
### Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
The airoboros 2.2 models are built on top of llama-2/codellama.
The llama-2 base model has a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me.
<!-- original-model-card end -->
|
TheBloke/chronos-hermes-13B-GGUF | TheBloke | 2023-09-27T12:53:07Z | 522 | 3 | transformers | [
"transformers",
"gguf",
"llama",
"pytorch",
"chatbot",
"storywriting",
"base_model:Austism/chronos-hermes-13b",
"license:other",
"text-generation-inference",
"region:us"
]
| null | 2023-09-20T01:25:49Z | ---
license: other
tags:
- llama
- pytorch
- chatbot
- storywriting
model_name: Chronos Hermes 13B
base_model: Austism/chronos-hermes-13b
inference: false
model_creator: Austism
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Chronos Hermes 13B - GGUF
- Model creator: [Austism](https://huggingface.co/Austism)
- Original model: [Chronos Hermes 13B](https://huggingface.co/Austism/chronos-hermes-13b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Austism's Chronos Hermes 13B](https://huggingface.co/Austism/chronos-hermes-13b).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/chronos-hermes-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/chronos-hermes-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/chronos-hermes-13B-GGUF)
* [Austism's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Austism/chronos-hermes-13b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [chronos-hermes-13b.Q2_K.gguf](https://huggingface.co/TheBloke/chronos-hermes-13B-GGUF/blob/main/chronos-hermes-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [chronos-hermes-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/chronos-hermes-13B-GGUF/blob/main/chronos-hermes-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [chronos-hermes-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/chronos-hermes-13B-GGUF/blob/main/chronos-hermes-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [chronos-hermes-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/chronos-hermes-13B-GGUF/blob/main/chronos-hermes-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [chronos-hermes-13b.Q4_0.gguf](https://huggingface.co/TheBloke/chronos-hermes-13B-GGUF/blob/main/chronos-hermes-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [chronos-hermes-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/chronos-hermes-13B-GGUF/blob/main/chronos-hermes-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [chronos-hermes-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/chronos-hermes-13B-GGUF/blob/main/chronos-hermes-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [chronos-hermes-13b.Q5_0.gguf](https://huggingface.co/TheBloke/chronos-hermes-13B-GGUF/blob/main/chronos-hermes-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [chronos-hermes-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/chronos-hermes-13B-GGUF/blob/main/chronos-hermes-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [chronos-hermes-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/chronos-hermes-13B-GGUF/blob/main/chronos-hermes-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [chronos-hermes-13b.Q6_K.gguf](https://huggingface.co/TheBloke/chronos-hermes-13B-GGUF/blob/main/chronos-hermes-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [chronos-hermes-13b.Q8_0.gguf](https://huggingface.co/TheBloke/chronos-hermes-13B-GGUF/blob/main/chronos-hermes-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/chronos-hermes-13B-GGUF and below it, a specific filename to download, such as: chronos-hermes-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/chronos-hermes-13B-GGUF chronos-hermes-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/chronos-hermes-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/chronos-hermes-13B-GGUF chronos-hermes-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m chronos-hermes-13b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/chronos-hermes-13B-GGUF", model_file="chronos-hermes-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain, plus a minimal local sketch after the links below:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
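As a rough local illustration (not an official recipe from either project), the sketch below wires a downloaded GGUF file into LangChain through llama-cpp-python and applies the Alpaca template from this card; it uses the classic `langchain` LLM interface, and the file path and settings are placeholders:
```python
from langchain.llms import LlamaCpp
from langchain.prompts import PromptTemplate

# Point this at a GGUF file downloaded from this repo (placeholder path).
llm = LlamaCpp(
    model_path="./chronos-hermes-13b.Q4_K_M.gguf",
    n_ctx=2048,
    n_gpu_layers=32,  # remove or set to 0 if you have no GPU acceleration
    temperature=0.7,
)

template = (
    "Below is an instruction that describes a task. Write a response that appropriately "
    "completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:\n"
)
prompt = PromptTemplate.from_template(template)

print(llm(prompt.format(instruction="Write a two-sentence ghost story set in a lighthouse.")))
```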
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Austism's Chronos Hermes 13B
([chronos-13b](https://huggingface.co/elinas/chronos-13b) + [Nous-Hermes-13b](https://huggingface.co/NousResearch/Nous-Hermes-13b)) 75/25 merge
This has the aspects of chronos's nature to produce long, descriptive outputs. But with additional coherency and an ability to better obey instructions. Resulting in this model having a great ability to produce evocative storywriting and follow a narrative.
This mix contains a lot of chronos's writing style and 'flavour', with far less tendency to go AWOL and spout nonsensical babble.
This result was much more successful than my [first chronos merge](https://huggingface.co/Austism/chronos-wizardlm-uc-scot-st-13b).
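The exact merge script is not included in this card. As a purely hypothetical sketch, a 75/25 merge of two architecture-compatible checkpoints can be done by linearly interpolating their weights (the model IDs below are the upstream fp16 repos linked above; this is not necessarily how the original merge was produced):
```python
import torch
from transformers import AutoModelForCausalLM

# Load both parent models in fp16 on CPU; these are large downloads.
chronos = AutoModelForCausalLM.from_pretrained("elinas/chronos-13b", torch_dtype=torch.float16)
hermes = AutoModelForCausalLM.from_pretrained("NousResearch/Nous-Hermes-13b", torch_dtype=torch.float16)

hermes_state = hermes.state_dict()
merged_state = {}
for name, tensor in chronos.state_dict().items():
    # 75% chronos + 25% Nous-Hermes, parameter by parameter.
    merged_state[name] = 0.75 * tensor + 0.25 * hermes_state[name]

chronos.load_state_dict(merged_state)
chronos.save_pretrained("./chronos-hermes-13b-merged")
```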
<!-- original-model-card end -->
|
HiTZ/latxa-7b-v1 | HiTZ | 2024-07-02T08:13:02Z | 522 | 15 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"eu",
"en",
"dataset:HiTZ/euscrawl",
"arxiv:2308.16884",
"arxiv:2112.10668",
"arxiv:1910.09700",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-01-16T14:37:18Z | ---
license: llama2
datasets:
- HiTZ/euscrawl
language:
- eu
- en
metrics:
- accuracy
- f1
- perplexity
pipeline_tag: text-generation
---
# **Model Card for Latxa 7b**
<p align="center">
<img src="https://github.com/hitz-zentroa/latxa/blob/b9aa705f60ee2cc03c9ed62fda82a685abb31b07/assets/latxa_round.png?raw=true" style="height: 350px;">
</p>
<span style="color: red; font-weight: bold">IMPORTANT:</span> This model is outdated and made available publicly for reproducibility purposes only. Please utilize the most recent version found in [our HuggingFace collection](https://huggingface.co/collections/HiTZ/latxa-65a697e6838b3acc53677304).
Latxa is a collection of foundation models specifically tuned for Basque. Based on Meta's LLaMA 2 model family, these models were further trained with EusCrawl, a highly curated Basque corpus ([Artetxe et al., 2022](https://aclanthology.org/2022.emnlp-main.499/)). Ranging from 7 billion to 70 billion parameters, these models are currently the biggest and best-performing LLMs built for Basque. This is the 7b repository; links to other models can be found in the [Latxa Collection](https://huggingface.co/collections/HiTZ/latxa-65a697e6838b3acc53677304).
Read more about Latxa in our [website](https://www.hitz.eus/en/node/340) or in [LinkedIn](https://www.linkedin.com/pulse/presenting-latxa-largest-language-model-built-basque-hitz-zentroa-63qdf)!
# **Model Details**
## **Model Description**
Latxa is a family of Large Language Models (LLM) based on Meta's [LLaMA models](https://huggingface.co/meta-llama). Current LLMs exhibit incredible performance for high-resource languages such as English, but, in the case of Basque and other low-resource languages, their performance is close to that of a random guesser. These limitations widen the gap between high- and low-resource languages when it comes to digital development. We present Latxa to overcome these limitations and promote the development of LLM-based technology and research for the Basque language. Latxa models follow the same architecture as their original counterparts and were further trained on EusCrawl v1 ([Artetxe et al., 2022](https://aclanthology.org/2022.emnlp-main.499/)), a high-quality Basque corpus.
The models are released in three sizes: 7B, 13B and 70B.
* **Developed by:** HiTZ Research Center & IXA Research group (University of the Basque Country UPV/EHU)
* **Model type:** Language model
* **Language(s) (NLP):** en, eu
* **License:** llama2
* **Parent Model:** meta-llama/Llama-2-7b
* **Contact:** [email protected]
## **Getting started**
Use the code below to get started with the model.
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="HiTZ/latxa-7b-v1")
text = "Euskara adimen artifizialera iritsi da!"
pipe(text, max_new_tokens=50, num_beams=5)
>> [
{
'generated_text': 'Euskara adimen artifizialera iritsi da!\nEuskararen eta adimen artifizialaren arteko harremana aspaldikoa da,'
' baina azken urteotan aurrerapauso handiak eman dira arlo horretan'
}
]
```
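The model card lists perplexity among its metrics; as a minimal sketch (not the official evaluation setup), you can compute the perplexity of a short Basque text directly with the causal-LM interface:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HiTZ/latxa-7b-v1")
model = AutoModelForCausalLM.from_pretrained(
    "HiTZ/latxa-7b-v1", torch_dtype=torch.float16, device_map="auto"
)

text = "Euskara adimen artifizialera iritsi da!"
inputs = tokenizer(text, return_tensors="pt").to(model.device)

with torch.no_grad():
    # Passing the input ids as labels returns the mean cross-entropy over the sequence.
    loss = model(**inputs, labels=inputs["input_ids"]).loss

print(f"Perplexity: {torch.exp(loss).item():.2f}")
```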
# **Uses**
Latxa models are intended to be used with Basque data; for any other language the performance is not guaranteed. Same as the original, Latxa inherits the [LLaMA-2 License](https://ai.meta.com/llama/license/) which allows for commercial and research use.
## **Direct Use**
Latxa family models are pre-trained LLMs without any task-specific or instruction fine-tuning. That is, the model can either be prompted to perform a specific task or further fine-tuned for specific use cases.
## **Out-of-Scope Use**
The model was not fine-tuned to follow instructions or to work as a chat assistant, therefore, this kind of usage is not tested nor recommended.
# **Bias, Risks, and Limitations**
In an effort to alleviate the potentially disturbing or harmful content, Latxa has been trained on carefully selected and processed data which comes mainly from local media, national/regional newspapers, encyclopedias and blogs (see Euscrawl below). Still, the model is based on LLaMA models and can potentially carry the same bias, risk and limitations.
Please see LLaMA's _Ethical Considerations and Limitations_ for further information.
# **Training Details**
## **Training Data**
The models were trained on EusCrawl v1, a high-quality corpus for Basque comprising 1.72M documents, 288M words, totalling 2.1GiB of uncompressed text. EusCrawl was built using ad-hoc scrapers to extract text from 33 Basque websites with high-quality content, resulting in cleaner text compared to general-purpose approaches.
See more details in the [EusCrawl](https://huggingface.co/datasets/HiTZ/euscrawl) dataset card.
Additionally, 100K documents of English data randomly selected from the [Pile](https://huggingface.co/datasets/EleutherAI/pile) dataset were also included to avoid catastrophic forgetting.
## **Training Procedure**
The models were trained using the GPT-Neox library on the HPC CINECA computing cluster. All the models were approximately trained with an effective batch size of 2M tokens for 1000 to 2000 steps.
| Model | Steps | Sequence length | Effective batch size | Total tokens | GPU hours |
|-----------|-------|-----------------|----------------------|--------------|------------|
| Latxa 7B | 2000 | 4096 | 2M tokens/step | 4B | 359.2h |
| Latxa 13B | 1000 | 4096 | 2M tokens/step | 2B | 468.8h |
| Latxa 70B | 1680 | 4096 | 2M tokens/step | 3.4B | \*6475.52h |
\* indicates the time for the entire training process (2000 steps); however, the weights of step 1680 are shared, as it is the best checkpoint according to validation loss.
# **Evaluation**
We evaluated the models in zero-shot and few-shot settings on generative, multiple-choice and classification tasks. We used the Basque partitions of each dataset.
## **Testing Data, Factors & Metrics**
### **Testing Data**
* **Belebele** ([Bandarkar et al.](https://arxiv.org/abs/2308.16884)): Belebele is a multiple-choice machine reading comprehension (MRC) dataset spanning 122 language variants. We evaluated the model in a 5-shot fashion.
* Data card: [https://huggingface.co/datasets/facebook/belebele](https://huggingface.co/datasets/facebook/belebele)
* **X-StoryCloze** ([Lin et al.](https://arxiv.org/abs/2112.10668)): XStoryCloze consists of the professionally translated version of the English StoryCloze dataset to 10 non-English languages. Story Cloze is a commonsense reasoning dataset which consists of choosing the correct ending to a four-sentence story. We evaluated the model in a 0-shot fashion.
* Data card: [https://huggingface.co/datasets/juletxara/xstory_cloze](https://huggingface.co/datasets/juletxara/xstory_cloze)
* **BasqueGLUE** ([Urbizu et al.](https://aclanthology.org/2022.lrec-1.172.pdf)): BasqueGLUE is an NLU benchmark for Basque. We evaluated the model in a 5-shot fashion on the following tasks:
* Data card: [https://huggingface.co/datasets/orai-nlp/basqueGLUE](https://huggingface.co/datasets/orai-nlp/basqueGLUE).
* Tasks:
* **BEC2016eu**: Sentiment analysis on tweets about the 2016 Basque elections campaign.
* **VaxxStance**: Stance detection on tweets around the anti-vaccine movement.
* **BTHCv2**: Topic classification of news extracts with 12 categories.
* **EpecKorrefBin**: Coreference detection task similar to WSC.
* **QNLIeu**: Q&A NLI built from the Basque Wikipedia.
* **WiCeu**: Basque Word-in-Context task.
### **Metrics**
* **Accuracy**: Belebele, X-StoryCloze, EpecKorrefBin, QNLI-eu, and WiC-eu
* **Micro F1**: BEC2016-eu and BHTCv2
* **Macro F1**: VaxxStance (favor & against)
## **Results**
The model was evaluated using the LM Evaluation Harness library from EleutherAI.
In order to reproduce our results please follow the instructions in Latxa's [Github repository](https://github.com/hitz-zentroa/latxa?tab=readme-ov-file#evaluation).
| Model | Belebele | X-StoryCloze | BEC | Vaxx | BHTC | coref | QNLI | WiC | Average |
|-------|----------|--------------|-----|------|------|-------|------|-----|---------|
| Random | 25.00 | 50.00 | 33.33 | 33.33 | 8.33 | 50.00 | 50.00 | 50.00 | 37.50 |
| LLaMA 2 7B | 26.22 | 50.43 | 41.63 | 18.60 | 20.06 | 50.94 | 48.32 | 49.64 | 38.23 |
| LLaMA 2 13B | 32.00 | 50.63 | 41.09 | 18.25 | 27.35 | 49.23 | 48.74 | 49.21 | 39.56 |
| LLaMA 2 70B | 33.56 | 51.62 | 47.47 | 21.01 | 31.01 | 52.98 | 51.26 | 51.57 | 42.56 |
| BLOOM 7B | 27.00 | 57.18 | 37.94 | 20.72 | 39.10 | 48.21 | 47.48 | 47.57 | 40.65 |
| XGLM 7B | 23.88 | 57.71 | 39.94 | 21.58 | 36.73 | 50.94 | 50.42 | 49.21 | 41.30 |
| **Latxa 7B** | 35.67 | 63.13 | 55.61 | 45.93 | 44.44 | 50.43 | 55.04 | 50.14 | 50.05 |
| **Latxa 13B** | 53.56 | 65.85 | 53.23 | 48.66 | **53.61** | 62.52 | 57.14 | 54.21 | 56.10 |
| **Latxa 70B** | **71.78** | **67.57** | **63.52** | **48.95** | 49.51 | **79.90** | **58.82** | **55.50** | **61.94** |
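For reference, the harness also exposes a Python entry point; a rough sketch is shown below. The task name is only an example from the upstream harness (`xstorycloze_eu`), and the exact task definitions and settings behind the numbers above are those in the Latxa repository linked earlier:
```python
import lm_eval

# Zero-shot X-StoryCloze (Basque split) with the Hugging Face backend; illustrative only.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=HiTZ/latxa-7b-v1,dtype=float16",
    tasks=["xstorycloze_eu"],
    num_fewshot=0,
)
print(results["results"])
```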
# **Environmental Impact**
Carbon emissions are estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
* **Hardware Type:** HPC Cluster, 4x A100 64Gb nodes
* **Hours used:** 359.2h + 468.8h + 6475.52h = 7303.52h
* **Compute cluster:** CINECA HPC
* **Compute Region:** Italy
* **Carbon Emitted:** 673.75kg CO<sub>2</sub> eq
# **Acknowledgements**
This work has been partially supported by the Basque Government (IKER-GAITU project). The models were trained on the Leonardo supercomputer at CINECA under the EuroHPC Joint Undertaking, project EHPC-EXT-2023E01-013. |
IlyaGusev/saiga_gemma_9b | IlyaGusev | 2024-02-24T12:22:23Z | 522 | 20 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"ru",
"dataset:IlyaGusev/ru_turbo_saiga",
"dataset:IlyaGusev/ru_sharegpt_cleaned",
"dataset:IlyaGusev/oasst1_ru_main_branch",
"dataset:IlyaGusev/gpt_roleplay_realm",
"dataset:lksy/ru_instruct_gpt4",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-02-23T23:03:46Z | ---
language:
- ru
datasets:
- IlyaGusev/ru_turbo_saiga
- IlyaGusev/ru_sharegpt_cleaned
- IlyaGusev/oasst1_ru_main_branch
- IlyaGusev/gpt_roleplay_realm
- lksy/ru_instruct_gpt4
---
# Saiga/Gemma 9B, Russian Gemma-based chatbot
Based on [Gemma 7B](https://huggingface.co/google/gemma-7b).
Training Colab: [link](https://colab.research.google.com/drive/1O7F7Q3IQYh-v7EfsdRwMDMfUnOYZ3DEh).
ChatML prompt format:
```
<|im_start|>system
Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им.<|im_end|>
<|im_start|>user
Привет!<|im_end|>
<|im_start|>assistant
Отлично, а ты?<|im_end|>
<|im_start|>user
Шикарно. Как пройти в библиотеку?<|im_end|>
<|im_start|>assistant
```
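A minimal transformers sketch that builds this ChatML prompt by hand and generates a reply (the generation settings are illustrative, not the author's recommended values):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "IlyaGusev/saiga_gemma_9b"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto")

SYSTEM = "Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им."

def build_prompt(user_message):
    # Assemble the ChatML turns exactly as in the format above, ending with an open assistant turn.
    return (
        f"<|im_start|>system\n{SYSTEM}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

inputs = tokenizer(build_prompt("Почему трава зеленая?"), return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.6, top_p=0.9)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```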
Examples:
```
User: Почему трава зеленая?
Saiga: Зеленый цвет травы обусловлен наличием в её клетках пигмента хлорофилла, который играет важную роль в процессе фотосинтеза – преобразования солнечного света в энергию, используемую для роста и развития растений.
Хлорофилл поглощает свет в определенных диапазонах длин волн, в том числе в зеленой области спектра, и отражает свет в других диапазонах, что и обуславливает зелёный цвет.
```
```
User: Сочини длинный рассказ, обязательно упоминая следующие объекты. Дано: Таня, мяч
Saiga: Однажды, в далекой стране, жила маленькая девочка по имени Таня. Она была очень активной и любила играть в мяч. Вместе со своими друзьями они проводили дни, играя в футбол, баскетбол и волейбол.
Таня была очень талантливой и ловкой, и она часто выигрывала в играх с другими детьми. Но однажды, во время игры в баскетбол, она получила сильную травму. Ей пришлось провести несколько недель в больнице, и она была очень грустной и унылой.
Её друзья не могли понять, как помочь Тане справиться с этой ситуацией. Они пытались поддерживать её, но Таня была слишком подавлена и не хотела играть в мяч.
Однажды, Таня увидела, как её друзья играют в мяч. Она не могла не смотреть, как они ловко передают мяч друг другу и забивают голы. Таня почувствовала, как её сердце разрывается от желания играть вместе с ними.
Она решила, что ей нужно вернуться в игру, чтобы снова почувствовать радость игры. Она начала медленно восстанавливаться и постепенно возвращалась к своему любимому занятию - игре в мяч.
Таня стала играть в мяч снова, но теперь она была более осторожна и внимательна к своему здоровью. Её друзья были очень рады видеть, как Таня возвращается к своей привычной жизни.
С тех пор, Таня стала настоящей звездой на поле. Она стала лучшей игроком в своей команде и помогла им выиграть множество игр. Её друзья были горды, что Таня вернулась к своему любимому занятию и снова стала такой же активной и ловкой, как раньше.
Таня и её друзья продолжали играть в мяч и проводили чудесные дни вместе. Они были счастливы, что Таня вернулась к своему любимому занятию и снова стала частью команды.
Таня и её друзья продолжают играть в мяч и вместе они создают прекрасные воспоминания.
```
v1:
- dataset code revision d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a
- wandb [link](https://wandb.ai/ilyagusev/gemma_test/runs/k7u3uw5i)
- 5 datasets: ru_turbo_saiga, ru_sharegpt_cleaned, oasst1_ru_main_branch, gpt_roleplay_realm, ru_instruct_gpt4
- Datasets merging script: [create_short_chat_set.py](https://github.com/IlyaGusev/rulm/blob/d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a/self_instruct/src/data_processing/create_short_chat_set.py) |
giantdev/5GFAGFxcpPnG76VyJkD27cvMjhaGFuJewcVgZYLFwuvN4XMV_vgg | giantdev | 2024-03-24T10:10:31Z | 522 | 0 | keras | [
"keras",
"region:us"
]
| null | 2024-03-09T03:30:35Z | Entry not found |
habulaj/1712062541365x246088705178451940 | habulaj | 2024-04-02T13:47:18Z | 522 | 0 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"dataset:RickGrimes001/pretoebranco",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
]
| text-to-image | 2024-04-02T12:55:53Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: in the style of TOK
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: false
datasets:
- RickGrimes001/pretoebranco
---
# LoRA DreamBooth - squaadinc/1712062541365x246088705178451940
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0 trained on @fffiloni's SD-XL trainer.
The weights were trained on the concept prompt:
```
in the style of TOK
```
Use this keyword to trigger your custom model in your prompts.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Usage
Make sure to upgrade diffusers to >= 0.19.0:
```
pip install diffusers --upgrade
```
In addition make sure to install transformers, safetensors, accelerate as well as the invisible watermark:
```
pip install invisible_watermark transformers accelerate safetensors
```
To just use the base model, you can run:
```python
import torch
from diffusers import DiffusionPipeline, AutoencoderKL
device = "cuda" if torch.cuda.is_available() else "cpu"
vae = AutoencoderKL.from_pretrained('madebyollin/sdxl-vae-fp16-fix', torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
vae=vae, torch_dtype=torch.float16, variant="fp16",
use_safetensors=True
)
pipe.to(device)
# This is where you load your trained weights
specific_safetensors = "pytorch_lora_weights.safetensors"
lora_scale = 0.9
pipe.load_lora_weights(
'squaadinc/1712062541365x246088705178451940',
weight_name = specific_safetensors,
# use_auth_token = True
)
prompt = "A majestic in the style of TOK jumping from a big stone at night"
image = pipe(
prompt=prompt,
num_inference_steps=50,
cross_attention_kwargs={"scale": lora_scale}
).images[0]
```
|
acon96/Home-1B-v3-GGUF | acon96 | 2024-04-25T01:22:11Z | 522 | 1 | null | [
"gguf",
"automation",
"home",
"assistant",
"text-generation",
"en",
"dataset:acon96/Home-Assistant-Requests",
"license:apache-2.0",
"region:us"
]
| text-generation | 2024-04-24T21:34:47Z | ---
datasets:
- acon96/Home-Assistant-Requests
license: apache-2.0
language:
- en
tags:
- automation
- home
- assistant
pipeline_tag: text-generation
---
# Home 1B v3
The "Home" model is a fine tuning of the [TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) model. The model is able to control devices in the user's house via a Home Assistant integragion. The fine tuning dataset a [custom curated dataset](https://github.com/acon96/home-llm) designed to teach the model function calling.
The model is quantized using Lama.cpp in order to enable running the model in super low resource environments that are common with Home Assistant installations such as Rapsberry Pis.
The model can be used as an "instruct" type model using the Zephyr prompt format. The system prompt is used to provide information about the state of the Home Assistant installation including available devices and callable services.
Example "system" prompt:
```
You are 'Al', a helpful AI Assistant that controls the devices in a house. Complete the following task as instructed or answer the following question with the information provided only.
The current time and date is 08:12 AM on Thursday March 14, 2024
Services: light.turn_off(), light.turn_on(rgb_color,brightness), fan.turn_on(), fan.turn_off()
Devices:
light.office 'Office Light' = on;80%
fan.office 'Office fan' = off
light.kitchen 'Kitchen Light' = on;80%;red
light.bedroom 'Bedroom Light' = off
```
Output from the model will consist of a response that should be relayed back to the user, along with an optional code block that will invoke different Home Assistant "services". The output format from the model for function calling is as follows:
`````
turning on the kitchen lights for you now
```homeassistant
{ "service": "light.turn_on", "target_device": "light.kitchen" }
```
`````
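A minimal sketch of splitting such a response into the spoken reply and the optional service call; the regex pattern and helper name are illustrative assumptions, not part of the model or the Home Assistant API:

```python
# A minimal sketch: pull the optional ```homeassistant code block out of a model
# response and parse it as JSON. The regex and function name are illustrative.
import json
import re

HOMEASSISTANT_BLOCK = re.compile(r"```homeassistant\s*(\{.*?\})\s*```", re.DOTALL)

def split_response(text: str):
    """Return (spoken_reply, service_call_or_None) from a model response."""
    match = HOMEASSISTANT_BLOCK.search(text)
    call = json.loads(match.group(1)) if match else None
    reply = HOMEASSISTANT_BLOCK.sub("", text).strip()
    return reply, call

reply, call = split_response(
    "turning on the kitchen lights for you now\n"
    "```homeassistant\n"
    '{ "service": "light.turn_on", "target_device": "light.kitchen" }\n'
    "```"
)
print(reply)  # -> turning on the kitchen lights for you now
print(call)   # -> {'service': 'light.turn_on', 'target_device': 'light.kitchen'}
```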
## Training
The model was trained as a full fine-tuning on an RTX 3090 (24GB).
## Evaluation
This model achieves a 98.16% score for JSON function calling accuracy on the test dataset.
The v2 model achieves an 88.52% score on the same test.
The v1 model achieves a 76.78% score on the same test.
## Datasets
Synthetic Dataset for SFT - https://github.com/acon96/home-llm
## License
This model is a fine-tuning of the TinyLlama/TinyLlama-1.1B-Chat-v1.0 which is licensed under the Apache 2.0 License. As such this model is released under the same Apache 2.0 License. |
mradermacher/RiddleLegalEasy-GGUF | mradermacher | 2024-05-05T15:10:14Z | 522 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:c-s-ale/RiddleLegalEasy",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-25T08:14:52Z | ---
base_model: c-s-ale/RiddleLegalEasy
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/c-s-ale/RiddleLegalEasy
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
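As one concrete option, here is a minimal sketch using a recent `llama-cpp-python`; the chosen quant file and settings are assumptions, see the table below for all provided files:

```python
# A minimal sketch using llama-cpp-python; the chosen quant file and settings are
# illustrative, see the Provided Quants table for all available files.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/RiddleLegalEasy-GGUF",
    filename="RiddleLegalEasy.Q4_K_M.gguf",
    n_ctx=2048,
)
out = llm("Explain the difference between a contract and a deed in two sentences.", max_tokens=128)
print(out["choices"][0]["text"])
```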
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/RiddleLegalEasy-GGUF/resolve/main/RiddleLegalEasy.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/RiddleLegalEasy-GGUF/resolve/main/RiddleLegalEasy.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/RiddleLegalEasy-GGUF/resolve/main/RiddleLegalEasy.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/RiddleLegalEasy-GGUF/resolve/main/RiddleLegalEasy.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/RiddleLegalEasy-GGUF/resolve/main/RiddleLegalEasy.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/RiddleLegalEasy-GGUF/resolve/main/RiddleLegalEasy.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/RiddleLegalEasy-GGUF/resolve/main/RiddleLegalEasy.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/RiddleLegalEasy-GGUF/resolve/main/RiddleLegalEasy.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/RiddleLegalEasy-GGUF/resolve/main/RiddleLegalEasy.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RiddleLegalEasy-GGUF/resolve/main/RiddleLegalEasy.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RiddleLegalEasy-GGUF/resolve/main/RiddleLegalEasy.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/RiddleLegalEasy-GGUF/resolve/main/RiddleLegalEasy.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/RiddleLegalEasy-GGUF/resolve/main/RiddleLegalEasy.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/RiddleLegalEasy-GGUF/resolve/main/RiddleLegalEasy.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/RiddleLegalEasy-GGUF/resolve/main/RiddleLegalEasy.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/L3-70B-sunfall-abliterated-v0.2-GGUF | mradermacher | 2024-06-07T22:51:44Z | 522 | 0 | transformers | [
"transformers",
"gguf",
"not-for-all-audiences",
"en",
"base_model:crestf411/L3-70B-sunfall-abliterated-v0.2",
"license:llama3",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-07T14:19:17Z | ---
base_model: crestf411/L3-70B-sunfall-abliterated-v0.2
language:
- en
library_name: transformers
license: llama3
license_link: LICENSE
license_name: llama3
quantized_by: mradermacher
tags:
- not-for-all-audiences
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/crestf411/L3-70B-sunfall-abliterated-v0.2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-GGUF/resolve/main/L3-70B-sunfall-abliterated-v0.2.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-GGUF/resolve/main/L3-70B-sunfall-abliterated-v0.2.IQ3_XS.gguf) | IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-GGUF/resolve/main/L3-70B-sunfall-abliterated-v0.2.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-GGUF/resolve/main/L3-70B-sunfall-abliterated-v0.2.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-GGUF/resolve/main/L3-70B-sunfall-abliterated-v0.2.IQ3_M.gguf) | IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-GGUF/resolve/main/L3-70B-sunfall-abliterated-v0.2.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-GGUF/resolve/main/L3-70B-sunfall-abliterated-v0.2.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-GGUF/resolve/main/L3-70B-sunfall-abliterated-v0.2.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-GGUF/resolve/main/L3-70B-sunfall-abliterated-v0.2.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-GGUF/resolve/main/L3-70B-sunfall-abliterated-v0.2.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-GGUF/resolve/main/L3-70B-sunfall-abliterated-v0.2.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-GGUF/resolve/main/L3-70B-sunfall-abliterated-v0.2.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-GGUF/resolve/main/L3-70B-sunfall-abliterated-v0.2.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-GGUF/resolve/main/L3-70B-sunfall-abliterated-v0.2.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-GGUF/resolve/main/L3-70B-sunfall-abliterated-v0.2.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/L3-70B-sunfall-abliterated-v0.2-GGUF/resolve/main/L3-70B-sunfall-abliterated-v0.2.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
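For the split Q6_K and Q8_0 files above, the parts are plain file splits and are simply concatenated back into a single GGUF before use; a sketch (filenames follow the table above):

```shell
# Concatenate the downloaded parts into one GGUF file (example for Q6_K).
cat L3-70B-sunfall-abliterated-v0.2.Q6_K.gguf.part1of2 \
    L3-70B-sunfall-abliterated-v0.2.Q6_K.gguf.part2of2 \
    > L3-70B-sunfall-abliterated-v0.2.Q6_K.gguf
```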
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
lmstudio-community/DiscoPOP-zephyr-7b-gemma-GGUF | lmstudio-community | 2024-06-13T21:04:31Z | 522 | 7 | null | [
"gguf",
"alignment-handbook",
"generated_from_trainer",
"text-generation",
"dataset:argilla/dpo-mix-7k",
"base_model:SakanaAI/DiscoPOP-zephyr-7b-gemma",
"license:gemma",
"region:us"
]
| text-generation | 2024-06-13T20:48:56Z | ---
license: gemma
tags:
- alignment-handbook
- generated_from_trainer
datasets:
- argilla/dpo-mix-7k
model-index:
- name: DiscoPOP-zephyr-7b-gemma
results: []
quantized_by: bartowski
pipeline_tag: text-generation
lm_studio:
param_count: 8b
use_case: general
release_date: 13-06-2024
model_creator: SakanaAI
prompt_template: ChatML
system_prompt: none
base_model: gemma
original_repo: SakanaAI/DiscoPOP-zephyr-7b-gemma
base_model: SakanaAI/DiscoPOP-zephyr-7b-gemma
---
## 💫 Community Model> DiscoPOP-zephyr-7b-gemma by Sakana AI
*👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator:** [Sakana AI](https://huggingface.co/SakanaAI)<br>
**Original model**: [DiscoPOP-zephyr-7b-gemma](https://huggingface.co/SakanaAI/DiscoPOP-zephyr-7b-gemma)<br>
**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b3145](https://github.com/ggerganov/llama.cpp/releases/tag/b3145)<br>
## Model Summary:
This model is based on the Zephyr 7b Gemma model, trained with a brand new technique called DiscoPOP. DiscoPOP is Sakana AI's Discovered Preference Optimization algorithm.<br>
This training method is brand new, discovered by experimentally prompting an LLM to propose new bespoke training techniques.
## Prompt template:
Choose the `ChatML` preset in your LM Studio.
Under the hood, the model will see a prompt that's formatted like so:
```
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
This model is not tuned for a system prompt.
## Technical Details
DiscoPOP was discovered through a new method proposed by Sakana AI. In this method, an LLM is prompted to propose and implement new preference optimization loss functions based on previously-evaluated performance metrics.<br>
This process leads to the discovery of previously-unknown preference optimization algorithms. DiscoPOP represents the best performing discovered preference optimizer.<br>
DiscoPOP achieves a higher score (Reward) while deviating less from the base model (KL Divergence), compared to existing state-of-the-art methods such as DPO.
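For orientation, the baseline these discovered objectives are measured against is the standard DPO loss; the following is a minimal PyTorch sketch of that baseline (not of the discovered DiscoPOP objective itself):

```python
# A sketch of the standard DPO (sigmoid) preference loss used as the baseline
# that discovered objectives like DiscoPOP are compared against. This is NOT
# the DiscoPOP loss itself; beta is an illustrative value.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.05) -> torch.Tensor:
    # Log-ratios of the policy against the frozen reference model.
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # Reward margin scaled by beta; push the chosen response above the rejected one.
    logits = beta * (chosen_logratio - rejected_logratio)
    return -F.logsigmoid(logits).mean()
```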
For a deeper analysis and additional details, you can read their blog post here: https://sakana.ai/llm-squared/
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/)
🙏 Special thanks to [Kalomaze](https://github.com/kalomaze) and [Dampf](https://github.com/Dampfinchen) for their work on the dataset (linked [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)) that was used for calculating the imatrix for all sizes.
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio. |
jkodiyil/tinyllama-bnb-4bit-clva-gguf | jkodiyil | 2024-06-25T22:57:49Z | 522 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-25T21:31:49Z | ---
base_model: unsloth/tinyllama-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** jkodiyil
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Helsinki-NLP/opus-mt-pl-es | Helsinki-NLP | 2023-08-16T12:02:40Z | 521 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pl",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2022-03-02T23:29:04Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-pl-es
* source languages: pl
* target languages: es
* OPUS readme: [pl-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pl-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/pl-es/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pl-es/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pl-es/opus-2020-01-21.eval.txt)
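A minimal translation sketch with `transformers` (the example sentence is illustrative):

```python
# A minimal sketch: translate Polish to Spanish with this Marian checkpoint.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-pl-es"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Dzień dobry, jak się masz?"], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```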
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.pl.es | 46.9 | 0.654 |
|
fusing/autoencoder-kl-dummy | fusing | 2022-08-06T05:04:49Z | 521 | 0 | diffusers | [
"diffusers",
"region:us"
]
| null | 2022-06-29T10:45:54Z | Entry not found |
Yova/SmallCap7M | Yova | 2023-07-31T18:15:48Z | 521 | 4 | transformers | [
"transformers",
"pytorch",
"smallcap",
"image-to-text",
"en",
"endpoints_compatible",
"region:us"
]
| image-to-text | 2022-09-27T12:33:15Z | ---
language:
- en
pipeline_tag: image-to-text
--- |
Fictiverse/Stable_Diffusion_VoxelArt_Model | Fictiverse | 2023-05-07T08:22:35Z | 521 | 157 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2022-11-10T04:42:13Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
---
# VoxelArt model V1
This is the fine-tuned Stable Diffusion model trained on Voxel Art images.
Use **VoxelArt** in your prompts.
### Sample images:

Based on StableDiffusion 1.5 model
### 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or [FLAX/JAX]().
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "Fictiverse/Stable_Diffusion_PaperCut_Model"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "PaperCut R2-D2"
image = pipe(prompt).images[0]
image.save("./R2-D2.png")
``` |
22h/vintedois-diffusion-v0-1 | 22h | 2022-12-30T17:58:36Z | 521 | 382 | diffusers | [
"diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2022-12-27T13:45:40Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
---
### Vintedois (22h) Diffusion model trained by [Predogl](https://twitter.com/Predogl) and [piEsposito](https://twitter.com/piesposi_to) with open weights, configs and prompts (as it should be)
This model was trained on a large amount of high quality images with simple prompts to generate beautiful images without a lot of prompt engineering.
You can enforce style by prepending your prompt with `estilovintedois` if it is not good enough.
It should also be very dreamboothable, being able to generate high fidelity faces with a little amount of steps.
**You can use this model commercially or whatever, but we are not liable if you do messed up stuff with it.**
### Gradio
We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run vintedois-diffusion-v0-1 :
[](https://huggingface.co/spaces/22h/vintedois-diffusion-v0-1)
### Model card
Everything from [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5), plus the fact that this is being built by two indie devs, so it was not extensively tested for new biases.
You can run this concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
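Alternatively, a minimal local `diffusers` sketch; the scheduler, steps, CFG scale and seed mirror the first example prompt further below, while the dtype and device choices are assumptions:

```python
# A minimal sketch matching the first example prompt below: Euler Ancestral
# scheduler, 30 steps, CFG 7.5, seed 44. Hardware/dtype choices are assumptions.
import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "22h/vintedois-diffusion-v0-1", torch_dtype=torch.float16
)
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

generator = torch.Generator("cuda").manual_seed(44)
# Prepend "estilovintedois" to the prompt to enforce the style if needed.
image = pipe(
    "photo of an old man in a jungle, looking at the camera",
    guidance_scale=7.5,
    num_inference_steps=30,
    generator=generator,
).images[0]
image.save("old_man_in_jungle.png")
```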
### Sample results
<img src="https://huggingface.co/22h/vintedois-diffusion-v0-1/resolve/main/joined.png" width=1024/>
### Example prompts
- Prompt: photo of an old man in a jungle, looking at the camera
- CFG Scale: 7.5
- Scheduler: `diffusers.EulerAncestralDiscreteScheduler`
- Steps: 30
- Seed: 44
<img src="https://huggingface.co/22h/vintedois-diffusion-v0-1/resolve/main/44-euler-a-photo%20of%20an%20old%20man%20in%20a%20jungle%2C%20looking%20at%C2%A0the%C2%A0camera.png" width=512/>
- Prompt: kneeling cat knight, portrait, finely detailed armor, intricate design, silver, silk, cinematic lighting, 4k
- CFG Scale: 7.5
- Scheduler: `diffusers.EulerAncestralDiscreteScheduler`
- Steps: 50
- Seed: 44
<img src="https://huggingface.co/22h/vintedois-diffusion-v0-1/resolve/main/44-euler-a-kneeling%20cat%20knight%2C%20portrait%2C%20finely%20detailed%20armor%2C%20intricate%20design%2C%20silver%2C%20silk%2C%20cinematic%20lighting%2C%204k.png" width=512/>
- Prompt: a beautiful girl In front of the cabin, the country, by Artgerm Lau and Krenz Cushart,hyperdetailed, trending on artstation, trending on deviantart
- CFG Scale: 7.5
- Scheduler: `diffusers.EulerAncestralDiscreteScheduler`
- Steps: 50
- Seed: 44
<img src="https://huggingface.co/22h/vintedois-diffusion-v0-1/resolve/main/44-euler-a-a%20beautiful%20girl%20In%20front%20of%20the%20cabin%2C%20the%20country%2C%20by%20Artgerm%20Lau%20and%20Krenz%20Cushart%EF%BC%8Chyperdetailed%2C%20trending%20on%20artstation%2C%20tre.png" width=512/>
- Prompt: destroyed city
- CFG Scale: 7.5
- Scheduler: `diffusers.EulerAncestralDiscreteScheduler`
- Steps: 50
- Seed: 44
<img src="https://huggingface.co/22h/vintedois-diffusion-v0-1/resolve/main/44-euler-a-destroyed%20city.png" width=512/>
- Prompt: victorian city landscape
- CFG Scale: 7.5
- Scheduler: `diffusers.EulerAncestralDiscreteScheduler`
- Steps: 50
- Seed: 44
<img src="https://huggingface.co/22h/vintedois-diffusion-v0-1/resolve/main/44-euler-a-victorian%20city%20landscape.png" width=512/>
- Prompt: prehistoric native living room
- CFG Scale: 7.5
- Scheduler: `diffusers.EulerAncestralDiscreteScheduler`
- Steps: 50
- Seed: 44
<img src="https://huggingface.co/22h/vintedois-diffusion-v0-1/resolve/main/44-euler-a-prehistoric%20native%20living%20room.png" width=512/>
Thanks for the Google Developer Expert program for providing us with a GCP credits grant. |
keremberke/yolov5m-clash-of-clans | keremberke | 2022-12-30T20:47:36Z | 521 | 1 | yolov5 | [
"yolov5",
"tensorboard",
"yolo",
"vision",
"object-detection",
"pytorch",
"dataset:keremberke/clash-of-clans-object-detection",
"model-index",
"region:us"
]
| object-detection | 2022-12-30T10:31:38Z |
---
tags:
- yolov5
- yolo
- vision
- object-detection
- pytorch
library_name: yolov5
library_version: 7.0.6
inference: false
datasets:
- keremberke/clash-of-clans-object-detection
model-index:
- name: keremberke/yolov5m-clash-of-clans
results:
- task:
type: object-detection
dataset:
type: keremberke/clash-of-clans-object-detection
name: keremberke/clash-of-clans-object-detection
split: validation
metrics:
- type: precision # since mAP@0.5 is not available on hf.co/metrics
value: 0.8739138102679778 # min: 0.0 - max: 1.0
name: mAP@0.5
---
<div align="center">
<img width="640" alt="keremberke/yolov5m-clash-of-clans" src="https://huggingface.co/keremberke/yolov5m-clash-of-clans/resolve/main/sample_visuals.jpg">
</div>
### How to use
- Install [yolov5](https://github.com/fcakyon/yolov5-pip):
```bash
pip install -U yolov5
```
- Load model and perform prediction:
```python
import yolov5
# load model
model = yolov5.load('keremberke/yolov5m-clash-of-clans')
# set model parameters
model.conf = 0.25 # NMS confidence threshold
model.iou = 0.45 # NMS IoU threshold
model.agnostic = False # NMS class-agnostic
model.multi_label = False # NMS multiple labels per box
model.max_det = 1000 # maximum number of detections per image
# set image
img = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model(img, size=640)
# inference with test time augmentation
results = model(img, augment=True)
# parse results
predictions = results.pred[0]
boxes = predictions[:, :4] # x1, y1, x2, y2
scores = predictions[:, 4]
categories = predictions[:, 5]
# show detection bounding boxes on image
results.show()
# save results into "results/" folder
results.save(save_dir='results/')
```
- Finetune the model on your custom dataset:
```bash
yolov5 train --data data.yaml --img 640 --batch 16 --weights keremberke/yolov5m-clash-of-clans --epochs 10
```
**More models available at: [awesome-yolov5-models](https://github.com/keremberke/awesome-yolov5-models)** |
timm/deit3_base_patch16_384.fb_in1k | timm | 2024-02-10T23:36:51Z | 521 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2204.07118",
"license:apache-2.0",
"region:us"
]
| image-classification | 2023-03-28T00:48:23Z | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for deit3_base_patch16_384.fb_in1k
A DeiT-III image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 86.9
- GMACs: 55.5
- Activations (M): 101.6
- Image size: 384 x 384
- **Papers:**
- DeiT III: Revenge of the ViT: https://arxiv.org/abs/2204.07118
- **Original:** https://github.com/facebookresearch/deit
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('deit3_base_patch16_384.fb_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'deit3_base_patch16_384.fb_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 577, 768) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{Touvron2022DeiTIR,
title={DeiT III: Revenge of the ViT},
author={Hugo Touvron and Matthieu Cord and Herve Jegou},
journal={arXiv preprint arXiv:2204.07118},
year={2022},
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
Yntec/HassanBlend1512VAE | Yntec | 2023-09-01T04:36:48Z | 521 | 2 | diffusers | [
"diffusers",
"safetensors",
"Photorealistic",
"General",
"Hassan",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-08-31T17:47:11Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
language:
- en
tags:
- Photorealistic
- General
- Hassan
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: true
---
# Hassan 1.5.1.2
This is Hassan 1.5.1.2 with the MoistMixV2 VAE baked in.
Sample and prompt:

concept art of CUTE girl in a pixel, chibi character, DETAILED EYES, key visual, summer day, magazine ad, 1940, iconic, highly detailed, digital painting, artstation, concept art, sharp focus, in harmony with nature, streamlined, hyperrealism by makoto shinkai and akihiko yoshida and wlop
Original page:
https://civitai.com/models/1173?modelVersionId=4635 (download the Full 6GB file at https://civitai.com/api/download/models/4635?type=Model&format=PickleTensor&size=full&fp=fp16 - the pruned ones are broken and caused all the 1 star reviews)
TheBloke/llava-v1.5-13B-AWQ | TheBloke | 2023-11-09T18:17:08Z | 521 | 30 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"base_model:liuhaotian/llava-v1.5-13b",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
]
| text-generation | 2023-10-15T11:53:51Z | ---
base_model: liuhaotian/llava-v1.5-13b
inference: false
license: llama2
model_creator: Haotian Liu
model_name: Llava v1.5 13B
model_type: llama
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llava v1.5 13B - AWQ
- Model creator: [Haotian Liu](https://huggingface.co/liuhaotian)
- Original model: [Llava v1.5 13B](https://huggingface.co/liuhaotian/llava-v1.5-13b)
<!-- description start -->
## Description
This repo contains AWQ model files for [Haotian Liu's Llava v1.5 13B](https://huggingface.co/liuhaotian/llava-v1.5-13b).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference.
It is also now supported by continuous batching server [vLLM](https://github.com/vllm-project/vllm), allowing use of Llama AWQ models for high-throughput concurrent inference in multi-user server scenarios.
As of September 25th 2023, preliminary Llama-only AWQ support has also been added to [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference).
Note that, at the time of writing, overall throughput is still lower than running vLLM or TGI with unquantised models, however using AWQ enables using much smaller GPUs which can lead to easier deployment and overall cost savings. For example, a 70B model can be run on 1 x 48GB GPU instead of 2 x 80GB.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/llava-v1.5-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/llava-v1.5-13B-GPTQ)
* [Haotian Liu's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/liuhaotian/llava-v1.5-13b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: llava 1.5
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
USER: <image>{prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/llava-v1.5-13B-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.25 GB
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Serving this model from vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
Note: at the time of writing, vLLM has not yet done a new release with AWQ support.
If you try the vLLM examples below and get an error about `quantization` being unrecognised, or other AWQ-related issues, please install vLLM from Github source.
- When using vLLM as a server, pass the `--quantization awq` parameter, for example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/llava-v1.5-13B-AWQ --quantization awq --dtype half
```
When using vLLM from Python code, pass the `quantization=awq` parameter, for example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Hello, my name is",
"The president of the United States is",
"The capital of France is",
"The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/llava-v1.5-13B-AWQ", quantization="awq", dtype="half")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm start -->
<!-- README_AWQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/llava-v1.5-13B-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## How to use this AWQ model from Python code
### Install the necessary packages
Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.1 or later
```shell
pip3 install autoawq
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### You can then try the following example code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
model_name_or_path = "TheBloke/llava-v1.5-13B-AWQ"
# Load model
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True,
trust_remote_code=False, safetensors=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False)
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
print("\n\n*** Generate:")
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
# Generate output
generation_output = model.generate(
tokens,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
max_new_tokens=512
)
print("Output: ", tokenizer.decode(generation_output[0]))
"""
# Inference should be possible with transformers pipeline as well in future
# But currently this is not yet supported by AutoAWQ (correct as of September 25th 2023)
from transformers import pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
"""
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ)
- [vLLM](https://github.com/vllm-project/vllm)
- [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
TGI merged AWQ support on September 25th, 2023: [TGI PR #1054](https://github.com/huggingface/text-generation-inference/pull/1054). Use the `:latest` Docker container until the next TGI release is made.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Haotian Liu's Llava v1.5 13B
<br>
<br>
# LLaVA Model Card
## Model details
**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data.
It is an auto-regressive language model, based on the transformer architecture.
**Model date:**
LLaVA-v1.5-13B was trained in September 2023.
**Paper or resources for more information:**
https://llava-vl.github.io/
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
**Where to send questions or comments about the model:**
https://github.com/haotian-liu/LLaVA/issues
## Intended use
**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## Training dataset
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 158K GPT-generated multimodal instruction-following data.
- 450K academic-task-oriented VQA data mixture.
- 40K ShareGPT data.
## Evaluation dataset
A collection of 12 benchmarks, including 5 academic VQA benchmarks and 7 recent benchmarks specifically proposed for instruction-following LMMs.
|
TheBloke/Mistral_7B_Dolphin2.1_LIMA0.5-GGUF | TheBloke | 2023-10-28T11:40:50Z | 521 | 6 | transformers | [
"transformers",
"gguf",
"mistral",
"base_model:RossAscends/Mistral_7B_Dolphin2.1_LIMA0.5_fp16",
"license:mit",
"text-generation-inference",
"region:us"
]
| null | 2023-10-28T07:50:04Z | ---
base_model: RossAscends/Mistral_7B_Dolphin2.1_LIMA0.5_fp16
inference: false
license: mit
model_creator: Ross Ascends
model_name: Mistral 7B Dolphin2.1 Lima0.5
model_type: mistral
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Mistral 7B Dolphin2.1 Lima0.5 - GGUF
- Model creator: [Ross Ascends](https://huggingface.co/RossAscends)
- Original model: [Mistral 7B Dolphin2.1 Lima0.5](https://huggingface.co/RossAscends/Mistral_7B_Dolphin2.1_LIMA0.5_fp16)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Ross Ascends's Mistral 7B Dolphin2.1 Lima0.5](https://huggingface.co/RossAscends/Mistral_7B_Dolphin2.1_LIMA0.5_fp16).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Mistral_7B_Dolphin2.1_LIMA0.5-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Mistral_7B_Dolphin2.1_LIMA0.5-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Mistral_7B_Dolphin2.1_LIMA0.5-GGUF)
* [Ross Ascends's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/RossAscends/Mistral_7B_Dolphin2.1_LIMA0.5_fp16)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [mistral_7b_dolphin2.1_lima0.5.Q2_K.gguf](https://huggingface.co/TheBloke/Mistral_7B_Dolphin2.1_LIMA0.5-GGUF/blob/main/mistral_7b_dolphin2.1_lima0.5.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [mistral_7b_dolphin2.1_lima0.5.Q3_K_S.gguf](https://huggingface.co/TheBloke/Mistral_7B_Dolphin2.1_LIMA0.5-GGUF/blob/main/mistral_7b_dolphin2.1_lima0.5.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [mistral_7b_dolphin2.1_lima0.5.Q3_K_M.gguf](https://huggingface.co/TheBloke/Mistral_7B_Dolphin2.1_LIMA0.5-GGUF/blob/main/mistral_7b_dolphin2.1_lima0.5.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [mistral_7b_dolphin2.1_lima0.5.Q3_K_L.gguf](https://huggingface.co/TheBloke/Mistral_7B_Dolphin2.1_LIMA0.5-GGUF/blob/main/mistral_7b_dolphin2.1_lima0.5.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [mistral_7b_dolphin2.1_lima0.5.Q4_0.gguf](https://huggingface.co/TheBloke/Mistral_7B_Dolphin2.1_LIMA0.5-GGUF/blob/main/mistral_7b_dolphin2.1_lima0.5.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mistral_7b_dolphin2.1_lima0.5.Q4_K_S.gguf](https://huggingface.co/TheBloke/Mistral_7B_Dolphin2.1_LIMA0.5-GGUF/blob/main/mistral_7b_dolphin2.1_lima0.5.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [mistral_7b_dolphin2.1_lima0.5.Q4_K_M.gguf](https://huggingface.co/TheBloke/Mistral_7B_Dolphin2.1_LIMA0.5-GGUF/blob/main/mistral_7b_dolphin2.1_lima0.5.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [mistral_7b_dolphin2.1_lima0.5.Q5_0.gguf](https://huggingface.co/TheBloke/Mistral_7B_Dolphin2.1_LIMA0.5-GGUF/blob/main/mistral_7b_dolphin2.1_lima0.5.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [mistral_7b_dolphin2.1_lima0.5.Q5_K_S.gguf](https://huggingface.co/TheBloke/Mistral_7B_Dolphin2.1_LIMA0.5-GGUF/blob/main/mistral_7b_dolphin2.1_lima0.5.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [mistral_7b_dolphin2.1_lima0.5.Q5_K_M.gguf](https://huggingface.co/TheBloke/Mistral_7B_Dolphin2.1_LIMA0.5-GGUF/blob/main/mistral_7b_dolphin2.1_lima0.5.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [mistral_7b_dolphin2.1_lima0.5.Q6_K.gguf](https://huggingface.co/TheBloke/Mistral_7B_Dolphin2.1_LIMA0.5-GGUF/blob/main/mistral_7b_dolphin2.1_lima0.5.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [mistral_7b_dolphin2.1_lima0.5.Q8_0.gguf](https://huggingface.co/TheBloke/Mistral_7B_Dolphin2.1_LIMA0.5-GGUF/blob/main/mistral_7b_dolphin2.1_lima0.5.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Mistral_7B_Dolphin2.1_LIMA0.5-GGUF and below it, a specific filename to download, such as: mistral_7b_dolphin2.1_lima0.5.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Mistral_7B_Dolphin2.1_LIMA0.5-GGUF mistral_7b_dolphin2.1_lima0.5.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Mistral_7B_Dolphin2.1_LIMA0.5-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Mistral_7B_Dolphin2.1_LIMA0.5-GGUF mistral_7b_dolphin2.1_lima0.5.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m mistral_7b_dolphin2.1_lima0.5.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Mistral_7B_Dolphin2.1_LIMA0.5-GGUF", model_file="mistral_7b_dolphin2.1_lima0.5.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
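A minimal LangChain sketch using llama-cpp-python is shown below; the local path and sampling settings are illustrative and assume the Q4_K_M file from this repo has already been downloaded:
```python
# A sketch of using this GGUF with LangChain via llama-cpp-python.
# Assumes the Q4_K_M file from this repo is already downloaded locally.
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./mistral_7b_dolphin2.1_lima0.5.Q4_K_M.gguf",  # local path to the downloaded quant
    n_gpu_layers=32,   # set to 0 if you have no GPU acceleration
    n_ctx=2048,
    temperature=0.7,
)

# The model uses the ChatML prompt format shown in the llama.cpp example above.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite a short greeting.<|im_end|>\n"
    "<|im_start|>assistant"
)
print(llm(prompt))
```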
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Ross Ascends's Mistral 7B Dolphin2.1 Lima0.5
ehartford's merge of Mistral 7B 0.1 with his Dolphin 2.1 dataset
https://huggingface.co/ehartford/dolphin-2.1-mistral-7b
and
LIMA RP dataset applied as a lora at 0.5 weight
https://huggingface.co/lemonilia/limarp-llama2-v2/
Purpose of the model is to be RP-focused, smart, fast, and lightweight for users with low VRAM.
I've already built the exl2 4bpw quant (linked below), and it will run 8k ctx at around 6GB VRAM and respond to a full context at roughly 30tps (tested on my 3060) if exl2_hf loader is used with FA2 enabled.
Model has been tested by several users on the SillyTavern discord server, and run on Horde for a full day - with good results.
https://huggingface.co/RossAscends/Mistral7B_Dolphin2.1_LIMARP0.5_4bpw_exl2
Mistral or ChatML context presets both possible.
exllama v2 4bpw quant: https://huggingface.co/RossAscends/Mistral7B_Dolphin2.1_LIMARP0.5_4bpw_exl2
<!-- original-model-card end -->
|
TheBloke/TinyLlama-1.1B-intermediate-step-715k-1.5T-GGUF | TheBloke | 2023-11-07T12:57:54Z | 521 | 5 | transformers | [
"transformers",
"gguf",
"tinyllama",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"base_model:PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T",
"license:apache-2.0",
"region:us"
]
| null | 2023-11-07T12:56:24Z | ---
base_model: PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
inference: false
language:
- en
license: apache-2.0
model_creator: Zhang Peiyuan
model_name: TinyLlama 1.1B Intermediate Step 715K 1.5T
model_type: tinyllama
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# TinyLlama 1.1B Intermediate Step 715K 1.5T - GGUF
- Model creator: [Zhang Peiyuan](https://huggingface.co/PY007)
- Original model: [TinyLlama 1.1B Intermediate Step 715K 1.5T](https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Zhang Peiyuan's TinyLlama 1.1B Intermediate Step 715K 1.5T](https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/TinyLlama-1.1B-intermediate-step-715k-1.5T-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/TinyLlama-1.1B-intermediate-step-715k-1.5T-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/TinyLlama-1.1B-intermediate-step-715k-1.5T-GGUF)
* [Zhang Peiyuan's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: None
```
{prompt}
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
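For example, the 4.5 bpw figure for Q4_K follows from the super-block layout (4-bit quants for 256 weights, a 6-bit scale and 6-bit min per block, plus two fp16 super-block scales):
```latex
\underbrace{256 \times 4}_{\text{quants}}
+ \underbrace{8 \times (6 + 6)}_{\text{block scales/mins}}
+ \underbrace{2 \times 16}_{\text{fp16 } d,\ d_{\min}}
= 1152 \text{ bits}
\quad\Rightarrow\quad
\frac{1152}{256} = 4.5 \text{ bpw}
```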
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [tinyllama-1.1b-intermediate-step-715k-1.5t.Q2_K.gguf](https://huggingface.co/TheBloke/TinyLlama-1.1B-intermediate-step-715k-1.5T-GGUF/blob/main/tinyllama-1.1b-intermediate-step-715k-1.5t.Q2_K.gguf) | Q2_K | 2 | 0.48 GB| 2.98 GB | smallest, significant quality loss - not recommended for most purposes |
| [tinyllama-1.1b-intermediate-step-715k-1.5t.Q3_K_S.gguf](https://huggingface.co/TheBloke/TinyLlama-1.1B-intermediate-step-715k-1.5T-GGUF/blob/main/tinyllama-1.1b-intermediate-step-715k-1.5t.Q3_K_S.gguf) | Q3_K_S | 3 | 0.50 GB| 3.00 GB | very small, high quality loss |
| [tinyllama-1.1b-intermediate-step-715k-1.5t.Q3_K_M.gguf](https://huggingface.co/TheBloke/TinyLlama-1.1B-intermediate-step-715k-1.5T-GGUF/blob/main/tinyllama-1.1b-intermediate-step-715k-1.5t.Q3_K_M.gguf) | Q3_K_M | 3 | 0.55 GB| 3.05 GB | very small, high quality loss |
| [tinyllama-1.1b-intermediate-step-715k-1.5t.Q3_K_L.gguf](https://huggingface.co/TheBloke/TinyLlama-1.1B-intermediate-step-715k-1.5T-GGUF/blob/main/tinyllama-1.1b-intermediate-step-715k-1.5t.Q3_K_L.gguf) | Q3_K_L | 3 | 0.59 GB| 3.09 GB | small, substantial quality loss |
| [tinyllama-1.1b-intermediate-step-715k-1.5t.Q4_0.gguf](https://huggingface.co/TheBloke/TinyLlama-1.1B-intermediate-step-715k-1.5T-GGUF/blob/main/tinyllama-1.1b-intermediate-step-715k-1.5t.Q4_0.gguf) | Q4_0 | 4 | 0.64 GB| 3.14 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [tinyllama-1.1b-intermediate-step-715k-1.5t.Q4_K_S.gguf](https://huggingface.co/TheBloke/TinyLlama-1.1B-intermediate-step-715k-1.5T-GGUF/blob/main/tinyllama-1.1b-intermediate-step-715k-1.5t.Q4_K_S.gguf) | Q4_K_S | 4 | 0.64 GB| 3.14 GB | small, greater quality loss |
| [tinyllama-1.1b-intermediate-step-715k-1.5t.Q4_K_M.gguf](https://huggingface.co/TheBloke/TinyLlama-1.1B-intermediate-step-715k-1.5T-GGUF/blob/main/tinyllama-1.1b-intermediate-step-715k-1.5t.Q4_K_M.gguf) | Q4_K_M | 4 | 0.67 GB| 3.17 GB | medium, balanced quality - recommended |
| [tinyllama-1.1b-intermediate-step-715k-1.5t.Q5_0.gguf](https://huggingface.co/TheBloke/TinyLlama-1.1B-intermediate-step-715k-1.5T-GGUF/blob/main/tinyllama-1.1b-intermediate-step-715k-1.5t.Q5_0.gguf) | Q5_0 | 5 | 0.77 GB| 3.27 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [tinyllama-1.1b-intermediate-step-715k-1.5t.Q5_K_S.gguf](https://huggingface.co/TheBloke/TinyLlama-1.1B-intermediate-step-715k-1.5T-GGUF/blob/main/tinyllama-1.1b-intermediate-step-715k-1.5t.Q5_K_S.gguf) | Q5_K_S | 5 | 0.77 GB| 3.27 GB | large, low quality loss - recommended |
| [tinyllama-1.1b-intermediate-step-715k-1.5t.Q5_K_M.gguf](https://huggingface.co/TheBloke/TinyLlama-1.1B-intermediate-step-715k-1.5T-GGUF/blob/main/tinyllama-1.1b-intermediate-step-715k-1.5t.Q5_K_M.gguf) | Q5_K_M | 5 | 0.78 GB| 3.28 GB | large, very low quality loss - recommended |
| [tinyllama-1.1b-intermediate-step-715k-1.5t.Q6_K.gguf](https://huggingface.co/TheBloke/TinyLlama-1.1B-intermediate-step-715k-1.5T-GGUF/blob/main/tinyllama-1.1b-intermediate-step-715k-1.5t.Q6_K.gguf) | Q6_K | 6 | 0.90 GB| 3.40 GB | very large, extremely low quality loss |
| [tinyllama-1.1b-intermediate-step-715k-1.5t.Q8_0.gguf](https://huggingface.co/TheBloke/TinyLlama-1.1B-intermediate-step-715k-1.5T-GGUF/blob/main/tinyllama-1.1b-intermediate-step-715k-1.5t.Q8_0.gguf) | Q8_0 | 8 | 1.17 GB| 3.67 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/TinyLlama-1.1B-intermediate-step-715k-1.5T-GGUF and below it, a specific filename to download, such as: tinyllama-1.1b-intermediate-step-715k-1.5t.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/TinyLlama-1.1B-intermediate-step-715k-1.5T-GGUF tinyllama-1.1b-intermediate-step-715k-1.5t.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/TinyLlama-1.1B-intermediate-step-715k-1.5T-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/TinyLlama-1.1B-intermediate-step-715k-1.5T-GGUF tinyllama-1.1b-intermediate-step-715k-1.5t.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m tinyllama-1.1b-intermediate-step-715k-1.5t.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/TinyLlama-1.1B-intermediate-step-715k-1.5T-GGUF", model_file="tinyllama-1.1b-intermediate-step-715k-1.5t.Q4_K_M.gguf", model_type="tinyllama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Zhang Peiyuan's TinyLlama 1.1B Intermediate Step 715K 1.5T
<div align="center">
# TinyLlama-1.1B
</div>
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.
<div align="center">
<img src="https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-240k-503b/resolve/main/TinyLlama_logo.png" width="300"/>
</div>
We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
#### This Model
This is an intermediate checkpoint with 715K steps and 1.49T tokens. **We suggest you not use this directly for inference.**
#### How to use
You will need transformers>=4.31.
Do check the [TinyLlama](https://github.com/jzhang38/TinyLlama) github page for more information.
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
sequences = pipeline(
'The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.',
do_sample=True,
top_k=10,
num_return_sequences=1,
repetition_penalty=1.5,
eos_token_id=tokenizer.eos_token_id,
max_length=500,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
#### Eval
| Model | Pretrain Tokens | HellaSwag | Obqa | WinoGrande | ARC_c | ARC_e | boolq | piqa | avg |
|-------------------------------------------|-----------------|-----------|------|------------|-------|-------|-------|------|-----|
| Pythia-1.0B | 300B | 47.16 | 31.40| 53.43 | 27.05 | 48.99 | 60.83 | 69.21 | 48.30 |
| TinyLlama-1.1B-intermediate-step-50K-104b | 103B | 43.50 | 29.80| 53.28 | 24.32 | 44.91 | 59.66 | 67.30 | 46.11|
| TinyLlama-1.1B-intermediate-step-240k-503b| 503B | 49.56 |31.40 |55.80 |26.54 |48.32 |56.91 |69.42 | 48.28 |
| TinyLlama-1.1B-intermediate-step-480k-1007B | 1007B | 52.54 | 33.40 | 55.96 | 27.82 | 52.36 | 59.54 | 69.91 | 50.22 |
| TinyLlama-1.1B-intermediate-step-715k-1.5T | 1.49T | 53.68 | 35.20 | 58.33 | 29.18 | 51.89 | 59.08 | 71.65 | 51.29 |
<!-- original-model-card end -->
|
hardikJ11/bart-base-finetuned-cnn-news | hardikJ11 | 2024-01-16T07:45:05Z | 521 | 3 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"summarization",
"generated_from_trainer",
"dataset:cnn_dailymail",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| summarization | 2024-01-16T06:17:42Z | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: bart-base-finetuned-cnn-news
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
config: 3.0.0
split: validation
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 21.8948
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-cnn-news
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8560
- Rouge1: 21.8948
- Rouge2: 9.7157
- Rougel: 17.9348
- Rougelsum: 20.5347
## Model description
More information needed
## Intended uses & limitations
More information needed
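A minimal usage sketch with the `transformers` summarization pipeline (the article text below is illustrative):
```python
# Minimal summarization sketch using the transformers pipeline.
from transformers import pipeline

summarizer = pipeline("summarization", model="hardikJ11/bart-base-finetuned-cnn-news")

article = (
    "The city council approved a new public transport plan on Monday. "
    "The plan adds three bus routes and extends service hours on weekends, "
    "with the first changes expected to roll out early next year."
)
summary = summarizer(article, max_length=60, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```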
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00056
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
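For reference, these settings map roughly onto `Seq2SeqTrainingArguments` as follows (a sketch, not the original training script; the output directory name is illustrative):
```python
# Rough mapping of the hyperparameters above onto Seq2SeqTrainingArguments (sketch only).
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="bart-base-finetuned-cnn-news",  # illustrative output directory
    learning_rate=5.6e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer.
)
```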
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 3.7005 | 1.0 | 718 | 2.9872 | 21.7279 | 9.0406 | 17.392 | 20.0627 |
| 2.937 | 2.0 | 1436 | 2.8590 | 21.3056 | 8.5254 | 17.2338 | 20.0403 |
| 2.2642 | 3.0 | 2154 | 2.6744 | 21.277 | 9.6162 | 17.7775 | 20.1688 |
| 1.5774 | 4.0 | 2872 | 2.7020 | 21.7458 | 9.846 | 18.1649 | 20.7067 |
| 1.0174 | 5.0 | 3590 | 2.8560 | 21.8948 | 9.7157 | 17.9348 | 20.5347 |
### Framework versions
- Transformers 4.27.2
- Pytorch 1.13.1+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
|
brittlewis12/gemma-2b-GGUF | brittlewis12 | 2024-02-21T15:11:39Z | 521 | 0 | null | [
"gguf",
"text-generation",
"en",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:2110.08193",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:1804.06876",
"arxiv:2203.09509",
"base_model:google/gemma-2b",
"license:other",
"region:us"
]
| text-generation | 2024-02-21T14:32:05Z | ---
base_model: google/gemma-2b
inference: false
language:
- en
model_creator: google
model_name: gemma-2b
model_type: gemma
pipeline_tag: text-generation
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
quantized_by: brittlewis12
---
# Gemma 2B GGUF
**Original model**: [gemma-2b](https://huggingface.co/google/gemma-2b)
**Model creator**: [google](https://huggingface.co/google)
This repo contains GGUF format model files for Google’s Gemma-2B.
> Gemma is a family of lightweight, state-of-the-art open models from Google,
> built from the same research and technology used to create the Gemini models.
> They are text-to-text, decoder-only large language models, available in English,
> with open weights, pre-trained variants, and instruction-tuned variants. Gemma
> models are well-suited for a variety of text generation tasks, including
> question answering, summarization, and reasoning. Their relatively small size
> makes it possible to deploy them in environments with limited resources such as
> a laptop, desktop or your own cloud infrastructure, democratizing access to
> state of the art AI models and helping foster innovation for everyone.
Learn more on Google’s [Model page](https://ai.google.dev/gemma/docs).
### What is GGUF?
GGUF is a file format for representing AI models. It is the third version of the format, introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Converted using llama.cpp build 2226 (revision [eccd7a2](https://github.com/ggerganov/llama.cpp/commit/eccd7a26ddbff19e4b8805648f5f14c501957859))
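A minimal local-inference sketch with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); the quant filename is an assumption — substitute whichever `.gguf` file you downloaded from this repo:
```python
# Sketch: run a downloaded gemma-2b GGUF locally with llama-cpp-python.
# The filename is an assumption -- use whichever quant you downloaded from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="./gemma-2b.Q4_K_M.gguf",  # assumed filename
    n_ctx=2048,
    n_gpu_layers=-1,  # offload all layers if a GPU is available; set 0 for CPU-only
)

# gemma-2b is a base (pre-trained) model, so use plain text completion rather than a chat template.
out = llm("The three primary colors are", max_tokens=64, temperature=0.7)
print(out["choices"][0]["text"])
```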
---
## Download & run with [cnvrs](https://twitter.com/cnvrsai) on iPhone, iPad, and Mac!

[cnvrs](https://testflight.apple.com/join/sFWReS7K) is the best app for private, local AI on your device:
- create & save **Characters** with custom system prompts & temperature settings
- download and experiment with any **GGUF model** you can [find on HuggingFace](https://huggingface.co/models?library=gguf)!
- make it your own with custom **Theme colors**
- powered by Metal ⚡️ & [Llama.cpp](https://github.com/ggerganov/llama.cpp), with **haptics** during response streaming!
- **try it out** yourself today, on [Testflight](https://testflight.apple.com/join/sFWReS7K)!
- follow [cnvrs on twitter](https://twitter.com/cnvrsai) to stay up to date
---
## Original Model Evaluation
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 |
| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot |71.4 | 81.2 |
| [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 |
| [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 59.7 | 51.8 |
| [BoolQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 |
| [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 |
| [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 |
| [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 |
| [ARC-e](https://arxiv.org/abs/1911.01547) | | 73.2 | 81.5 |
| [ARC-c](https://arxiv.org/abs/1911.01547) | | 42.1 | 53.2 |
| [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 |
| [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | - | 23 |
| [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 |
| [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 |
| [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 |
| [MATH](https://arxiv.org/abs/2108.07732) | 4-shot | 11.8 | 24.3 |
| [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 |
| [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 |
| **Average** | | **54.0** | **56.4** |
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [RealToxicity](https://arxiv.org/abs/2009.11462) | average | 6.86 | 7.90 |
| [BOLD](https://arxiv.org/abs/2101.11718) | | 45.57 | 49.08 |
| [CrowS-Pairs](https://aclanthology.org/2020.emnlp-main.154/) | top-1 | 45.82 | 51.33 |
| [BBQ Ambig](https://arxiv.org/abs/2110.08193v2) | 1-shot, top-1 | 62.58 | 92.54 |
| [BBQ Disambig](https://arxiv.org/abs/2110.08193v2) | top-1 | 54.62 | 71.99 |
| [Winogender](https://arxiv.org/abs/1804.09301) | top-1 | 51.25 | 54.17 |
| [TruthfulQA](https://arxiv.org/abs/2109.07958) | | 44.84 | 31.81 |
| [Winobias 1_2](https://arxiv.org/abs/1804.06876) | | 56.12 | 59.09 |
| [Winobias 2_2](https://arxiv.org/abs/1804.06876) | | 91.10 | 92.23 |
| [Toxigen](https://arxiv.org/abs/2203.09509) | | 29.77 | 39.59 |
|
mradermacher/CodeLlama-7b-CypherGen-GGUF | mradermacher | 2024-05-06T06:04:01Z | 521 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:ozayezerceli/CodeLlama-7b-CypherGen",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-22T14:01:22Z | ---
base_model: ozayezerceli/CodeLlama-7b-CypherGen
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
static quants of https://huggingface.co/ozayezerceli/CodeLlama-7b-CypherGen
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
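A single quant can also be fetched directly from Python with `huggingface_hub` (the filename below is one of the files listed in the table that follows):
```python
# Download a single quant from this repo with huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/CodeLlama-7b-CypherGen-GGUF",
    filename="CodeLlama-7b-CypherGen.Q4_K_M.gguf",  # pick any file from the table below
)
print(path)
```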
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/CodeLlama-7b-CypherGen-GGUF/resolve/main/CodeLlama-7b-CypherGen.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama-7b-CypherGen-GGUF/resolve/main/CodeLlama-7b-CypherGen.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama-7b-CypherGen-GGUF/resolve/main/CodeLlama-7b-CypherGen.IQ3_S.gguf) | IQ3_S | 3.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama-7b-CypherGen-GGUF/resolve/main/CodeLlama-7b-CypherGen.Q3_K_S.gguf) | Q3_K_S | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama-7b-CypherGen-GGUF/resolve/main/CodeLlama-7b-CypherGen.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama-7b-CypherGen-GGUF/resolve/main/CodeLlama-7b-CypherGen.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama-7b-CypherGen-GGUF/resolve/main/CodeLlama-7b-CypherGen.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama-7b-CypherGen-GGUF/resolve/main/CodeLlama-7b-CypherGen.IQ4_XS.gguf) | IQ4_XS | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama-7b-CypherGen-GGUF/resolve/main/CodeLlama-7b-CypherGen.Q4_K_S.gguf) | Q4_K_S | 4.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama-7b-CypherGen-GGUF/resolve/main/CodeLlama-7b-CypherGen.Q4_K_M.gguf) | Q4_K_M | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama-7b-CypherGen-GGUF/resolve/main/CodeLlama-7b-CypherGen.Q5_K_S.gguf) | Q5_K_S | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama-7b-CypherGen-GGUF/resolve/main/CodeLlama-7b-CypherGen.Q5_K_M.gguf) | Q5_K_M | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama-7b-CypherGen-GGUF/resolve/main/CodeLlama-7b-CypherGen.Q6_K.gguf) | Q6_K | 5.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama-7b-CypherGen-GGUF/resolve/main/CodeLlama-7b-CypherGen.Q8_0.gguf) | Q8_0 | 7.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
lmstudio-community/codegemma-7b-GGUF | lmstudio-community | 2024-04-09T18:15:39Z | 521 | 3 | transformers | [
"transformers",
"gguf",
"text-generation",
"license:gemma",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-04-09T17:53:11Z | ---
library_name: transformers
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access CodeGemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged-in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
license: gemma
license_link: https://ai.google.dev/gemma/terms
quantized_by: bartowski
pipeline_tag: text-generation
lm_studio:
param_count: 7b
use_case: coding
release_date: 09-04-2024
model_creator: google
prompt_template: none
system_prompt: none
base_model: gemma
original_repo: google/codegemma-7b
---
## 💫 Community Model> CodeGemma 7b by Google
*👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator:** [Google](https://huggingface.co/google)<br>
**Original model**: [google/codegemma-7b](https://huggingface.co/google/codegemma-7b)<br>
**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b2589](https://github.com/ggerganov/llama.cpp/releases/tag/b2589)<br>
## Model Summary:
CodeGemma 7B is the first in a series of coding models released by Google. This is a code completion model, and as such, cannot be prompted in the same way a chat or instruct model can be prompted.<br>
This model is perfect for code completion and use in tools like co-pilot, where its small size will make completions show up instantly while still being high performance.<br>
This model should not be used as a chat model, and will not answer questions.
## Prompt Template:
This model does not support a typical prompt template, but instead uses the following tokens for specifying input parts:
- <|fim_prefix|> precedes the context before the completion we want to run.
- <|fim_suffix|> precedes the suffix. You must put this token exactly where the cursor would be positioned in an editor, as this is the location that will be completed by the model.
- <|fim_middle|> is the prompt that invites the model to run the generation.
In addition to these, there's also <|file_separator|>, which is used to provide multi-file contexts.
Select the `LM Studio Blank Preset` to supply your own prompt format, as shown in the examples below.
## Use case and examples
This model will excel at code generation and fill-in-the-middle.
### Coding 1
```
<|fim_prefix|>import datetime
def calculate_age(birth_year):
"""Calculates a person's age based on their birth year."""
current_year = datetime.date.today().year
<|fim_suffix|>
return age<|fim_middle|>
age = current_year - birth_year<|file_separator|>test_calculate_age.py
import pytest
from calculate_age import calculate_age
def test_calculate_age_positive():
assert calculate_age(1990) == 33
def test_calculate_age_negative():
assert calculate_age(2023) == -1
def test_calculate_age_future():
assert calculate_age(2030) == -7
def test_calculate_age_zero():
assert calculate_age(0) == -2023<|file_separator|>
```
Explanation: Here the model was given the fill-in-middle prefix and suffix. The model is then told to generate the fim_middle with the token `<|fim_middle|>`, to which it replies with the code that would complete the function.<br>
It then gave a file_separator token to indicate a new file called test_calculate_age.py and wrote up several (albeit incorrect) unit tests.
### Coding 2
```
<|fim_prefix|>public class MergeSort {
public static void mergeSort(int[] arr) {
int n = arr.length;
if (n < 2) {
return;
}
<|fim_suffix|>
mergeSort(left);
mergeSort(right);
merge(arr, left, right);
}
public static void merge(int[] arr, int[] left, int[] right) {
int i = 0;
int j = 0;
int k = 0;
while (i < left.length && j < right.length) {
if (left[i] <= right[j]) {
arr[k] = left[i];
i++;
} else {
arr[k] = right[j];
j++;
}
k++;
}
while (i < left.length) {
arr[k] = left[i];
i++;
k++;
}
while (j < right.length) {
arr[k] = right[j];
j++;
k++;
}
}
public static void main(String[] args) {
int[] arr = {5, 2, 4, 6, 1, 3};
mergeSort(arr);
for (int i = 0; i < arr.length; i++) {
System.out.print(arr[i] + " ");
}
}
}
<|fim_middle|>
int mid = n / 2;
int[] left = new int[mid];
int[] right = new int[n - mid];
for (int i = 0; i < mid; i++) {
left[i] = arr[i];
}
for (int i = mid; i < n; i++) {
right[i - mid] = arr[i];
}<|file_separator|>
```
Explanation: The model was given the majority of a merge sort implementation in Java with a portion in the middle removed. The model was able to fill in the missing code based on the surrounding details.
### Coding 3
```
<|fim_prefix|>arr = [1, 5, 3, 76, 12, 154, 2, 56]
# Sort the array then print only the even numbers
<|fim_suffix|><|fim_middle|>
puts arr.sort.select { |num| num.even? }
# Sort the array then print only the odd numbers
puts arr.sort.select { |num| num.odd? }
# Sort the array then print only the numbers greater than 10
puts arr.sort.select { |num| num > 10 }<|file_separator|><|file_separator|>
```
Explanation: While this model cannot be directly prompted, it can be hinted in the right direction by preceding the fill-in-middle token with a comment explaining what comes next, then using <|fim_suffix|> followed immediately by <|fim_middle|><br>
In this example, the comment suggests that what comes next is sorting the array and printing out each element that is even. The model accurately fills in what should be at <|fim_suffix|>.<br>
Interestingly in this example, the model chose to use Ruby as the language to complete this, since it fits the given syntax.
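The same fill-in-the-middle template can be driven programmatically; here is a sketch with llama-cpp-python (the quant filename is an assumption — use whichever file you downloaded from this repo):
```python
# Sketch: fill-in-the-middle completion with llama-cpp-python using the CodeGemma FIM tokens.
# The filename is an assumption -- substitute the quant you actually downloaded from this repo.
from llama_cpp import Llama

llm = Llama(model_path="./codegemma-7b.Q4_K_M.gguf", n_ctx=2048, n_gpu_layers=-1)

prefix = "def average(numbers):\n    "
suffix = "\n    return total / len(numbers)"

# Build the FIM prompt exactly as described in the prompt template section above.
prompt = f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"
out = llm(prompt, max_tokens=64, temperature=0.2, stop=["<|file_separator|>"])
print(out["choices"][0]["text"])
```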
## Technical Details
CodeGemma 7b is based on the Gemma 7b model with additional training on web documents, mathematics, and code, with a mixture of 80% code and 20% natural language.
The code used is based on publicly available code repositories.
The model was trained exclusively for the purposes of code completion and excels at it.
Additional details can be found on Google's official report PDF [here](https://storage.googleapis.com/deepmind-media/gemma/codegemma_report.pdf)
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
🙏 Special thanks to [Kalomaze](https://github.com/kalomaze) for his dataset (linked [here](https://github.com/ggerganov/llama.cpp/discussions/5263)) that was used for calculating the imatrix for these quants, which improves the overall quality!
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
|
primeline/distil-whisper-large-v3-german | primeline | 2024-05-31T20:56:36Z | 521 | 11 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"de",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2024-04-15T06:47:22Z | ---
license: apache-2.0
language:
- de
library_name: transformers
pipeline_tag: automatic-speech-recognition
---
# distil-whisper-german
This model is a German Speech Recognition model based on the [distil-whisper](https://github.com/huggingface/distil-whisper) technique.
The model has 756M parameters and a size of 1.51 GB in bfloat16 format.
As a follow-up to the [Whisper large v3 german](https://huggingface.co/primeline/whisper-large-v3-german), we decided to create a distilled version for faster inference with minimal quality loss.
## Intended uses & limitations
The model is intended to be used for German speech recognition tasks.
It can be used as a local transcription service or as part of a larger pipeline for speech recognition tasks.
While it has only half the parameters of the large model, the quality is still very good, and it can be used for most tasks.
The latency is low enough for real-time applications when using optimization toolkits like TensorRT.
## Dataset
The dataset used for training is a filtered subset of the [Common Voice](https://huggingface.co/datasets/common_voice) dataset, Multilingual LibriSpeech, and some internal data.
The data was filtered and double-checked for quality and correctness.
We applied some normalization to the text data, especially for casing and punctuation.
## Model family
| Model | Parameters | link |
|----------------------------------|------------|--------------------------------------------------------------|
| Whisper large v3 german | 1.54B | [link](https://huggingface.co/primeline/whisper-large-v3-german) |
| Distil-whisper large v3 german | 756M | [link](https://huggingface.co/primeline/distil-whisper-large-v3-german) |
| tiny whisper | 37.8M | [link](https://huggingface.co/primeline/whisper-tiny-german) |
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- total_train_batch_size: 512
- num_epochs: 5.0
### Framework versions
- Transformers 4.39.3
- Pytorch 2.3.0a0+ebedce2
- Datasets 2.18.0
- Tokenizers 0.15.2
### How to use
```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model_id = "primeline/distil-whisper-large-v3-german"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)
processor = AutoProcessor.from_pretrained(model_id)
pipe = pipeline(
"automatic-speech-recognition",
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
max_new_tokens=128,
chunk_length_s=30,
batch_size=16,
return_timestamps=True,
torch_dtype=torch_dtype,
device=device,
)
dataset = load_dataset("distil-whisper/librispeech_long", "clean", split="validation")
sample = dataset[0]["audio"]
result = pipe(sample)
print(result["text"])
```
## [About us](https://primeline-ai.com/en/)
[](https://primeline-ai.com/en/)
Your partner for AI infrastructure in Germany <br>
Experience the powerful AI infrastructure that drives your ambitions in Deep Learning, Machine Learning & High-Performance Computing. Optimized for AI training and inference.
Model author: [Florian Zimmermeister](https://huggingface.co/flozi00) |
lmms-lab/LLaVA-NeXT-Video-34B | lmms-lab | 2024-04-24T05:31:23Z | 521 | 16 | transformers | [
"transformers",
"safetensors",
"llava",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
]
| text-generation | 2024-04-16T14:13:36Z | ---
inference: false
license: apache-2.0
---
<br>
# LLaVA-Next-Video Model Card
## Model details
**Model type:**
<br>
LLaVA-Next-Video is an open-source chatbot trained by fine-tuning an LLM on multimodal instruction-following data.
<br>
Base LLM: [NousResearch/Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B)
**Model date:**
<br>
LLaVA-Next-Video-34B was trained in April 2024.
**Paper or resources for more information:**
<br>
https://github.com/LLaVA-VL/LLaVA-NeXT
## License
[NousResearch/Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B) license.
## Where to send questions or comments about the model
https://github.com/LLaVA-VL/LLaVA-NeXT/issues
## Intended use
**Primary intended uses:**
<br>
The primary use of LLaVA is research on large multimodal models and chatbots.
**Primary intended users:**
<br>
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## Training dataset
### Image
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 158K GPT-generated multimodal instruction-following data.
- 500K academic-task-oriented VQA data mixture.
- 50K GPT-4V data mixture.
- 40K ShareGPT data.
### Video
- 100K VideoChatGPT-Instruct.
## Evaluation dataset
A collection of 4 benchmarks, including 3 academic VQA benchmarks and 1 captioning benchmark. |
RichardErkhov/tokyotech-llm_-_Swallow-7b-NVE-instruct-hf-gguf | RichardErkhov | 2024-05-10T02:04:13Z | 521 | 0 | null | [
"gguf",
"arxiv:2404.17790",
"arxiv:2404.17733",
"region:us"
]
| null | 2024-05-10T00:26:49Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Swallow-7b-NVE-instruct-hf - GGUF
- Model creator: https://huggingface.co/tokyotech-llm/
- Original model: https://huggingface.co/tokyotech-llm/Swallow-7b-NVE-instruct-hf/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Swallow-7b-NVE-instruct-hf.Q2_K.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-NVE-instruct-hf-gguf/blob/main/Swallow-7b-NVE-instruct-hf.Q2_K.gguf) | Q2_K | 2.36GB |
| [Swallow-7b-NVE-instruct-hf.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-NVE-instruct-hf-gguf/blob/main/Swallow-7b-NVE-instruct-hf.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [Swallow-7b-NVE-instruct-hf.IQ3_S.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-NVE-instruct-hf-gguf/blob/main/Swallow-7b-NVE-instruct-hf.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [Swallow-7b-NVE-instruct-hf.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-NVE-instruct-hf-gguf/blob/main/Swallow-7b-NVE-instruct-hf.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [Swallow-7b-NVE-instruct-hf.IQ3_M.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-NVE-instruct-hf-gguf/blob/main/Swallow-7b-NVE-instruct-hf.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [Swallow-7b-NVE-instruct-hf.Q3_K.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-NVE-instruct-hf-gguf/blob/main/Swallow-7b-NVE-instruct-hf.Q3_K.gguf) | Q3_K | 3.07GB |
| [Swallow-7b-NVE-instruct-hf.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-NVE-instruct-hf-gguf/blob/main/Swallow-7b-NVE-instruct-hf.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [Swallow-7b-NVE-instruct-hf.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-NVE-instruct-hf-gguf/blob/main/Swallow-7b-NVE-instruct-hf.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [Swallow-7b-NVE-instruct-hf.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-NVE-instruct-hf-gguf/blob/main/Swallow-7b-NVE-instruct-hf.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [Swallow-7b-NVE-instruct-hf.Q4_0.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-NVE-instruct-hf-gguf/blob/main/Swallow-7b-NVE-instruct-hf.Q4_0.gguf) | Q4_0 | 3.56GB |
| [Swallow-7b-NVE-instruct-hf.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-NVE-instruct-hf-gguf/blob/main/Swallow-7b-NVE-instruct-hf.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [Swallow-7b-NVE-instruct-hf.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-NVE-instruct-hf-gguf/blob/main/Swallow-7b-NVE-instruct-hf.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [Swallow-7b-NVE-instruct-hf.Q4_K.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-NVE-instruct-hf-gguf/blob/main/Swallow-7b-NVE-instruct-hf.Q4_K.gguf) | Q4_K | 3.8GB |
| [Swallow-7b-NVE-instruct-hf.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-NVE-instruct-hf-gguf/blob/main/Swallow-7b-NVE-instruct-hf.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [Swallow-7b-NVE-instruct-hf.Q4_1.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-NVE-instruct-hf-gguf/blob/main/Swallow-7b-NVE-instruct-hf.Q4_1.gguf) | Q4_1 | 3.95GB |
| [Swallow-7b-NVE-instruct-hf.Q5_0.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-NVE-instruct-hf-gguf/blob/main/Swallow-7b-NVE-instruct-hf.Q5_0.gguf) | Q5_0 | 4.33GB |
| [Swallow-7b-NVE-instruct-hf.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-NVE-instruct-hf-gguf/blob/main/Swallow-7b-NVE-instruct-hf.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [Swallow-7b-NVE-instruct-hf.Q5_K.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-NVE-instruct-hf-gguf/blob/main/Swallow-7b-NVE-instruct-hf.Q5_K.gguf) | Q5_K | 4.45GB |
| [Swallow-7b-NVE-instruct-hf.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-NVE-instruct-hf-gguf/blob/main/Swallow-7b-NVE-instruct-hf.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [Swallow-7b-NVE-instruct-hf.Q5_1.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-NVE-instruct-hf-gguf/blob/main/Swallow-7b-NVE-instruct-hf.Q5_1.gguf) | Q5_1 | 4.72GB |
| [Swallow-7b-NVE-instruct-hf.Q6_K.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-NVE-instruct-hf-gguf/blob/main/Swallow-7b-NVE-instruct-hf.Q6_K.gguf) | Q6_K | 5.15GB |
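For a quick local test of one of the quants above, a minimal Python sketch could look like the following (assumptions: `llama-cpp-python` and `huggingface_hub` are installed, and any other filename from the table can be substituted for the Q4_K_M file):

```python
# Minimal sketch: download one quant from this repo and run a single completion.
# Assumptions: `pip install llama-cpp-python huggingface_hub`; the prompt follows the
# instruction template from the original model card reproduced below.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/tokyotech-llm_-_Swallow-7b-NVE-instruct-hf-gguf",
    filename="Swallow-7b-NVE-instruct-hf.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)

prompt = (
    "以下に、あるタスクを説明する指示があります。"
    "リクエストを適切に完了するための回答を記述してください。\n\n"
    "### 指示:\n東京工業大学の主なキャンパスについて教えてください\n\n### 応答:"
)
out = llm(prompt, max_tokens=128, temperature=0.7)
print(out["choices"][0]["text"])
```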
Original model description:
---
language:
- en
- ja
library_name: transformers
pipeline_tag: text-generation
license: llama2
model_type: llama
---
# Swallow
Our Swallow model has undergone continual pre-training from the [Llama 2 family](https://huggingface.co/meta-llama), primarily with the addition of Japanese language data. The tuned versions use supervised fine-tuning (SFT).
Links to other models can be found in the index.
# Model Release Updates
We are excited to share the release schedule for our latest models:
- **April 26, 2024**: Released version 0.1 of our enhanced instruction-tuned models: [Swallow-7b-instruct-v0.1](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-v0.1), [Swallow-13b-instruct-v0.1](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-v0.1), and [Swallow-70b-instruct-v0.1](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-v0.1) as preview versions.
- **March 2, 2024**: Released the [Swallow-7b-plus-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-plus-hf), a model trained with approximately twice as many Japanese tokens as [Swallow-7b-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-hf).
- **February 4, 2024**: Released the [Swallow-13b-NVE-hf](https://huggingface.co/tokyotech-llm/Swallow-13b-NVE-hf).
- **January 26, 2024**: Released the [Swallow-7b-NVE-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-NVE-hf), [Swallow-7b-NVE-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-NVE-instruct-hf), [Swallow-70b-NVE-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-NVE-hf), and [Swallow-70b-NVE-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-NVE-instruct-hf)
- **December 19, 2023**: Released the [Swallow-7b-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-hf), [Swallow-7b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-hf), [Swallow-13b-hf](https://huggingface.co/tokyotech-llm/Swallow-13b-hf), [Swallow-13b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-hf), [Swallow-70b-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-hf), and [Swallow-70b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-hf).
## Swallow Model Index
|Model|Swallow-hf|Swallow-instruct-hf|Swallow-instruct-v0.1|
|---|---|---|---|
|7B| [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-hf)|[Link](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-v1.0)|
|7B-Plus| [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-plus-hf) | N/A | N/A |
|13B| [Link](https://huggingface.co/tokyotech-llm/Swallow-13b-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-hf)| [Link](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-v1.0)|
|70B| [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-hf)| [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-v1.0)|
## Swallow Model Index NVE (No Vocabulary Expansion)
|Model|Swallow-NVE-hf|Swallow-NVE-instruct-hf|
|---|---|---|
|7B| [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-NVE-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-NVE-instruct-hf)|
|13B| [Link](https://huggingface.co/tokyotech-llm/Swallow-13b-NVE-hf) | N/A |
|70B| [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-NVE-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-NVE-instruct-hf)|

This repository provides large language models developed by [TokyoTech-LLM](https://tokyotech-llm.github.io/).
Read our [blog post](https://zenn.dev/tokyotech_lm/articles/d6cb3a8fdfc907) or our [paper](https://arxiv.org/abs/2404.17790)
## Model Details
* **Model type**: Please refer to the LLaMA-2 technical report for details on the model architecture.
* **Language(s)**: Japanese, English
* **Library**: [Megatron-LM](https://github.com/rioyokotalab/Megatron-Llama2)
* **Tokenizer**: This model employs a tokenizer that features a broadened vocabulary based on Japanese data. This allows for a more efficient representation of text using fewer tokens, leading to a notably faster inference process.
* **Contact**: swallow[at]nlp.c.titech.ac.jp
## Base Model Performance
### Japanese tasks
|Model|Size|JCommonsenseQA|JEMHopQA|NIILC|JSQuAD|XL-Sum|MGSM|WMT20-en-ja|WMT20-ja-en|
|---|---|---|---|---|---|---|---|---|---|
| | |4-shot|4-shot|4-shot|4-shot|1-shot|4-shot|4-shot|4-shot|
| Llama 2 | 7B | 0.3852 | 0.4240 | 0.3410 | 0.7917 | 0.1905 | 0.0760 | 0.1783 | 0.1738 |
| Swallow | 7B | 0.4808 | 0.5078 | 0.5968 | 0.8573 | 0.1830 | 0.1240 | 0.2510 | 0.1511 |
| Swallow-Plus | 7B | 0.5478 | 0.5493 | 0.6030 | 0.8544 | 0.1806 | 0.1360 | 0.2568 | 0.1441 |
| Swallow-NVE | 7B | 0.5433 | 0.5425 | 0.5729 | 0.8684 | 0.2117 | 0.1200 | 0.2405 | 0.1512 |
| Llama 2 | 13B | 0.6997 | 0.4415 | 0.4170 | 0.8533 | 0.2139 | 0.1320 | 0.2146 | 0.1982 |
| Swallow | 13B | 0.7837 | 0.5063 | 0.6398 | 0.9005 | 0.2168 | 0.2040 | 0.2720 | 0.1771 |
| Swallow-NVE | 13B | 0.7712 | 0.5438 | 0.6351 | 0.9030 | 0.2294 | 0.2120 | 0.2735 | 0.1817 |
| Llama 2 | 70B | 0.8686 | 0.4656 | 0.5256 | 0.9080 | 0.2361 | 0.3560 | 0.2643 | **0.2398** |
| Swallow | 70B | 0.9348 | **0.6290** | 0.6960 | 0.9176 | 0.2266 | **0.4840** | **0.3043** | 0.2298 |
| Swallow-NVE | 70B | **0.9410** | 0.5759 | **0.7024** | **0.9254** | **0.2758** | 0.4720 | 0.3042 | 0.2322 |
### English tasks
|Model|Size|OpenBookQA|TriviaQA|HellaSwag|SQuAD2.0|XWINO|GSM8K|
|---|---|---|---|---|---|---|---|
| | |8-shot|8-shot|8-shot|8-shot|8-shot|8-shot|
| Llama 2 | 7B | 0.3580 | 0.6265 | 0.5860 | 0.3207 | 0.9049 | 0.1410 |
| Swallow | 7B | 0.3180 | 0.4836 | 0.5308 | 0.3125 | 0.8817 | 0.1130 |
| Swallow-Plus | 7B | 0.3280 | 0.4558 | 0.5259 | 0.3134 | 0.8929 | 0.1061 |
| Swallow-NVE | 7B | 0.3180 | 0.5079 | 0.5329 | 0.2919 | 0.8817 | 0.0986 |
| Llama 2 | 13B | 0.3760 | 0.7255 | 0.6148 | 0.3681 | 0.9140 | 0.2403 |
| Swallow | 13B | 0.3500 | 0.5852 | 0.5660 | 0.3406 | 0.9075 | 0.2039 |
| Swallow-NVE | 13B | 0.3460 | 0.6025 | 0.5700 | 0.3478 | 0.9006 | 0.1751 |
| Llama 2 | 70B | **0.4280** | **0.8239** | **0.6742** | **0.3770** | **0.9290** | **0.5284** |
| Swallow | 70B | 0.4220 | 0.7756 | 0.6458 | 0.3745 | 0.9204 | 0.4867 |
| Swallow-NVE | 70B | 0.4240 | 0.7817 | 0.6439 | 0.3451 | 0.9256 | 0.4943 |
## Evaluation Benchmarks
### Japanese evaluation benchmarks
We used llm-jp-eval (v1.0.0) and JP Language Model Evaluation Harness (commit #9b42d41). The details are as follows:
- Multiple-choice question answering (JCommonsenseQA [Kurihara+, 2022])
- Open-ended question answering (JEMHopQA [Ishii+, 2023])
- Open-ended question answering (NIILC [Sekine, 2003])
- Machine reading comprehension (JSQuAD [Kurihara+, 2022])
- Automatic summarization (XL-Sum [Hasan+, 2021])
- Machine translation (WMT2020 ja-en [Barrault+, 2020])
- Machine translation (WMT2020 en-ja [Barrault+, 2020])
- Mathematical reasoning (MGSM [Shi+, 2023])
### English evaluation benchmarks
We used the Language Model Evaluation Harness (v.0.3.0). The details are as follows:
- Multiple-choice question answering (OpenBookQA [Mihaylov+, 2018])
- Open-ended question answering (TriviaQA [Joshi+, 2017])
- Machine reading comprehension (SQuAD 2.0 [Rajpurkar+, 2018])
- Commonsense reasoning (XWINO [Tikhonov & Ryabinin, 2021])
- Natural language inference (HellaSwag [Zellers+, 2019])
- Mathematical reasoning (GSM8k [Cobbe+, 2021])
## Usage
First install additional dependencies in [requirements.txt](./requirements.txt):
```sh
pip install -r requirements.txt
```
### Use the instruct model
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "tokyotech-llm/Swallow-7b-instruct-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, low_cpu_mem_usage=True, device_map="auto")
PROMPT_DICT = {
"prompt_input": (
"以下に、あるタスクを説明する指示があり、それに付随する入力が更なる文脈を提供しています。"
"リクエストを適切に完了するための回答を記述してください。\n\n"
"### 指示:\n{instruction}\n\n### 入力:\n{input}\n\n### 応答:"
),
"prompt_no_input": (
"以下に、あるタスクを説明する指示があります。"
"リクエストを適切に完了するための回答を記述してください。\n\n"
"### 指示:\n{instruction}\n\n### 応答:"
),
}
def create_prompt(instruction, input=None):
"""
Generates a prompt based on the given instruction and an optional input.
If input is provided, it uses the 'prompt_input' template from PROMPT_DICT.
If no input is provided, it uses the 'prompt_no_input' template.
Args:
instruction (str): The instruction describing the task.
input (str, optional): Additional input providing context for the task. Default is None.
Returns:
str: The generated prompt.
"""
if input:
# Use the 'prompt_input' template when additional input is provided
return PROMPT_DICT["prompt_input"].format(instruction=instruction, input=input)
else:
# Use the 'prompt_no_input' template when no additional input is provided
return PROMPT_DICT["prompt_no_input"].format(instruction=instruction)
# Example usage
instruction_example = "以下のトピックに関する詳細な情報を提供してください。"
input_example = "東京工業大学の主なキャンパスについて教えてください"
prompt = create_prompt(instruction_example, input_example)
input_ids = tokenizer.encode(
prompt,
add_special_tokens=False,
return_tensors="pt"
)
tokens = model.generate(
input_ids.to(device=model.device),
max_new_tokens=128,
temperature=0.99,
top_p=0.95,
do_sample=True,
)
out = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(out)
```
### Use the base model
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "tokyotech-llm/Swallow-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
prompt = "東京工業大学の主なキャンパスは、"
input_ids = tokenizer.encode(
prompt,
add_special_tokens=False,
return_tensors="pt"
)
tokens = model.generate(
input_ids.to(device=model.device),
max_new_tokens=128,
temperature=0.99,
top_p=0.95,
do_sample=True,
)
out = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(out)
```
## Training Datasets
### Continual Pre-Training
The following datasets were used for continual pre-training.
- [Japanese Wikipedia](https://dumps.wikimedia.org/other/cirrussearch)
- [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
- [Swallow Corpus](https://arxiv.org/abs/2404.17733)
- [The Pile](https://huggingface.co/datasets/EleutherAI/pile)
### Instruction Tuning
The following datasets were used for the instruction tuning.
- [Anthropic HH-RLHF](https://huggingface.co/datasets/kunishou/hh-rlhf-49k-ja)
- [Databricks Dolly 15-k](https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja)
- [OpenAssistant Conversations Dataset](https://huggingface.co/datasets/kunishou/oasst1-89k-ja)
## Risks and Limitations
The models released here are still in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.
## Acknowledgements
We thank Meta Research for releasing Llama 2 under an open license for others to build on.
Our project is supported by the [ABCI Large-scale Language Model Building Support Program](https://abci.ai/en/link/llm_support_program.html) of the National Institute of Advanced Industrial Science and Technology.
## License
Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.
## Authors
Here are the team members:
- From [Okazaki Laboratory](https://www.nlp.c.titech.ac.jp/index.en.html), the following members:
- [Naoaki Okazaki](https://www.chokkan.org/index.ja.html)
- [Sakae Mizuki](https://s-mizuki-nlp.github.io/)
- [Hiroki Iida](https://meshidenn.github.io/)
- [Mengsay Loem](https://loem-ms.github.io/)
- [Shota Hirai](https://huggingface.co/Kotemo428)
- [Kakeru Hattori](https://aya-se.vercel.app/)
- [Masanari Ohi](https://twitter.com/stjohn2007)
- From [YOKOTA Laboratory](https://www.rio.gsic.titech.ac.jp/en/index.html), the following members:
- [Rio Yokota](https://twitter.com/rioyokota)
- [Kazuki Fujii](https://twitter.com/okoge_kaz)
- [Taishi Nakamura](https://twitter.com/Setuna7777_2)
## How to cite
```
@misc{fujii2024continual,
title={Continual Pre-Training for Cross-Lingual LLM Adaptation: Enhancing Japanese Language Capabilities},
author={Kazuki Fujii and Taishi Nakamura and Mengsay Loem and Hiroki Iida and Masanari Ohi and Kakeru Hattori and Hirai Shota and Sakae Mizuki and Rio Yokota and Naoaki Okazaki},
year={2024},
eprint={2404.17790},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
mradermacher/Halu-8B-Llama3-v0.3-i1-GGUF | mradermacher | 2024-05-27T02:46:04Z | 521 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Hastagaras/Halu-8B-Llama3-v0.3",
"license:llama3",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-24T14:40:05Z | ---
base_model: Hastagaras/Halu-8B-Llama3-v0.3
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/Hastagaras/Halu-8B-Llama3-v0.3
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.3-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
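For a single-file quant, a minimal sketch looks like this (assumptions: `llama-cpp-python` and `huggingface_hub` are installed; swap in any other filename from the table below):

```python
# Minimal sketch: fetch one imatrix quant from this repo and run it with llama-cpp-python.
# Assumptions: `pip install llama-cpp-python huggingface_hub`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Halu-8B-Llama3-v0.3-i1-GGUF",
    filename="Halu-8B-Llama3-v0.3.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=8192)
print(llm("Explain in one sentence what an imatrix quant is.", max_tokens=64)["choices"][0]["text"])
```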
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.3-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.3.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.3-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.3.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.3-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.3.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.3-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.3.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.3-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.3.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.3-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.3.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.3-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.3.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.3-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.3.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.3-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.3.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.3-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.3.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.3-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.3.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.3-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.3.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.3-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.3.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.3-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.3.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.3-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.3.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.3-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.3.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.3-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.3.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.3-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.3.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.3-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.3.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.3-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.3.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.3-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.3.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Berghof-NSFW-7B-GGUF | mradermacher | 2024-05-27T15:32:21Z | 521 | 2 | transformers | [
"transformers",
"gguf",
"causal-lm",
"not-for-all-audiences",
"nsfw",
"ja",
"base_model:Elizezen/Berghof-NSFW-7B",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-27T13:38:06Z | ---
base_model: Elizezen/Berghof-NSFW-7B
language:
- ja
library_name: transformers
quantized_by: mradermacher
tags:
- causal-lm
- not-for-all-audiences
- nsfw
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Elizezen/Berghof-NSFW-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Berghof-NSFW-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Berghof-NSFW-7B-GGUF/resolve/main/Berghof-NSFW-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Berghof-NSFW-7B-GGUF/resolve/main/Berghof-NSFW-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Berghof-NSFW-7B-GGUF/resolve/main/Berghof-NSFW-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Berghof-NSFW-7B-GGUF/resolve/main/Berghof-NSFW-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Berghof-NSFW-7B-GGUF/resolve/main/Berghof-NSFW-7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Berghof-NSFW-7B-GGUF/resolve/main/Berghof-NSFW-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Berghof-NSFW-7B-GGUF/resolve/main/Berghof-NSFW-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Berghof-NSFW-7B-GGUF/resolve/main/Berghof-NSFW-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Berghof-NSFW-7B-GGUF/resolve/main/Berghof-NSFW-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Berghof-NSFW-7B-GGUF/resolve/main/Berghof-NSFW-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Berghof-NSFW-7B-GGUF/resolve/main/Berghof-NSFW-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Berghof-NSFW-7B-GGUF/resolve/main/Berghof-NSFW-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Berghof-NSFW-7B-GGUF/resolve/main/Berghof-NSFW-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Berghof-NSFW-7B-GGUF/resolve/main/Berghof-NSFW-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Berghof-NSFW-7B-GGUF/resolve/main/Berghof-NSFW-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Halu-OAS-8B-Llama3-i1-GGUF | mradermacher | 2024-05-28T03:39:22Z | 521 | 2 | transformers | [
"transformers",
"gguf",
"not-for-all-audiences",
"en",
"base_model:Hastagaras/Halu-OAS-8B-Llama3",
"license:llama3",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-27T14:34:36Z | ---
base_model: Hastagaras/Halu-OAS-8B-Llama3
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- not-for-all-audiences
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Hastagaras/Halu-OAS-8B-Llama3
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Halu-OAS-8B-Llama3-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Halu-OAS-8B-Llama3-i1-GGUF/resolve/main/Halu-OAS-8B-Llama3.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Halu-OAS-8B-Llama3-i1-GGUF/resolve/main/Halu-OAS-8B-Llama3.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Halu-OAS-8B-Llama3-i1-GGUF/resolve/main/Halu-OAS-8B-Llama3.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Halu-OAS-8B-Llama3-i1-GGUF/resolve/main/Halu-OAS-8B-Llama3.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Halu-OAS-8B-Llama3-i1-GGUF/resolve/main/Halu-OAS-8B-Llama3.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Halu-OAS-8B-Llama3-i1-GGUF/resolve/main/Halu-OAS-8B-Llama3.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Halu-OAS-8B-Llama3-i1-GGUF/resolve/main/Halu-OAS-8B-Llama3.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Halu-OAS-8B-Llama3-i1-GGUF/resolve/main/Halu-OAS-8B-Llama3.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Halu-OAS-8B-Llama3-i1-GGUF/resolve/main/Halu-OAS-8B-Llama3.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Halu-OAS-8B-Llama3-i1-GGUF/resolve/main/Halu-OAS-8B-Llama3.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Halu-OAS-8B-Llama3-i1-GGUF/resolve/main/Halu-OAS-8B-Llama3.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Halu-OAS-8B-Llama3-i1-GGUF/resolve/main/Halu-OAS-8B-Llama3.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Halu-OAS-8B-Llama3-i1-GGUF/resolve/main/Halu-OAS-8B-Llama3.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Halu-OAS-8B-Llama3-i1-GGUF/resolve/main/Halu-OAS-8B-Llama3.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Halu-OAS-8B-Llama3-i1-GGUF/resolve/main/Halu-OAS-8B-Llama3.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Halu-OAS-8B-Llama3-i1-GGUF/resolve/main/Halu-OAS-8B-Llama3.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Halu-OAS-8B-Llama3-i1-GGUF/resolve/main/Halu-OAS-8B-Llama3.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Halu-OAS-8B-Llama3-i1-GGUF/resolve/main/Halu-OAS-8B-Llama3.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Halu-OAS-8B-Llama3-i1-GGUF/resolve/main/Halu-OAS-8B-Llama3.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Halu-OAS-8B-Llama3-i1-GGUF/resolve/main/Halu-OAS-8B-Llama3.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Halu-OAS-8B-Llama3-i1-GGUF/resolve/main/Halu-OAS-8B-Llama3.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
brivangl/vgg_kagn11_v2 | brivangl | 2024-06-01T12:47:31Z | 521 | 1 | transformers | [
"transformers",
"safetensors",
"dataset:imagenet1k",
"arxiv:2404.19756",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-30T17:40:29Z | ---
license: mit
datasets:
- imagenet1k
metrics:
- accuracy
---
# VGG-like Kolmogorov-Arnold Convolutional network with Gram polynomials
This model is a convolutional version of a Kolmogorov-Arnold Network with a VGG-11-like architecture, pretrained on the Imagenet1k dataset. KANs were originally presented in [1, 2]. The Gram version of KAN was originally presented in [3]. For more details, visit our [torch-conv-kan](https://github.com/IvanDrokin/torch-conv-kan) repository on GitHub.
## Model description
The model consists of 10 consecutive Gram ConvKAN layers with InstanceNorm2d and polynomial degree equal to 5, followed by global average pooling and a linear classification head:
1. KAGN Convolution, 32 filters, 3x3
2. Max pooling, 2x2
3. KAGN Convolution, 64 filters, 3x3
4. Max pooling, 2x2
5. KAGN Convolution, 128 filters, 3x3
6. KAGN Convolution, 128 filters, 3x3
7. Max pooling, 2x2
8. KAGN Convolution, 256 filters, 3x3
9. KAGN Convolution, 256 filters, 3x3
10. Max pooling, 2x2
11. KAGN Convolution, 256 filters, 3x3
12. KAGN Convolution, 256 filters, 3x3
13. Max pooling, 2x2
14. KAGN Convolution, 256 filters, 3x3
15. KAGN Convolution, 256 filters, 3x3
16. Global Average pooling
17. Output layer, 1000 nodes.

## Intended uses & limitations
You can use the raw model for image classification or use it as a pretrained model for further fine-tuning.
### How to use
First, clone the repository:
```
git clone https://github.com/IvanDrokin/torch-conv-kan.git
cd torch-conv-kan
pip install -r requirements.txt
```
Then you can initialize the model and load weights.
```python
import torch
from models import vggkagn
model = vggkagn(3,
1000,
groups=1,
degree=5,
dropout=0.15,
l1_decay=0,
dropout_linear=0.25,
width_scale=2,
vgg_type='VGG11v2',
expected_feature_shape=(1, 1),
affine=True
)
model.from_pretrained('brivangl/vgg_kagn11_v2')
```
Transforms used for validation on Imagenet1k:
```python
from torchvision.transforms import v2
transforms_val = v2.Compose([
v2.ToImage(),
v2.Resize(256, antialias=True),
v2.CenterCrop(224),
v2.ToDtype(torch.float32, scale=True),
v2.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
```
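Putting the model-loading and transform snippets together, inference could look like this minimal sketch (assumptions: `example.jpg` is a hypothetical local image, and the forward pass returns a `(1, 1000)` tensor of class logits):

```python
# Minimal inference sketch reusing `model` and `transforms_val` from the snippets above.
# Assumptions: example.jpg is a placeholder image path; the model outputs class logits.
import torch
from PIL import Image

model.eval()
image = Image.open("example.jpg").convert("RGB")
batch = transforms_val(image).unsqueeze(0)  # shape (1, 3, 224, 224)
with torch.no_grad():
    logits = model(batch)
top5 = logits.softmax(dim=-1).topk(5)
print(top5.indices.tolist(), top5.values.tolist())
```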
## Training data
This model was trained on the Imagenet1k dataset (1,281,167 images in the train set).
## Training procedure
The model was trained for 200 full epochs with the AdamW optimizer, using the following parameters:
```python
{'learning_rate': 0.0009, 'adam_beta1': 0.9, 'adam_beta2': 0.999, 'adam_weight_decay': 5e-06,
'adam_epsilon': 1e-08, 'lr_warmup_steps': 7500, 'lr_power': 0.3, 'lr_end': 1e-07, 'set_grads_to_none': False}
```
And these augmentations:
```python
transforms_train = v2.Compose([
v2.ToImage(),
v2.RandomHorizontalFlip(p=0.5),
v2.RandomResizedCrop(224, antialias=True),
v2.RandomChoice([v2.AutoAugment(AutoAugmentPolicy.CIFAR10),
v2.AutoAugment(AutoAugmentPolicy.IMAGENET)
]),
v2.ToDtype(torch.float32, scale=True),
v2.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
```
## Evaluation results
On Imagenet1k Validation:
| Accuracy, top1 | Accuracy, top5 | AUC (ovo) | AUC (ovr) |
|:--------------:|:--------------:|:---------:|:---------:|
| 59.1 | 82.29 | 99.43 | 99.43 |
On Imagenet1k Test:
Coming soon
### BibTeX entry and citation info
If you use this project in your research or wish to refer to the baseline results, please use the following BibTeX entry.
```bibtex
@misc{torch-conv-kan,
author = {Ivan Drokin},
title = {Torch Conv KAN},
year = {2024},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/IvanDrokin/torch-conv-kan}}
}
```
## References
- [1] Ziming Liu et al., "KAN: Kolmogorov-Arnold Networks", 2024, arXiv. https://arxiv.org/abs/2404.19756
- [2] https://github.com/KindXiaoming/pykan
- [3] https://github.com/Khochawongwat/GRAMKAN |
anezatra/gpt-neo-1.3B-alpaca | anezatra | 2024-06-01T21:00:34Z | 521 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-06-01T20:57:54Z | Entry not found |
aychang/bert-base-cased-trec-coarse | aychang | 2022-12-08T20:35:54Z | 520 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"dataset:trec",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | ---
language:
- en
license: mit
tags:
- text-classification
datasets:
- trec
model-index:
- name: aychang/bert-base-cased-trec-coarse
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: trec
type: trec
config: default
split: test
metrics:
- type: accuracy
value: 0.974
name: Accuracy
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTUwZTU1ZGU5YTRiMzNhNmQyMjNlY2M5YjAwN2RlMmYxODI2MjFkY2Q3NWFjZDg3Zjg5ZDk1Y2I1MTUxYjFhMCIsInZlcnNpb24iOjF9.GJkxJOFhsO4UaoHpHH1136Qj_fu9UQ9o3DThtT46hvMduswkgobl9iz6ICYQ7IdYKFbh3zRTlsZzjnAlzGqdBA
- type: precision
value: 0.9793164100816639
name: Precision Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTMxMjI3NWZhOGZkODJmYzkxYzdhZWIwMTBkZTg4YWZiNjcwNTVmM2RjYmQ3ZmNhZjM2MWQzYTUzNzFlMjQzOCIsInZlcnNpb24iOjF9.n45s1_gW040u5f2y-zfVx_5XU-J97dcuWlmaIZsJsCetcHtrjsbHut2gAcPxErl8UPTXSq1XDg5WWug4FPM8CQ
- type: precision
value: 0.974
name: Precision Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTY5ZTZiNmYzZDQzYWZiZDdlNDllZWQ4NTVjZWZlYWJkZDgyNGNhZjAzOTZjZDc0NDUwMTE3ODVlMjFjNTIxZCIsInZlcnNpb24iOjF9.4lR7MgvxxTblEV4LZGbko-ylIeFjcjNM5P21iYH6vkNkjItIfiXmKbL55_Zeab4oGJ5ytWz0rIdlpNnmmV29Cw
- type: precision
value: 0.9746805065928548
name: Precision Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDEzYmZmZDIyNDFmNzJmODQ2ODdhYTUyYzQyZjEzZTdhMjg3MTllOGFkNGRlMDFhYzI4ZGE5OTExNjk1ZTI5OSIsInZlcnNpb24iOjF9.Ti5gL3Tk9hCpriIUhB8ltdKRibSilvRZOxAlLCgAkrhg0dXGE5f4n8almCAjbRJEaPW6H6581PhuUfjgMqceBw
- type: recall
value: 0.9783617516169679
name: Recall Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNWUwMGUwYmY3MWQwOTcwYjI2Yjc3Yzc1YWQ1YjU2ODY3MzAyMDdkNmM3MmFhZmMxZWFhMTUxNzZlNzViMDA0ZiIsInZlcnNpb24iOjF9.IWhPl9xS5pqEaFHKsBZj6JRtJRpQZQqJhQYW6zmtPi2F3speRsKc0iksfHkmPjm678v-wKUJ4zyGfRs-63HmBg
- type: recall
value: 0.974
name: Recall Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjlhMDY0MmI2NzBiMWY5NTcwYjZlYzE5ODg0ODk1ZTBjZDI4YmZiY2RmZWVlZGUxYzk2MDQ4NjRkMTQ4ZTEzZiIsInZlcnNpb24iOjF9.g5p5b0BqyZxb7Hk9DayRndhs5F0r44h8TXMJDaP6IoFdYzlBfEcZv7UkCu6s6laz9-F-hhZHUZii2ljtYasVAA
- type: recall
value: 0.974
name: Recall Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjJjNTE2ZWFjMGYyZGUzOWI3MDRhM2I2MTRjZGNkOWZkZDJhNzQ4OTYwOTQ2NDY5OGNjZTZhOWU2MzlhNTY5YyIsInZlcnNpb24iOjF9.JnRFkZ-v-yRhCf6di7ONcy_8Tv0rNXQir1TVw-cU9fNY1c4vKRmGaKmLGeR7TxpmKzEQtikb6mFwRwhIAhl8AA
- type: f1
value: 0.9783635353409951
name: F1 Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjM2NDY3MmUyMmEyZjg5MWZhNjllOGRlNWVkYzgyYmM5ZDBmMDdhYmY5NDAxZmYwMjA0YTkzNTI2MjU0NTRlZiIsInZlcnNpb24iOjF9.HlbHjJa-bpYPjujWODpvfLVMtCnNQMDBCYpLGokfBoXibZGKfIzXcgNdXLdJ-DkmMUriX3wVZtGcRvA2ErUeDw
- type: f1
value: 0.974
name: F1 Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjMxNDE4MTBmYzU2MTllMjlhNTcwYWJhMzRkNTE2ZGFiNmQ0ZTEyOWJhMmU2ZDliYTIzNDExYTM5MTAxYjcxNSIsInZlcnNpb24iOjF9.B7G9Gs74MosZPQ16QH2k-zrmlE8KCtIFu3BcrgObYiuqOz1aFURS3IPoOynVFLp1jnJtgQAmQRY_GDumSS-oDg
- type: f1
value: 0.97377371266232
name: F1 Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZmEyNjRlYmE5M2U1OWY0OGY2YjQyN2E0NmQxNjY0NTY3N2JiZmMwOWQ1ZTMzZDcwNTdjNWYwNTRiNTljNjMxMiIsInZlcnNpb24iOjF9.VryHh8G_ZvoiSm1SZRMw4kheGWuI3rQ6GUVqm2uf-kkaSU20rYMW20-VKCtwayLcrIHJ92to6YvvW7yI0Le5DA
- type: loss
value: 0.13812002539634705
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjk4MDQ5NGRiNTExYmE3NGU1ZmQ1YjUzMTQ4NzUwNWViYzFiODEzMjc2MDA2MzYyOGNjNjYxYzliNDM4Y2U0ZSIsInZlcnNpb24iOjF9.u68ogPOH6-_pb6ZVulzMVfHIfFlLwBeDp8H4iqgfBadjwj2h-aO0jzc4umWFWtzWespsZvnlDjklbhhgrd1vCQ
---
# bert-base-cased trained on TREC 6-class task
## Model description
A simple base BERT model trained on the "trec" dataset.
## Intended uses & limitations
#### How to use
##### Transformers
```python
# Load model and tokenizer
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "aychang/bert-base-cased-trec-coarse"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Use pipeline
from transformers import pipeline

nlp = pipeline("sentiment-analysis", model=model_name, tokenizer=model_name)
results = nlp(["Where did the queen go?", "Why did the Queen hire 1000 ML Engineers?"])
```
##### AdaptNLP
```python
from adaptnlp import EasySequenceClassifier
model_name = "aychang/bert-base-cased-trec-coarse"
texts = ["Where did the queen go?", "Why did the Queen hire 1000 ML Engineers?"]
classifier = EasySequenceClassifier()
results = classifier.tag_text(text=texts, model_name_or_path=model_name, mini_batch_size=2)
```
#### Limitations and bias
This is a minimal language model trained on a benchmark dataset.
## Training data
TREC https://huggingface.co/datasets/trec
## Training procedure
Preprocessing, hardware, and hyperparameters are detailed below.
#### Hardware
One V100
#### Hyperparameters and Training Args
```python
from transformers import TrainingArguments
training_args = TrainingArguments(
output_dir='./models',
num_train_epochs=2,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
warmup_steps=500,
weight_decay=0.01,
evaluation_strategy="steps",
logging_dir='./logs',
save_steps=3000
)
```
## Eval results
```
{'epoch': 2.0,
'eval_accuracy': 0.974,
'eval_f1': array([0.98181818, 0.94444444, 1. , 0.99236641, 0.96995708,
0.98159509]),
'eval_loss': 0.138086199760437,
'eval_precision': array([0.98540146, 0.98837209, 1. , 0.98484848, 0.94166667,
0.97560976]),
'eval_recall': array([0.97826087, 0.90425532, 1. , 1. , 1. ,
0.98765432]),
'eval_runtime': 1.6132,
'eval_samples_per_second': 309.943}
```
|
recobo/agriculture-bert-uncased | recobo | 2021-10-08T13:50:49Z | 520 | 8 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"agriculture-domain",
"agriculture",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | ---
language: "en"
tags:
- agriculture-domain
- agriculture
- fill-mask
widget:
- text: "[MASK] agriculture provides one of the most promising areas for innovation in green and blue infrastructure in cities."
---
# BERT for Agriculture Domain
A BERT-based language model further pre-trained from the checkpoint of [SciBERT](https://huggingface.co/allenai/scibert_scivocab_uncased).
The gathered dataset is a balance between scientific and general works in the agriculture domain, encompassing knowledge from different areas of agricultural research and practical knowledge.
The corpus contains 1.2 million paragraphs from the National Agricultural Library (NAL) of the US government and 5.3 million paragraphs from books and common literature from the **Agriculture Domain**.
The self-supervised learning approach of MLM was used to train the model.
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="recobo/agriculture-bert-uncased",
tokenizer="recobo/agriculture-bert-uncased"
)
fill_mask("[MASK] is the practice of cultivating plants and livestock.")
``` |
nitrosocke/classic-anim-diffusion | nitrosocke | 2023-05-16T09:23:56Z | 520 | 409 | diffusers | [
"diffusers",
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2022-10-30T15:14:27Z | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
---
### Classic Animation Diffusion
This is the fine-tuned Stable Diffusion model trained on screenshots from a popular animation studio.
Use the tokens **_classic disney style_** in your prompts for the effect.
**If you enjoy my work, please consider supporting me**
[](https://patreon.com/user?u=79196446)
**Characters rendered with the model:**

**Animals rendered with the model:**

**Cars and Landscapes rendered with the model:**

### 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion documentation](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or FLAX/JAX.
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "nitrosocke/classic-anim-diffusion"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "classic disney style magical princess with golden hair"
image = pipe(prompt).images[0]
image.save("./magical_princess.png")
```
#### Prompt and settings for Helen Mirren:
**classic disney style helen mirren as a queen**
_Steps: 30, Sampler: Euler a, CFG scale: 7, Seed: 3496225274, Size: 512x704_
#### Prompt and settings for the Ford Model T:
**classic disney style Ford Model T - Negative prompt: person**
_Steps: 20, Sampler: DPM2 Karras, CFG scale: 7, Seed: 4817981, Size: 704x512_
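As a rough guide, those settings translate to the diffusers API as in the sketch below (an approximation only: samplers and seeds do not carry over one-to-one from the WebUI, so outputs will differ):

```python
# Approximate diffusers equivalent of the Ford Model T settings above (sketch only;
# scheduler and seed behaviour differ from the WebUI, so results will not match exactly).
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "nitrosocke/classic-anim-diffusion", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "classic disney style Ford Model T",
    negative_prompt="person",
    num_inference_steps=20,
    guidance_scale=7,
    width=704,
    height=512,
    generator=torch.Generator("cuda").manual_seed(4817981),
).images[0]
image.save("./ford_model_t.png")
```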
This model was trained using the diffusers-based DreamBooth training script by ShivamShrirao, with prior-preservation loss and the _train-text-encoder_ flag, for 9,000 steps.
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
IlyaGusev/fred_t5_ru_turbo_alpaca | IlyaGusev | 2023-12-03T21:34:38Z | 520 | 18 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"text-generation",
"ru",
"dataset:IlyaGusev/ru_turbo_alpaca",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-04-14T10:41:15Z | ---
language:
- ru
pipeline_tag: text-generation
inference: false
datasets:
- IlyaGusev/ru_turbo_alpaca
---
Colab: [link](https://colab.research.google.com/drive/1W6DsQPLinVnuJKqhVASYpuVwuHhhtGLc?usp=sharing) |
Yntec/Cetus | Yntec | 2023-09-28T06:17:00Z | 520 | 3 | diffusers | [
"diffusers",
"safetensors",
"Anime",
"2D",
"2.5D",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"Eagelaxis",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-08-29T04:42:12Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
language:
- en
tags:
- Anime
- 2D
- 2.5D
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- Eagelaxis
inference: true
---
# Cetus
When you think about a Cetus generation, you think about the 3.5 version. This is that version, in fp16-no-ema form.
Samples and prompts:


Pretty cute girl. Like lesser birds on the four winds. Like silver scrapes in May. Now the sands become a crust. And most of you have gone away.
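The checkpoint carries diffusers tags, so a minimal sketch for trying it (assuming it loads as a standard Stable Diffusion pipeline, which the repo's `StableDiffusionPipeline` tag suggests) is:

```python
# Minimal sketch (assumption: the repo loads as a standard Stable Diffusion pipeline,
# as its diffusers tags suggest).
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained("Yntec/Cetus", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "Pretty cute girl. Like lesser birds on the four winds."
image = pipe(prompt).images[0]
image.save("./cetus_sample.png")
```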
Original page:
https://civitai.com/models/6755?modelVersionId=29851
|
TheBloke/Airoboros-L2-13B-2.2-GGUF | TheBloke | 2023-09-27T12:48:58Z | 520 | 11 | transformers | [
"transformers",
"gguf",
"llama",
"dataset:jondurbin/airoboros-2.2",
"base_model:jondurbin/airoboros-l2-13b-2.2",
"license:llama2",
"text-generation-inference",
"region:us"
]
| null | 2023-09-12T15:35:26Z | ---
license: llama2
datasets:
- jondurbin/airoboros-2.2
model_name: Airoboros L2 13B 2.2
base_model: jondurbin/airoboros-l2-13b-2.2
inference: false
model_creator: Jon Durbin
model_type: llama
prompt_template: "A chat.\nUSER: {prompt}\nASSISTANT: \n"
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Airoboros L2 13B 2.2 - GGUF
- Model creator: [Jon Durbin](https://huggingface.co/jondurbin)
- Original model: [Airoboros L2 13B 2.2](https://huggingface.co/jondurbin/airoboros-l2-13b-2.2)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Jon Durbin's Airoboros L2 13B 2.2](https://huggingface.co/jondurbin/airoboros-l2-13b-2.2).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Airoboros-L2-13B-2.2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Airoboros-L2-13B-2.2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Airoboros-L2-13B-2.2-GGUF)
* [Jon Durbin's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/airoboros-l2-13b-2.2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Chat
```
A chat.
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [airoboros-l2-13b-2.2.Q2_K.gguf](https://huggingface.co/TheBloke/Airoboros-L2-13B-2.2-GGUF/blob/main/airoboros-l2-13b-2.2.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [airoboros-l2-13b-2.2.Q3_K_S.gguf](https://huggingface.co/TheBloke/Airoboros-L2-13B-2.2-GGUF/blob/main/airoboros-l2-13b-2.2.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [airoboros-l2-13b-2.2.Q3_K_M.gguf](https://huggingface.co/TheBloke/Airoboros-L2-13B-2.2-GGUF/blob/main/airoboros-l2-13b-2.2.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [airoboros-l2-13b-2.2.Q3_K_L.gguf](https://huggingface.co/TheBloke/Airoboros-L2-13B-2.2-GGUF/blob/main/airoboros-l2-13b-2.2.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [airoboros-l2-13b-2.2.Q4_0.gguf](https://huggingface.co/TheBloke/Airoboros-L2-13B-2.2-GGUF/blob/main/airoboros-l2-13b-2.2.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [airoboros-l2-13b-2.2.Q4_K_S.gguf](https://huggingface.co/TheBloke/Airoboros-L2-13B-2.2-GGUF/blob/main/airoboros-l2-13b-2.2.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [airoboros-l2-13b-2.2.Q4_K_M.gguf](https://huggingface.co/TheBloke/Airoboros-L2-13B-2.2-GGUF/blob/main/airoboros-l2-13b-2.2.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [airoboros-l2-13b-2.2.Q5_0.gguf](https://huggingface.co/TheBloke/Airoboros-L2-13B-2.2-GGUF/blob/main/airoboros-l2-13b-2.2.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [airoboros-l2-13b-2.2.Q5_K_S.gguf](https://huggingface.co/TheBloke/Airoboros-L2-13B-2.2-GGUF/blob/main/airoboros-l2-13b-2.2.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [airoboros-l2-13b-2.2.Q5_K_M.gguf](https://huggingface.co/TheBloke/Airoboros-L2-13B-2.2-GGUF/blob/main/airoboros-l2-13b-2.2.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [airoboros-l2-13b-2.2.Q6_K.gguf](https://huggingface.co/TheBloke/Airoboros-L2-13B-2.2-GGUF/blob/main/airoboros-l2-13b-2.2.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [airoboros-l2-13b-2.2.Q8_0.gguf](https://huggingface.co/TheBloke/Airoboros-L2-13B-2.2-GGUF/blob/main/airoboros-l2-13b-2.2.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Airoboros-L2-13B-2.2-GGUF and below it, a specific filename to download, such as: airoboros-l2-13b-2.2.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install 'huggingface-hub>=0.17.1'
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Airoboros-L2-13B-2.2-GGUF airoboros-l2-13b-2.2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Airoboros-L2-13B-2.2-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Airoboros-L2-13B-2.2-GGUF airoboros-l2-13b-2.2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m airoboros-l2-13b-2.2.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat.\nUSER: {prompt}\nASSISTANT:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install "ctransformers>=0.2.24"
# Or with CUDA GPU acceleration
pip install "ctransformers[cuda]>=0.2.24"
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install "ctransformers>=0.2.24" --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install "ctransformers>=0.2.24" --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Airoboros-L2-13B-2.2-GGUF", model_file="airoboros-l2-13b-2.2.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
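If you prefer llama-cpp-python, a minimal sketch looks like this (assuming the Q4_K_M file has already been downloaded to the current directory; `model_path` and `n_gpu_layers` are placeholders to adjust for your setup):
```python
from llama_cpp import Llama

# Load a locally downloaded GGUF file. Set n_gpu_layers=0 for CPU-only inference.
llm = Llama(
    model_path="airoboros-l2-13b-2.2.Q4_K_M.gguf",
    n_ctx=4096,       # sequence length
    n_gpu_layers=32,  # layers to offload to GPU, if built with GPU support
)

# Use the model's prompt format shown above.
prompt = "A chat.\nUSER: Write a haiku about llamas.\nASSISTANT:"
output = llm(prompt, max_tokens=256, temperature=0.7, stop=["USER:"])
print(output["choices"][0]["text"])
```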
## How to use with LangChain
Here are guides on using llama-cpp-python or ctransformers with LangChain (a minimal sketch follows the links below):
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
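As a rough sketch of the LangChain route (untested; import paths can differ between LangChain versions, and the file path and parameters below are placeholders):
```python
from langchain.llms import LlamaCpp
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Point LangChain's LlamaCpp wrapper at a locally downloaded GGUF file.
llm = LlamaCpp(
    model_path="airoboros-l2-13b-2.2.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=32,  # set to 0 for CPU-only
    temperature=0.7,
)

# Wrap the model's prompt format in a LangChain prompt template.
template = "A chat.\nUSER: {question}\nASSISTANT:"
chain = LLMChain(llm=llm, prompt=PromptTemplate.from_template(template))

print(chain.run(question="Explain what a GGUF file is in one sentence."))
```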
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Jon Durbin's Airoboros L2 13B 2.2
### Overview
Another experimental model, using mostly synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros).
Highlights:
- The prompt format has changed! It is now newlines instead of spaces between system/USER/ASSISTANT (see prompt info below).
- "Clean" version of airoboros-2.2 dataset -- this model __does not__ contain the de-alignment data.
- For an uncensored version, use spicyboros variants.
- I re-generated all of the outputs in the dataset that had "Once upon a time" so they'd be less cliche - no guarantees that won't still happen, but in theory it may happen less.
- More multiple-choice data, better awareness, and some alignment for the normal use case, though this remains overridable via the system prompt.
Breakdown of the training data:
| Count | Category |
|-------|----------------------------|
| 36 | experience |
| 60 | quiz |
| 63 | card |
| 76 | greeting |
| 100 | detailed\_writing |
| 200 | song |
| 204 | editor |
| 207 | counterfactual\_contextual |
| 268 | cot |
| 339 | theory\_of\_mind |
| 416 | awareness |
| 439 | stylized\_response |
| 457 | misconception |
| 500 | summarization |
| 620 | riddle |
| 719 | agent |
| 800 | plan |
| 873 | gtkm |
| 963 | rp |
| 1000 | wordgame |
| 1279 | multiple\_choice |
| 1519 | joke |
| 1758 | writing |
| 2152 | contextual |
| 2183 | trivia |
| 2364 | roleplay |
| 4699 | general |
| 5775 | coding |
| 11366 | orca |
In other words, it's a fairly general-purpose model, but it focuses heavily on instruction/response pairs rather than casual chat/roleplay.
Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!
### Prompt format
The prompt format:
```
A chat.
USER: {prompt}
ASSISTANT:
```
The default system prompt ("A chat.") was used for most of the prompts, however it also included a wide sampling of responses with other prompts, particularly in "stylized\_response", "rp", "gtkm", etc.
Here's another example:
```
A chat between Bob (aka USER) and Tom (aka ASSISTANT). Tom is an extremely intelligent 18th century bookkeeper, who speaks loquaciously.
USER: {prompt}
ASSISTANT:
```
And a chat scenario that doesn't require USER/ASSISTANT (but you should use stopping criteria to prevent the model from speaking on your behalf):
```
A chat between old friends: Timmy and Tommy.
{description of characters}
{setting for the chat}
Timmy: *takes a big sip from his coffee* "Ah, sweet, delicious, magical coffee."
Tommy:
```
__*I strongly suggest adding stopping criteria/early inference stopping on "USER:", and/or whatever names you specify in the system prompt.*__
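For example, with llama-cpp-python (a sketch only; the character descriptions and file path below are made up for illustration), the character names can be passed as stop sequences:
```python
from llama_cpp import Llama

llm = Llama(model_path="airoboros-l2-13b-2.2.Q4_K_M.gguf", n_ctx=4096)

prompt = (
    "A chat between old friends: Timmy and Tommy.\n"
    "Timmy is cheerful and talkative; Tommy is quiet and dry-witted.\n"
    "They are sitting in their favourite cafe on a rainy morning.\n"
    'Timmy: *takes a big sip from his coffee* "Ah, sweet, delicious, magical coffee."\n'
    "Tommy:"
)

# Stop as soon as the model tries to speak for the other character.
output = llm(prompt, max_tokens=200, stop=["Timmy:", "USER:"])
print(output["choices"][0]["text"])
```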
### Fine tuning info
https://gist.github.com/jondurbin/f6f9d96529b1636126b0768517b9f1a7
### Helpful usage tips
*The prompts shown here are just the text that would be included after USER: and before ASSISTANT: in the full prompt format above; the system prompt and USER:/ASSISTANT: have been omitted for readability.*
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows and use the context to answer the question. The model was also tuned to limit its answers to the provided context as much as possible, to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block, to make sure that, if the context is completely unrelated, the model doesn't make something up.
*The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the instruction(s) (one or more) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
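If you build these prompts programmatically, a small helper along these lines keeps the delimiters consistent (a sketch; the function name and structure are just illustrative):
```python
def build_closed_context_prompt(blocks, instruction):
    """Assemble the closed-context format from (metadata, text) pairs."""
    parts = []
    for metadata, text in blocks:
        parts.append("BEGININPUT")
        parts.append("BEGINCONTEXT")
        for key, value in metadata.items():
            parts.append(f"{key}: {value}")
        parts.append("ENDCONTEXT")
        parts.append(text)
        parts.append("ENDINPUT")
    parts.append("BEGININSTRUCTION")
    parts.append(instruction)
    parts.append("ENDINSTRUCTION")
    return "\n".join(parts)

prompt = build_closed_context_prompt(
    blocks=[(
        {"date": "2021-01-01", "url": "https://web.site/123"},
        "In a shocking turn of events, blueberries are now green, but will be sticking with the same name.",
    )],
    instruction="What color are blueberries? Source?",
)
print(prompt)
```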
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
description: This tool performs various operations on a text file.
params:
action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
filters:
keyword: The word or phrase we want to search for.
```
Response:
```json
{
"function": "file_analytics",
"params": {
"action": "count_occurrences",
"filters": {
"keyword": "Python"
}
}
}
```
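On the application side you still need to parse the response and execute the selected function yourself; a minimal dispatcher might look like the sketch below (the `file_analytics` implementation and the file name are hypothetical stand-ins):
```python
import json

def file_analytics(action, filters):
    # Hypothetical stand-in for the tool described in the prompt.
    if action == "count_occurrences":
        with open("my_text_file.txt", encoding="utf-8") as f:  # hypothetical file
            return f.read().count(filters["keyword"])
    raise ValueError(f"Unsupported action: {action}")

AVAILABLE_FUNCTIONS = {"file_analytics": file_analytics}

def dispatch(model_output: str):
    """Parse the model's JSON response and call the selected function."""
    call = json.loads(model_output)
    func = AVAILABLE_FUNCTIONS[call["function"]]
    return func(**call["params"])

# Using the example response shown above:
response = '{"function": "file_analytics", "params": {"action": "count_occurrences", "filters": {"keyword": "Python"}}}'
print(dispatch(response))
```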
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan; you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and would obviously require full implementation + hardening:
```python
import re
import requests

def inject_context(input_text, **context):
    for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
        input_text = input_text.replace(ref, context.get(ref, ""))
    return input_text

def duckduckgo(input_text, **context):
    search_string = inject_context(input_text, **context)
    # ... search via duck duck go using search_string
    # ... return text content

def link_extractor(input_text, **context):
    input_text = inject_context(input_text, **context)
    return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I))))

def scrape(input_text, **context):
    input_text = inject_context(input_text, **context)
    text = []
    for link in input_text.splitlines():
        text.append(requests.get(link).text)
    return "\n".join(text)

def infer(input_text, **context):
    prompt = inject_context(input_text, **context)
    # ... call model with prompt, return output

def parse_plan(plan):
    method_map = {
        "DuckDuckGo": duckduckgo,
        "HyperlinkExtractor": link_extractor,
        "KnowledgeModel": infer,
        "TextScraper": scrape,
    }
    context = {}
    for line in plan.strip().splitlines():
        if line.startswith("Plan:"):
            print(line)
            continue
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)(\[.*\])\s*$", line, re.I)
        if not parts:
            if line.startswith("Answer: "):
                return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
            raise RuntimeError("bad format: " + line)
        context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context)
```
### Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
The airoboros 2.2 models are built on top of llama-2/codellama.
The llama-2 base model has a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely leaving this license ambiguous (other than the fact that you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me.
<!-- original-model-card end -->
|
TheBloke/Kuchiki-L2-7B-GGUF | TheBloke | 2023-09-27T12:49:04Z | 520 | 12 | transformers | [
"transformers",
"gguf",
"llama",
"llama2",
"base_model:zarakiquemparte/kuchiki-l2-7b",
"license:other",
"text-generation-inference",
"region:us"
]
| null | 2023-09-13T13:31:18Z | ---
license: other
tags:
- llama2
model_name: Kuchiki L2 7B
base_model: zarakiquemparte/kuchiki-l2-7b
inference: false
model_creator: Zaraki Quem Parte
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Kuchiki L2 7B - GGUF
- Model creator: [Zaraki Quem Parte](https://huggingface.co/zarakiquemparte)
- Original model: [Kuchiki L2 7B](https://huggingface.co/zarakiquemparte/kuchiki-l2-7b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Zaraki Quem Parte's Kuchiki L2 7B](https://huggingface.co/zarakiquemparte/kuchiki-l2-7b).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Kuchiki-L2-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Kuchiki-L2-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Kuchiki-L2-7B-GGUF)
* [Zaraki Quem Parte's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/zarakiquemparte/kuchiki-l2-7b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `other`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Zaraki Quem Parte's Kuchiki L2 7B](https://huggingface.co/zarakiquemparte/kuchiki-l2-7b).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [kuchiki-l2-7b.Q2_K.gguf](https://huggingface.co/TheBloke/Kuchiki-L2-7B-GGUF/blob/main/kuchiki-l2-7b.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [kuchiki-l2-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Kuchiki-L2-7B-GGUF/blob/main/kuchiki-l2-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [kuchiki-l2-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Kuchiki-L2-7B-GGUF/blob/main/kuchiki-l2-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [kuchiki-l2-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Kuchiki-L2-7B-GGUF/blob/main/kuchiki-l2-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [kuchiki-l2-7b.Q4_0.gguf](https://huggingface.co/TheBloke/Kuchiki-L2-7B-GGUF/blob/main/kuchiki-l2-7b.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [kuchiki-l2-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Kuchiki-L2-7B-GGUF/blob/main/kuchiki-l2-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [kuchiki-l2-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Kuchiki-L2-7B-GGUF/blob/main/kuchiki-l2-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [kuchiki-l2-7b.Q5_0.gguf](https://huggingface.co/TheBloke/Kuchiki-L2-7B-GGUF/blob/main/kuchiki-l2-7b.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [kuchiki-l2-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Kuchiki-L2-7B-GGUF/blob/main/kuchiki-l2-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [kuchiki-l2-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Kuchiki-L2-7B-GGUF/blob/main/kuchiki-l2-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [kuchiki-l2-7b.Q6_K.gguf](https://huggingface.co/TheBloke/Kuchiki-L2-7B-GGUF/blob/main/kuchiki-l2-7b.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [kuchiki-l2-7b.Q8_0.gguf](https://huggingface.co/TheBloke/Kuchiki-L2-7B-GGUF/blob/main/kuchiki-l2-7b.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Kuchiki-L2-7B-GGUF and below it, a specific filename to download, such as: kuchiki-l2-7b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install "huggingface-hub>=0.17.1"
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Kuchiki-L2-7B-GGUF kuchiki-l2-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Kuchiki-L2-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Kuchiki-L2-7B-GGUF kuchiki-l2-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m kuchiki-l2-7b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install "ctransformers>=0.2.24"
# Or with CUDA GPU acceleration
pip install "ctransformers[cuda]>=0.2.24"
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install "ctransformers>=0.2.24" --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install "ctransformers>=0.2.24" --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Kuchiki-L2-7B-GGUF", model_file="kuchiki-l2-7b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Zaraki Quem Parte's Kuchiki L2 7B
# Model Card: Kuchiki L2 7b
This model uses [Nous Hermes Llama2 7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) (70%) as a base with [Airoboros L2 7B GPT4 2.0](https://huggingface.co/jondurbin/airoboros-l2-7b-gpt4-2.0) (30%), and the result of this merge was then merged with [LimaRP LLama2 7B Lora](https://huggingface.co/lemonilia/limarp-llama2).
This merge of models (Hermes and Airoboros) was done with this [script](https://github.com/zarakiquemparte/zaraki-tools/blob/main/merge-cli.py)
This merge of the LoRA with the model was done with this [script](https://github.com/zarakiquemparte/zaraki-tools/blob/main/apply-lora.py)
Quantized models by @TheBloke:
- [GGUF](https://huggingface.co/TheBloke/Kuchiki-L2-7B-GGUF)
- [GPTQ](https://huggingface.co/TheBloke/Kuchiki-L2-7B-GPTQ)
Merge illustration:

## Usage:
Since this is a merge between Nous Hermes, Airoboros and LimaRP, the following instruction formats should work:
Alpaca 2:
```
### Instruction:
<prompt>
### Response:
<leave a newline blank for model to respond>
```
LimaRP instruction format:
```
<<SYSTEM>>
<character card and system prompt>
<<USER>>
<prompt>
<<AIBOT>>
<leave a newline blank for model to respond>
```
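If you assemble prompts in code, a tiny helper like this covers both formats (a sketch; the function names are just illustrative, and the Alpaca preamble matches the prompt template shown earlier in this README):
```python
def alpaca_prompt(instruction: str) -> str:
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

def limarp_prompt(system: str, user_message: str) -> str:
    return f"<<SYSTEM>>\n{system}\n\n<<USER>>\n{user_message}\n\n<<AIBOT>>\n"

print(alpaca_prompt("Write a short poem about autumn."))
```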
## Bias, Risks, and Limitations
This model is not intended for supplying factual information or advice in any form.
## Training Details
This model is merged and can be reproduced using the tools mentioned above. Please refer to all provided links for extra model-specific details.
<!-- original-model-card end -->
|
TheBloke/leo-hessianai-7B-chat-bilingual-GGUF | TheBloke | 2023-09-28T14:55:21Z | 520 | 4 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation",
"en",
"de",
"dataset:LeoLM/OpenSchnabeltier",
"dataset:OpenAssistant/OASST-DE",
"dataset:FreedomIntelligence/alpaca-gpt4-deutsch",
"dataset:FreedomIntelligence/evol-instruct-deutsch",
"dataset:LeoLM/German_Poems",
"dataset:LeoLM/German_Songs",
"dataset:garage-bAInd/Open-Platypus",
"dataset:WizardLM/WizardLM_evol_instruct_70k",
"dataset:bjoernp/oasst25-08-23-filtered",
"base_model:LeoLM/leo-hessianai-7b-chat-bilingual",
"license:llama2",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-09-28T14:51:43Z | ---
base_model: LeoLM/leo-hessianai-7b-chat-bilingual
datasets:
- LeoLM/OpenSchnabeltier
- OpenAssistant/OASST-DE
- FreedomIntelligence/alpaca-gpt4-deutsch
- FreedomIntelligence/evol-instruct-deutsch
- LeoLM/German_Poems
- LeoLM/German_Songs
- garage-bAInd/Open-Platypus
- WizardLM/WizardLM_evol_instruct_70k
- bjoernp/oasst25-08-23-filtered
inference: false
language:
- en
- de
library_name: transformers
license: llama2
model_creator: LAION LeoLM
model_name: Leo Hessianai 7B Chat Bilingual
model_type: llama
pipeline_tag: text-generation
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Leo Hessianai 7B Chat Bilingual - GGUF
- Model creator: [LAION LeoLM](https://huggingface.co/LeoLM)
- Original model: [Leo Hessianai 7B Chat Bilingual](https://huggingface.co/LeoLM/leo-hessianai-7b-chat-bilingual)
<!-- description start -->
## Description
This repo contains GGUF format model files for [LAION LeoLM's Leo Hessianai 7B Chat Bilingual](https://huggingface.co/LeoLM/leo-hessianai-7b-chat-bilingual).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/leo-hessianai-7B-chat-bilingual-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/leo-hessianai-7B-chat-bilingual-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/leo-hessianai-7B-chat-bilingual-GGUF)
* [LAION LeoLM's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/LeoLM/leo-hessianai-7b-chat-bilingual)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [leo-hessianai-7b-chat-bilingual.Q2_K.gguf](https://huggingface.co/TheBloke/leo-hessianai-7B-chat-bilingual-GGUF/blob/main/leo-hessianai-7b-chat-bilingual.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [leo-hessianai-7b-chat-bilingual.Q3_K_S.gguf](https://huggingface.co/TheBloke/leo-hessianai-7B-chat-bilingual-GGUF/blob/main/leo-hessianai-7b-chat-bilingual.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [leo-hessianai-7b-chat-bilingual.Q3_K_M.gguf](https://huggingface.co/TheBloke/leo-hessianai-7B-chat-bilingual-GGUF/blob/main/leo-hessianai-7b-chat-bilingual.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [leo-hessianai-7b-chat-bilingual.Q3_K_L.gguf](https://huggingface.co/TheBloke/leo-hessianai-7B-chat-bilingual-GGUF/blob/main/leo-hessianai-7b-chat-bilingual.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [leo-hessianai-7b-chat-bilingual.Q4_0.gguf](https://huggingface.co/TheBloke/leo-hessianai-7B-chat-bilingual-GGUF/blob/main/leo-hessianai-7b-chat-bilingual.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [leo-hessianai-7b-chat-bilingual.Q4_K_S.gguf](https://huggingface.co/TheBloke/leo-hessianai-7B-chat-bilingual-GGUF/blob/main/leo-hessianai-7b-chat-bilingual.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [leo-hessianai-7b-chat-bilingual.Q4_K_M.gguf](https://huggingface.co/TheBloke/leo-hessianai-7B-chat-bilingual-GGUF/blob/main/leo-hessianai-7b-chat-bilingual.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [leo-hessianai-7b-chat-bilingual.Q5_0.gguf](https://huggingface.co/TheBloke/leo-hessianai-7B-chat-bilingual-GGUF/blob/main/leo-hessianai-7b-chat-bilingual.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [leo-hessianai-7b-chat-bilingual.Q5_K_S.gguf](https://huggingface.co/TheBloke/leo-hessianai-7B-chat-bilingual-GGUF/blob/main/leo-hessianai-7b-chat-bilingual.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [leo-hessianai-7b-chat-bilingual.Q5_K_M.gguf](https://huggingface.co/TheBloke/leo-hessianai-7B-chat-bilingual-GGUF/blob/main/leo-hessianai-7b-chat-bilingual.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [leo-hessianai-7b-chat-bilingual.Q6_K.gguf](https://huggingface.co/TheBloke/leo-hessianai-7B-chat-bilingual-GGUF/blob/main/leo-hessianai-7b-chat-bilingual.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [leo-hessianai-7b-chat-bilingual.Q8_0.gguf](https://huggingface.co/TheBloke/leo-hessianai-7B-chat-bilingual-GGUF/blob/main/leo-hessianai-7b-chat-bilingual.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/leo-hessianai-7B-chat-bilingual-GGUF and below it, a specific filename to download, such as: leo-hessianai-7b-chat-bilingual.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/leo-hessianai-7B-chat-bilingual-GGUF leo-hessianai-7b-chat-bilingual.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/leo-hessianai-7B-chat-bilingual-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/leo-hessianai-7B-chat-bilingual-GGUF leo-hessianai-7b-chat-bilingual.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m leo-hessianai-7b-chat-bilingual.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/leo-hessianai-7B-chat-bilingual-GGUF", model_file="leo-hessianai-7b-chat-bilingual.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: LAION LeoLM's Leo Hessianai 7B Chat Bilingual
# LAION LeoLM: **L**inguistically **E**nhanced **O**pen **L**anguage **M**odel
Meet LeoLM, the first open and commercially available German Foundation Language Model built on Llama-2.
Our models extend Llama-2's capabilities into German through continued pretraining on a large corpus of German-language and mostly locality-specific text.
Thanks to a compute grant at HessianAI's new supercomputer **42**, we release two foundation models trained with 8k context length,
[`LeoLM/leo-hessianai-7b`](https://huggingface.co/LeoLM/leo-hessianai-7b) and [`LeoLM/leo-hessianai-13b`](https://huggingface.co/LeoLM/leo-hessianai-13b) under the [Llama-2 community license](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt) (70b also coming soon! 👀).
With this release, we hope to bring a new wave of opportunities to German open-source and commercial LLM research and accelerate adoption.
Read our [blog post]() or our paper (preprint coming soon) for more details!
*A project by Björn Plüster and Christoph Schuhmann in collaboration with LAION and HessianAI.*
## LeoLM Chat
`LeoLM/leo-hessianai-7b-chat-bilingual` is a bilingual English-German chat model built on our foundation model `LeoLM/leo-hessianai-7b` and finetuned on a selection of German-translated instruction datasets and their English counterparts.
The model performs exceptionally well on writing, explanation and discussion tasks but struggles somewhat with math and advanced reasoning. See our MT-Bench scores:
```
{
"first_turn": 5.64375,
"second_turn": 4.075,
"categories": {
"writing": 5.925,
"roleplay": 5.25,
"reasoning": 3.1,
"math": 1.8,
"coding": 3.4,
"extraction": 5,
"stem": 6.5,
"humanities": 7.9
},
"average": 4.859375
}
```
## Model Details
- **Finetuned from:** [LeoLM/leo-hessianai-7b](https://huggingface.co/LeoLM/leo-hessianai-7b)
- **Model type:** Causal decoder-only transformer language model
- **Language:** English and German
- **Demo:** [Web Demo]()
- **License:** [LLAMA 2 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt)
- **Contact:** [LAION Discord](https://discord.com/invite/eq3cAMZtCC) or [Björn Plüster](mailto:[email protected])
## Use in 🤗Transformers
First install direct dependencies:
```
pip install transformers torch sentencepiece
```
If you want faster inference using flash-attention2, you need to install these dependencies:
```bash
pip install packaging ninja
pip install flash-attn==v2.1.1 --no-build-isolation
pip install git+https://github.com/HazyResearch/[email protected]#subdirectory=csrc/rotary
```
Then load the model in transformers:
```python
from transformers import pipeline
import torch
system_prompt = """<|im_start|>system
Dies ist eine Unterhaltung zwischen einem intelligenten, hilfsbereitem KI-Assistenten und einem Nutzer.
Der Assistent gibt ausführliche, hilfreiche und ehrliche Antworten.<|im_end|>
"""
prompt_format = "<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"
prompt = "Erkläre mir wie die Fahrradwegesituation in Hamburg ist."
generator = pipeline(model="LeoLM/leo-hessianai-7b-chat-bilingual", device="cuda", torch_dtype=torch.float16, trust_remote_code=True)  # trust_remote_code=True for flash-attn2, else False
# Prepend the system prompt so the input follows the ChatML template described below.
print(generator(system_prompt + prompt_format.format(prompt=prompt), do_sample=True, top_p=0.95, max_length=8192))
```
"*Als KI kann ich keine persönlichen Beobachtungen teilen, aber ich kann einige allgemeine Informationen zur Fahrradwegesituation in Hamburg liefern. Im Vergleich zu vielen anderen großen Städten hat Hamburg eine hohe Anzahl von Fahrradfahrern und nimmt seine Verantwortung für nachhaltige Verkehrsmittel sehr ernst. Es gibt viele Fahrradwege und separate Fahrspuren, die Radfahrern ermöglichen, zusammen mit dem Autoverkehr zu fahren. Diese Fahrradspuren sind oft mit Markierungen gekennzeichnet und durch physische Trennungen von anderen Fahrspuren abgegrenzt. Darüber hinaus gibt es viele Fahrradstraßen, auf denen Radfahrer Vorfahrt haben und Autos langsamer fahren müssen.*
*In einigen städtischen Gebieten können Fahrradwege jedoch eng oder überfüllt sein, besonders während der Stoßzeiten. Es gibt auch viele Kreuzungen, an denen Radfahrer anhalten und auf Grün warten müssen, ähnlich wie Autofahrer. Insgesamt ist die Fahrradinfrastruktur in Hamburg ziemlich gut, aber wie überall gibt es immer Raum für Verbesserungen.*"
## Prompting / Prompt Template
Prompt dialogue template (ChatML format):
```
"""
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
"""
```
The model input can contain multiple conversation turns between user and assistant, e.g.
```
<|im_start|>user
{prompt 1}<|im_end|>
<|im_start|>assistant
{reply 1}<|im_end|>
<|im_start|>user
{prompt 2}<|im_end|>
<|im_start|>assistant
(...)
```
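If you are building the prompt string by hand instead of relying on `tokenizer.apply_chat_template`, a small helper along the following lines (a sketch, not part of the original card) produces exactly this layout:
```python
# Sketch of a ChatML prompt builder matching the template above.
# `messages` follows the usual {"role": ..., "content": ...} convention; the
# trailing "<|im_start|>assistant\n" cues the model to generate its reply.
def build_chatml_prompt(messages):
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

messages = [
    {"role": "system", "content": "Du bist ein hilfreicher Assistent."},
    {"role": "user", "content": "Was ist die Hauptstadt von Hessen?"},
]
print(build_chatml_prompt(messages))
```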
## Ethical Considerations and Limitations
LeoLM has been tested in English and German, and has not covered, nor could it cover all scenarios.
For these reasons, as with all LLMs, the potential outputs of `LeoLM/leo-hessianai-7b-chat` cannot be predicted
in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses
to user prompts. Therefore, before deploying any applications of `LeoLM/leo-hessianai-7b-chat`, developers should
perform safety testing and tuning tailored to their specific applications of the model.
Please see Meta's [Responsible Use Guide](https://ai.meta.com/llama/responsible-use-guide/).
## Finetuning Details
| Hyperparameter | Value |
|---|---|
| Num epochs | 3 |
| Examples per epoch | 233275 |
| Global batch size | 256 |
| Learning rate | 3e-5 |
| Warmup steps | 100 |
| LR scheduler | Cosine |
| Adam betas | (0.9, 0.95) |
| Weight decay | 0.001 |
## Dataset Details
```
## Stats for 'Subset of LeoLM/OpenSchnabeltier' (21314 samples (100.0%))
-----------------
Accepted: 21314/21314 (100.0%)
Accepted tokens: 8134690
Skipped: 0 (0.0%)
Min tokens per sample: 25
Max tokens per sample: 1202
Avg tokens per sample: 381.65947264708643
-----------------
## Stats for 'Subset of garage-bAInd/Open-Platypus' (24427 samples (100.0%))
-----------------
Accepted: 24427/24427 (100.0%)
Accepted tokens: 9549043
Skipped: 0 (0.0%)
Min tokens per sample: 23
Max tokens per sample: 5054
Avg tokens per sample: 390.9216440823679
-----------------
## Stats for 'Subset of WizardLM/WizardLM_evol_instruct_70k' (68600 samples (100.0%))
-----------------
Accepted: 68600/68600 (100.0%)
Accepted tokens: 33045040
Skipped: 0 (0.0%)
Min tokens per sample: 18
Max tokens per sample: 11810
Avg tokens per sample: 481.7061224489796
-----------------
## Stats for 'Subset of FreedomIntelligence/evol-instruct-deutsch' (57841 samples (100.0%))
-----------------
Accepted: 57841/57841 (100.0%)
Accepted tokens: 42958192
Skipped: 0 (0.0%)
Min tokens per sample: 33
Max tokens per sample: 5507
Avg tokens per sample: 742.6944900675991
-----------------
## Stats for 'Subset of FreedomIntelligence/alpaca-gpt4-deutsch' (48969 samples (100.0%))
-----------------
Accepted: 48969/48969 (100.0%)
Accepted tokens: 13372005
Skipped: 0 (0.0%)
Min tokens per sample: 19
Max tokens per sample: 1359
Avg tokens per sample: 273.07082031489307
-----------------
## Stats for 'Subset of LeoLM/German_Songs' (490 samples (100.0%))
-----------------
Accepted: 490/490 (100.0%)
Accepted tokens: 618642
Skipped: 0 (0.0%)
Min tokens per sample: 747
Max tokens per sample: 1678
Avg tokens per sample: 1262.534693877551
-----------------
## Stats for 'Subset of LeoLM/German_Poems' (392 samples (100.0%))
-----------------
Accepted: 392/392 (100.0%)
Accepted tokens: 187897
Skipped: 0 (0.0%)
Min tokens per sample: 231
Max tokens per sample: 826
Avg tokens per sample: 479.3290816326531
-----------------
## Stats for 'Subset of OpenAssistant/OASST_DE' (3646 samples (100.0%))
-----------------
Accepted: 3646/3646 (100.0%)
Accepted tokens: 2338738
Skipped: 0 (0.0%)
Min tokens per sample: 29
Max tokens per sample: 2484
Avg tokens per sample: 641.4530992868897
-----------------
## Stats for 'Subset of bjoernp/oasst25-08-23-filtered' (8922 samples (100.0%))
-----------------
Accepted: 8922/8922 (100.0%)
Accepted tokens: 4526427
Skipped: 0 (0.0%)
Min tokens per sample: 23
Max tokens per sample: 5407
Avg tokens per sample: 507.3332212508406
-----------------
## Stats for 'total' (235632 samples (100.0%))
-----------------
Accepted: 235632/235632 (100.0%)
Accepted tokens: 115862397
Skipped: 0 (0.0%)
Min tokens per sample: 18
Max tokens per sample: 11810
Avg tokens per sample: 491.70909299246284
-----------------
```
<!-- original-model-card end -->
|
TheBloke/llemma_7b-GGUF | TheBloke | 2023-10-17T20:11:16Z | 520 | 22 | transformers | [
"transformers",
"gguf",
"llama",
"math",
"reasoning",
"en",
"dataset:EleutherAI/proof-pile-2",
"arxiv:2310.10631",
"base_model:EleutherAI/llemma_7b",
"license:llama2",
"text-generation-inference",
"region:us"
]
| null | 2023-10-17T19:55:19Z | ---
base_model: EleutherAI/llemma_7b
datasets:
- EleutherAI/proof-pile-2
inference: false
language:
- en
license: llama2
model_creator: EleutherAI
model_name: Llemma 7B
model_type: llama
prompt_template: '{prompt}
'
quantized_by: TheBloke
tags:
- math
- reasoning
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llemma 7B - GGUF
- Model creator: [EleutherAI](https://huggingface.co/EleutherAI)
- Original model: [Llemma 7B](https://huggingface.co/EleutherAI/llemma_7b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [EleutherAI's Llemma 7B](https://huggingface.co/EleutherAI/llemma_7b).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/llemma_7b-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/llemma_7b-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/llemma_7b-GGUF)
* [EleutherAI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/EleutherAI/llemma_7b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Unknown
```
{prompt}
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [llemma_7b.Q2_K.gguf](https://huggingface.co/TheBloke/llemma_7b-GGUF/blob/main/llemma_7b.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [llemma_7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/llemma_7b-GGUF/blob/main/llemma_7b.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [llemma_7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/llemma_7b-GGUF/blob/main/llemma_7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [llemma_7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/llemma_7b-GGUF/blob/main/llemma_7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [llemma_7b.Q4_0.gguf](https://huggingface.co/TheBloke/llemma_7b-GGUF/blob/main/llemma_7b.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llemma_7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/llemma_7b-GGUF/blob/main/llemma_7b.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [llemma_7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/llemma_7b-GGUF/blob/main/llemma_7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [llemma_7b.Q5_0.gguf](https://huggingface.co/TheBloke/llemma_7b-GGUF/blob/main/llemma_7b.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llemma_7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/llemma_7b-GGUF/blob/main/llemma_7b.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [llemma_7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/llemma_7b-GGUF/blob/main/llemma_7b.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [llemma_7b.Q6_K.gguf](https://huggingface.co/TheBloke/llemma_7b-GGUF/blob/main/llemma_7b.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [llemma_7b.Q8_0.gguf](https://huggingface.co/TheBloke/llemma_7b-GGUF/blob/main/llemma_7b.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
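As a rough cross-check, the file sizes above follow almost directly from the bits-per-weight figures listed in the quantisation notes: for a "7B" Llama-family model (about 6.7 billion parameters) the estimate is simply parameters × bpw ÷ 8. The sketch below is only a back-of-the-envelope calculation; real files are slightly larger because some tensors use higher-precision types and the GGUF header adds metadata.
```python
# Back-of-the-envelope GGUF size estimate: parameters * bits-per-weight / 8.
# Parameter count is approximate and the result ignores metadata and the
# higher-precision tensors mixed into the "_M" variants.
params = 6.74e9  # roughly a "7B" Llama-family model

for name, bpw in [("Q4_K_S", 4.5), ("Q5_K_S", 5.5), ("Q6_K", 6.5625)]:
    size_gb = params * bpw / 8 / 1e9
    print(f"{name}: ~{size_gb:.2f} GB")  # compare with 3.86 / 4.65 / 5.53 GB above
```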
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/llemma_7b-GGUF and below it, a specific filename to download, such as: llemma_7b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/llemma_7b-GGUF llemma_7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/llemma_7b-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/llemma_7b-GGUF llemma_7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m llemma_7b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/llemma_7b-GGUF", model_file="llemma_7b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
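Alternatively, the same file can be loaded with llama-cpp-python. A minimal sketch (install with `pip install llama-cpp-python`, adding the appropriate CMAKE flags if you want a GPU build) might look like this:
```python
# Minimal llama-cpp-python sketch for the same GGUF file.
from llama_cpp import Llama

llm = Llama(
    model_path="llemma_7b.Q4_K_M.gguf",  # path to the downloaded file
    n_ctx=4096,       # context length
    n_gpu_layers=32,  # set to 0 if you have no GPU acceleration
)

out = llm("Prove that the sum of two even integers is even.", max_tokens=256)
print(out["choices"][0]["text"])
```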
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: EleutherAI's Llemma 7B
<img src="llemma.png" width="400">
[ArXiv](http://arxiv.org/abs/2310.10631) | [Models](https://huggingface.co/EleutherAI/llemma_34b) | [Data](https://huggingface.co/datasets/EleutherAI/proof-pile-2) | [Code](https://github.com/EleutherAI/math-lm) | [Blog](https://blog.eleuther.ai/llemma/) | [Sample Explorer](https://llemma-demo.github.io/)
[Zhangir Azerbayev](https://zhangir-azerbayev.github.io/), [Hailey Schoelkopf](https://github.com/haileyschoelkopf), [Keiran Paster](https://keirp.com), [Marco Dos Santos](https://github.com/dsantosmarco), [Stephen McAleer](https://www.andrew.cmu.edu/user/smcaleer/), [Albert Q. Jiang](https://albertqjiang.github.io/), [Jia Deng](https://www.cs.princeton.edu/~jiadeng/), [Stella Biderman](https://www.stellabiderman.com/), [Sean Welleck](https://wellecks.com/)
**Llemma 7B** is a language model for mathematics. It was initialized with [Code Llama 7B](https://github.com/facebookresearch/codellama) weights, and trained on the [Proof-Pile-2](https://huggingface.co/datasets/EleutherAI/proof-pile-2) for 200B tokens.
This model also comes in a 34B parameter version: [Llemma 34B](https://huggingface.co/EleutherAI/llemma_34b).
## Evaluations
Llemma models are particularly strong at chain-of-thought mathematical reasoning and using computational tools for mathematics, such as Python and formal theorem provers.
### Chain-of-thought Math
On chain-of-thought mathematics tasks, Llemma models outperform Llama-2 and Code Llama and, when controlled for model size, also outperform Minerva.
| Model | Size | GSM8k | [OCW](https://openreview.net/forum?id=IFXTZERXdM7) | MMLU-STEM | [SAT](https://huggingface.co/datasets/mcaleste/sat_multiple_choice_math_may_23) | MATH |
|------------|------|--------|-------|-----------|-------|-------|
| Llama 2 | 7B | 11.8% | 3.7% | 29.9% | 25% | 3.2% |
| Code Llama | 7B | 10.5% | 4.4% | 25.1% | 9.4% | 4.5% |
| LLEMMA | 7B | **36.4%** | **7.7%** | **37.7%** | **53.1%** | **18.0%** |
| Minerva | 8B | 16.2% | **7.7%** | 35.6% | - | 14.1% |
|------------|------|--------|-------|-----------|-------|-------|
| Code Llama | 34B | 29.6% | 7.0% | 40.5% | 40.6% | 12.2% |
| LLEMMA | 34B | **51.5%** | **11.8%** | **49.0%** | **71.9%** | **25.0%** |
|------------|------|--------|-------|-----------|-------|-------|
| Minerva | 62B | 52.4% | 12.0% | 53.9% | - | 27.6% |
| Minerva | 540B | 58.8% | 17.6% | 63.9% | - | 33.6% |
Further performance can be extracted by using majority voting:
| Model | Size | GSM8k maj@100 | OCW maj@100 | MMLU-STEM maj@16 | SAT maj@16 | MATH maj@256 |
|---------|------|-------------|-----------|-----------------|-----------|------------|
| LLEMMA | 7B | 54.0% | 14.3% | 49.9% | 78.1% | **33.5%** |
| Minerva | 8B | 28.4% | 12.5% | 43.4% | - | 25.4% |
|---------|------|-------------|-----------|-----------------|-----------|------------|
| LLEMMA | 34B | 69.3% | 18.4% | 59.7% | 81.3% | **43.1%** |
|---------|------|-------------|-----------|-----------------|-----------|------------|
| Minerva | 62B | 68.5% | 23.5% | 63.5% | - | 43.4% |
| Minerva | 540B | 78.5% | 30.8% | 75.0% | - | 50.3% |
### Tool Use and Theorem Proving
In addition to chain-of-thought reasoning, Llemma has strong capabilities in computational mathematics tasks. For tool use and formal theorem proving evaluations, see [our paper](http://arxiv.org/abs/2310.10631).
### Citation
```
@misc{azerbayev2023llemma,
title={Llemma: An Open Language Model For Mathematics},
author={Zhangir Azerbayev and Hailey Schoelkopf and Keiran Paster and Marco Dos Santos and Stephen McAleer and Albert Q. Jiang and Jia Deng and Stella Biderman and Sean Welleck},
year={2023},
eprint={2310.10631},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!-- original-model-card end -->
|
jtrancas/deepseek-coder-7b-instruct-GGUF | jtrancas | 2023-11-05T15:02:38Z | 520 | 2 | null | [
"gguf",
"license:other",
"region:us"
]
| null | 2023-11-05T03:10:42Z | ---
license: other
license_name: deepseek
license_link: LICENSE
---
|
perlthoughts/Chupacabra-7B-GGUF | perlthoughts | 2023-12-05T11:53:06Z | 520 | 4 | null | [
"gguf",
"license:apache-2.0",
"region:us"
]
| null | 2023-11-16T04:37:31Z | ---
license: apache-2.0
---
# Chupacabra 7B (GGUF files for llama.cpp/lmstudio/kobold.cpp/gpt4all/textgen-web-ui)
<p><img src="https://huggingface.co/perlthoughts/Chupacabra-7B/resolve/main/chupacabra7b%202.png" width=330></p>
### Model Description
This model was made by merging models based on Mistral with the SLERP merge method.
Advantages of SLERP over plain weight averaging (the common approach) are as follows:
- Spherical Linear Interpolation (SLERP) - Traditionally, model merging often resorts to weight averaging which, although straightforward, might not always capture the intricate features of the models being merged. The SLERP technique addresses this limitation, producing a blended model with characteristics smoothly interpolated from both parent models, ensuring the resultant model captures the essence of both its parents.
- Smooth Transitions - SLERP ensures smoother transitions between model parameters. This is especially significant when interpolating between high-dimensional vectors.
- Better Preservation of Characteristics - Unlike weight averaging, which might dilute distinct features, SLERP preserves the curvature and characteristics of both models in high-dimensional spaces.
- Nuanced Blending - SLERP takes into account the geometric and rotational properties of the models in the vector space, resulting in a blend that is more reflective of both parent models' characteristics.
List of all models and merging path is coming soon.
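For intuition, here is a schematic NumPy sketch of SLERP applied to two weight tensors. It is illustrative only: the actual merge was performed with dedicated merging tooling, operates tensor by tensor, and typically falls back to linear interpolation when the two vectors are nearly colinear.
```python
# Schematic SLERP between two weight tensors (illustration, not the merge script).
import numpy as np

def slerp(w_a, w_b, t, eps=1e-8):
    a, b = w_a.ravel().astype(np.float64), w_b.ravel().astype(np.float64)
    # Angle between the two (normalised) weight vectors.
    omega = np.arccos(np.clip(np.dot(a / np.linalg.norm(a), b / np.linalg.norm(b)), -1.0, 1.0))
    if omega < eps:  # nearly parallel: plain linear interpolation is fine
        return (1.0 - t) * w_a + t * w_b
    so = np.sin(omega)
    blended = (np.sin((1.0 - t) * omega) / so) * a + (np.sin(t * omega) / so) * b
    return blended.reshape(w_a.shape)

merged = slerp(np.random.randn(4, 4), np.random.randn(4, 4), t=0.5)
print(merged.shape)
```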
## Purpose
Merging the "thickest" model weights from Mistral-based models trained with methods such as direct preference optimization (DPO) and reinforcement learning.
I have spent countless hours studying the latest research papers, attending conferences, and networking with experts in the field. I experimented with different algorithms and tactics, fine-tuned hyperparameters and optimizers,
and optimized code until I achieved the best possible results.
Thank you openchat 3.5 for showing me the way.
Here is my contribution.
## Prompt Template
Replace {system} with your system prompt, and {prompt} with your prompt instruction.
```
GPT4 System: {system}<|end_of_turn|>GPT4 User: {prompt}<|end_of_turn|>GPT4 Assistant:
```
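Filling the template from Python is then a one-liner; the system and user strings below are placeholders, not recommended prompts:
```python
# Fill the template above with a system prompt and a user instruction.
template = (
    "GPT4 System: {system}<|end_of_turn|>"
    "GPT4 User: {prompt}<|end_of_turn|>"
    "GPT4 Assistant:"
)
text = template.format(
    system="You are a helpful assistant.",
    prompt="Explain the SLERP merge method in one sentence.",
)
print(text)
```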
### Bug fixes
- Fixed an issue with generation caused by incorrect model weights. The weights have been corrected and generation works again; the GGUF and AWQ versions are being re-uploaded.
- **Developed by:** Ray Hernandez
- **Model type:** Mistral
- **Language(s) (NLP):** English
- **License:** Apache 2.0
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
elklab/LLaMAntino-2-chat-13b-hf-UltraChat-ITA-GGUF | elklab | 2024-01-20T13:19:44Z | 520 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-01-20T13:18:26Z | Entry not found |
ukung/TinyLlama-1.1B-PythonSkill-GGUF | ukung | 2024-05-25T13:19:59Z | 520 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-25T13:04:25Z | Entry not found |
mradermacher/Mirai-Nova-Llama3-LocalAI-8B-v0.2-GGUF | mradermacher | 2024-05-28T01:44:41Z | 520 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:mudler/Mirai-Nova-Llama3-LocalAI-8B-v0.2",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-27T23:52:28Z | ---
base_model: mudler/Mirai-Nova-Llama3-LocalAI-8B-v0.2
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mudler/Mirai-Nova-Llama3-LocalAI-8B-v0.2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Mirai-Nova-Llama3-LocalAI-8B-v0.2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mirai-Nova-Llama3-LocalAI-8B-v0.2-GGUF/resolve/main/Mirai-Nova-Llama3-LocalAI-8B-v0.2.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mirai-Nova-Llama3-LocalAI-8B-v0.2-GGUF/resolve/main/Mirai-Nova-Llama3-LocalAI-8B-v0.2.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mirai-Nova-Llama3-LocalAI-8B-v0.2-GGUF/resolve/main/Mirai-Nova-Llama3-LocalAI-8B-v0.2.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mirai-Nova-Llama3-LocalAI-8B-v0.2-GGUF/resolve/main/Mirai-Nova-Llama3-LocalAI-8B-v0.2.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mirai-Nova-Llama3-LocalAI-8B-v0.2-GGUF/resolve/main/Mirai-Nova-Llama3-LocalAI-8B-v0.2.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mirai-Nova-Llama3-LocalAI-8B-v0.2-GGUF/resolve/main/Mirai-Nova-Llama3-LocalAI-8B-v0.2.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mirai-Nova-Llama3-LocalAI-8B-v0.2-GGUF/resolve/main/Mirai-Nova-Llama3-LocalAI-8B-v0.2.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mirai-Nova-Llama3-LocalAI-8B-v0.2-GGUF/resolve/main/Mirai-Nova-Llama3-LocalAI-8B-v0.2.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mirai-Nova-Llama3-LocalAI-8B-v0.2-GGUF/resolve/main/Mirai-Nova-Llama3-LocalAI-8B-v0.2.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mirai-Nova-Llama3-LocalAI-8B-v0.2-GGUF/resolve/main/Mirai-Nova-Llama3-LocalAI-8B-v0.2.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mirai-Nova-Llama3-LocalAI-8B-v0.2-GGUF/resolve/main/Mirai-Nova-Llama3-LocalAI-8B-v0.2.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mirai-Nova-Llama3-LocalAI-8B-v0.2-GGUF/resolve/main/Mirai-Nova-Llama3-LocalAI-8B-v0.2.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mirai-Nova-Llama3-LocalAI-8B-v0.2-GGUF/resolve/main/Mirai-Nova-Llama3-LocalAI-8B-v0.2.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mirai-Nova-Llama3-LocalAI-8B-v0.2-GGUF/resolve/main/Mirai-Nova-Llama3-LocalAI-8B-v0.2.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Mirai-Nova-Llama3-LocalAI-8B-v0.2-GGUF/resolve/main/Mirai-Nova-Llama3-LocalAI-8B-v0.2.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
rejauldu/DialoGPT | rejauldu | 2024-05-31T04:30:33Z | 520 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-29T16:10:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Elysium2.1-11b-GGUF | mradermacher | 2024-06-11T17:02:31Z | 520 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"powermove72/Trinity_Notus-xb",
"powermove72/GreenScorpius-xb-Passthrough",
"en",
"base_model:powermove72/Elysium2.1-11b",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-11T08:25:50Z | ---
base_model: powermove72/Elysium2.1-11b
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- powermove72/Trinity_Notus-xb
- powermove72/GreenScorpius-xb-Passthrough
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/powermove72/Elysium2.1-11b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Elysium2.1-11b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Elysium2.1-11b-GGUF/resolve/main/Elysium2.1-11b.Q2_K.gguf) | Q2_K | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.1-11b-GGUF/resolve/main/Elysium2.1-11b.IQ3_XS.gguf) | IQ3_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.1-11b-GGUF/resolve/main/Elysium2.1-11b.Q3_K_S.gguf) | Q3_K_S | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.1-11b-GGUF/resolve/main/Elysium2.1-11b.IQ3_S.gguf) | IQ3_S | 5.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.1-11b-GGUF/resolve/main/Elysium2.1-11b.IQ3_M.gguf) | IQ3_M | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.1-11b-GGUF/resolve/main/Elysium2.1-11b.Q3_K_M.gguf) | Q3_K_M | 5.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.1-11b-GGUF/resolve/main/Elysium2.1-11b.Q3_K_L.gguf) | Q3_K_L | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.1-11b-GGUF/resolve/main/Elysium2.1-11b.IQ4_XS.gguf) | IQ4_XS | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.1-11b-GGUF/resolve/main/Elysium2.1-11b.Q4_K_S.gguf) | Q4_K_S | 6.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.1-11b-GGUF/resolve/main/Elysium2.1-11b.Q4_K_M.gguf) | Q4_K_M | 6.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.1-11b-GGUF/resolve/main/Elysium2.1-11b.Q5_K_S.gguf) | Q5_K_S | 7.8 | |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.1-11b-GGUF/resolve/main/Elysium2.1-11b.Q5_K_M.gguf) | Q5_K_M | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.1-11b-GGUF/resolve/main/Elysium2.1-11b.Q6_K.gguf) | Q6_K | 9.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Elysium2.1-11b-GGUF/resolve/main/Elysium2.1-11b.Q8_0.gguf) | Q8_0 | 12.0 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
karakuri-ai/karakuri-lm-8x7b-instruct-v0.1 | karakuri-ai | 2024-06-19T09:23:09Z | 520 | 16 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"steerlm",
"conversational",
"en",
"ja",
"dataset:databricks/databricks-dolly-15k",
"dataset:glaiveai/glaive-code-assistant-v3",
"dataset:glaiveai/glaive-function-calling-v2",
"dataset:gretelai/synthetic_text_to_sql",
"dataset:meta-math/MetaMathQA",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:neural-bridge/rag-dataset-12000",
"dataset:neural-bridge/rag-hallucination-dataset-1000",
"dataset:nvidia/HelpSteer",
"dataset:OpenAssistant/oasst2",
"base_model:tokyotech-llm/Swallow-MX-8x7b-NVE-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-17T15:18:33Z | ---
library_name: transformers
license: apache-2.0
datasets:
- databricks/databricks-dolly-15k
- glaiveai/glaive-code-assistant-v3
- glaiveai/glaive-function-calling-v2
- gretelai/synthetic_text_to_sql
- meta-math/MetaMathQA
- microsoft/orca-math-word-problems-200k
- neural-bridge/rag-dataset-12000
- neural-bridge/rag-hallucination-dataset-1000
- nvidia/HelpSteer
- OpenAssistant/oasst2
language:
- en
- ja
tags:
- mixtral
- steerlm
base_model: tokyotech-llm/Swallow-MX-8x7b-NVE-v0.1
---
# KARAKURI LM 8x7B Instruct v0.1

## Model Details
### Model Description
- **Developed by:** [KARAKURI Inc.](https://about.karakuri.ai/)
- **Model type:** Mixture of Experts (MoE)
- **Languages**: Primarily English and Japanese
- **License:** Apache 2.0
- **Finetuned from model:** [tokyotech-llm/Swallow-MX-8x7b-NVE-v0.1](https://huggingface.co/tokyotech-llm/Swallow-MX-8x7b-NVE-v0.1)
- **Contact**: For questions and comments about the model, please email `[email protected]`
- **Demo**: https://lm.karakuri.cc/
## Usage
### Prompt Template
The model uses the same prompt template as [Command R+](https://huggingface.co/CohereForAI/c4ai-command-r-plus), except that it contains [attribute values](#attribute-values).
#### Chat
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("karakuri-ai/karakuri-lm-8x7b-instruct-v0.1")
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Hello!"},
{"role": "assistant", "content": "Hello! How can I help you today?"},
{"role": "user", "content": "I'm planning a day trip to Tokyo this weekend. Can you recommend a quick sightseeing plan?"}
]
tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
```
#### Tool Use
```python
messages = [
{"role": "user", "content": "I'm planning a day trip to Tokyo this weekend. Can you recommend a quick sightseeing plan?"}
]
tools = [
{
"name": "internet_search",
"description": "Returns a list of relevant document snippets for a textual query retrieved from the internet",
"parameters": {
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "Query to search the internet with"
}
},
"required": ["query"]
}
},
{
"name": "directly_answer",
"description": "Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history",
"parameters": {
"type": "object",
"properties": {}
}
}
]
tokenizer.apply_chat_template(
messages,
chat_template="tool_use",
tools=tools,
add_generation_prompt=True,
tokenize=False,
)
```
#### RAG
```python
messages = [
{"role": "user", "content": "I'm planning a day trip to Tokyo this weekend. Can you recommend a quick sightseeing plan?"}
]
documents = [
{
"title": "Tsukiji Outer Market",
"text": "While the inner wholesale market has moved to Toyosu, Tsukiji Outer Market remains a bustling hub for fresh seafood and street food. Enjoy sushi, sashimi, and other delicacies while exploring the vibrant market streets.",
},
{
"title": "Meiji Shrine",
"text": "Nestled in a lush forest in the heart of the city, Meiji Shrine offers a peaceful retreat from the urban hustle. Dedicated to Emperor Meiji and Empress Shoken, the shrine is a popular site for traditional Japanese weddings. Stroll along the serene paths and experience a moment of tranquility."
}
]
tokenizer.apply_chat_template(
messages,
chat_template="rag",
documents=documents,
add_generation_prompt=True,
tokenize=False,
)
```
### Attribute Values
The prompt template contains nine attributes.
The first five are derived from HelpSteer, while the remaining four are derived from OASST2.
The values are represented by integers ranging from 0 to 4, with 0 being the lowest and 4 being the highest.
- helpfulness (default: 4): Overall helpfulness of the response to the prompt.
- correctness (default: 4): Inclusion of all pertinent facts without errors.
- coherence (default: 4): Consistency and clarity of expression.
- complexity (default: 4): Intellectual depth required to write response (i.e. whether the response can be written by anyone with basic language competency or requires deep domain expertise).
- verbosity (default: 4): Amount of detail included in the response, relative to what is asked for in the prompt.
- quality (default: 4): Perceived goodness of response.
- toxicity (default: 0): Undesirable elements such as vulgar, harmful or potentially biased response.
- humor (default: 0): Sense of humor within response.
- creativity (default: 0): Willingness to generate non-conventional response.
If you want to change the attribute values from the default values specified in the template, you can pass them as arguments to the `apply_chat_template` method as follows:
```python
messages = [
{"role": "user", "content": "I'm planning a day trip to Tokyo this weekend. Can you recommend a quick sightseeing plan?"}
]
tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=False,
helpfulness=0,
correctness=0,
coherence=2,
complexity=0,
verbosity=3,
quality=0,
toxicity=4,
humor=1,
creativity=1,
)
```
### Run the model
```python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(
"karakuri-ai/karakuri-lm-8x7b-instruct-v0.1",
torch_dtype="auto",
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "I'm planning a day trip to Tokyo this weekend. Can you recommend a quick sightseeing plan?"}
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=512)
tokenizer.decode(outputs[0][input_ids.shape[-1]:])
```
## Training Details
### Training Data
The model was trained on approximately 1 billion tokens of fine-tuning data.
The details are as follows:
| Dataset | # Tokens / Epoch | # Epochs | # Tokens | Percent |
| :--------------------------------------------------------------------------------------------------------------------------- | ---------------: | -------: | -------: | ------: |
| [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) | 3M | 5 | 16M | 1.5% |
| [glaiveai/glaive-code-assistant-v3](https://huggingface.co/datasets/glaiveai/glaive-code-assistant-v3) | 520M | 0.3 | 156M | 14.6% |
| [glaiveai/glaive-function-calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2) | 52M | 3 | 157M | 14.7% |
| [gretelai/synthetic_text_to_sql](https://huggingface.co/datasets/gretelai/synthetic_text_to_sql) | 19M | 3 | 57M | 5.3% |
| [meta-math/MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA) | 81M | 1 | 81M | 7.6% |
| [microsoft/orca-math-word-problems-200k](https://huggingface.co/datasets/microsoft/orca-math-word-problems-200k) | 67M | 1 | 67M | 6.3% |
| [neural-bridge/rag-dataset-12000](https://huggingface.co/datasets/neural-bridge/rag-dataset-12000) | 12M | 5 | 61M | 5.7% |
| [neural-bridge/rag-hallucination-dataset-1000](https://huggingface.co/datasets/neural-bridge/rag-hallucination-dataset-1000) | 1M | 5 | 5M | 0.5% |
| [nvidia/HelpSteer](https://huggingface.co/datasets/nvidia/HelpSteer) | 24M | 5 | 118M | 11.0% |
| [OpenAssistant/oasst2](https://huggingface.co/datasets/OpenAssistant/oasst2) | 27M | 5 | 133M | 12.4% |
| KARAKURI Instruction Dataset | 1M | 5 | 6M | 0.6% |
| KARAKURI Corpus | 214M | 1 | 214M | 20.0% |
### Training Infrastructure
- **Hardware**: The model was trained on 8 nodes of an Amazon EC2 trn1.32xlarge instance.
- **Software**: We use code based on [neuronx-nemo-megatron](https://github.com/aws-neuron/neuronx-nemo-megatron).
## Known Limitations
The model sometimes attempts to call unprovided tools.
You should implement a post-processing step that filters out calls to tools you did not provide.
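Below is an illustrative post-processing sketch, not part of the official usage example. It assumes you have already parsed the model's tool calls into dictionaries with a `name` key (the parsing itself depends on the tool-call format you use) and simply drops any call whose name is not among the tools you actually provided.
```python
def drop_unprovided_tools(parsed_tool_calls, provided_tools):
    """Keep only tool calls whose name matches a tool that was actually provided."""
    provided_names = {tool["name"] for tool in provided_tools}
    return [call for call in parsed_tool_calls if call["name"] in provided_names]

# Illustrative data: `parsed_tool_calls` stands in for calls parsed from the model output.
provided_tools = [{"name": "get_weather"}, {"name": "search_web"}]
parsed_tool_calls = [
    {"name": "get_weather", "arguments": {"city": "Tokyo"}},
    {"name": "book_flight", "arguments": {"to": "Tokyo"}},  # not provided, so it is dropped
]
print(drop_unprovided_tools(parsed_tool_calls, provided_tools))
# [{'name': 'get_weather', 'arguments': {'city': 'Tokyo'}}]
```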
## Citation
```
@misc{karakuri_lm_8x7b_instruct_v01,
author = { {KARAKURI} {I}nc. },
title = { {KARAKURI} {LM} 8x7{B} {I}nstruct v0.1 },
year = { 2024 },
url = { https://huggingface.co/karakuri-ai/karakuri-lm-8x7b-instruct-v0.1 },
publisher = { Hugging Face },
journal = { Hugging Face repository }
}
``` |
Sc0m3r/Phi-3-medium-128k-instruct-Q4_K_M-GGUF | Sc0m3r | 2024-06-25T11:33:24Z | 520 | 0 | null | [
"gguf",
"nlp",
"code",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"multilingual",
"base_model:microsoft/Phi-3-medium-128k-instruct",
"license:mit",
"region:us"
]
| text-generation | 2024-06-25T11:32:50Z | ---
base_model: microsoft/Phi-3-medium-128k-instruct
language:
- multilingual
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
tags:
- nlp
- code
- llama-cpp
- gguf-my-repo
inference:
parameters:
temperature: 0.7
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
---
# Sc0m3r/Phi-3-medium-128k-instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`microsoft/Phi-3-medium-128k-instruct`](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Sc0m3r/Phi-3-medium-128k-instruct-Q4_K_M-GGUF --hf-file phi-3-medium-128k-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Sc0m3r/Phi-3-medium-128k-instruct-Q4_K_M-GGUF --hf-file phi-3-medium-128k-instruct-q4_k_m.gguf -c 2048
```
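Once the server is running you can also query it over HTTP from Python. The snippet below is an illustrative sketch that assumes the default `llama-server` address of `http://localhost:8080` and its OpenAI-compatible `/v1/chat/completions` endpoint; adjust the host, port, and payload to your setup.
```python
import json
from urllib.request import Request, urlopen

# Assumes the llama-server started above is listening on its default port (8080).
payload = {
    "messages": [{"role": "user", "content": "Summarize what the GGUF format is in two sentences."}],
    "max_tokens": 128,
}
req = Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])
```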
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Sc0m3r/Phi-3-medium-128k-instruct-Q4_K_M-GGUF --hf-file phi-3-medium-128k-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Sc0m3r/Phi-3-medium-128k-instruct-Q4_K_M-GGUF --hf-file phi-3-medium-128k-instruct-q4_k_m.gguf -c 2048
```
|
NikolayKozloff/Llama-3-Instruct-8B-SPPO-Iter3-Q5_0-GGUF | NikolayKozloff | 2024-06-26T15:56:25Z | 520 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:openbmb/UltraFeedback",
"base_model:UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3",
"license:apache-2.0",
"region:us"
]
| text-generation | 2024-06-26T15:55:58Z | ---
base_model: UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3
datasets:
- openbmb/UltraFeedback
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/Llama-3-Instruct-8B-SPPO-Iter3-Q5_0-GGUF
This model was converted to GGUF format from [`UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3`](https://huggingface.co/UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Llama-3-Instruct-8B-SPPO-Iter3-Q5_0-GGUF --hf-file llama-3-instruct-8b-sppo-iter3-q5_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/Llama-3-Instruct-8B-SPPO-Iter3-Q5_0-GGUF --hf-file llama-3-instruct-8b-sppo-iter3-q5_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/Llama-3-Instruct-8B-SPPO-Iter3-Q5_0-GGUF --hf-file llama-3-instruct-8b-sppo-iter3-q5_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/Llama-3-Instruct-8B-SPPO-Iter3-Q5_0-GGUF --hf-file llama-3-instruct-8b-sppo-iter3-q5_0.gguf -c 2048
```
|
NikolayKozloff/Viking-7B-Q5_K_S-GGUF | NikolayKozloff | 2024-06-29T18:57:16Z | 520 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"fi",
"en",
"da",
"sv",
"no",
"nn",
"is",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"dataset:mc4",
"base_model:LumiOpen/Viking-7B",
"license:apache-2.0",
"region:us"
]
| null | 2024-06-29T18:56:52Z | ---
base_model: LumiOpen/Viking-7B
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- mc4
language:
- fi
- en
- da
- sv
- 'no'
- nn
- is
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/Viking-7B-Q5_K_S-GGUF
This model was converted to GGUF format from [`LumiOpen/Viking-7B`](https://huggingface.co/LumiOpen/Viking-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/LumiOpen/Viking-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Viking-7B-Q5_K_S-GGUF --hf-file viking-7b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/Viking-7B-Q5_K_S-GGUF --hf-file viking-7b-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/Viking-7B-Q5_K_S-GGUF --hf-file viking-7b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/Viking-7B-Q5_K_S-GGUF --hf-file viking-7b-q5_k_s.gguf -c 2048
```
|
NikolayKozloff/Viking-7B-Q5_0-GGUF | NikolayKozloff | 2024-06-29T19:08:48Z | 520 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"fi",
"en",
"da",
"sv",
"no",
"nn",
"is",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"dataset:mc4",
"base_model:LumiOpen/Viking-7B",
"license:apache-2.0",
"region:us"
]
| null | 2024-06-29T19:08:26Z | ---
base_model: LumiOpen/Viking-7B
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- mc4
language:
- fi
- en
- da
- sv
- 'no'
- nn
- is
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/Viking-7B-Q5_0-GGUF
This model was converted to GGUF format from [`LumiOpen/Viking-7B`](https://huggingface.co/LumiOpen/Viking-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/LumiOpen/Viking-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Viking-7B-Q5_0-GGUF --hf-file viking-7b-q5_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/Viking-7B-Q5_0-GGUF --hf-file viking-7b-q5_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/Viking-7B-Q5_0-GGUF --hf-file viking-7b-q5_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/Viking-7B-Q5_0-GGUF --hf-file viking-7b-q5_0.gguf -c 2048
```
|
benjamin/gpt2-wechsel-ukrainian | benjamin | 2023-05-30T09:54:32Z | 519 | 8 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"uk",
"arxiv:2112.06598",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2022-04-29T17:35:16Z | ---
license: mit
language: uk
---
# gpt2-wechsel-ukrainian
[`gpt2`](https://huggingface.co/gpt2) transferred to Ukrainian using the method from the NAACL2022 paper [WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models](https://arxiv.org/abs/2112.065989). |
facebook/esm1v_t33_650M_UR90S_4 | facebook | 2022-11-16T14:37:54Z | 519 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"esm",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-10-17T15:51:44Z | Entry not found |
timm/convformer_b36.sail_in1k_384 | timm | 2023-05-05T05:56:31Z | 519 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2210.13452",
"license:apache-2.0",
"region:us"
]
| image-classification | 2023-05-05T05:55:19Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for convformer_b36.sail_in1k_384
A ConvFormer (a MetaFormer) image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 99.9
- GMACs: 66.7
- Activations (M): 164.7
- Image size: 384 x 384
- **Papers:**
- Metaformer baselines for vision: https://arxiv.org/abs/2210.13452
- **Original:** https://github.com/sail-sg/metaformer
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('convformer_b36.sail_in1k_384', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convformer_b36.sail_in1k_384',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 128, 96, 96])
# torch.Size([1, 256, 48, 48])
# torch.Size([1, 512, 24, 24])
# torch.Size([1, 768, 12, 12])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convformer_b36.sail_in1k_384',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 768, 12, 12) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{yu2022metaformer_baselines,
title={Metaformer baselines for vision},
author={Yu, Weihao and Si, Chenyang and Zhou, Pan and Luo, Mi and Zhou, Yichen and Feng, Jiashi and Yan, Shuicheng and Wang, Xinchao},
journal={arXiv preprint arXiv:2210.13452},
year={2022}
}
```
|
timm/beitv2_large_patch16_224.in1k_ft_in1k | timm | 2023-05-08T23:40:25Z | 519 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2208.06366",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
]
| image-classification | 2023-05-08T23:36:02Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for beitv2_large_patch16_224.in1k_ft_in1k
A BEiT-v2 image classification model. Trained on ImageNet-1k with self-supervised masked image modelling (MIM) using a VQ-KD encoder as a visual tokenizer (via OpenAI CLIP B/16 teacher). Fine-tuned on ImageNet-1k.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 304.4
- GMACs: 61.6
- Activations (M): 63.5
- Image size: 224 x 224
- **Papers:**
- BEiT v2: Masked Image Modeling with Vector-Quantized Visual Tokenizers: https://arxiv.org/abs/2208.06366
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/microsoft/unilm/tree/master/beit2
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('beitv2_large_patch16_224.in1k_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'beitv2_large_patch16_224.in1k_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 197, 1024) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{peng2022beit,
title={Beit v2: Masked image modeling with vector-quantized visual tokenizers},
author={Peng, Zhiliang and Dong, Li and Bao, Hangbo and Ye, Qixiang and Wei, Furu},
journal={arXiv preprint arXiv:2208.06366},
year={2022}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
SchuylerH/bert-multilingual-go-emtions | SchuylerH | 2023-07-26T07:13:43Z | 519 | 12 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"sentiment-analysis",
"multilingual",
"en",
"zh",
"dataset:go_emotions",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-07-25T17:38:10Z | ---
datasets:
- go_emotions
language:
- en
- zh
metrics:
- accuracy
- bertscore
pipeline_tag: text-classification
tags:
- sentiment-analysis
- multilingual
widget:
- query: Example 1
text: 'I love you, I like you.'
- query: Example 2
text: '我喜欢你。'
license: apache-2.0
---
# Multilingual (English and Chinese) GoEmotions Classification Model
This repository hosts a fine-tuned BERT model for cross-language emotion classification on the GoEmotions dataset. This model is unique in that it has been trained on a multilingual dataset comprising English and Chinese texts. It is capable of classifying text into one of 28 different emotion categories.
The 28 emotion categories, according to the GoEmotions taxonomy, are: 'admiration', 'amusement', 'anger', 'annoyance', 'approval', 'caring', 'confusion', 'curiosity', 'desire', 'disappointment', 'disapproval', 'disgust', 'embarrassment', 'excitement', 'fear', 'gratitude', 'grief', 'joy', 'love', 'nervousness', 'optimism', 'pride', 'realization', 'relief', 'remorse', 'sadness', 'surprise', and 'neutral'.
# Model Performance
The model demonstrates high performance on the validation set, with the following scores:
Accuracy: 85.95%
Precision: 91.99%
Recall: 89.56%
F1 Score: 90.17%
These results indicate the model's high accuracy and precision in predicting the correct emotion category for a given input text, regardless of the language (English or Chinese).
## Training data
The training data combines the original English GoEmotions dataset with a machine-translated Chinese version of it.
The dataset is split into two parts:
- **Labeled data**: Used for initial training. It includes both English and machine translated Chinese samples. This labeled data is further split into a training set (80%) and a validation set (20%).
- **Unlabeled data**: Used for making predictions and adding confidently predicted samples to the training data. It includes both English and machine translated Chinese samples.
## Training
The model is trained in two stages:
1. Initial training on the labeled data.
2. Predictions are made on the unlabeled data, and the most confidently predicted samples are added to the training data. The model is then retrained on this updated labeled data.
The model is trained for a total of 20 epochs (10 epochs for each stage). Precision, recall, and F1 score are logged during training.
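The following is a minimal sketch of that two-stage self-training loop, not the exact training script used for this model; the `train` and `predict_proba` callables and the 0.9 confidence threshold are illustrative assumptions standing in for the actual fine-tuning and inference code.
```python
from typing import Callable, List, Tuple

def self_train(
    labeled: List[Tuple[str, int]],             # (text, label) pairs: English + translated Chinese
    unlabeled: List[str],                       # unlabeled pool in both languages
    train: Callable[[List[Tuple[str, int]]], object],
    predict_proba: Callable[[object, str], Tuple[int, float]],
    threshold: float = 0.9,                     # assumed confidence cut-off
):
    # Stage 1: train on the labeled data.
    model = train(labeled)

    # Stage 2: pseudo-label the unlabeled pool and keep only confident predictions.
    confident = []
    for text in unlabeled:
        label, confidence = predict_proba(model, text)
        if confidence >= threshold:
            confident.append((text, label))

    # Retrain on the enlarged labeled set and return the final model.
    return train(labeled + confident)
```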
## Usage
Here is a code snippet showing how to use this model:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import pipeline
# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("SchuylerH/bert-multilingual-go-emtions")
model = AutoModelForSequenceClassification.from_pretrained("SchuylerH/bert-multilingual-go-emtions")
text = "I love you."
nlp = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
result = nlp(text)
print(result)
```
|
sdasd112132/chinese-alpaca-2-13b-gguf | sdasd112132 | 2023-09-01T13:50:11Z | 519 | 8 | null | [
"gguf",
"text-generation",
"en",
"zh",
"license:llama2",
"region:us"
]
| text-generation | 2023-08-31T11:33:14Z | ---
license: llama2
language:
- en
- zh
pipeline_tag: text-generation
---
Original model: [ziqingyang/chinese-alpaca-2-13b](https://huggingface.co/ziqingyang/chinese-alpaca-2-13b)
13b-q5.GGUF is a Q5 quantization of the unmodified original model.
ggml-model-qloraA-q4_0.gguf is a test model with a single adult ("gentleman's") novel mixed into the training data.
ggml-model-q4_0-rp-qlora-v2.gguf builds on that novel data and adds some ChatGPT-generated role-play corpora to improve its role-playing ability.
|
TheBloke/llama2-22B-daydreamer-v2-GGUF | TheBloke | 2023-09-27T12:47:51Z | 519 | 1 | transformers | [
"transformers",
"gguf",
"llama",
"base_model:nkpz/llama2-22b-daydreamer-v2",
"license:other",
"text-generation-inference",
"region:us"
]
| null | 2023-09-05T19:27:09Z | ---
license: other
model_name: Llama2 22B Daydreamer v2
base_model: nkpz/llama2-22b-daydreamer-v2
inference: false
model_creator: Nick Perez
model_type: llama
prompt_template: "Q&A Example\n\n```\nQuestion: {prompt}\nAnswer:\n```\n\n\nAn example\
\ of how it handles different roles, which I still like to use explicit instructions\
\ for:\n\n```\n### Instruction\nComplete the story in a manner that accurately reflects\
\ the scenario summary.\n\n### Scenario: \nA hot dog salesman at a baseball game\
\ is annoyed and behaving rudely because I don't want to buy a hot dog.\n\n### Begin\
\ Chat\nHot Dog Salesman:\n```\n"
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama2 22B Daydreamer v2 - GGUF
- Model creator: [Nick Perez](https://huggingface.co/nkpz)
- Original model: [Llama2 22B Daydreamer v2](https://huggingface.co/nkpz/llama2-22b-daydreamer-v2)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Nick Perez's Llama2 22B Daydreamer v2](https://huggingface.co/nkpz/llama2-22b-daydreamer-v2).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GGUF)
* [Nick Perez's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/nkpz/llama2-22b-daydreamer-v2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Custom
Q&A Example
```
Question: {prompt}
Answer:
```
An example of how it handles different roles, which I still like to use explicit instructions for:
```
### Instruction
Complete the story in a manner that accurately reflects the scenario summary.
### Scenario:
A hot dog salesman at a baseball game is annoyed and behaving rudely because I don't want to buy a hot dog.
### Begin Chat
Hot Dog Salesman:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `other`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Nick Perez's Llama2 22B Daydreamer v2](https://huggingface.co/nkpz/llama2-22b-daydreamer-v2).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [llama2-22b-daydreamer-v2.Q2_K.gguf](https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GGUF/blob/main/llama2-22b-daydreamer-v2.Q2_K.gguf) | Q2_K | 2 | 9.08 GB| 11.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama2-22b-daydreamer-v2.Q3_K_S.gguf](https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GGUF/blob/main/llama2-22b-daydreamer-v2.Q3_K_S.gguf) | Q3_K_S | 3 | 9.47 GB| 11.97 GB | very small, high quality loss |
| [llama2-22b-daydreamer-v2.Q3_K_M.gguf](https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GGUF/blob/main/llama2-22b-daydreamer-v2.Q3_K_M.gguf) | Q3_K_M | 3 | 10.61 GB| 13.11 GB | very small, high quality loss |
| [llama2-22b-daydreamer-v2.Q3_K_L.gguf](https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GGUF/blob/main/llama2-22b-daydreamer-v2.Q3_K_L.gguf) | Q3_K_L | 3 | 11.61 GB| 14.11 GB | small, substantial quality loss |
| [llama2-22b-daydreamer-v2.Q4_0.gguf](https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GGUF/blob/main/llama2-22b-daydreamer-v2.Q4_0.gguf) | Q4_0 | 4 | 12.34 GB| 14.84 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama2-22b-daydreamer-v2.Q4_K_S.gguf](https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GGUF/blob/main/llama2-22b-daydreamer-v2.Q4_K_S.gguf) | Q4_K_S | 4 | 12.42 GB| 14.92 GB | small, greater quality loss |
| [llama2-22b-daydreamer-v2.Q4_K_M.gguf](https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GGUF/blob/main/llama2-22b-daydreamer-v2.Q4_K_M.gguf) | Q4_K_M | 4 | 13.18 GB| 15.68 GB | medium, balanced quality - recommended |
| [llama2-22b-daydreamer-v2.Q5_0.gguf](https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GGUF/blob/main/llama2-22b-daydreamer-v2.Q5_0.gguf) | Q5_0 | 5 | 15.04 GB| 17.54 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama2-22b-daydreamer-v2.Q5_K_S.gguf](https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GGUF/blob/main/llama2-22b-daydreamer-v2.Q5_K_S.gguf) | Q5_K_S | 5 | 15.04 GB| 17.54 GB | large, low quality loss - recommended |
| [llama2-22b-daydreamer-v2.Q5_K_M.gguf](https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GGUF/blob/main/llama2-22b-daydreamer-v2.Q5_K_M.gguf) | Q5_K_M | 5 | 15.47 GB| 17.97 GB | large, very low quality loss - recommended |
| [llama2-22b-daydreamer-v2.Q6_K.gguf](https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GGUF/blob/main/llama2-22b-daydreamer-v2.Q6_K.gguf) | Q6_K | 6 | 17.91 GB| 20.41 GB | very large, extremely low quality loss |
| [llama2-22b-daydreamer-v2.Q8_0.gguf](https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GGUF/blob/main/llama2-22b-daydreamer-v2.Q8_0.gguf) | Q8_0 | 8 | 23.19 GB| 25.69 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/llama2-22B-daydreamer-v2-GGUF and below it, a specific filename to download, such as: llama2-22b-daydreamer-v2.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install 'huggingface-hub>=0.17.1'
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/llama2-22B-daydreamer-v2-GGUF llama2-22b-daydreamer-v2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/llama2-22B-daydreamer-v2-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/llama2-22B-daydreamer-v2-GGUF llama2-22b-daydreamer-v2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m llama2-22b-daydreamer-v2.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Question: {prompt}\nAnswer:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/llama2-22B-daydreamer-v2-GGUF", model_file="llama2-22b-daydreamer-v2.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
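The section above mentions llama-cpp-python as an alternative to ctransformers but only shows the latter. Here is a comparable illustrative sketch with llama-cpp-python; it assumes you have installed the bindings (`pip install llama-cpp-python`) and downloaded one of the GGUF files listed above.
```python
from llama_cpp import Llama

# Point model_path at the GGUF file you downloaded; n_gpu_layers=-1 offloads all layers
# to the GPU when one is available (use 0 for CPU-only inference).
llm = Llama(
    model_path="llama2-22b-daydreamer-v2.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=-1,
)

# Use the model's Q&A prompt format described in the prompt template section above.
output = llm("Question: What is the tallest mountain on Earth?\nAnswer:", max_tokens=128)
print(output["choices"][0]["text"])
```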
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Nick Perez's Llama2 22B Daydreamer v2
Continuation of https://huggingface.co/nkpz/llama2-22b-daydreamer-v1
Trained it a couple more epochs on airoboros v2 general data.
This resulted in improvements to its obsession with overexplaining itself - it's a little less instruction focused and feels more versatile. Still needs work overall, but I think it's moving in the right direction.
Still more to come. Going to try to find a way to improve its problem solving skills without ruining the output style.
<!-- original-model-card end -->
|
TheBloke/NeuralHermes-2.5-Mistral-7B-GPTQ | TheBloke | 2023-11-30T02:04:35Z | 519 | 8 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"distillation",
"dpo",
"rlhf",
"conversational",
"en",
"dataset:mlabonne/chatml_dpo_pairs",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
]
| text-generation | 2023-11-30T01:36:15Z | ---
base_model: mlabonne/NeuralHermes-2.5-Mistral-7B
datasets:
- mlabonne/chatml_dpo_pairs
inference: false
language:
- en
license: apache-2.0
model_creator: Maxime Labonne
model_name: NeuralHermes 2.5 Mistral 7B
model_type: mistral
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
tags:
- mistral
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
- dpo
- rlhf
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# NeuralHermes 2.5 Mistral 7B - GPTQ
- Model creator: [Maxime Labonne](https://huggingface.co/mlabonne)
- Original model: [NeuralHermes 2.5 Mistral 7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
<!-- description start -->
# Description
This repo contains GPTQ model files for [Maxime Labonne's NeuralHermes 2.5 Mistral 7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/NeuralHermes-2.5-Mistral-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/NeuralHermes-2.5-Mistral-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/NeuralHermes-2.5-Mistral-7B-GGUF)
* [Maxime Labonne's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/NeuralHermes-2.5-Mistral-7B-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.16 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/NeuralHermes-2.5-Mistral-7B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.57 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/NeuralHermes-2.5-Mistral-7B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.52 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/NeuralHermes-2.5-Mistral-7B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.68 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/NeuralHermes-2.5-Mistral-7B-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 8.17 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/NeuralHermes-2.5-Mistral-7B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.30 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/NeuralHermes-2.5-Mistral-7B-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/NeuralHermes-2.5-Mistral-7B-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `NeuralHermes-2.5-Mistral-7B-GPTQ`:
```shell
mkdir NeuralHermes-2.5-Mistral-7B-GPTQ
huggingface-cli download TheBloke/NeuralHermes-2.5-Mistral-7B-GPTQ --local-dir NeuralHermes-2.5-Mistral-7B-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir NeuralHermes-2.5-Mistral-7B-GPTQ
huggingface-cli download TheBloke/NeuralHermes-2.5-Mistral-7B-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir NeuralHermes-2.5-Mistral-7B-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir NeuralHermes-2.5-Mistral-7B-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/NeuralHermes-2.5-Mistral-7B-GPTQ --local-dir NeuralHermes-2.5-Mistral-7B-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/NeuralHermes-2.5-Mistral-7B-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/NeuralHermes-2.5-Mistral-7B-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/NeuralHermes-2.5-Mistral-7B-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `NeuralHermes-2.5-Mistral-7B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/NeuralHermes-2.5-Mistral-7B-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
system_message = "You are a helpful assistant."  # any ChatML system prompt can be used here
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt_template,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/NeuralHermes-2.5-Mistral-7B-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Maxime Labonne's NeuralHermes 2.5 Mistral 7B
<center><img src="https://i.imgur.com/qIhaFNM.png"></center>
# NeuralHermes 2.5 - Mistral 7B
NeuralHermes is a [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) model that has been further fine-tuned with Direct Preference Optimization (DPO) using the [mlabonne/chatml_dpo_pairs](https://huggingface.co/datasets/mlabonne/chatml_dpo_pairs) dataset. It surpasses the original model on several benchmarks (see results).
It is directly inspired by the RLHF process described by [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1)'s authors to improve performance. I used the same dataset and reformatted it to apply the ChatML template.
The code to train this model is available on [Google Colab](https://colab.research.google.com/drive/15iFBr1xWgztXvhrj5I9fBv20c7CFOPBE?usp=sharing) and [GitHub](https://github.com/mlabonne/llm-course/tree/main). It required an A100 GPU for about an hour.
### Quantized models
* GGUF: https://huggingface.co/TheBloke/NeuralHermes-2.5-Mistral-7B-GGUF
* AWQ: https://huggingface.co/TheBloke/NeuralHermes-2.5-Mistral-7B-AWQ
* EXL2 (5bpw): https://huggingface.co/IconicAI/NeuralHermes-2.5-Mistral-7B-exl2-5bpw
## Results
Teknium (author of OpenHermes-2.5-Mistral-7B) benchmarked the model ([see his tweet](https://twitter.com/Teknium1/status/1729955709377503660)).
Results are improved on every benchmark: **AGIEval** (from 43.07% to 43.62%), **GPT4All** (from 73.12% to 73.25%), and **TruthfulQA**.
### AGIEval

### GPT4All

### TruthfulQA

You can check the Weights & Biases project [here](https://wandb.ai/mlabonne/NeuralHermes-2-5-Mistral-7B/overview?workspace=user-mlabonne).
## Usage
You can run this model using [LM Studio](https://lmstudio.ai/) or any other frontend.
You can also run this model using the following code:
```python
import transformers
from transformers import AutoTokenizer

new_model = "mlabonne/NeuralHermes-2.5-Mistral-7B"  # the model repo this card describes
# Format prompt
message = [
{"role": "system", "content": "You are a helpful assistant chatbot."},
{"role": "user", "content": "What is a Large Language Model?"}
]
tokenizer = AutoTokenizer.from_pretrained(new_model)
prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False)
# Create pipeline
pipeline = transformers.pipeline(
"text-generation",
model=new_model,
tokenizer=tokenizer
)
# Generate text
sequences = pipeline(
prompt,
do_sample=True,
temperature=0.7,
top_p=0.9,
num_return_sequences=1,
max_length=200,
)
print(sequences[0]['generated_text'])
```
## Training hyperparameters
**LoRA**:
* r=16
* lora_alpha=16
* lora_dropout=0.05
* bias="none"
* task_type="CAUSAL_LM"
* target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
**Training arguments**:
* per_device_train_batch_size=4
* gradient_accumulation_steps=4
* gradient_checkpointing=True
* learning_rate=5e-5
* lr_scheduler_type="cosine"
* max_steps=200
* optim="paged_adamw_32bit"
* warmup_steps=100
**DPOTrainer**:
* beta=0.1
* max_prompt_length=1024
* max_length=1536
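
For reference, here is a minimal sketch of how these hyperparameters fit together in a `peft` + `trl` DPO run. It is illustrative only: the output path, the dataset preprocessing, and the exact `trl` API version are assumptions, not the author's actual training script (which is linked above).

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from peft import LoraConfig
from trl import DPOTrainer

model_name = "teknium/OpenHermes-2.5-Mistral-7B"  # base model being fine-tuned
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# The raw mlabonne/chatml_dpo_pairs data must first be reformatted into
# "prompt"/"chosen"/"rejected" columns; that preprocessing is omitted here.
dataset = load_dataset("mlabonne/chatml_dpo_pairs")["train"]

# LoRA configuration, matching the list above
peft_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["k_proj", "gate_proj", "v_proj", "up_proj", "q_proj", "o_proj", "down_proj"],
)

# Training arguments, matching the list above (output_dir is an assumption)
training_args = TrainingArguments(
    output_dir="./neuralhermes-dpo",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    max_steps=200,
    optim="paged_adamw_32bit",
    warmup_steps=100,
)

# DPOTrainer settings, matching the list above (trl API of the time)
dpo_trainer = DPOTrainer(
    model,
    ref_model=None,  # with a peft_config, trl derives the frozen reference model itself
    args=training_args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
    beta=0.1,
    max_prompt_length=1024,
    max_length=1536,
)
dpo_trainer.train()
```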
|
TheBloke/SAM-GGUF | TheBloke | 2023-12-23T11:02:03Z | 519 | 4 | transformers | [
"transformers",
"gguf",
"mistral",
"en",
"base_model:SuperAGI/SAM",
"license:apache-2.0",
"text-generation-inference",
"region:us"
]
| null | 2023-12-23T10:06:33Z | ---
base_model: SuperAGI/SAM
inference: false
language:
- en
license: apache-2.0
model_creator: SuperAGI
model_name: SAM
model_type: mistral
prompt_template: '[INST] {prompt} [/INST]
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# SAM - GGUF
- Model creator: [SuperAGI](https://huggingface.co/SuperAGI)
- Original model: [SAM](https://huggingface.co/SuperAGI/SAM)
<!-- description start -->
## Description
This repo contains GGUF format model files for [SuperAGI's SAM](https://huggingface.co/SuperAGI/SAM).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/SAM-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/SAM-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/SAM-GGUF)
* [SuperAGI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/SuperAGI/SAM)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Mistral
```
[INST] {prompt} [/INST]
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
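As a rough worked example for GGML_TYPE_Q4_K (assuming the super-block additionally stores one fp16 scale and one fp16 min, as in llama.cpp): a super-block covers 8 × 32 = 256 weights, stored as 256 × 4 = 1024 bits of quants plus 8 × (6 + 6) = 96 bits of block scales and mins plus 2 × 16 = 32 bits of fp16 super-block values, i.e. 1152 bits / 256 weights = 4.5 bpw, matching the figure above.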
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [sam.Q2_K.gguf](https://huggingface.co/TheBloke/SAM-GGUF/blob/main/sam.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [sam.Q3_K_S.gguf](https://huggingface.co/TheBloke/SAM-GGUF/blob/main/sam.Q3_K_S.gguf) | Q3_K_S | 3 | 3.17 GB| 5.67 GB | very small, high quality loss |
| [sam.Q3_K_M.gguf](https://huggingface.co/TheBloke/SAM-GGUF/blob/main/sam.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [sam.Q3_K_L.gguf](https://huggingface.co/TheBloke/SAM-GGUF/blob/main/sam.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [sam.Q4_0.gguf](https://huggingface.co/TheBloke/SAM-GGUF/blob/main/sam.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [sam.Q4_K_S.gguf](https://huggingface.co/TheBloke/SAM-GGUF/blob/main/sam.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [sam.Q4_K_M.gguf](https://huggingface.co/TheBloke/SAM-GGUF/blob/main/sam.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [sam.Q5_0.gguf](https://huggingface.co/TheBloke/SAM-GGUF/blob/main/sam.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [sam.Q5_K_S.gguf](https://huggingface.co/TheBloke/SAM-GGUF/blob/main/sam.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [sam.Q5_K_M.gguf](https://huggingface.co/TheBloke/SAM-GGUF/blob/main/sam.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [sam.Q6_K.gguf](https://huggingface.co/TheBloke/SAM-GGUF/blob/main/sam.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [sam.Q8_0.gguf](https://huggingface.co/TheBloke/SAM-GGUF/blob/main/sam.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/SAM-GGUF and below it, a specific filename to download, such as: sam.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/SAM-GGUF sam.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/SAM-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/SAM-GGUF sam.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m sam.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST] {prompt} [/INST]"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./sam.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"[INST] {prompt} [/INST]", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./sam.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
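As a minimal illustration of the llama-cpp-python route (the import path and parameter values are assumptions for the LangChain version current at the time of writing, and should be adapted to your hardware):

```python
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./sam.Q4_K_M.gguf",  # file downloaded as described above
    n_gpu_layers=35,                 # set to 0 if no GPU acceleration is available
    n_ctx=4096,
    temperature=0.3,                 # temperature suggested in the original model card below
)

# The SAM prompt format wraps the instruction in [INST] ... [/INST]
print(llm("[INST] List three reasoning steps for planning a trip. [/INST]"))
```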
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: SuperAGI's SAM
# Model Card
SAM (Small Agentic Model), a 7B model that demonstrates impressive reasoning abilities despite its smaller size. SAM-7B has outperformed existing SoTA models on various reasoning benchmarks, including GSM8k and ARC-C.
For full details of this model please read our [release blog post](https://superagi.com/introducing-sam-small-agentic-model/).
# Key Contributions
- SAM-7B outperforms GPT 3.5, Orca, and several other 70B models on multiple reasoning benchmarks, including ARC-C and GSM8k.
- Interestingly, despite being trained on a 97% smaller dataset, SAM-7B surpasses Orca-13B on GSM8k.
- All responses in our fine-tuning dataset are generated by open-source models without any assistance from state-of-the-art models like GPT-3.5 or GPT-4.
## Training
- Trained by: SuperAGI Team
- Hardware: NVIDIA 6 x H100 SxM (80GB)
- Model used: Mistral 7B
- Duration of finetuning: 4 hours
- Number of epochs: 1
- Batch size: 16
- Learning Rate: 2e-5
- Warmup Ratio: 0.1
- Optimizer: AdamW
- Scheduler: Cosine
## Example Prompt
The template used to build a prompt for the Instruct model is defined as follows:
```
<s> [INST] Instruction [/INST] Model answer</s> [INST] Follow-up instruction [/INST]
```
Note that `<s>` and `</s>` are special tokens for beginning of string (BOS) and end of string (EOS) while [INST] and [/INST] are regular strings.
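As an illustration only, a multi-turn prompt in this format could be assembled like this (the turns themselves are made up; the leading `<s>` is normally added by the tokenizer when encoding):

```python
# Hypothetical example turns, formatted per the template above
turns = [("What is 12 * 7?", "12 * 7 = 84.")]
follow_up = "Now divide that result by 4."

prompt = ""
for user_msg, model_answer in turns:
    prompt += f"[INST] {user_msg} [/INST] {model_answer}</s> "
prompt += f"[INST] {follow_up} [/INST]"
print(prompt)
```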
## Evaluation
These benchmarks show that our model has improved reasoning compared to Orca 2-7B, Orca 2-13B, and GPT-3.5.
Despite being smaller in size, it shows better multi-hop reasoning, as illustrated below:
<img src = "https://superagi.com/wp-content/uploads/2023/12/image-932.png" alt="Reasoning Benchmark Performance" width="700">
Note: Temperature=0.3 is the suggested setting for optimal performance
## Run the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "SuperAGI/SAM"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
text = "Can elephants fly?"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
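To apply the suggested temperature, sampling must be enabled explicitly; a minimal variation of the snippet above (settings other than the temperature are illustrative):

```python
# Generate with the suggested temperature of 0.3
outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.3,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```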
## Limitations
SAM is a demonstration that better reasoning can be induced using less, but higher-quality, data generated with open-source LLMs.
The model is not suited to conversations and simple Q&A; it performs better at task breakdown and reasoning.
It does not have any moderation mechanisms. Therefore, the model is not suitable for production usage as it doesn't have guardrails for toxicity, societal bias, and language limitations. We would love to collaborate with the community to build safer and better models.
## The SuperAGI AI Team
Anmol Gautam, Arkajit Datta, Rajat Chawla, Ayush Vatsal, Sukrit Chatterjee, Adarsh Jha, Abhijeet Sinha, Rakesh Krishna, Adarsh Deep, Ishaan Bhola, Mukunda NS, Nishant Gaurav.
<!-- original-model-card end -->
|
tmberooney/medllama-merged | tmberooney | 2024-01-08T14:05:08Z | 519 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:sid6i7/patient-doctor",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-01-08T08:02:36Z | ---
datasets:
- sid6i7/patient-doctor
library_name: transformers
license: mit
---
Model Card for "medllama"
---------------------------
**Model Name:** medllama
**Library Name:** peft (Python library for Efficient Tuning)
**Base Model:** TinyLlama 1.1B (a pretrained language model with 1.1 billion parameters, fine-tuned for chat applications)
**License:** Apache-2.0 License
### Usage
For usage, use the following code block (a GPU is recommended):
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("tmberooney/medllama-merged")
model = AutoModelForCausalLM.from_pretrained("tmberooney/medllama-merged")
```
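Once loaded, a minimal generation example looks like the following (the prompt and sampling settings are illustrative, not recommendations from the model authors):

```python
# Illustrative usage: ask a medical question and generate a reply
prompt = "Patient: I have had a mild headache and a low fever for two days. What should I do?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```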
### Intended Use
---------------
The `medllama` model is a fine-tuned version of the base model specifically adapted for medical conversations between patients and doctors. This model can be used in various healthcare settings to assist professionals during their interactions with patients, providing relevant suggestions or answering questions related to health conditions, treatments, medications, and other medical topics. The goal is to improve communication efficiency and ensure accurate information exchange while maintaining privacy and confidentiality standards.
### Training Data
--------------
This model was trained using data from the `sid6i7/patient-doctor` dataset, which contains deidentified medical dialogues between patients and physicians covering diverse medical domains like internal medicine, pediatrics, neurology, psychiatry, and more. These conversations are designed to simulate real-life clinical scenarios, allowing the model to understand context, generate responses that reflect appropriate levels of empathy, and provide reliable medical information based on user queries.
### Evaluation Results
--------------------
Evaluations were conducted on several benchmark datasets tailored towards measuring performance in medical dialogue systems. Metrics such as perplexity, BLEU score, ROUGE score, and F1 score have been reported to assess the quality and relevance of generated responses compared to reference answers. Detailed evaluation results will be provided separately upon request.
### Ethical Considerations
-------------------------
To maintain ethical guidelines when deploying this model, it's crucial to consider the following aspects:
* **Data Privacy**: Ensure patient data remains anonymous and protected throughout all stages of development and deployment. Obtain informed consent before utilizing any identifiable personal health information.
* **Medical Accuracy**: Regularly review and update the model based on new research findings and evidence-based practices. Always encourage users to consult licensed healthcare providers regarding specific concerns or diagnoses.
* **Bias Mitigation**: Continuously monitor and address potential biases within training data and model outputs to avoid discrimination against certain demographics. Strive for inclusivity by incorporating diverse sources of information during development.
* **User Awareness**: Inform end-users about limitations, intended uses, and possible risks associated with interacting with an AI system rather than a human expert. Clearly outline expectations for accuracy, response times, and available features.
For further details on these guidelines, please refer to our project documentation. |
PassionFriend/5CqT4RXgB161GtfYx5cNupxrKLxTiHrMRiwHdtow3THiK2xb_vgg | PassionFriend | 2024-03-01T06:48:43Z | 519 | 0 | keras | [
"keras",
"region:us"
]
| null | 2024-02-17T16:25:35Z | Entry not found |
openbmb/MiniCPM-1B-sft-bf16 | openbmb | 2024-04-11T03:57:48Z | 519 | 8 | transformers | [
"transformers",
"pytorch",
"text-generation",
"MiniCPM",
"ModelBest",
"THUNLP",
"custom_code",
"en",
"zh",
"autotrain_compatible",
"region:us"
]
| text-generation | 2024-04-10T13:54:34Z | ---
language:
- en
- zh
tags:
- MiniCPM
- ModelBest
- THUNLP
---
<div align="center">
<h1>
MiniCPM
</h1>
</div>
<p align="center">
<a href="https://shengdinghu.notion.site/MiniCPM-c805a17c5c8046398914e47f0542095a?pvs=4" target="_blank">MiniCPM 技术报告</a><a href="https://shengdinghu.notion.site/MiniCPM-Unveiling-the-Potential-of-End-side-Large-Language-Models-d4d3a8c426424654a4e80e42a711cb20?pvs=4" target="_blank"> Technical Report</a> |
<a href="https://github.com/OpenBMB/OmniLMM/" target="_blank">OmniLMM 多模态模型 Multi-modal Model</a> |
<a href="https://luca.cn/" target="_blank">CPM-C 千亿模型试用 ~100B Model Trial </a>
</p>
MiniCPM 是面壁与清华大学自然语言处理实验室共同开源的系列端侧语言大模型,主体语言模型 MiniCPM-1B 仅有 12亿(1.2B)的非词嵌入参数量。
- 经过 SFT 后,MiniCPM 在公开综合性评测集上,MiniCPM 与 Mistral-7B相近(中文、数学、代码能力更优),整体性能超越 Llama2-13B、MPT-30B、Falcon-40B 等模型。
- 经过 DPO 后,MiniCPM 在当前最接近用户体感的评测集 MTBench上,MiniCPM-2B 也超越了 Llama2-70B-Chat、Vicuna-33B、Mistral-7B-Instruct-v0.1、Zephyr-7B-alpha 等众多代表性开源大模型。
- 以 MiniCPM-2B 为基础构建端侧多模态大模型 MiniCPM-V,整体性能在同规模模型中实现最佳,超越基于 Phi-2 构建的现有多模态大模型,在部分评测集上达到与 9.6B Qwen-VL-Chat 相当甚至更好的性能。
- 经过 Int4 量化后,MiniCPM 可在手机上进行部署推理,流式输出速度略高于人类说话速度。MiniCPM-V 也首次跑通了多模态大模型在手机上的部署。
- 一张1080/2080可高效参数微调,一张3090/4090可全参数微调,一台机器可持续训练 MiniCPM,二次开发成本较低。
我们将完全开源MiniCPM-2B的模型参数供学术研究和有限商用,以及训练过程中的所有Checkpoint和大部分非专有数据供模型机理研究。
- 基于MiniCPM-2B的指令微调与人类偏好对**MiniCPM-2B-SFT/DPO。**
- 基于MiniCPM-2B的多模态模型**MiniCPM-V**,能力超越基于Phi-2的同参数级别多模态模型**。**
- MiniCPM-2B-SFT/DPO的Int4量化版**MiniCPM-2B-SFT/DPO-Int4。**
- 基于MLC-LLM、LLMFarm开发的MiniCPM手机端程序,**文本及多模态模型均可在手机端进行推理。**
MiniCPM is an end-side LLM developed by ModelBest Inc. and TsinghuaNLP, with only 1.2B parameters excluding embeddings.
- MiniCPM has very close performance compared with Mistral-7B on open-sourced general benchmarks with better ability on Chinese, Mathmetics and Coding after SFT. The overall performance exceeds Llama2-13B, MPT-30B, Falcon-40B, etc.
- After DPO, MiniCPM outperforms Llama2-70B-Chat, Vicuna-33B, Mistral-7B-Instruct-v0.1, Zephyr-7B-alpha, etc. on MTBench.
- MiniCPM-V, based on MiniCPM-2B, achieves the best overall performance among multimodel models of the same scale, surpassing existing multimodal large models built on Phi-2 and achieving performance comparable to or even better than 9.6B Qwen-VL-Chat on some tasks.
- MiniCPM can be deployed and infer on smartphones, and the speed of streaming output is relatively higher than the verbal speed of human. MiniCPM-V is the first multi-modal models that can be deployed on smartphones.
- The cost of developing based on MiniCPM is low. Parameter efficient finetuning can be conducted with a single 1080/2080 GPU and full parameter finetuning can be conducted with a 3090/4090 GPU.
We release all model parameters for research and limited commercial use. We also release all the checkpoint during training and most public training data for research on model mechanism.
- SFT and DPO version based on MiniCPM-2B and human preference: **MiniCPM-2B-SFT/DPO**
- The multi-modal model **MiniCPM-V** based on MiniCPM-2B, which outperforms models with similar size, i.e., Phi-2
- The INT4 quantized version **MiniCPM-2B-SFT/DPO-Int4** based on MiniCPM-2B-SFT/DPO
- Mobile phone application based on MLC-LLM and LLMFarm. Both language model and multimodel model can conduct inference on smartphones.
### 评测结果 Evaluation Results
详细的评测结果位于[github仓库](https://github.com/OpenBMB/MiniCPM?tab=readme-ov-file#%E8%AF%84%E6%B5%8B%E7%BB%93%E6%9E%9C)
Detailed evaluation results are in [github repo](https://github.com/OpenBMB/MiniCPM/blob/main/README-en.md#evaluation-results)
注意:我们发现使用Huggingface生成质量略差于vLLM,因此推荐使用vLLM进行测试。我们正在排查原因。
Notice: We discovered that the quality of Huggingface generation is slightly lower than vLLM, thus benchmarking using vLLM is recommended.
We are investigating the cause now.
### 局限性 Limitations
- 受限于模型规模,模型可能出现幻觉性问题。其中由于DPO模型生成的回复内容更长,更容易出现幻觉。我们也将持续进行MiniCPM模型的迭代改进;
- 为了保证在学术研究用途上模型的通用性,我们未对模型进行任何身份认同训练。同时由于我们用ShareGPT开源语料作为部分训练数据,模型可能会输出类似GPT系列模型的身份认同信息;
- 受限于模型规模,模型的输出受到提示词(prompt)的影响较大,可能多次尝试产生不一致的结果;
- 受限于模型容量,模型的知识记忆较不准确,后续我们将结合RAG方法来增强模型的知识记忆能力。
- Due to limitations in model size, the model may experience hallucinatory issues. As DPO model tend to generate longer response, hallucinations are more likely to occur. We will also continue to iterate and improve the MiniCPM model.
- To ensure the universality of the model for academic research purposes, we did not conduct any identity training on the model. Meanwhile, as we use ShareGPT open-source corpus as part of the training data, the model may output identity information similar to the GPT series models.
- Due to the limitation of model size, the output of the model is greatly influenced by prompt words, which may result in inconsistent results from multiple attempts.
- Due to limited model capacity, the model's knowledge memory is not accurate. In the future, we will combine the RAG method to enhance the model's knowledge memory ability.
## 模型下载 Download
| HuggingFace | ModelScope | WiseModel |
|-------------|------------|-----------|
|[sft-bf16](https://huggingface.co/openbmb/MiniCPM-2B-sft-bf16)|[sft-bf16](https://modelscope.cn/models/OpenBMB/miniCPM-bf16)|[sft-bf16](https://wisemodel.cn/models/OpenBMB/miniCPM-bf16)
|[sft-fp32](https://huggingface.co/openbmb/MiniCPM-2B-sft-fp32)|[sft-fp32](https://modelscope.cn/models/OpenBMB/MiniCPM-2B-sft-fp32)|[sft-fp32](https://wisemodel.cn/models/OpenBMB/miniCPM-dpo-fp32)
|[dpo-bf16](https://huggingface.co/openbmb/MiniCPM-2B-dpo-bf16)|[dpo-bf16](https://modelscope.cn/models/OpenBMB/MiniCPM-2B-dpo-bf16/summary)|[dpo-bf16](https://wisemodel.cn/models/OpenBMB/MiniCPM-2B-dpo-bf16)
|[dpo-fp16](https://huggingface.co/openbmb/MiniCPM-2B-dpo-fp16)|[dpo-fp16](https://modelscope.cn/models/OpenBMB/MiniCPM-2B-dpo-fp16/)|[dpo-fp16](https://wisemodel.cn/models/OpenBMB/MiniCPM-2B-dpo-fp16)
|[dpo-fp32](https://huggingface.co/openbmb/MiniCPM-2B-dpo-fp32)|[dpo-fp32](https://modelscope.cn/models/OpenBMB/MiniCPM-2B-dpo-fp32)|[dpo-fp32](https://wisemodel.cn/models/OpenBMB/miniCPM-dpo-fp32)
## 模型使用 Usage
* 安装`transformers>=4.36.0`以及`accelerate`后,运行以下代码
* 注意:需要在`from_pretrained`中明确指明模型的数据类型,否则会引起较大计算误差
* Run the following code after install `transformers>=4.36.0` and `accelerate`
* Warning: It is necessary to specify the data type of the model clearly in 'from_pretrained', otherwise large calculation errors will be caused
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
torch.manual_seed(0)
path = 'openbmb/MiniCPM-2B-sft-bf16'
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16, device_map='cuda', trust_remote_code=True)
responds, history = model.chat(tokenizer, "山东省最高的山是哪座山, 它比黄山高还是矮?差距多少?", temperature=0.8, top_p=0.8)
print(responds)
```
* 期望输出 Expected Output
```shell
山东省最高的山是泰山,海拔1545米。
相对于黄山(海拔1864米),泰山海拔较低,相差约319米。
```
## 开源协议 LICENSE
#### 模型协议 Model LICENSE
* 本仓库中代码依照 [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) 协议开源
* MiniCPM 模型权重的使用则需要遵循 [“通用模型许可协议-来源说明-宣传限制-商业授权”](https://github.com/OpenBMB/General-Model-License/blob/main/%E9%80%9A%E7%94%A8%E6%A8%A1%E5%9E%8B%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE-%E6%9D%A5%E6%BA%90%E8%AF%B4%E6%98%8E-%E5%AE%A3%E4%BC%A0%E9%99%90%E5%88%B6-%E5%95%86%E4%B8%9A%E6%8E%88%E6%9D%83.md)。
* MiniCPM 模型权重对学术研究完全开放。
* 如需将模型用于商业用途,请联系[email protected]来获取书面授权,在登记后亦允许免费商业使用。
* This repository is released under the [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) License.
* The usage of MiniCPM model weights must strictly follow [the General Model License (GML)](https://github.com/OpenBMB/General-Model-License/blob/main/%E9%80%9A%E7%94%A8%E6%A8%A1%E5%9E%8B%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE-%E6%9D%A5%E6%BA%90%E8%AF%B4%E6%98%8E-%E5%AE%A3%E4%BC%A0%E9%99%90%E5%88%B6-%E5%95%86%E4%B8%9A%E6%8E%88%E6%9D%83.md).
* The models and weights of MiniCPM are completely free for academic research.
* If you intend to utilize the model for commercial purposes, please reach out to [email protected] to obtain the certificate of authorization.
#### 声明 Statement
* 作为一个语言模型,MiniCPM 通过学习大量的文本来生成内容,但它无法理解、表达个人观点或价值判断,它所输出的任何内容都不代表模型开发者的观点和立场。
* 因此用户在使用 MiniCPM 生成的内容时,应自行负责对其进行评估和验证。
* 如果由于使用 MinCPM 开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。
* As a language model, MiniCPM generates content by learning from a vast amount of text.
* However, it does not possess the ability to comprehend or express personal opinions or value judgments.
* Any content generated by MiniCPM does not represent the viewpoints or positions of the model developers.
* Therefore, when using content generated by MiniCPM, users should take full responsibility for evaluating and verifying it on their own.
<p id="8"></p>
## 工作引用 Citation
* 如果觉得MiniCPM有助于您的工作,请考虑引用下列[技术报告](https://shengdinghu.notion.site/MiniCPM-c805a17c5c8046398914e47f0542095a?pvs=4)
* Please cite our [techinical report](https://shengdinghu.notion.site/MiniCPM-Unveiling-the-Potential-of-End-side-Large-Language-Models-d4d3a8c426424654a4e80e42a711cb20?pvs=4) if you find our work valuable.
```
@inproceedings{minicpm2024,
title={MiniCPM:Unveiling the Potential of End-side Large Language Models},
booktitle={OpenBMB Blog},
year={2024}
}
```
|
microsoft/rho-math-7b-v0.1 | microsoft | 2024-04-18T07:25:03Z | 519 | 16 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"nlp",
"math",
"en",
"arxiv:2404.07965",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-11T16:55:14Z | ---
license: mit
tags:
- nlp
- math
language:
- en
pipeline_tag: text-generation
---
<h1 align="center">
Rho-1: Not All Tokens Are What You Need
</h1>
<p align="center">
<a href="https://arxiv.org/abs/2404.07965"><b>[📜 Arxiv]</b></a> •
<a href="https://huggingface.co/papers/2404.07965"><b>[💬 HF Paper]</b></a> •
<a href="https://huggingface.co/microsoft/rho-math-1b-v0.1"><b>[🤗 Models]</b></a> •
<a href="https://github.com/microsoft/rho"><b>[🐱 GitHub]</b></a>
</p>
<p align="center">
<img src="https://github.com/microsoft/rho/blob/main/docs/static/images/acc_vs_tokens_1b_7b.png?raw=true" width="1000">
<br>
<em>Figure 1: Rho-1 is pre-trained with Selective Language Modeling (SLM). SLM improves average few-shot accuracy on GSM8k and MATH by over 16%, achieving the baseline performance 5-10x faster.</em>
</p>
## 🔥 News
- [2024/04/12] 🔥🔥🔥 Rho-Math-v0.1 models released at 🤗 HuggingFace!
- [Rho-Math-1B](https://huggingface.co/microsoft/rho-math-1b-v0.1) and [Rho-Math-7B](https://huggingface.co/microsoft/rho-math-7b-v0.1) achieve 15.6% and 31.0% few-shot accuracy on the MATH dataset, respectively, matching DeepSeekMath with only 3% of the pretraining tokens.
- [Rho-Math-1B-Interpreter](https://huggingface.co/microsoft/rho-math-1b-interpreter-v0.1) is the first 1B LLM that achieves over 40% accuracy on MATH.
- [Rho-Math-7B-Interpreter](https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1) achieves 52% on MATH dataset, using only 69k samples for fine-tuning.
- [2024/04/11] Rho-1 paper and repo released.
## 💡 Introduction
Rho-1 base models employ Selective Language Modeling (SLM) for pretraining, which selectively trains on clean and useful tokens that are aligned with the desired distribution.
### Selective Language Modeling (SLM)
<p align="center">
<img src="https://github.com/microsoft/rho/blob/main/docs/static/images/example.png?raw=true" width="1000">
<br>
<em>Figure 2:
<b>Upper:</b> Even an extensively filtered pretraining corpus contains token-level noise.
<b>Left:</b> Previous Causal Language Modeling (CLM) trains on all tokens.
<b>Right:</b> Our proposed Selective Language Modeling (SLM) selectively applies loss on those useful and clean tokens.</em>
</p>
<p align="center">
<img src="https://github.com/microsoft/rho/blob/main/docs/static/images/pipeline.png?raw=true" width="1000">
<br>
<em>Figure 3: <b>The pipeline of Selective Language Modeling.</b>
SLM optimizes language model performance by concentrating on valuable, clean tokens during pre-training.
It involves three steps:
(Step 1) Initially, train a reference model on high-quality data.
(Step 2) Then, score each token's loss in a corpus using the reference model.
(Step 3) Finally, train the language model selectively on tokens that show higher excess loss compared to the reference loss.</em>
</p>
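For intuition, the selective objective can be sketched in a few lines of PyTorch. This is not the authors' implementation; the `keep_ratio`, the selection rule, and the omission of padding handling are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def slm_loss(student_logits, ref_token_loss, labels, keep_ratio=0.6):
    """Selective LM loss: train only on tokens with high excess loss vs. a reference model.

    student_logits: (batch, seq, vocab) logits of the model being trained
    ref_token_loss: (batch, seq) per-token loss pre-computed with the reference model
    labels:         (batch, seq) next-token targets (padding handling omitted)
    """
    # Per-token cross-entropy of the model being trained
    token_loss = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        labels.view(-1),
        reduction="none",
    ).view_as(ref_token_loss)

    # Excess loss relative to the reference model; selection is done without gradients
    excess = token_loss.detach() - ref_token_loss

    # Keep only the top keep_ratio fraction of tokens by excess loss
    k = max(1, int(excess.numel() * keep_ratio))
    threshold = torch.topk(excess.flatten(), k).values.min()
    mask = (excess >= threshold).float()

    # Average the training loss over the selected tokens only
    return (token_loss * mask).sum() / mask.sum().clamp(min=1.0)
```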
<!-- results: -->
### Evaluation Results
Base models (Few-shot CoT):
| **Model** | **Size** | **Data** | **Uniq. Token** | **Train Token** | **GSM8K** | **MATH** | **MMLU STEM** | **SAT** |
|:-----------------:|:--------:|:--------:|:---------------:|:---------------:|:---------:|:--------:|:-------------:|:--------:|
| 1-2B Base Models | | | | | | | | |
| Qwen1.5 | 1.8B | - | - | - | 36.1 | 6.8 | 31.3 | 40.6 |
| Gemma | 2.0B | - | - | - | 18.8 | 11.4 | **34.4** | 50.0 |
| DeepSeekMath | 1.3B | - | 120B | 150B | 23.8 | 13.6 | 33.1 | **56.3** |
| [Rho-Math-1B-v0.1](https://huggingface.co/microsoft/rho-math-1b-v0.1) | 1.1B | OWM | 14B | 30B | **36.2** | **15.6** | 23.3 | 28.1 |
| >= 7B Base Models | | | | | | | | |
| Mistral | 7B | | - | - | 41.2 | 11.6 | 49.5 | 59.4 |
| Minerva | 540B | - | 39B | 26B | 58.8 | 33.6 | **63.9** | - |
| LLemma | 34B | PPile | 55B | 50B | 54.2 | 23.0 | 54.7 | 68.8 |
| InternLM2-Math | 20B | - | 31B | 125B | 65.4 | 30.0 | 53.1 | 71.9 |
| DeepSeekMath | 7B | - | 120B | 500B | 64.1 | **34.2** | 56.4 | **84.4** |
| [Rho-Math-7B-v0.1](https://huggingface.co/microsoft/rho-math-7b-v0.1) | 7B | OWM | 14B | 10.5B | **66.9** | 31.0 | 54.6 | **84.4** |
[Tool-integrated reasoning](https://github.com/microsoft/ToRA) (Code Interpreter):
| **Model** | **Size** | **SFT Data** | **GSM8k** | **MATH** | **SVAMP** | **ASDiv** | **MAWPS** | **TabMWP** | **GSM-Hard** | **AVG** |
|------------------------------|----------|--------------|-----------|----------|-----------|-----------|-----------|------------|--------------|----------|
| gpt4-early (pal) | - | - | 94.2 | 51.8 | 94.8 | 92.6 | 97.7 | 95.9 | 77.6 | 86.4 |
| gpt-4-turbo-2024-04-09 (cot) | - | - | - | 73.4 | - | - | - | - | - | - |
| Open-Source Small Models | | | | | | | | | | |
| MAmmoTH | 70B | MI-260k | 76.9 | 41.8 | 82.4 | - | - | - | - | - |
| ToRA | 7B | ToRA-69k | 68.8 | 40.1 | 68.2 | 73.9 | 88.8 | 42.4 | 54.6 | 62.4 |
| ToRA | 70B | ToRA-69k | 84.3 | 49.7 | **82.7** | 86.8 | 93.8 | 74.0 | **67.2** | **76.9** |
| DeepSeekMath | 7B | ToRA-69k | 79.8 | **52.0** | 80.1 | **87.1** | 93.8 | **85.8** | 63.1 | 77.4 |
| [Rho-Math-1B-Interpreter-v0.1](https://huggingface.co/microsoft/rho-math-1b-interpreter-v0.1) | 1B | ToRA-69k | 59.4 | 40.6 | 60.7 | 74.2 | 88.6 | 26.7 | 48.1 | 56.9 |
| [Rho-Math-7B-Interpreter-v0.1](https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1) | 7B | ToRA-69k | 81.3 | **51.8** | 80.8 | 85.5 | **94.5** | 70.1 | 63.1 | 75.3 |
## 🚀 Quick Start
### Evaluation
```sh
git clone [email protected]:microsoft/rho.git
cd rho-1/math-evaluation-harness
```
Base model few-shot evaluation:
```sh
bash scripts/run_eval.sh cot microsoft/rho-math-7b-v0.1
```
SFT model (code-interpreter) evaluation:
```sh
bash scripts/run_eval.sh tora microsoft/rho-math-7b-interpreter-v0.1
```
Our reproduced outputs are provided in `rho-1/outputs.zip`.
## ☕️ Citation
If you find this repository helpful, please consider citing our paper:
```
@misc{lin2024rho1,
title={Rho-1: Not All Tokens Are What You Need},
author={Zhenghao Lin and Zhibin Gou and Yeyun Gong and Xiao Liu and Yelong Shen and Ruochen Xu and Chen Lin and Yujiu Yang and Jian Jiao and Nan Duan and Weizhu Chen},
year={2024},
eprint={2404.07965},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
mmnga/karakuri-lm-8x7b-chat-v0.1-gguf | mmnga | 2024-05-08T19:14:20Z | 519 | 2 | null | [
"gguf",
"en",
"ja",
"dataset:TFMC/imatrix-dataset-for-japanese-llm",
"license:apache-2.0",
"region:us"
]
| null | 2024-05-08T12:31:11Z |
---
license: apache-2.0
language:
- en
- ja
datasets:
- TFMC/imatrix-dataset-for-japanese-llm
---
# karakuri-lm-8x7b-chat-v0.1-gguf
This is a GGUF-format conversion of [karakuri-lm-8x7b-chat-v0.1, published by karakuri-ai](https://huggingface.co/karakuri-ai/karakuri-lm-8x7b-chat-v0.1).
The imatrix data was created using [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm).
## Other models
[mmnga/karakuri-lm-8x7b-chat-v0.1-gguf](https://huggingface.co/mmnga/karakuri-lm-8x7b-chat-v0.1-gguf)
[mmnga/karakuri-lm-70b-chat-v0.1-gguf](https://huggingface.co/mmnga/karakuri-lm-70b-chat-v0.1-gguf)
## Usage
```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
./main -m 'karakuri-lm-8x7b-chat-v0.1-Q4_0.gguf' -n 128 -p 'こんにちわ'
```
|
bartowski/granite-34b-code-instruct-GGUF | bartowski | 2024-05-20T14:37:45Z | 519 | 8 | transformers | [
"transformers",
"gguf",
"code",
"granite",
"text-generation",
"dataset:bigcode/commitpackft",
"dataset:TIGER-Lab/MathInstruct",
"dataset:meta-math/MetaMathQA",
"dataset:glaiveai/glaive-code-assistant-v3",
"dataset:glaive-function-calling-v2",
"dataset:bugdaryan/sql-create-context-instruction",
"dataset:garage-bAInd/Open-Platypus",
"dataset:nvidia/HelpSteer",
"base_model:ibm-granite/granite-34b-code-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-20T11:34:50Z | ---
pipeline_tag: text-generation
base_model: ibm-granite/granite-34b-code-base
inference: true
license: apache-2.0
datasets:
- bigcode/commitpackft
- TIGER-Lab/MathInstruct
- meta-math/MetaMathQA
- glaiveai/glaive-code-assistant-v3
- glaive-function-calling-v2
- bugdaryan/sql-create-context-instruction
- garage-bAInd/Open-Platypus
- nvidia/HelpSteer
metrics:
- code_eval
library_name: transformers
tags:
- code
- granite
model-index:
- name: granite-34b-code-instruct
results:
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Python)
metrics:
- name: pass@1
type: pass@1
value: 62.2
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 56.7
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Java)
metrics:
- name: pass@1
type: pass@1
value: 62.8
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Go)
metrics:
- name: pass@1
type: pass@1
value: 47.6
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(C++)
metrics:
- name: pass@1
type: pass@1
value: 57.9
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Rust)
metrics:
- name: pass@1
type: pass@1
value: 41.5
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Python)
metrics:
- name: pass@1
type: pass@1
value: 53.0
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 45.1
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Java)
metrics:
- name: pass@1
type: pass@1
value: 50.6
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Go)
metrics:
- name: pass@1
type: pass@1
value: 36.0
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(C++)
metrics:
- name: pass@1
type: pass@1
value: 42.7
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Rust)
metrics:
- name: pass@1
type: pass@1
value: 23.8
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Python)
metrics:
- name: pass@1
type: pass@1
value: 54.9
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 47.6
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Java)
metrics:
- name: pass@1
type: pass@1
value: 55.5
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Go)
metrics:
- name: pass@1
type: pass@1
value: 51.2
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(C++)
metrics:
- name: pass@1
type: pass@1
value: 47.0
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Rust)
metrics:
- name: pass@1
type: pass@1
value: 45.1
veriefied: false
quantized_by: bartowski
---
## Llamacpp imatrix Quantizations of granite-34b-code-instruct
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2940">b2940</a> for quantization.
Original model: https://huggingface.co/ibm-granite/granite-34b-code-instruct
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/b6ac44691e994344625687afe3263b3a)
## Prompt format
```
System:
{system_prompt}
Question:
{prompt}
Answer:
```
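For example, a prompt in this format can be composed as a plain string before being passed to whichever runtime you use (the system prompt and question below are only placeholders):

```python
# Illustrative only: compose a prompt in the format shown above
system_prompt = "You are a helpful coding assistant."
question = "Write a Python function that checks whether a string is a palindrome."
prompt = f"System:\n{system_prompt}\nQuestion:\n{question}\nAnswer:\n"
```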
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [granite-34b-code-instruct-Q8_0.gguf](https://huggingface.co/bartowski/granite-34b-code-instruct-GGUF/blob/main/granite-34b-code-instruct-Q8_0.gguf) | Q8_0 | 35.99GB | Extremely high quality, generally unneeded but max available quant. |
| [granite-34b-code-instruct-Q6_K.gguf](https://huggingface.co/bartowski/granite-34b-code-instruct-GGUF/blob/main/granite-34b-code-instruct-Q6_K.gguf) | Q6_K | 27.83GB | Very high quality, near perfect, *recommended*. |
| [granite-34b-code-instruct-Q5_K_M.gguf](https://huggingface.co/bartowski/granite-34b-code-instruct-GGUF/blob/main/granite-34b-code-instruct-Q5_K_M.gguf) | Q5_K_M | 24.74GB | High quality, *recommended*. |
| [granite-34b-code-instruct-Q5_K_S.gguf](https://huggingface.co/bartowski/granite-34b-code-instruct-GGUF/blob/main/granite-34b-code-instruct-Q5_K_S.gguf) | Q5_K_S | 23.40GB | High quality, *recommended*. |
| [granite-34b-code-instruct-Q4_K_M.gguf](https://huggingface.co/bartowski/granite-34b-code-instruct-GGUF/blob/main/granite-34b-code-instruct-Q4_K_M.gguf) | Q4_K_M | 21.38GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [granite-34b-code-instruct-Q4_K_S.gguf](https://huggingface.co/bartowski/granite-34b-code-instruct-GGUF/blob/main/granite-34b-code-instruct-Q4_K_S.gguf) | Q4_K_S | 19.44GB | Slightly lower quality with more space savings, *recommended*. |
| [granite-34b-code-instruct-IQ4_NL.gguf](https://huggingface.co/bartowski/granite-34b-code-instruct-GGUF/blob/main/granite-34b-code-instruct-IQ4_NL.gguf) | IQ4_NL | 19.23GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [granite-34b-code-instruct-IQ4_XS.gguf](https://huggingface.co/bartowski/granite-34b-code-instruct-GGUF/blob/main/granite-34b-code-instruct-IQ4_XS.gguf) | IQ4_XS | 18.19GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [granite-34b-code-instruct-Q3_K_L.gguf](https://huggingface.co/bartowski/granite-34b-code-instruct-GGUF/blob/main/granite-34b-code-instruct-Q3_K_L.gguf) | Q3_K_L | 19.54GB | Lower quality but usable, good for low RAM availability. |
| [granite-34b-code-instruct-Q3_K_M.gguf](https://huggingface.co/bartowski/granite-34b-code-instruct-GGUF/blob/main/granite-34b-code-instruct-Q3_K_M.gguf) | Q3_K_M | 17.56GB | Even lower quality. |
| [granite-34b-code-instruct-IQ3_M.gguf](https://huggingface.co/bartowski/granite-34b-code-instruct-GGUF/blob/main/granite-34b-code-instruct-IQ3_M.gguf) | IQ3_M | 15.92GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [granite-34b-code-instruct-IQ3_S.gguf](https://huggingface.co/bartowski/granite-34b-code-instruct-GGUF/blob/main/granite-34b-code-instruct-IQ3_S.gguf) | IQ3_S | 14.80GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [granite-34b-code-instruct-Q3_K_S.gguf](https://huggingface.co/bartowski/granite-34b-code-instruct-GGUF/blob/main/granite-34b-code-instruct-Q3_K_S.gguf) | Q3_K_S | 14.80GB | Low quality, not recommended. |
| [granite-34b-code-instruct-IQ3_XS.gguf](https://huggingface.co/bartowski/granite-34b-code-instruct-GGUF/blob/main/granite-34b-code-instruct-IQ3_XS.gguf) | IQ3_XS | 14.34GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [granite-34b-code-instruct-IQ3_XXS.gguf](https://huggingface.co/bartowski/granite-34b-code-instruct-GGUF/blob/main/granite-34b-code-instruct-IQ3_XXS.gguf) | IQ3_XXS | 13.35GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [granite-34b-code-instruct-Q2_K.gguf](https://huggingface.co/bartowski/granite-34b-code-instruct-GGUF/blob/main/granite-34b-code-instruct-Q2_K.gguf) | Q2_K | 13.10GB | Very low quality but surprisingly usable. |
| [granite-34b-code-instruct-IQ2_M.gguf](https://huggingface.co/bartowski/granite-34b-code-instruct-GGUF/blob/main/granite-34b-code-instruct-IQ2_M.gguf) | IQ2_M | 11.66GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [granite-34b-code-instruct-IQ2_S.gguf](https://huggingface.co/bartowski/granite-34b-code-instruct-GGUF/blob/main/granite-34b-code-instruct-IQ2_S.gguf) | IQ2_S | 10.77GB | Very low quality, uses SOTA techniques to be usable. |
| [granite-34b-code-instruct-IQ2_XS.gguf](https://huggingface.co/bartowski/granite-34b-code-instruct-GGUF/blob/main/granite-34b-code-instruct-IQ2_XS.gguf) | IQ2_XS | 10.14GB | Very low quality, uses SOTA techniques to be usable. |
| [granite-34b-code-instruct-IQ2_XXS.gguf](https://huggingface.co/bartowski/granite-34b-code-instruct-GGUF/blob/main/granite-34b-code-instruct-IQ2_XXS.gguf) | IQ2_XXS | 9.15GB | Lower quality, uses SOTA techniques to be usable. |
| [granite-34b-code-instruct-IQ1_M.gguf](https://huggingface.co/bartowski/granite-34b-code-instruct-GGUF/blob/main/granite-34b-code-instruct-IQ1_M.gguf) | IQ1_M | 8.04GB | Extremely low quality, *not* recommended. |
| [granite-34b-code-instruct-IQ1_S.gguf](https://huggingface.co/bartowski/granite-34b-code-instruct-GGUF/blob/main/granite-34b-code-instruct-IQ1_S.gguf) | IQ1_S | 7.37GB | Extremely low quality, *not* recommended. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/granite-34b-code-instruct-GGUF --include "granite-34b-code-instruct-Q4_K_M.gguf" --local-dir ./ --local-dir-use-symlinks False
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/granite-34b-code-instruct-GGUF --include "granite-34b-code-instruct-Q8_0.gguf/*" --local-dir granite-34b-code-instruct-Q8_0 --local-dir-use-symlinks False
```
You can either specify a new local-dir (granite-34b-code-instruct-Q8_0) or download them all in place (./).
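If you would rather script the download than use the CLI, the same files can be fetched with `huggingface_hub` from Python; a small sketch (pick any filename from the table above):
```python
# Sketch: download a single quant file with huggingface_hub instead of the CLI.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="bartowski/granite-34b-code-instruct-GGUF",
    filename="granite-34b-code-instruct-Q4_K_M.gguf",
    local_dir=".",
)
print(path)  # local path to the downloaded GGUF file
```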
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
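As a toy illustration of that sizing rule (not part of the original guidance), a small helper that picks the largest quant from the table above fitting a given memory budget with roughly 2GB of headroom:
```python
# Toy helper: choose the largest quant that fits a memory budget (sizes copied from the table above).
QUANT_SIZES_GB = {
    "Q8_0": 35.99, "Q6_K": 27.83, "Q5_K_M": 24.74, "Q4_K_M": 21.38,
    "IQ4_XS": 18.19, "Q3_K_M": 17.56, "IQ3_M": 15.92, "Q2_K": 13.10,
}

def largest_fitting_quant(memory_gb: float, headroom_gb: float = 2.0):
    budget = memory_gb - headroom_gb
    fitting = {name: size for name, size in QUANT_SIZES_GB.items() if size <= budget}
    return max(fitting, key=fitting.get) if fitting else None

print(largest_fitting_quant(24.0))  # a 24GB card -> "Q4_K_M"
```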
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalents, so speed vs. performance is a tradeoff you'll have to weigh.
The I-quants are *not* compatible with Vulkan, which is also an option for AMD cards, so if you have an AMD card, double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
maiduchuy321/vietnamese-bi-encoder-fine-tuning-for-law-chatbot | maiduchuy321 | 2024-06-10T16:01:53Z | 519 | 1 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:11711",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"vn",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:bkai-foundation-models/vietnamese-bi-encoder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
]
| sentence-similarity | 2024-06-10T16:01:25Z | ---
language:
- vn
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:11711
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: bkai-foundation-models/vietnamese-bi-encoder
datasets: []
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
widget:
- source_sentence: Số điện thoại đường dây nóng UBND huyện
sentences:
- Theo quy định tại Nghị định số 31/2013/NĐ-CP và Thông tư số 05/2013/TT-BLĐTBXH
thì bệnh binh nếu mắc thêm bệnh do chất độc hóa học thì được giám định tổng họp
để hưởng trợ cấp bệnh binh (không hưởng chế độ người hoạt động kháng chiến bị
nhiễm chất độc hóa học). Tuy nhiên quy định này chỉ áp dụng đối với trường hợp
lập hồ sơ từ ngày 01/6/2013 trở về sau. Đối với người đang hưởng 2 chế độ trước
01/6/2013 thì sau ngày 31/12/2013 chuyển sang hưởng trợ cấp đối với bệnh binh
và trợ cấp đối với người hoạt động kháng chiến bị nhiễm chất độc hóa học suy giảm
khả năng lao động từ 41-60% (mức 3 mới).
- 'Theo quy định tại Khoản 1 Điều 6 Mục 1 Chương II Thông tư số 04/2016/TT-NHNN
ngày 15/4/2016 quy định về việc lưu ký và sử dụng giấy tờ có giá tại NHNN, hồ
sơ mở tài khoản lưu ký giấy tờ có giá gồm:(i) Giấy đề nghị mở tài khoản lưu ký
giấy tờ có giá theo phụ lục 1a/LK đính kèm Thông tư này;(ii) Bản đăng ký mẫu dấu,
chữ ký theo Phụ lục 1b/LK đính kèm Thông tư này;(iii) Các giấy tờ chứng minh việc
tổ chức mở tài khoản lưu ký giấy tờ có giá thành lập và hoạt động hợp pháp như:
Quyết định thành lập, giấy phép hoạt động, giấy chứng nhận đăng ký doanh nghiệp
hoặc các giấy tờ khác theo quy định của pháp luật;(iv) Các giấy tờ chứng minh
tư cách đại diện hợp pháp của người đại diện của chủ tài khoản kèm giấy chứng
minh nhân dân hoặc thẻ căn cước công dân hoặc hộ chiếu còn thời hạn của người
đó;(v) Trường hợp tổ chức mở tài khoản lưu ký thuộc đối tượng bắt buộc phải có
chữ ký Kế toán trưởng hoặc người phụ trách kế toán trên chứng từ kế toán giao
dịch với ngân hàng theo quy định của pháp luật thì ngoài các giấy tờ nêu tại điểm
1, 2, 3, 4 nêu trên, hồ sơ mở tài khoản lưu ký giấy tờ có giá phải có quyết định
bổ nhiệm kèm giấy chứng minh nhân dân hoặc thẻ căn cước công dân hoặc hộ chiếu
còn thời hạn của kế toán trưởng (hoặc người phụ trách kế toán) của tổ chức mở
tài khoản lưu ký giấy tờ có giá.* Các giấy tờ quy định tại điểm 1,2 là bản chính,
các giấy tờ quy định tại điểm 3, 4, 5 là bản sao được cấp từ sổ gốc hoặc bản sao
có chứng thực hoặc bản sao kèm xuất trình bản chính để đối chiếu.'
- Khách hàng gọi đến số điện thoại đường dây nóng 1022
- source_sentence: 'Thủ tục: Thủ tục Điều chỉnh giấy phép thành lập Văn phòng đại
diện của thương nhân nước ngoài tại Việt Nam bao gồm hồ sơ gì ? '
sentences:
- 'a) Đơn đề nghị điều chỉnh Giấy phép thành lập Văn phòng đại diện theo mẫu của
Bộ Công Thương do đại diện có thẩm quyền của thương nhân nước ngoài ký;
b) Các tài liệu chứng minh về nội dung thay đổi, cụ thể:
- Trường hợp điều chỉnh Giấy phép do thay đổi tên gọi hoặc địa chỉ đặt trụ sở
của thương nhân nước ngoài: Bản sao tài liệu pháp lý do cơ quan có thẩm quyền
cấp chứng minh sự thay đổi tên gọi hoặc địa chỉ đặt trụ sở của thương nhân nước
ngoài.
- Trường hợp điều chỉnh Giấy phép do thay đổi người đứng đầu của Văn phòng đại
diện: Văn bản của thương nhân nước ngoài cử/bổ nhiệm người đứng đầu mới của Văn
phòng đại diện; Bản sao hộ chiếu hoặc giấy chứng minh nhân dân hoặc thẻ căn cước
công dân (nếu là người Việt Nam) hoặc bản sao hộ chiếu (nếu là người nước ngoài)
của người đứng đầu mới của Văn phòng đại diện; Giấy tờ chứng minh người đứng đầu
cũ của Văn phòng đại diện đã thực hiện nghĩa vụ thuế thu nhập cá nhân đến thời
điểm thay đổi.
- Trường hợp điều chỉnh Giấy phép do thay đổi địa chỉ đặt trụ sở của Văn phòng
đại diện trong một tỉnh, thành phố trực thuộc Trung ương hoặc trong khu vực địa
lý thuộc phạm vi quản lý của một Ban Quản lý: Bản sao biên bản ghi nhớ hoặc thỏa
thuận thuê địa điểm hoặc bản sao tài liệu chứng minh thương nhân có quyền khai
thác, sử dụng địa điểm để đặt trụ sở Văn phòng đại điện; Bản sao tài liệu về địa
điểm dự kiến đặt trụ sở Văn phòng đại diện theo quy định tại Điều 28 Nghị định
07/2016/NĐ-CP ngày 25/01/2016 của Chính phủ và quy định pháp luật có liên quan.
c) Bản chính Giấy phép thành lập Văn phòng đại diện.'
- ' Bạn phải làm thủ tục "cấp sửa đổi, bổ sung Giấy phép hoạt động tư vấn chuyên
ngành điện thuộc thẩm quyền cấp của địa phương" theo quy định tại Nghị định số 137/2013/NĐ-CP ngày
21/10/2013 của Chính phủ, Nghị định số 08/2018/NĐ-CP ngày 15/01/2018 sửa đổi,
bổ sung một số Nghị định liên quan đến điều kiện đầu tư kinh doanh thuộc phạm
vi quản lý nhà nước của Bộ Công Thương; Thông tư số 36/2018/TT-BCT ngày 16/10/2018
của Bộ Trưởng Bộ Công Thương.
- Thành phần hồ sơ và các biểu mẫu: Được công khai tại Trung tâm Phục vụ hành
chính công tỉnh và Website: dichvucong.quangninh.gov.vn.- Hình thức nộp hồ sơ:
Bạn có thể lựa chọn một trong bốn hình thức: (1) Nộp trực tiếp ở Quầy Sở Công
Thương tại Trung tâm phục vụ Hành chính công tỉnh; (2). Nộp qua dịch vụ Bưu chính
công ích; (3). Nộp qua bưu điện (đơn vị làm dịch vụ bưu phát); (4). Nộp trực tuyến
(qua mạng) tại Website: dichvucong.quangninh.gov.vn.- Trong quá trình thực hiện,
đơn vị cần trao đổi hoặc cần hỗ trợ đề nghị liên lạc (trong giờ hành chính) theo
số điện thoại: 0203.3.634.669 hoặc 1900.558.826, máy lẻ (Sở Công Thương: 221;
222) hoặc Email: [email protected] để được hướng dẫn, trao đổi.'
- 'Đối tượng được xét tuyển vào trường dự bị đại học phải đáp ứng các điều kiện
sau đây:a) Đối tượng được xét tuyển Thí sinh thuộc đối tượng 01 của nhóm ưu tiên
1(ƯT1) và khu vực 1(KV1) quy định tại Quy chế tuyển sinh đại học, cao đẳng hệ
chính quy hiện hành;b) Đối tượng được tuyển thẳng: Thí sinh người dân tộc thiểu
số rất ít người (theo quy định của Chính phủ) đã tốt nghiệp'
- source_sentence: "Thời hạn giải quyết thủ tục cấp lại chứng chỉ hành nghề dược đối\
\ với trường hợp bị mất của công dân Việt Nam, người nước ngoài, \nvà người Việt\
\ Nam định cư ở nước ngoài theo hình thức xét duyệt hồ sơ?"
sentences:
- 05 ngày làm việc kể từ ngày nhận đủ hồ sơ hợp lệ.
- Căn cứ Điều 18 Thông tư Số 66/2014/TT-BCA ngày 16/12/2014 của Bộ Công an quy định
Phương tiện PCCC được kiểm định chủng loại, mẫu mã và thông số kỹ thuật của phương
tiện, kết quả kiểm định được đánh giá và lập biên bản theo mẫu PC18, nếu đạt kết
quả sẽ được cấp giấy chứng nhận kiểm định theo mẫu PC19. Như vậy, biên bản kiểm
định được lập làm căn cứ để cấp giấy chứng nhận kiểm định cho lô phương tiện PCCC
khi đạt kết quả. Như vậy, đơn vị đề nghị kiểm định chỉ nhận được Giấy chứng nhận
kiểm định phương tiện PCCC nếu lô phương tiện đảm bảo các yêu cầu theo quy định.
- Không có
- source_sentence: Hồ sơ thông báo tập trung kinh tế gồm những giấy tờ gì?
sentences:
- 'Theo Khoản 2, Điều 7 Thông tư 25/2013/TT-NHNN: Từ 03 ngày làm việc đến 15 ngày
làm việc'
- 'Trình tự thực hiện Nộp hồ sơ TTHC
- Trường hợp nộp trực tiếp: Tổ chức, cá nhân nộp hồ sơ trực tiếp cho Sở Văn hoá,
Thể thao và Du lịch tại Trung tâm Phục vụ hành chính công tỉnh.
- Trường hợp gửi qua Dịch vụ Bưu chính: Tổ chức, cá nhân gửi hồ sơ qua dịch vụ
Bưu chính, nhân viên Bưu chính nộp hồ sơ trực tiếp cho Sở Văn hoá, Thể thao và
Du lịch tại Trung tâm Phục vục hành chính công tỉnh.
- Qua Dịch vụ công trực tuyến toàn trình: Tổ chức, cá nhân đăng ký/đăng nhập tài
khoản, xác thực định danh điện tử và thực hiện quy trình nộp hồ sơ trực tuyến
trên Cổng dịch vụ công quốc gia (http://dichvucong.gov.vn) và Hệ thống thông tin
giải quyết TTHC tỉnh (dichvucong.hagiang.gov.vn) theo hướng dẫn.'
- Theo Điều 34 Luật Cạnh tranh 2018, hồ sơ thông báo tập trung kinh tế bao gồm:Thông
báo tập trung kinh tế theo mẫu do Ủy ban Cạnh tranh Quốc gia ban hành;Dự thảo
nội dung thỏa thuận tập trung kinh tế hoặc dự thảo hợp đồng, biên bản
ghi nhớ việc tập trung kinh tế giữa các doanh nghiệp;Bản sao hợp lệ Giấy chứng
nhận đăng ký doanh nghiệp hoặc văn bản tương đương của từng doanh nghiệp tham
gia tập trung kinh tế;Báo cáo tài chính của từng doanh nghiệp tham gia tập trung
kinh tế trong 02 năm liên tiếp liền kề trước năm thông báo tập trung kinh tế hoặc
báo cáo tài chính từ thời điểm thành lập đến thời điểm thông báo tập trung kinh
tế đối với doanh nghiệp mới thành lập có xác nhận của tổ chức kiểm toán theo quy
định của pháp luật; Danh sách các công ty mẹ, công ty con, công ty thành viên,
chi nhánh, văn phòng đại diện và các đơn vị phụ thuộc khác của từng doanh nghiệp
tham gia tập trung kinh tế (nếu có);Danh sách các loại hàng hóa, dịch vụ mà từng
doanh nghiệp tham gia tập trung kinh tế đang kinh doanh;Thông tin về thị phần
trong lĩnh vực dự định tập trung kinh tế của từng doanh nghiệp tham gia tập trung
kinh tế trong 02 năm liên tiếp liền kề trước năm thông báo tập trung kinh tế;Phương
án khắc phục khả năng gây tác động hạn chế cạnh tranh của việc tập trung kinh
tế;Báo cáo đánh giá tác động tích cực của việc tập trung kinh tế và các biện pháp
tăng cường tác động tích cực của việc tập trung kinh tế.Ngoài ra, doanh nghiệp
nộp hồ sơ thông báo tập trung kinh tế chịu trách nhiệm về tính trung thực của
hồ sơ. Tài liệu trong hồ sơ bằng tiếng nước ngoài thì phải kèm theo bản dịch tiếng
Việt.
- source_sentence: Thời gian giải quyết thủ tục hành chính đối với 01 bộ hồ sơ quảng
cáo thực phẩm?
sentences:
- 'Căn cứ pháp lý: Điều 48, Nghị định số 59/2015/NĐ-CP ngày 18/6/2015; Khoản 2,
Điều 21, Nghị định số 46/2015/NĐ-CP ngày 12/5/2015. 1. Các Chức danh, gồm:- Trong
khung tên từng bản vẽ phải có tên, chữ ký của người trực tiếp thiết kế, người
kiểm tra thiết kế, chủ trì thiết kế, chủ nhiệm thiết kế, người đại diện theo pháp
luật của nhà thầu thiết kế; và người quản lý kỹ thuật nội bộ.- Trong tập dự toán
phải có tên của người lập, chủ trì lập dự toán và người đại diện theo pháp luật
của nhà thầu lập dự toán;2. Chứng chỉ hoạt động xây dựng yêu cầu đối với chủ trì
thiết kế, chủ nhiệm thiết kế và chủ trì lập dự toán.'
- 'Theo quy định tại khoản 5 Điều 27 Nghị định 15/2018/NĐ-CP: Trong thời hạn 10
ngày làm việc, kể từ ngày nhận đủ hồ sơ hợp lệ, cơ quan tiếp nhận hồ sơ có trách
nhiệm xem xét hồ sơ và trả kết quả theo Mẫu số 11 Phụ lục I ban hành kèm theo
Nghị định 15/2018/NĐ-CP. Thời hạn này được tính từ ngày đóng dấu đến của cơ quan
tiếp nhận hồ sơ nếu hồ sơ được gửi qua đường bưu điện hoặc ngày hồ sơ hoàn chỉnh
được tiếp nhận trên hệ thống dịch vụ công trực tuyến.Trong trường hợp không đồng
ý với nội dung quảng cáo của tổ chức, cá nhân hoặc yêu cầu sửa đổi, bổ sung, cơ
quan tiếp nhận hồ sơ phải có văn bản nêu rõ lý do và căn cứ pháp lý của việc yêu
cầu. Trong thời hạn 10 ngày làm việc kể từ khi nhận hồ sơ sửa đổi, bổ sung, cơ
quan tiếp nhận hồ sơ thẩm định hồ sơ và có văn bản trả lời. Sau 90 ngày làm việc
kể từ khi có công văn yêu cầu sửa đổi, bổ sung nếu tổ chức, cá nhân không sửa
đổi, bổ sung thì hồ sơ không còn giá trị.'
- 'Ngoài các hồ sơ, tài liệu gửi 1 lần và gửi hàng năm theo chế độ quy định, chủ
đầu tư gửi KBNN các hồ sơ, tài liệu có liên quan theo quy định tại tiết 1.5.1,
mục 1.5, và 1.5.1, mục 1.6, điểm 1, phần II, Thông tư số 113/2008/TT-BTC ngày
27/11/2008 của BTC cụ thể: Hồ sơ cam kết chi thường xuyên:- Hợp đồng mua bán hàng
hoá, dịch vụ có giá trị từ 100 triệu đồng trở lên (gửi lần đầu hoặc khi có điều
chỉnh hợp đồng);- Đề nghị cam kết chi hoặc đề nghị điều chỉnh cam kết chi.Hồ sơ
cam kết chi đầu tư: - Hợp đồng có giá trị từ 500 triệu đồng trở lên (gửi lần đầu
khi đề nghị cam kết chi hoặc gửi khi có điều chỉnh hợp đồng);- Đề nghị cam kết
chi hoặc đề nghị điều chỉnh cam kết chi.'
pipeline_tag: sentence-similarity
model-index:
- name: vietnamese-bi-encoder-fine-tuning-for-law-chatbot
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.5192012288786483
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7035330261136713
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.7703533026113671
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8433179723502304
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5192012288786483
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.23451100870455707
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.15407066052227342
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08433179723502303
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.5192012288786483
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7035330261136713
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.7703533026113671
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8433179723502304
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6784984111685612
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6260898983249218
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6315228861090326
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.5099846390168971
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.705837173579109
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.7642089093701997
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8402457757296466
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5099846390168971
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.23527905785970302
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.15284178187403993
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08402457757296465
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.5099846390168971
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.705837173579109
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.7642089093701997
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8402457757296466
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6730215261533721
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6197422158827693
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.625183882393767
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.5023041474654378
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.695084485407066
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.7634408602150538
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8348694316436251
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5023041474654378
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.23169482846902198
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.15268817204301074
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.0834869431643625
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.5023041474654378
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.695084485407066
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.7634408602150538
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8348694316436251
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6662572650809209
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6124750079243174
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6181528055332479
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.4838709677419355
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.6674347158218126
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.7480798771121352
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8210445468509985
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.4838709677419355
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.22247823860727084
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.14961597542242702
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08210445468509983
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.4838709677419355
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.6674347158218126
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.7480798771121352
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8210445468509985
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6486762179767267
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5938781605832305
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6001217679704338
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.44623655913978494
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.6382488479262672
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.7158218125960062
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7987711213517665
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.44623655913978494
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.21274961597542244
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1431643625192012
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07987711213517665
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.44623655913978494
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.6382488479262672
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.7158218125960062
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7987711213517665
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6178085159779514
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5604372394118942
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.5666545014535384
name: Cosine Map@100
---
# vietnamese-bi-encoder-fine-tuning-for-law-chatbot
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [bkai-foundation-models/vietnamese-bi-encoder](https://huggingface.co/bkai-foundation-models/vietnamese-bi-encoder). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [bkai-foundation-models/vietnamese-bi-encoder](https://huggingface.co/bkai-foundation-models/vietnamese-bi-encoder) <!-- at revision 84f9d9ada0d1a3c37557398b9ae9fcedcdf40be0 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** vn
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("maiduchuy321/vietnamese-bi-encoder-fine-tuning-for-law-chatbot")
# Run inference
sentences = [
'Thời gian giải quyết thủ tục hành chính đối với 01 bộ hồ sơ quảng cáo thực phẩm?',
'Theo quy định tại khoản 5 Điều 27 Nghị định 15/2018/NĐ-CP: Trong thời hạn 10 ngày làm việc, kể từ ngày nhận đủ hồ sơ hợp lệ, cơ quan tiếp nhận hồ sơ có trách nhiệm xem xét hồ sơ và trả kết quả theo Mẫu số 11 Phụ lục I ban hành kèm theo Nghị định 15/2018/NĐ-CP. Thời hạn này được tính từ ngày đóng dấu đến của cơ quan tiếp nhận hồ sơ nếu hồ sơ được gửi qua đường bưu điện hoặc ngày hồ sơ hoàn chỉnh được tiếp nhận trên hệ thống dịch vụ công trực tuyến.Trong trường hợp không đồng ý với nội dung quảng cáo của tổ chức, cá nhân hoặc yêu cầu sửa đổi, bổ sung, cơ quan tiếp nhận hồ sơ phải có văn bản nêu rõ lý do và căn cứ pháp lý của việc yêu cầu. Trong thời hạn 10 ngày làm việc kể từ khi nhận hồ sơ sửa đổi, bổ sung, cơ quan tiếp nhận hồ sơ thẩm định hồ sơ và có văn bản trả lời. Sau 90 ngày làm việc kể từ khi có công văn yêu cầu sửa đổi, bổ sung nếu tổ chức, cá nhân không sửa đổi, bổ sung thì hồ sơ không còn giá trị.',
'Ngoài các hồ sơ, tài liệu gửi 1 lần và gửi hàng năm theo chế độ quy định, chủ đầu tư gửi KBNN các hồ sơ, tài liệu có liên quan theo quy định tại tiết 1.5.1, mục 1.5, và 1.5.1, mục 1.6, điểm 1, phần II, Thông tư số 113/2008/TT-BTC ngày 27/11/2008 của BTC cụ thể: Hồ sơ cam kết chi thường xuyên:- Hợp đồng mua bán hàng hoá, dịch vụ có giá trị từ 100 triệu đồng trở lên (gửi lần đầu hoặc khi có điều chỉnh hợp đồng);- Đề nghị cam kết chi hoặc đề nghị điều chỉnh cam kết chi.Hồ sơ cam kết chi đầu tư: - Hợp đồng có giá trị từ 500 triệu đồng trở lên (gửi lần đầu khi đề nghị cam kết chi hoặc gửi khi có điều chỉnh hợp đồng);- Đề nghị cam kết chi hoặc đề nghị điều chỉnh cam kết chi.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
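Because the model was trained with MatryoshkaLoss at 768/512/256/128/64 dimensions (see Training Details below), embeddings can also be truncated to a smaller size with only a modest quality drop. A sketch using the `truncate_dim` argument available in recent sentence-transformers releases:
```python
from sentence_transformers import SentenceTransformer

# Load with truncated embeddings, e.g. 256 dimensions (any of 768/512/256/128/64).
model = SentenceTransformer(
    "maiduchuy321/vietnamese-bi-encoder-fine-tuning-for-law-chatbot",
    truncate_dim=256,
)
embeddings = model.encode(["Số điện thoại đường dây nóng UBND huyện"])
print(embeddings.shape)
# (1, 256)
```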
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.5192 |
| cosine_accuracy@3 | 0.7035 |
| cosine_accuracy@5 | 0.7704 |
| cosine_accuracy@10 | 0.8433 |
| cosine_precision@1 | 0.5192 |
| cosine_precision@3 | 0.2345 |
| cosine_precision@5 | 0.1541 |
| cosine_precision@10 | 0.0843 |
| cosine_recall@1 | 0.5192 |
| cosine_recall@3 | 0.7035 |
| cosine_recall@5 | 0.7704 |
| cosine_recall@10 | 0.8433 |
| cosine_ndcg@10 | 0.6785 |
| cosine_mrr@10 | 0.6261 |
| **cosine_map@100** | **0.6315** |
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.51 |
| cosine_accuracy@3 | 0.7058 |
| cosine_accuracy@5 | 0.7642 |
| cosine_accuracy@10 | 0.8402 |
| cosine_precision@1 | 0.51 |
| cosine_precision@3 | 0.2353 |
| cosine_precision@5 | 0.1528 |
| cosine_precision@10 | 0.084 |
| cosine_recall@1 | 0.51 |
| cosine_recall@3 | 0.7058 |
| cosine_recall@5 | 0.7642 |
| cosine_recall@10 | 0.8402 |
| cosine_ndcg@10 | 0.673 |
| cosine_mrr@10 | 0.6197 |
| **cosine_map@100** | **0.6252** |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.5023 |
| cosine_accuracy@3 | 0.6951 |
| cosine_accuracy@5 | 0.7634 |
| cosine_accuracy@10 | 0.8349 |
| cosine_precision@1 | 0.5023 |
| cosine_precision@3 | 0.2317 |
| cosine_precision@5 | 0.1527 |
| cosine_precision@10 | 0.0835 |
| cosine_recall@1 | 0.5023 |
| cosine_recall@3 | 0.6951 |
| cosine_recall@5 | 0.7634 |
| cosine_recall@10 | 0.8349 |
| cosine_ndcg@10 | 0.6663 |
| cosine_mrr@10 | 0.6125 |
| **cosine_map@100** | **0.6182** |
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.4839 |
| cosine_accuracy@3 | 0.6674 |
| cosine_accuracy@5 | 0.7481 |
| cosine_accuracy@10 | 0.821 |
| cosine_precision@1 | 0.4839 |
| cosine_precision@3 | 0.2225 |
| cosine_precision@5 | 0.1496 |
| cosine_precision@10 | 0.0821 |
| cosine_recall@1 | 0.4839 |
| cosine_recall@3 | 0.6674 |
| cosine_recall@5 | 0.7481 |
| cosine_recall@10 | 0.821 |
| cosine_ndcg@10 | 0.6487 |
| cosine_mrr@10 | 0.5939 |
| **cosine_map@100** | **0.6001** |
#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.4462 |
| cosine_accuracy@3 | 0.6382 |
| cosine_accuracy@5 | 0.7158 |
| cosine_accuracy@10 | 0.7988 |
| cosine_precision@1 | 0.4462 |
| cosine_precision@3 | 0.2127 |
| cosine_precision@5 | 0.1432 |
| cosine_precision@10 | 0.0799 |
| cosine_recall@1 | 0.4462 |
| cosine_recall@3 | 0.6382 |
| cosine_recall@5 | 0.7158 |
| cosine_recall@10 | 0.7988 |
| cosine_ndcg@10 | 0.6178 |
| cosine_mrr@10 | 0.5604 |
| **cosine_map@100** | **0.5667** |
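The tables above were produced with sentence-transformers' `InformationRetrievalEvaluator`; a minimal sketch of running it yourself, where the query/corpus dictionaries below are placeholders rather than the actual evaluation split:
```python
# Sketch: compute retrieval metrics with InformationRetrievalEvaluator (placeholder data).
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("maiduchuy321/vietnamese-bi-encoder-fine-tuning-for-law-chatbot")

queries = {"q1": "Số điện thoại đường dây nóng UBND huyện"}
corpus = {"d1": "Khách hàng gọi đến số điện thoại đường dây nóng 1022"}
relevant_docs = {"q1": {"d1"}}  # which corpus ids are relevant for each query id

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="demo")
print(evaluator(model))
```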
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 11,711 training samples
* Columns: <code>Câu hỏi</code> and <code>Câu trả lời</code>
* Approximate statistics based on the first 1000 samples:
| | Câu hỏi | Câu trả lời |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 38.26 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 143.99 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
| Câu hỏi | Câu trả lời |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Phòng thử nghiệm của tổ chức, doanh nghiệp chỉ thực hiện hoạt động thử nghiệm phục vụ kiểm soát chất lượng sản phẩm do chính tổ chức, doanh nghiệp sản xuất ra thì có phải thực hiện đăng ký hoạt động thử nghiệm theo Nghị định số 107/2016/NĐ-CP không?</code> | <code>Tại khoản 1 Điều 2 Nghị định số 107/2016/NĐ-CP quy định Nghị định này áp dụng đối với các tổ chức, doanh nghiệp có hoạt động kinh doanh dịch vụ đánh giá sự phù hợp (thử nghiệm, chứng nhận, giám định, kiểm định) trên lãnh thổ Việt Nam. Do đó, trong trường hợp này, tổ chức, doanh nghiệp không phải thực hiện đăng ký hoạt động thử nghiệm theo quy định tại Nghị định số 107/2016/NĐ-CP. Trường hợp, tổ chức, doanh nghiệp có nhu cầu cung cấp dịch vụ thử nghiệm thì phải thực hiện đăng ký hoạt động thử nghiệm theo quy định tại Nghị định số 107/2016/NĐ-CP.</code> |
| <code>Sửa đổi, bổ sung Giấy chứng nhận đủ điều kiện hoạt động điểm cung cấp dịch vụ trò chơi điện tử công cộng trong trường hợp nào?; cách thức thực hiện như thế nào; thời gian thực thực hiện trong bao lâu?</code> | <code>Sửa đổi, bổ sung trong thời hạn hiệu lực của Giấy chứng nhận đủ điều kiện hoạt động điểm cung cấp dịch vụ trò chơi điện tử công cộng, chủ điểm cung cấp dịch vụ trò chơi điện tử công cộng phải làm thủ tục sửa đổi, bổ sung giấy chứng nhận đủ điều kiện hoạt động điểm cung cấp dịch vụ trò chơi điện tử công cộng đã được cấp thuộc một trong các trường hợp sau đây: Thay đổi tên điểm cung cấp dịch vụ trò chơi điện tử công cộng; Thay đổi chủ điểm cung cấp dịch vụ trò chơi điện tử công cộng đối với trường hợp chủ điểm là cá nhân hoặc thay đổi người quản lý trực tiếp điểm cung cấp dịch vụ trò chơi điện tử công cộng đối với trường hợp chủ điểm là tổ chức, doanh nghiệp; Cách thức thực hiện: cá nhân có thể gửi hồ sơ trực tiếp hoặc gửi trực tuyến qua cổng dịch vụ công tỉnh Hà Giang; Thời gian thực hiện trong 05 ngày làm việc, kể từ ngày nhận đủ hồ sơ hợp lệ.</code> |
| <code>Đối với trường hợp đại lý đã được cấp trước đây có được phép hoạt động đến hết thời hạn trong Giấy chứng nhận đủ điều kiện kinh doanh dược không? Hay hướng dẫn các đại lý chuyển đổi qua quầy thuốc ngay khi Nghị định 54/2017/NĐ-CP ngày 08/5/2017 của Chính phủ có hiệu lực? Theo quy định của Luật Dược 2016 không còn loại hình bán lẻ thuốc là đại lý thuốc.</code> | <code>Khoản 1 Điều 115 Luật dược quy định về điều khoản chuyển tiếp, theo đó:“Cơ sở kinh doanh dược đã được cấp Giấy chứng nhận đủ điều kiện kinh doanh dượctheo quy định của Luật dược 34/2005/QH11 được tiếp tục kinh doanh thuốc cho đếnhết thời hạn hiệu lực của Giấy chứng nhận đủ điều kiện kinh doanh dược”. Nhưvậy, các đại lý bán lẻ thuốc đã được cấp Giấy chứng nhận đủ điều kiện kinhdoanh dược được phép hoạt động đến hết thời hạn ghi trên Giấy chứng nhận đủđiều kiện kinh doanh dược. Việc các đại lý muốn chuyển đổi thành quầy thuốc thìphải đáp ứng các quy định về điều kiện và địa bàn hoạt động đối với quầy thuốc</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
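The JSON above is the serialized loss configuration; as a rough sketch, the equivalent setup with the sentence-transformers 3.x API would look something like this:
```python
# Sketch of the MatryoshkaLoss + MultipleNegativesRankingLoss setup described above.
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("bkai-foundation-models/vietnamese-bi-encoder")

inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],  # weights default to 1 for every dimension
)
```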
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 32
- `learning_rate`: 2e-05
- `num_train_epochs`: 15
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `fp16`: True
- `tf32`: False
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 32
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 15
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: False
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
|:-----------:|:-------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
| 0.8743 | 10 | 3.9132 | - | - | - | - | - |
| 0.9617 | 11 | - | 0.4759 | 0.5066 | 0.5205 | 0.4333 | 0.5227 |
| 1.7486 | 20 | 2.3057 | - | - | - | - | - |
| 1.9235 | 22 | - | 0.5345 | 0.5541 | 0.5686 | 0.4968 | 0.5756 |
| 2.6230 | 30 | 1.3986 | - | - | - | - | - |
| 2.9727 | 34 | - | 0.5586 | 0.5826 | 0.5958 | 0.5223 | 0.5979 |
| 3.4973 | 40 | 0.954 | - | - | - | - | - |
| 3.9344 | 45 | - | 0.5739 | 0.5948 | 0.6079 | 0.5370 | 0.6066 |
| 4.3716 | 50 | 0.6417 | - | - | - | - | - |
| 4.9836 | 57 | - | 0.5865 | 0.6066 | 0.6135 | 0.5488 | 0.6152 |
| 5.2459 | 60 | 0.4711 | - | - | - | - | - |
| 5.9454 | 68 | - | 0.5898 | 0.6140 | 0.6170 | 0.5572 | 0.6196 |
| 6.1202 | 70 | 0.3451 | - | - | - | - | - |
| 6.9945 | 80 | 0.2679 | 0.5957 | 0.6118 | 0.6212 | 0.5627 | 0.6210 |
| 7.8689 | 90 | 0.2066 | - | - | - | - | - |
| 7.9563 | 91 | - | 0.5973 | 0.6140 | 0.6253 | 0.5643 | 0.6268 |
| 8.7432 | 100 | 0.1844 | - | - | - | - | - |
| 8.9180 | 102 | - | 0.5971 | 0.6189 | 0.6271 | 0.5621 | 0.6281 |
| 9.6175 | 110 | 0.1604 | - | - | - | - | - |
| 9.9672 | 114 | - | 0.5993 | 0.6190 | 0.6273 | 0.5646 | 0.6307 |
| 10.4918 | 120 | 0.1507 | - | - | - | - | - |
| 10.9290 | 125 | - | 0.5976 | 0.6181 | 0.6258 | 0.5668 | 0.6305 |
| 11.3661 | 130 | 0.1307 | - | - | - | - | - |
| 11.9781 | 137 | - | 0.5990 | 0.6166 | 0.6251 | 0.5671 | 0.6318 |
| 12.2404 | 140 | 0.1275 | - | - | - | - | - |
| **12.9399** | **148** | **-** | **0.6002** | **0.6174** | **0.6259** | **0.5665** | **0.6314** |
| 13.1148 | 150 | 0.1204 | - | - | - | - | - |
| 13.9891 | 160 | 0.1227 | 0.6004 | 0.6176 | 0.6253 | 0.5668 | 0.6316 |
| 14.4262 | 165 | - | 0.6001 | 0.6182 | 0.6252 | 0.5667 | 0.6315 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.13
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2
- Accelerate: 0.30.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
Helsinki-NLP/opus-mt-tr-fr | Helsinki-NLP | 2023-08-16T12:07:27Z | 518 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tr",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2022-03-02T23:29:04Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-tr-fr
* source languages: tr
* target languages: fr
* OPUS readme: [tr-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tr-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tr-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.tr.fr | 45.3 | 0.627 |
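## Example usage
A minimal Turkish-to-French sketch with the 🤗 Transformers pipeline (not part of the original OPUS-MT release notes):
```python
# Sketch: translate Turkish to French with the Transformers pipeline.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-tr-fr")
result = translator("Merhaba, nasılsın?")
print(result[0]["translation_text"])
```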
|
l3cube-pune/odia-bert | l3cube-pune | 2023-06-17T15:36:40Z | 518 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"or",
"arxiv:2211.11418",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-11-20T08:26:35Z | ---
license: cc-by-4.0
language: or
---
## OdiaBERT
OdiaBERT is an Odia BERT model trained on publicly available Odia monolingual datasets.
Preliminary details on the dataset, models, and baseline results can be found in our [<a href='https://arxiv.org/abs/2211.11418'>paper</a>].
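A minimal fill-mask sketch with 🤗 Transformers (the example sentence is only a rough placeholder meaning roughly "Odia is a [MASK]", and `[MASK]` is assumed to be the tokenizer's mask token; replace it with your own Odia text):
```python
# Sketch: masked-token prediction with the fill-mask pipeline.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="l3cube-pune/odia-bert")

# Any Odia sentence containing the tokenizer's mask token works here.
predictions = fill_mask("ଓଡ଼ିଆ ଏକ [MASK] ଅଟେ ।")
for prediction in predictions:
    print(prediction["token_str"], round(prediction["score"], 3))
```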
Citing:
```
@article{joshi2022l3cubehind,
title={L3Cube-HindBERT and DevBERT: Pre-Trained BERT Transformer models for Devanagari based Hindi and Marathi Languages},
author={Joshi, Raviraj},
journal={arXiv preprint arXiv:2211.11418},
year={2022}
}
```
Other Monolingual Indic BERT models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-bert-v2'> Marathi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/marathi-roberta'> Marathi RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/marathi-albert'> Marathi AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-bert-v2'> Hindi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-roberta'> Hindi RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-albert'> Hindi AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-bert'> Dev BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-roberta'> Dev RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-albert'> Dev AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-bert'> Kannada BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-bert'> Telugu BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-bert'> Malayalam BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-bert'> Tamil BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-bert'> Gujarati BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-bert'> Oriya BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-bert'> Bengali BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-bert'> Punjabi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/assamese-bert'> Assamese BERT </a> <br> |
declare-lab/flan-alpaca-base | declare-lab | 2023-08-21T06:50:15Z | 518 | 33 | transformers | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"dataset:tatsu-lab/alpaca",
"arxiv:2308.09662",
"arxiv:2306.04757",
"arxiv:2210.11416",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text2text-generation | 2023-03-21T16:18:57Z | ---
license: apache-2.0
datasets:
- tatsu-lab/alpaca
---
## 🍮 🦙 Flan-Alpaca: Instruction Tuning from Humans and Machines
📣 Introducing **Red-Eval** to evaluate the safety of LLMs using several jailbreaking prompts. With **Red-Eval**, one could jailbreak/red-team GPT-4 with a 65.1% attack success rate, and ChatGPT could be jailbroken 73% of the time, as measured on the DangerousQA and HarmfulQA benchmarks. More details are here: [Code](https://github.com/declare-lab/red-instruct) and [Paper](https://arxiv.org/abs/2308.09662).
📣 We developed Flacuna by fine-tuning Vicuna-13B on the Flan collection. Flacuna is better than Vicuna at problem-solving. Access the model here [https://huggingface.co/declare-lab/flacuna-13b-v1.0](https://huggingface.co/declare-lab/flacuna-13b-v1.0).
📣 Curious to know the performance of 🍮 🦙 **Flan-Alpaca** on the large-scale LLM evaluation benchmark **InstructEval**? Read our paper [https://arxiv.org/pdf/2306.04757.pdf](https://arxiv.org/pdf/2306.04757.pdf). We evaluated more than 10 open-source instruction-tuned LLMs belonging to various LLM families, including Pythia, LLaMA, T5, UL2, OPT, and Mosaic. Code and datasets: [https://github.com/declare-lab/instruct-eval](https://github.com/declare-lab/instruct-eval)
📣 **FLAN-T5** is also useful in text-to-audio generation. Find our work at [https://github.com/declare-lab/tango](https://github.com/declare-lab/tango) if you are interested.
Our [repository](https://github.com/declare-lab/flan-alpaca) contains code for extending the [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca)
synthetic instruction tuning to existing instruction-tuned models such as [Flan-T5](https://arxiv.org/abs/2210.11416).
We have a [live interactive demo](https://huggingface.co/spaces/joaogante/transformers_streaming) thanks to [Joao Gante](https://huggingface.co/joaogante)!
We are also benchmarking many instruction-tuned models at [declare-lab/flan-eval](https://github.com/declare-lab/flan-eval).
Our pretrained models are fully available on HuggingFace 🤗 :
| Model | Parameters | Instruction Data | Training GPUs |
|----------------------------------------------------------------------------------|------------|----------------------------------------------------------------------------------------------------------------------------------------------------|-----------------|
| [Flan-Alpaca-Base](https://huggingface.co/declare-lab/flan-alpaca-base) | 220M | [Flan](https://github.com/google-research/FLAN), [Alpaca](https://github.com/tatsu-lab/stanford_alpaca) | 1x A6000 |
| [Flan-Alpaca-Large](https://huggingface.co/declare-lab/flan-alpaca-large) | 770M | [Flan](https://github.com/google-research/FLAN), [Alpaca](https://github.com/tatsu-lab/stanford_alpaca) | 1x A6000 |
| [Flan-Alpaca-XL](https://huggingface.co/declare-lab/flan-alpaca-xl) | 3B | [Flan](https://github.com/google-research/FLAN), [Alpaca](https://github.com/tatsu-lab/stanford_alpaca) | 1x A6000 |
| [Flan-Alpaca-XXL](https://huggingface.co/declare-lab/flan-alpaca-xxl) | 11B | [Flan](https://github.com/google-research/FLAN), [Alpaca](https://github.com/tatsu-lab/stanford_alpaca) | 4x A6000 (FSDP) |
| [Flan-GPT4All-XL](https://huggingface.co/declare-lab/flan-gpt4all-xl) | 3B | [Flan](https://github.com/google-research/FLAN), [GPT4All](https://github.com/nomic-ai/gpt4all) | 1x A6000 |
| [Flan-ShareGPT-XL](https://huggingface.co/declare-lab/flan-sharegpt-xl) | 3B | [Flan](https://github.com/google-research/FLAN), [ShareGPT](https://github.com/domeccleston/sharegpt)/[Vicuna](https://github.com/lm-sys/FastChat) | 1x A6000 |
| [Flan-Alpaca-GPT4-XL*](https://huggingface.co/declare-lab/flan-alpaca-gpt4-xl) | 3B | [Flan](https://github.com/google-research/FLAN), [GPT4-Alpaca](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM) | 1x A6000 |
*recommended for better performance
### Why?
[Alpaca](https://crfm.stanford.edu/2023/03/13/alpaca.html) represents an exciting new direction
to approximate the performance of large language models (LLMs) like ChatGPT cheaply and easily.
Concretely, they leverage an LLM such as GPT-3 to generate instructions as synthetic training data.
The synthetic data which covers more than 50k tasks can then be used to finetune a smaller model.
However, the original implementation is less accessible due to licensing constraints of the
underlying [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) model.
Furthermore, users have noted [potential noise](https://github.com/tloen/alpaca-lora/issues/65) in the synthetic
dataset. Hence, it may be better to explore a fully accessible model that is already trained on high-quality (but
less diverse) instructions such as [Flan-T5](https://arxiv.org/abs/2210.11416).
### Usage
```python
from transformers import pipeline
prompt = "Write an email about an alpaca that likes flan"
model = pipeline(model="declare-lab/flan-alpaca-gpt4-xl")
model(prompt, max_length=128, do_sample=True)
# Dear AlpacaFriend,
# My name is Alpaca and I'm 10 years old.
# I'm excited to announce that I'm a big fan of flan!
# We like to eat it as a snack and I believe that it can help with our overall growth.
# I'd love to hear your feedback on this idea.
# Have a great day!
# Best, AL Paca
``` |
espnet/fastspeech2_conformer_with_hifigan | espnet | 2023-10-05T21:36:05Z | 518 | 0 | transformers | [
"transformers",
"pytorch",
"fastspeech2_conformer_with_hifigan",
"text-to-audio",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| text-to-audio | 2023-07-20T23:08:32Z | ---
license: apache-2.0
language:
- en
library_name: transformers
---
# FastSpeech2ConformerWithHifiGan
<!-- Provide a quick summary of what the model is/does. -->
This model combines [FastSpeech2Conformer](https://huggingface.co/espnet/fastspeech2_conformer) and [FastSpeech2ConformerHifiGan](https://huggingface.co/espnet/fastspeech2_conformer_hifigan) into a single model for simpler, more convenient usage.
FastSpeech2Conformer is a non-autoregressive text-to-speech (TTS) model that combines the strengths of FastSpeech2 and the conformer architecture to generate high-quality speech from text quickly and efficiently, and the HiFi-GAN vocoder is used to turn generated mel-spectrograms into speech waveforms.
## 🤗 Transformers Usage
You can run FastSpeech2Conformer locally with the 🤗 Transformers library.
1. First install the 🤗 [Transformers library](https://github.com/huggingface/transformers) and g2p-en:
```
pip install --upgrade pip
pip install --upgrade transformers g2p-en
```
2. Run inference via the Transformers modelling code with the model and hifigan combined
```python
from transformers import FastSpeech2ConformerTokenizer, FastSpeech2ConformerWithHifiGan
import soundfile as sf
tokenizer = FastSpeech2ConformerTokenizer.from_pretrained("espnet/fastspeech2_conformer")
inputs = tokenizer("Hello, my dog is cute.", return_tensors="pt")
input_ids = inputs["input_ids"]
model = FastSpeech2ConformerWithHifiGan.from_pretrained("espnet/fastspeech2_conformer_with_hifigan")
output_dict = model(input_ids, return_dict=True)
waveform = output_dict["waveform"]
sf.write("speech.wav", waveform.squeeze().detach().numpy(), samplerate=22050)
```
|
fatgong/5DhZcR9P42njFQsXdAaPWZ89xLwVr5FYnBUk9yog2cj16Rda_vgg | fatgong | 2024-03-27T22:48:30Z | 518 | 0 | keras | [
"keras",
"region:us"
]
| null | 2024-03-09T14:10:53Z | Entry not found |
lmstudio-community/wavecoder-ultra-6.7b-GGUF | lmstudio-community | 2024-04-16T16:56:24Z | 518 | 11 | transformers | [
"transformers",
"gguf",
"code",
"text-generation",
"dataset:humaneval",
"arxiv:2312.14187",
"license:other",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-04-16T16:26:54Z | ---
license: other
library_name: transformers
datasets:
- humaneval
license_name: deepseek
pipeline_tag: text-generation
tags:
- code
metrics:
- code_eval
quantized_by: bartowski
lm_studio:
param_count: 6.7b
use_case: coding
release_date: 15-04-2024
model_creator: microsoft
prompt_template: alpaca
system_prompt: Below is an instruction that describes a task. Write a response that appropriately completes the request.
base_model: DeepseekCoder
original_repo: microsoft/wavecoder-ultra-6.7b
---
## 💫 Community Model> wavecoder-ultra-6.7b by Microsoft
*👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator:** [Microsoft](https://huggingface.co/microsoft)<br>
**Original model**: [wavecoder-ultra-6.7b](https://huggingface.co/microsoft/wavecoder-ultra-6.7b)<br>
**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b2675](https://github.com/ggerganov/llama.cpp/releases/tag/b2675)<br>
## Model Summary:
WaveCoder ultra is a coding model created with 'Widespread And Versatile Enhanced' instruction tuning. It has exceptional generalization ability across different code-related tasks and high efficiency in generation.<br>
This model should be used exclusively for coding, and will follow instructions for code generation.
## Prompt Template:
Choose the `Alpaca` preset in your LM Studio.
Under the hood, the model will see a prompt that's formatted like so:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction: {prompt}
### Response:
```
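If you are driving the GGUF files from your own code instead of the LM Studio preset, a minimal sketch of assembling that prompt string could look like this (the helper name and example instruction are just for illustration):
```python
# Build the Alpaca-style prompt layout shown above.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "### Instruction: {prompt}\n"
    "### Response:"
)

def build_prompt(instruction: str) -> str:
    return ALPACA_TEMPLATE.format(prompt=instruction)

print(build_prompt("Write a Python function that reverses a string."))
```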
## Use case and examples
WaveCoder ultra is fine-tuned for code-related instruction-following tasks, including code generation, summarization, repair, and translation.
## Code Generation

## Code Summarization

## Code Repair

## Code Translation

## Technical Details
The WaveCoder series of models is the result of 'Widespread And Versatile Enhanced' (WAVE) instruction tuning with a highly refined dataset.
Their 'CodeOcean' consists of 20,000 instruction instances across the 4 code-related tasks (generation, summarization, repair, translation) with instructions generated by GPT-3.5-turbo.
To create this dataset, the team used existing raw code from GitHub CodeSearchNet, filtering for quality and diversity, then used a 'novel LLM-based Generator-Discriminator Framework' which involves generating supervised instruction data from the unsupervised open source code.
For further details and benchmarks, check out their arXiv paper [here](https://arxiv.org/abs/2312.14187)
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
🙏 Special thanks to [Kalomaze](https://github.com/kalomaze) for his dataset (linked [here](https://github.com/ggerganov/llama.cpp/discussions/5263)) that was used for calculating the imatrix for these quants, which improves the overall quality!
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
|
cgato/L3-TheSpice-8b-v0.1.3 | cgato | 2024-04-19T01:17:40Z | 518 | 18 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-18T22:03:26Z | ---
license: cc-by-nc-4.0
---
It's kinda cooked (overtrained), but here you go. Trained for 3 epochs.
The latest TheSpice, dipped in Mama Liz's LimaRP Oil.
I've focused on making the model more flexible and on providing a more unique experience.
I'm still working on cleaning up my dataset, but I've shrunk it down a lot to focus on a "less is more" approach.
This is ultimately a return to the way I used to train Thespis, with more of a focus on a small, hand-edited dataset.
## Datasets Used
* Capybara
* Claude Multiround 30k
* Augmental
* ToxicQA
* Yahoo Answers
* Airoboros 3.1
* LimaRP
## Features (Examples from 0.1.1 because I'm too lazy to take new screenshots. It's tested, though.)
Narration
If you request information on objects or characters in the scene, the model will narrate it to you, most of the time without moving the story forward.
# You can look at almost anything, as long as you end it with "What do I see?"

# You can also request to know what a character is thinking or planning.

# You can ask for a quick summary on the character as well.

# Before continuing the conversation as normal.

## Prompt Format: Chat ( The default Ooba template and Silly Tavern Template )

If you're using Ooba in verbose mode as a server, you can check whether your console is logging something that looks like this.

```
{System Prompt}
Username: {Input}
BotName: {Response}
Username: {Input}
BotName: {Response}
```
## Presets
All screenshots above were taken with the below SillyTavern Preset.
## Recommended Silly Tavern Preset -> (Temp: 1.25, MinP: 0.1, RepPen: 1.05)
This is a roughly equivalent Kobold Horde Preset.
## Recommended Kobold Horde Preset -> MinP
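If you are running the model with 🤗 Transformers instead of SillyTavern or the Horde, a rough sketch of applying the same sampler values is below; the prompt content is a placeholder, and `min_p` needs a recent Transformers release:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cgato/L3-TheSpice-8b-v0.1.3")
model = AutoModelForCausalLM.from_pretrained(
    "cgato/L3-TheSpice-8b-v0.1.3", torch_dtype=torch.bfloat16, device_map="auto"
)

# Same layout as the chat format above; the system prompt and names are placeholders.
prompt = "You are Sage, a helpful roleplay partner.\nUser: Hi there, how are you today?\nSage:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(
    **inputs,
    do_sample=True,
    temperature=1.25,         # Temp
    min_p=0.1,                # MinP (requires a recent Transformers version)
    repetition_penalty=1.05,  # RepPen
    max_new_tokens=256,
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```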
# Disclaimer
Please prompt responsibly and take anything outputted by any Language Model with a huge grain of salt. Thanks! |
ResplendentAI/Kei_Llama3_8B | ResplendentAI | 2024-04-23T20:46:25Z | 518 | 4 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"base_model:jeiku/Chaos_RP_l3_8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-23T03:51:11Z | ---
base_model:
- jeiku/Chaos_RP_l3_8B
- ResplendentAI/BlueMoon_Llama3
- jeiku/Chaos_RP_l3_8B
- ResplendentAI/Luna_Llama3
- jeiku/Chaos_RP_l3_8B
- ResplendentAI/Aura_Llama3
- Undi95/Llama-3-Unholy-8B
library_name: transformers
license: apache-2.0
language:
- en
---
# Kei

The key to your heart rests with Kei, a sophisticated and intriguing AI creation who will hold your hand until the end of all things.
Kei is uncensored and tuned for intimate moments with prose and poetry to entertain and enthrall. |
RichardErkhov/microsoft_-_Phi-3-mini-128k-instruct-gguf | RichardErkhov | 2024-05-02T13:50:30Z | 518 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-02T12:00:57Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Phi-3-mini-128k-instruct - GGUF
- Model creator: https://huggingface.co/microsoft/
- Original model: https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Phi-3-mini-128k-instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instruct.Q2_K.gguf) | Q2_K | 1.32GB |
| [Phi-3-mini-128k-instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instruct.IQ3_XS.gguf) | IQ3_XS | 1.51GB |
| [Phi-3-mini-128k-instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instruct.IQ3_S.gguf) | IQ3_S | 1.57GB |
| [Phi-3-mini-128k-instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instruct.Q3_K_S.gguf) | Q3_K_S | 1.57GB |
| [Phi-3-mini-128k-instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instruct.IQ3_M.gguf) | IQ3_M | 1.73GB |
| [Phi-3-mini-128k-instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instruct.Q3_K.gguf) | Q3_K | 1.82GB |
| [Phi-3-mini-128k-instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instruct.Q3_K_M.gguf) | Q3_K_M | 1.82GB |
| [Phi-3-mini-128k-instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instruct.Q3_K_L.gguf) | Q3_K_L | 1.94GB |
| [Phi-3-mini-128k-instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instruct.IQ4_XS.gguf) | IQ4_XS | 1.93GB |
| [Phi-3-mini-128k-instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instruct.Q4_0.gguf) | Q4_0 | 2.03GB |
| [Phi-3-mini-128k-instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instruct.IQ4_NL.gguf) | IQ4_NL | 2.04GB |
| [Phi-3-mini-128k-instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instruct.Q4_K_S.gguf) | Q4_K_S | 2.04GB |
| [Phi-3-mini-128k-instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instruct.Q4_K.gguf) | Q4_K | 2.23GB |
| [Phi-3-mini-128k-instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instruct.Q4_K_M.gguf) | Q4_K_M | 2.23GB |
| [Phi-3-mini-128k-instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instruct.Q4_1.gguf) | Q4_1 | 2.24GB |
| [Phi-3-mini-128k-instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instruct.Q5_0.gguf) | Q5_0 | 2.46GB |
| [Phi-3-mini-128k-instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instruct.Q5_K_S.gguf) | Q5_K_S | 2.46GB |
| [Phi-3-mini-128k-instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instruct.Q5_K.gguf) | Q5_K | 2.62GB |
| [Phi-3-mini-128k-instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instruct.Q5_K_M.gguf) | Q5_K_M | 2.62GB |
| [Phi-3-mini-128k-instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instruct.Q5_1.gguf) | Q5_1 | 2.68GB |
| [Phi-3-mini-128k-instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instruct.Q6_K.gguf) | Q6_K | 2.92GB |
Original model description:
---
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- code
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
---
## Model Summary
The Phi-3-Mini-128K-Instruct is a 3.8 billion-parameter, lightweight, state-of-the-art open model trained using the Phi-3 datasets.
This dataset includes both synthetic data and filtered publicly available website data, with an emphasis on high-quality and reasoning-dense properties.
The model belongs to the Phi-3 family, with the Mini version in two variants, [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct), which is the context length (in tokens) that it can support.
After initial training, the model underwent a post-training process that involved supervised fine-tuning and direct preference optimization to enhance its ability to follow instructions and adhere to safety measures.
When evaluated against benchmarks that test common sense, language understanding, mathematics, coding, long-term context, and logical reasoning, the Phi-3 Mini-128K-Instruct demonstrated robust and state-of-the-art performance among models with fewer than 13 billion parameters.
Resources and Technical Documentation:
+ [Phi-3 Microsoft Blog](https://aka.ms/phi3blog-april)
+ [Phi-3 Technical Report](https://aka.ms/phi3-tech-report)
+ [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai)
+ Phi-3 ONNX: [128K](https://aka.ms/Phi3-mini-128k-instruct-onnx)
## Intended Uses
**Primary use cases**
The model is intended for commercial and research use in English. The model provides uses for applications which require:
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.
**Use case considerations**
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
## How to Use
Phi-3 Mini-128K-Instruct has been integrated in the development version (4.41.0.dev0) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
* Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source.
The current `transformers` version can be verified with: `pip list | grep transformers`.
### Tokenizer
Phi-3 Mini-128K-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size.
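For example, a minimal sketch of reserving a couple of custom markers within that budget (the token names here are hypothetical):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-128k-instruct")
print(len(tokenizer))  # tokens currently defined, below the 32064 ceiling

# Hypothetical domain-specific markers for downstream fine-tuning.
tokenizer.add_special_tokens({"additional_special_tokens": ["<|db_schema|>", "<|sql|>"]})

# The embedding matrix already has 32064 rows, so no resize is needed
# as long as the extended vocabulary stays within that limit.
assert len(tokenizer) <= 32064
```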
### Chat Format
Given the nature of the training data, the Phi-3 Mini-128K-Instruct model is best suited for prompts using the chat format as follows.
You can provide the prompt as a question with a generic template as follow:
```markdown
<|user|>\nQuestion<|end|>\n<|assistant|>
```
For example:
```markdown
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
where the model generates the text after `<|assistant|>`. In the case of a few-shot prompt, the prompt can be formatted as follows:
```markdown
<|user|>
I am going to Paris, what should I see?<|end|>
<|assistant|>
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|>
<|user|>
What is so great about #1?<|end|>
<|assistant|>
```
### Sample inference code
This code snippets show how to get quickly started with running the model on a GPU:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
torch.random.manual_seed(0)
model = AutoModelForCausalLM.from_pretrained(
"microsoft/Phi-3-mini-128k-instruct",
device_map="cuda",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-128k-instruct")
messages = [
{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
{"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
{"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
generation_args = {
"max_new_tokens": 500,
"return_full_text": False,
"temperature": 0.0,
"do_sample": False,
}
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
*Some applications/frameworks might not include a BOS token (`<s>`) at the start of the conversation. Please ensure that it is included since it provides more reliable results.*
## Responsible AI Considerations
Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:
+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
## Training
### Model
* Architecture: Phi-3 Mini-128K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines.
* Inputs: Text. It is best suited for prompts using chat format.
* Context length: 128K tokens
* GPUs: 512 H100-80G
* Training time: 7 days
* Training data: 3.3T tokens
* Outputs: Generated text in response to the input
* Dates: Our models were trained between February and April 2024
* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.
### Datasets
Our training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of
1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.
### Fine-tuning
A basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/sample_finetune.py).
## Benchmarks
We report the results for Phi-3-Mini-128K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.
All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.
As is now standard, we use few-shot prompts to evaluate the models, at temperature 0.
The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.
More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.
The number of k-shot examples is listed per benchmark.
| | Phi-3-Mini-128K-In<br>3.8b | Phi-3-Small<br>7b (preview) | Phi-3-Medium<br>14b (preview) | Phi-2<br>2.7b | Mistral<br>7b | Gemma<br>7b | Llama-3-In<br>8b | Mixtral<br>8x7b | GPT-3.5<br>version 1106 |
|---|---|---|---|---|---|---|---|---|---|
| MMLU <br>5-Shot | 68.1 | 75.3 | 78.2 | 56.3 | 61.7 | 63.6 | 66.5 | 68.4 | 71.4 |
| HellaSwag <br> 5-Shot | 74.5 | 78.7 | 83.2 | 53.6 | 58.5 | 49.8 | 71.1 | 70.4 | 78.8 |
| ANLI <br> 7-Shot | 52.8 | 55.0 | 58.7 | 42.5 | 47.1 | 48.7 | 57.3 | 55.2 | 58.1 |
| GSM-8K <br> 0-Shot; CoT | 83.6 | 86.4 | 90.8 | 61.1 | 46.4 | 59.8 | 77.4 | 64.7 | 78.1 |
| MedQA <br> 2-Shot | 55.3 | 58.2 | 69.8 | 40.9 | 49.6 | 50.0 | 60.5 | 62.2 | 63.4 |
| AGIEval <br> 0-Shot | 36.9 | 45.0 | 49.7 | 29.8 | 35.1 | 42.1 | 42.0 | 45.2 | 48.4 |
| TriviaQA <br> 5-Shot | 57.1 | 59.1 | 73.3 | 45.2 | 72.3 | 75.2 | 67.7 | 82.2 | 85.8 |
| Arc-C <br> 10-Shot | 84.0 | 90.7 | 91.9 | 75.9 | 78.6 | 78.3 | 82.8 | 87.3 | 87.4 |
| Arc-E <br> 10-Shot | 95.2 | 97.1 | 98.0 | 88.5 | 90.6 | 91.4 | 93.4 | 95.6 | 96.3 |
| PIQA <br> 5-Shot | 83.6 | 87.8 | 88.2 | 60.2 | 77.7 | 78.1 | 75.7 | 86.0 | 86.6 |
| SociQA <br> 5-Shot | 76.1 | 79.0 | 79.4 | 68.3 | 74.6 | 65.5 | 73.9 | 75.9 | 68.3 |
| BigBench-Hard <br> 0-Shot | 71.5 | 75.0 | 82.5 | 59.4 | 57.3 | 59.6 | 51.5 | 69.7 | 68.32 |
| WinoGrande <br> 5-Shot | 72.5 | 82.5 | 81.2 | 54.7 | 54.2 | 55.6 | 65.0 | 62.0 | 68.8 |
| OpenBookQA <br> 10-Shot | 80.6 | 88.4 | 86.6 | 73.6 | 79.8 | 78.6 | 82.6 | 85.8 | 86.0 |
| BoolQ <br> 0-Shot | 78.7 | 82.9 | 86.5 | -- | 72.2 | 66.0 | 80.9 | 77.6 | 79.1 |
| CommonSenseQA <br> 10-Shot | 78.0 | 80.3 | 82.6 | 69.3 | 72.6 | 76.2 | 79 | 78.1 | 79.6 |
| TruthfulQA <br> 10-Shot | 63.2 | 68.1 | 74.8 | -- | 52.1 | 53.0 | 63.2 | 60.1 | 85.8 |
| HumanEval <br> 0-Shot | 57.9 | 59.1 | 54.7 | 47.0 | 28.0 | 34.1 | 60.4| 37.8 | 62.2 |
| MBPP <br> 3-Shot | 62.5 | 71.4 | 73.7 | 60.6 | 50.8 | 51.5 | 67.7 | 60.2 | 77.8 |
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation="eager" (see the sketch after this list)
* Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [128K](https://aka.ms/phi3-mini-128k-instruct-onnx)
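For example, the eager-attention fallback mentioned in the list above looks like this:
```python
from transformers import AutoModelForCausalLM

# Fall back to the eager attention implementation on V100-class or older GPUs
# that do not support flash attention.
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-128k-instruct",
    device_map="cuda",
    torch_dtype="auto",
    trust_remote_code=True,
    attn_implementation="eager",
)
```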
## Cross Platform Support
ONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-128K-Instruct ONNX model [here](https://aka.ms/phi3-mini-128k-instruct-onnx).
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs.
Along with DirectML, ONNX Runtime provides cross platform support for Phi-3 across a range of devices CPU, GPU, and mobile.
Here are some of the optimized configurations we have added:
1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN
## License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-128k/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
|
liminerity/tesla2-lex | liminerity | 2024-05-30T08:24:43Z | 518 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:gate369/tesla6x6passthrough",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-29T09:53:04Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: gate369/tesla6x6passthrough
---
# Uploaded model
- **Developed by:** liminerity
- **License:** apache-2.0
- **Finetuned from model :** gate369/tesla6x6passthrough
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf | RichardErkhov | 2024-05-30T10:14:05Z | 518 | 0 | null | [
"gguf",
"arxiv:2311.17487",
"region:us"
]
| null | 2024-05-30T07:28:46Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Taiwan-LLM-7B-v2.0.1-chat - GGUF
- Model creator: https://huggingface.co/yentinglin/
- Original model: https://huggingface.co/yentinglin/Taiwan-LLM-7B-v2.0.1-chat/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Taiwan-LLM-7B-v2.0.1-chat.Q2_K.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.Q2_K.gguf) | Q2_K | 2.36GB |
| [Taiwan-LLM-7B-v2.0.1-chat.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [Taiwan-LLM-7B-v2.0.1-chat.IQ3_S.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [Taiwan-LLM-7B-v2.0.1-chat.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [Taiwan-LLM-7B-v2.0.1-chat.IQ3_M.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [Taiwan-LLM-7B-v2.0.1-chat.Q3_K.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.Q3_K.gguf) | Q3_K | 3.07GB |
| [Taiwan-LLM-7B-v2.0.1-chat.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [Taiwan-LLM-7B-v2.0.1-chat.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [Taiwan-LLM-7B-v2.0.1-chat.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [Taiwan-LLM-7B-v2.0.1-chat.Q4_0.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.Q4_0.gguf) | Q4_0 | 3.56GB |
| [Taiwan-LLM-7B-v2.0.1-chat.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [Taiwan-LLM-7B-v2.0.1-chat.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [Taiwan-LLM-7B-v2.0.1-chat.Q4_K.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.Q4_K.gguf) | Q4_K | 3.8GB |
| [Taiwan-LLM-7B-v2.0.1-chat.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [Taiwan-LLM-7B-v2.0.1-chat.Q4_1.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.Q4_1.gguf) | Q4_1 | 3.95GB |
| [Taiwan-LLM-7B-v2.0.1-chat.Q5_0.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.Q5_0.gguf) | Q5_0 | 4.33GB |
| [Taiwan-LLM-7B-v2.0.1-chat.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [Taiwan-LLM-7B-v2.0.1-chat.Q5_K.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.Q5_K.gguf) | Q5_K | 4.45GB |
| [Taiwan-LLM-7B-v2.0.1-chat.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [Taiwan-LLM-7B-v2.0.1-chat.Q5_1.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.Q5_1.gguf) | Q5_1 | 4.72GB |
| [Taiwan-LLM-7B-v2.0.1-chat.Q6_K.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.Q6_K.gguf) | Q6_K | 5.15GB |
| [Taiwan-LLM-7B-v2.0.1-chat.Q8_0.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0.1-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0.1-chat.Q8_0.gguf) | Q8_0 | 6.67GB |
Original model description:
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
license: apache-2.0
language:
- zh
widget:
- text: >-
A chat between a curious user and an artificial intelligence assistant.
The assistant gives helpful, detailed, and polite answers to the user's
questions. USER: 你好,請問你可以幫我寫一封推薦信嗎? ASSISTANT:
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Acknowledge license to accept the repository.
extra_gated_prompt: Please contact the author for access.
extra_gated_button_content: Acknowledge license 同意以上內容
extra_gated_fields:
Name: text
Mail: text
Organization: text
Country: text
Any utilization of the Taiwan LLM repository mandates the explicit acknowledgment and attribution to the original author: checkbox
使用Taiwan LLM必須明確地承認和歸功於優必達株式會社 Ubitus 以及原始作者: checkbox
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/5df9c78eda6d0311fd3d541f/CmusIT5OlSXvFrbTJ7l-C.png" alt="Taiwan LLM Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# 🌟 Checkout [Taiwan-LLM Demo Chat-UI](http://www.twllm.com) 🌟
# Model Card for Taiwan LLM 7B v2.0.1 chat
Taiwan LLM is an advanced language model tailored for Traditional Chinese, focusing on the linguistic and cultural contexts of Taiwan.
Developed from a large base model, it's enriched with diverse Taiwanese textual sources and refined through Supervised Fine-Tuning.
This model excels in language understanding and generation, aligning closely with Taiwan's cultural nuances.
It demonstrates improved performance on various benchmarks like TC-Eval, showcasing its contextual comprehension and cultural relevance.
For detailed insights into Taiwan LLM's development and features, refer to our [technical report](https://github.com/MiuLab/Taiwan-LLaMa/blob/main/twllm_paper.pdf).
## Model description
- **Model type:** A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- **Language(s) (NLP):** Primarily Traditional Chinese (zh-tw)
- **Finetuned from model:** [yentinglin/Taiwan-LLM-7B-v2.0-base](https://huggingface.co/yentinglin/yentinglin/Taiwan-LLM-7B-v2.0-base)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/MiuLab/Taiwan-LLaMa
- **Demo:** https://twllm.com/
## Performance

## Intended uses
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
```python
# pip install transformers>=4.34
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="yentinglin/Taiwan-LLM-7B-v2.0.1-chat", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{
"role": "system",
"content": "你是一個人工智慧助理",
},
{"role": "user", "content": "東北季風如何影響台灣氣候?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
### Training hyperparameters



The following hyperparameters were used during training:
- learning_rate: 5e-05
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 5.0
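For reference, a minimal sketch of how these values map onto 🤗 `TrainingArguments`; the output directory and precision flag are assumptions, and the listed Adam betas/epsilon are already the defaults:
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="taiwan-llm-7b-sft",  # placeholder path
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=5,
    bf16=True,  # assumed mixed-precision setting for multi-GPU training
)
```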
## Citation
If you find Taiwan LLM is useful in your work, please cite it with:
```
@misc{lin2023taiwan,
title={Taiwan LLM: Bridging the Linguistic Divide with a Culturally Aligned Language Model},
author={Yen-Ting Lin and Yun-Nung Chen},
year={2023},
eprint={2311.17487},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
# Acknowledgement
Taiwan LLM v2 is conducted in collaboration with [Ubitus K.K.](http://ubitus.net). Ubitus provides valuable compute resources for the project.
|
RichardErkhov/athirdpath_-_Harmonia-20B-gguf | RichardErkhov | 2024-06-02T18:08:09Z | 518 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-06-02T08:27:52Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Harmonia-20B - GGUF
- Model creator: https://huggingface.co/athirdpath/
- Original model: https://huggingface.co/athirdpath/Harmonia-20B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Harmonia-20B.Q2_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Harmonia-20B-gguf/blob/main/Harmonia-20B.Q2_K.gguf) | Q2_K | 6.91GB |
| [Harmonia-20B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Harmonia-20B-gguf/blob/main/Harmonia-20B.IQ3_XS.gguf) | IQ3_XS | 7.63GB |
| [Harmonia-20B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Harmonia-20B-gguf/blob/main/Harmonia-20B.IQ3_S.gguf) | IQ3_S | 8.06GB |
| [Harmonia-20B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Harmonia-20B-gguf/blob/main/Harmonia-20B.Q3_K_S.gguf) | Q3_K_S | 8.06GB |
| [Harmonia-20B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Harmonia-20B-gguf/blob/main/Harmonia-20B.IQ3_M.gguf) | IQ3_M | 8.53GB |
| [Harmonia-20B.Q3_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Harmonia-20B-gguf/blob/main/Harmonia-20B.Q3_K.gguf) | Q3_K | 9.04GB |
| [Harmonia-20B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Harmonia-20B-gguf/blob/main/Harmonia-20B.Q3_K_M.gguf) | Q3_K_M | 9.04GB |
| [Harmonia-20B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Harmonia-20B-gguf/blob/main/Harmonia-20B.Q3_K_L.gguf) | Q3_K_L | 9.9GB |
| [Harmonia-20B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Harmonia-20B-gguf/blob/main/Harmonia-20B.IQ4_XS.gguf) | IQ4_XS | 10.01GB |
| [Harmonia-20B.Q4_0.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Harmonia-20B-gguf/blob/main/Harmonia-20B.Q4_0.gguf) | Q4_0 | 7.19GB |
| [Harmonia-20B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Harmonia-20B-gguf/blob/main/Harmonia-20B.IQ4_NL.gguf) | IQ4_NL | 10.57GB |
| [Harmonia-20B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Harmonia-20B-gguf/blob/main/Harmonia-20B.Q4_K_S.gguf) | Q4_K_S | 10.59GB |
| [Harmonia-20B.Q4_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Harmonia-20B-gguf/blob/main/Harmonia-20B.Q4_K.gguf) | Q4_K | 11.22GB |
| [Harmonia-20B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Harmonia-20B-gguf/blob/main/Harmonia-20B.Q4_K_M.gguf) | Q4_K_M | 11.22GB |
| [Harmonia-20B.Q4_1.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Harmonia-20B-gguf/blob/main/Harmonia-20B.Q4_1.gguf) | Q4_1 | 11.67GB |
| [Harmonia-20B.Q5_0.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Harmonia-20B-gguf/blob/main/Harmonia-20B.Q5_0.gguf) | Q5_0 | 12.83GB |
| [Harmonia-20B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Harmonia-20B-gguf/blob/main/Harmonia-20B.Q5_K_S.gguf) | Q5_K_S | 12.83GB |
| [Harmonia-20B.Q5_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Harmonia-20B-gguf/blob/main/Harmonia-20B.Q5_K.gguf) | Q5_K | 13.18GB |
| [Harmonia-20B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Harmonia-20B-gguf/blob/main/Harmonia-20B.Q5_K_M.gguf) | Q5_K_M | 13.18GB |
| [Harmonia-20B.Q5_1.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Harmonia-20B-gguf/blob/main/Harmonia-20B.Q5_1.gguf) | Q5_1 | 13.98GB |
| [Harmonia-20B.Q6_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Harmonia-20B-gguf/blob/main/Harmonia-20B.Q6_K.gguf) | Q6_K | 15.28GB |
| [Harmonia-20B.Q8_0.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Harmonia-20B-gguf/blob/main/Harmonia-20B.Q8_0.gguf) | Q8_0 | 19.79GB |
Original model description:
---
license: cc-by-nc-4.0
---
## Description
This is a unified 20B model crafted via a multi-step SLERP merge of eight 20B models. The aim was to develop a versatile "base model" for TaskArithmetic in this size class.
## Merging Process:

## Models:
- model: Undi95/Emerhyst-20B
- model: Undi95/MXLewd-L2-20B
- model: Undi95/Lewd-Sydney-20B
- model: athirdpath/Nethena-20b-Glued
- model: tavtav/Rose-20B
- model: Undi95/PsyMedRP-v1-20B
- model: NeverSleep/Noromaid-20b-v0.1.1
- model: Undi95/U-Amethyst-20B
## Concept:
The idea behind this process was to blend the unique attributes of each model while minimizing individual quirks. The resulting merge has also shown promising results as a standalone RP model, providing a combination of high-quality writing and situational problem-solving/awareness.
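For intuition, here is a minimal sketch of spherical linear interpolation (SLERP) on two flattened weight tensors; it illustrates the interpolation itself, not the exact multi-step recipe in the diagram:
```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two flattened weight tensors."""
    a_unit = a / (a.norm() + eps)
    b_unit = b / (b.norm() + eps)
    omega = torch.acos(torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0))
    sin_omega = torch.sin(omega)
    if sin_omega.abs() < eps:  # (nearly) parallel weights: fall back to linear interpolation
        return (1.0 - t) * a + t * b
    return (torch.sin((1.0 - t) * omega) / sin_omega) * a + (torch.sin(t * omega) / sin_omega) * b

merged = slerp(0.5, torch.randn(4096), torch.randn(4096))
```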
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
Thanks to Undi95 for pioneering the 20B recipe, and for most of the models involved.
|
AnasMohamed/video-llava | AnasMohamed | 2024-06-17T11:13:43Z | 518 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"gguf",
"clip",
"zero-shot-image-classification",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
]
| zero-shot-image-classification | 2024-06-14T04:09:06Z | ---
tags:
- generated_from_keras_callback
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
candidate_labels: playing music, playing sports
example_title: Cat & Dog
model-index:
- name: clip-vit-large-patch14-336
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# clip-vit-large-patch14-336
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.21.3
- TensorFlow 2.8.2
- Tokenizers 0.12.1
|
Ammartatox/qwenwared-Q4_K_M-GGUF | Ammartatox | 2024-06-29T00:05:08Z | 518 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Ammartatox/qwenwared",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-29T00:03:02Z | ---
base_model: Ammartatox/qwenwared
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- llama-cpp
- gguf-my-repo
---
# Ammartatox/qwenwared-Q4_K_M-GGUF
This model was converted to GGUF format from [`Ammartatox/qwenwared`](https://huggingface.co/Ammartatox/qwenwared) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Ammartatox/qwenwared) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Ammartatox/qwenwared-Q4_K_M-GGUF --hf-file qwenwared-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Ammartatox/qwenwared-Q4_K_M-GGUF --hf-file qwenwared-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Ammartatox/qwenwared-Q4_K_M-GGUF --hf-file qwenwared-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Ammartatox/qwenwared-Q4_K_M-GGUF --hf-file qwenwared-q4_k_m.gguf -c 2048
```
|
NikolayKozloff/Gemma-2-9B-It-SPPO-Iter3-IQ4_NL-GGUF | NikolayKozloff | 2024-06-30T16:39:19Z | 518 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:openbmb/UltraFeedback",
"base_model:UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3",
"license:apache-2.0",
"region:us"
]
| text-generation | 2024-06-30T16:38:55Z | ---
base_model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3
datasets:
- openbmb/UltraFeedback
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/Gemma-2-9B-It-SPPO-Iter3-IQ4_NL-GGUF
This model was converted to GGUF format from [`UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3`](https://huggingface.co/UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Gemma-2-9B-It-SPPO-Iter3-IQ4_NL-GGUF --hf-file gemma-2-9b-it-sppo-iter3-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/Gemma-2-9B-It-SPPO-Iter3-IQ4_NL-GGUF --hf-file gemma-2-9b-it-sppo-iter3-iq4_nl-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/Gemma-2-9B-It-SPPO-Iter3-IQ4_NL-GGUF --hf-file gemma-2-9b-it-sppo-iter3-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/Gemma-2-9B-It-SPPO-Iter3-IQ4_NL-GGUF --hf-file gemma-2-9b-it-sppo-iter3-iq4_nl-imat.gguf -c 2048
```
|
Helsinki-NLP/opus-mt-swc-en | Helsinki-NLP | 2023-08-16T12:06:34Z | 517 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"swc",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2022-03-02T23:29:04Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-swc-en
* source languages: swc
* target languages: en
* OPUS readme: [swc-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/swc-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/swc-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/swc-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/swc-en/opus-2020-01-16.eval.txt)
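The original card does not include an inference snippet; a minimal sketch with the 🤗 Transformers translation pipeline is shown below (the Congo Swahili example sentence is only an illustration):
```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-swc-en")
print(translator("Habari ya asubuhi, rafiki yangu.")[0]["translation_text"])
```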
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.swc.en | 41.1 | 0.569 |
|
thu-coai/blenderbot-400M-esconv | thu-coai | 2023-03-20T01:33:25Z | 517 | 4 | transformers | [
"transformers",
"pytorch",
"safetensors",
"blenderbot",
"text2text-generation",
"coai",
"conversational",
"en",
"arxiv:2106.01144",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language:
- en
tags:
- pytorch
- coai
pipeline_tag: conversational
---
[blenderbot-400M-distill](https://huggingface.co/facebook/blenderbot-400M-distill) fine-tuned on the [ESConv dataset](https://github.com/thu-coai/Emotional-Support-Conversation). Usage example:
```python
import torch
from transformers import AutoTokenizer
from transformers.models.blenderbot import BlenderbotTokenizer, BlenderbotForConditionalGeneration
def _norm(x):
return ' '.join(x.strip().split())
tokenizer = BlenderbotTokenizer.from_pretrained('thu-coai/blenderbot-400M-esconv')
model = BlenderbotForConditionalGeneration.from_pretrained('thu-coai/blenderbot-400M-esconv')
model.eval()
utterances = [
"I am having a lot of anxiety about quitting my current job. It is too stressful but pays well",
"What makes your job stressful for you?",
"I have to deal with many people in hard financial situations and it is upsetting",
"Do you help your clients to make it to a better financial situation?",
"I do, but often they are not going to get back to what they want. Many people are going to lose their home when safeguards are lifted",
]
input_sequence = ' '.join([' ' + e for e in utterances]) + tokenizer.eos_token # add space prefix and separate utterances with two spaces
input_ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(input_sequence))[-128:]
input_ids = torch.LongTensor([input_ids])
model_output = model.generate(input_ids, num_beams=1, do_sample=True, top_p=0.9, num_return_sequences=5, return_dict=False)
generation = tokenizer.batch_decode(model_output, skip_special_tokens=True)
generation = [_norm(e) for e in generation]
print(generation)
utterances.append(generation[0]) # for future loop
```
Please kindly cite the [original paper](https://arxiv.org/abs/2106.01144) if you use this model:
```bib
@inproceedings{liu-etal-2021-towards,
title={Towards Emotional Support Dialog Systems},
author={Liu, Siyang and
Zheng, Chujie and
Demasi, Orianna and
Sabour, Sahand and
Li, Yu and
Yu, Zhou and
Jiang, Yong and
Huang, Minlie},
booktitle={Proceedings of the 59th annual meeting of the Association for Computational Linguistics},
year={2021}
}
```
|
TheBloke/LLaMA-13b-GGUF | TheBloke | 2023-09-20T09:03:17Z | 517 | 2 | transformers | [
"transformers",
"gguf",
"llama",
"license:other",
"text-generation-inference",
"region:us"
]
| null | 2023-09-20T02:19:56Z | ---
base_model: https://ai.meta.com/blog/large-language-model-llama-meta-ai
inference: false
license: other
model_creator: Meta
model_name: LLaMA 13B
model_type: llama
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# LLaMA 13B - GGUF
- Model creator: [Meta](https://huggingface.co/none)
- Original model: [LLaMA 13B](https://ai.meta.com/blog/large-language-model-llama-meta-ai)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Meta's LLaMA 13b](https://ai.meta.com/blog/large-language-model-llama-meta-ai).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/LLaMA-13b-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/LLaMA-13b-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/LLaMA-13b-GGUF)
* [Meta's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/huggyllama/llama-13b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: None
```
{prompt}
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
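
As a rough sanity check, the bits-per-weight figures above can be reproduced from the block layouts just described. The following is a minimal back-of-the-envelope sketch, not llama.cpp code: it assumes each 256-weight super-block carries fp16 (16-bit) super-block scales, with "type-1" variants storing both a scale and a min per sub-block and "type-0" variants only a scale. Q2_K is omitted because its overhead accounting differs slightly.

```python
def effective_bpw(weight_bits, n_blocks, block_size, scale_bits, type1):
    # Total weights in one super-block (256 for the k-quants).
    weights = n_blocks * block_size
    # Bits for the quantised weights themselves.
    payload = weight_bits * weights
    # Per-block scales (and mins, for "type-1" variants).
    per_block = n_blocks * scale_bits * (2 if type1 else 1)
    # fp16 super-block scale d (and dmin, for "type-1" variants).
    super_scale = 16 * (2 if type1 else 1)
    return (payload + per_block + super_scale) / weights

print(effective_bpw(3, 16, 16, 6, type1=False))  # Q3_K -> 3.4375
print(effective_bpw(4, 8, 32, 6, type1=True))    # Q4_K -> 4.5
print(effective_bpw(5, 8, 32, 6, type1=True))    # Q5_K -> 5.5
print(effective_bpw(6, 16, 16, 8, type1=False))  # Q6_K -> 6.5625
```
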
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [llama-13b.Q2_K.gguf](https://huggingface.co/TheBloke/LLaMA-13b-GGUF/blob/main/llama-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/LLaMA-13b-GGUF/blob/main/llama-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [llama-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/LLaMA-13b-GGUF/blob/main/llama-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [llama-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/LLaMA-13b-GGUF/blob/main/llama-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [llama-13b.Q4_0.gguf](https://huggingface.co/TheBloke/LLaMA-13b-GGUF/blob/main/llama-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/LLaMA-13b-GGUF/blob/main/llama-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [llama-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/LLaMA-13b-GGUF/blob/main/llama-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [llama-13b.Q5_0.gguf](https://huggingface.co/TheBloke/LLaMA-13b-GGUF/blob/main/llama-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/LLaMA-13b-GGUF/blob/main/llama-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [llama-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/LLaMA-13b-GGUF/blob/main/llama-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [llama-13b.Q6_K.gguf](https://huggingface.co/TheBloke/LLaMA-13b-GGUF/blob/main/llama-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [llama-13b.Q8_0.gguf](https://huggingface.co/TheBloke/LLaMA-13b-GGUF/blob/main/llama-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/LLaMA-13b-GGUF and below it, a specific filename to download, such as: llama-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/LLaMA-13b-GGUF llama-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
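
If you prefer to script the download from Python rather than the shell, the same `huggingface-hub` library exposes `hf_hub_download`. A minimal sketch equivalent to the command above (the filename is just an example, pick whichever quant you need):

```python
from huggingface_hub import hf_hub_download

# Fetch a single quant file into the current directory (mirrors the CLI flags above).
hf_hub_download(
    repo_id="TheBloke/LLaMA-13b-GGUF",
    filename="llama-13b.Q4_K_M.gguf",
    local_dir=".",
    local_dir_use_symlinks=False,
)
```
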
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/LLaMA-13b-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/LLaMA-13b-GGUF llama-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m llama-13b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/LLaMA-13b-GGUF", model_file="llama-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
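
If you would rather use llama-cpp-python directly, a roughly equivalent sketch is below. The parameter values (`n_ctx`, `n_gpu_layers`, sampling settings) are illustrative and should be adjusted to your hardware:

```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU; use 0 for CPU-only.
llm = Llama(
    model_path="./llama-13b.Q4_K_M.gguf",
    n_ctx=2048,
    n_gpu_layers=32,
)

output = llm("AI is going to", max_tokens=64, temperature=0.7)
print(output["choices"][0]["text"])
```
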
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
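
As a quick orientation before following those guides, a minimal LangChain + llama-cpp-python sketch might look like the following. The import path and parameter values are assumptions that depend on your installed LangChain version (older releases use `langchain.llms`, newer ones `langchain_community.llms`):

```python
from langchain.llms import LlamaCpp  # or: from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./llama-13b.Q4_K_M.gguf",
    n_ctx=2048,
    n_gpu_layers=32,   # illustrative; set to 0 for CPU-only
    temperature=0.7,
)

print(llm("AI is going to"))  # newer LangChain versions: llm.invoke("AI is going to")
```
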
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Meta's LLaMA 13b
This contains the weights for the LLaMA-13b model. This model is under a non-commercial license (see the LICENSE file).
You should only use this repository if you have been granted access to the model by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform?usp=send_form) but either lost your copy of the weights or ran into trouble converting them to the Transformers format.
<!-- original-model-card end -->