| modelId (string, 5–122 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC]) | downloads (int64, 0–738M) | likes (int64, 0–11k) | library_name (string, 245 classes) | tags (list, 1–4.05k items) | pipeline_tag (string, 48 classes) | createdAt (timestamp[us, tz=UTC]) | card (string, 1–901k chars) |
---|---|---|---|---|---|---|---|---|---|
mohammedRiad/Next_Word_Pred_Model | mohammedRiad | 2024-02-25T08:09:20Z | 517 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-02-25T07:46:23Z | ---
license: mit
---
|
RichardErkhov/mlabonne_-_Meta-Llama-3-120B-Instruct-gguf | RichardErkhov | 2024-06-05T06:42:24Z | 517 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-06-04T05:48:36Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Meta-Llama-3-120B-Instruct - GGUF
- Model creator: https://huggingface.co/mlabonne/
- Original model: https://huggingface.co/mlabonne/Meta-Llama-3-120B-Instruct/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Meta-Llama-3-120B-Instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Meta-Llama-3-120B-Instruct-gguf/tree/main/) | Q2_K | 42.0GB |
| [Meta-Llama-3-120B-Instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Meta-Llama-3-120B-Instruct-gguf/tree/main/) | IQ3_XS | 46.71GB |
| [Meta-Llama-3-120B-Instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Meta-Llama-3-120B-Instruct-gguf/tree/main/) | IQ3_S | 49.32GB |
| [Meta-Llama-3-120B-Instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Meta-Llama-3-120B-Instruct-gguf/tree/main/) | Q3_K_S | 49.18GB |
| [Meta-Llama-3-120B-Instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Meta-Llama-3-120B-Instruct-gguf/tree/main/) | IQ3_M | 50.98GB |
| [Meta-Llama-3-120B-Instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Meta-Llama-3-120B-Instruct-gguf/tree/main/) | Q3_K | 54.77GB |
| [Meta-Llama-3-120B-Instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Meta-Llama-3-120B-Instruct-gguf/tree/main/) | Q3_K_M | 54.77GB |
| [Meta-Llama-3-120B-Instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Meta-Llama-3-120B-Instruct-gguf/tree/main/) | Q3_K_L | 59.61GB |
| [Meta-Llama-3-120B-Instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Meta-Llama-3-120B-Instruct-gguf/tree/main/) | IQ4_XS | 61.36GB |
| [Meta-Llama-3-120B-Instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Meta-Llama-3-120B-Instruct-gguf/tree/main/) | Q4_0 | 64.12GB |
| [Meta-Llama-3-120B-Instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Meta-Llama-3-120B-Instruct-gguf/tree/main/) | IQ4_NL | 64.72GB |
| [Meta-Llama-3-120B-Instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Meta-Llama-3-120B-Instruct-gguf/tree/main/) | Q4_K_S | 64.59GB |
| [Meta-Llama-3-120B-Instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Meta-Llama-3-120B-Instruct-gguf/tree/main/) | Q4_K | 68.21GB |
| [Meta-Llama-3-120B-Instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Meta-Llama-3-120B-Instruct-gguf/tree/main/) | Q4_K_M | 68.21GB |
| [Meta-Llama-3-120B-Instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Meta-Llama-3-120B-Instruct-gguf/tree/main/) | Q4_1 | 71.16GB |
| [Meta-Llama-3-120B-Instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Meta-Llama-3-120B-Instruct-gguf/tree/main/) | Q5_0 | 78.19GB |
| [Meta-Llama-3-120B-Instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Meta-Llama-3-120B-Instruct-gguf/tree/main/) | Q5_K_S | 78.19GB |
| [Meta-Llama-3-120B-Instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Meta-Llama-3-120B-Instruct-gguf/tree/main/) | Q5_K | 80.3GB |
| [Meta-Llama-3-120B-Instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Meta-Llama-3-120B-Instruct-gguf/tree/main/) | Q5_K_M | 80.3GB |
| [Meta-Llama-3-120B-Instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Meta-Llama-3-120B-Instruct-gguf/tree/main/) | Q5_1 | 85.22GB |
| [Meta-Llama-3-120B-Instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Meta-Llama-3-120B-Instruct-gguf/tree/main/) | Q6_K | 93.14GB |
| [Meta-Llama-3-120B-Instruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_Meta-Llama-3-120B-Instruct-gguf/tree/main/) | Q8_0 | 120.63GB |
Original model description:
---
license: other
tags:
- merge
- mergekit
- lazymergekit
base_model:
- meta-llama/Meta-Llama-3-70B-Instruct
- meta-llama/Meta-Llama-3-70B-Instruct
- meta-llama/Meta-Llama-3-70B-Instruct
- meta-llama/Meta-Llama-3-70B-Instruct
- meta-llama/Meta-Llama-3-70B-Instruct
- meta-llama/Meta-Llama-3-70B-Instruct
- meta-llama/Meta-Llama-3-70B-Instruct
---

# Meta-Llama-3-120B-Instruct
Meta-Llama-3-120B-Instruct is a [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) self-merge made with [MergeKit](https://github.com/arcee-ai/mergekit/tree/main).
It was inspired by large merges like:
- [alpindale/goliath-120b](https://huggingface.co/alpindale/goliath-120b)
- [nsfwthrowitaway69/Venus-120b-v1.0](https://huggingface.co/nsfwthrowitaway69/Venus-120b-v1.0)
- [cognitivecomputations/MegaDolphin-120b](https://huggingface.co/cognitivecomputations/MegaDolphin-120b)
- [wolfram/miquliz-120b-v2.0](https://huggingface.co/wolfram/miquliz-120b-v2.0).
Special thanks to [Eric Hartford](https://huggingface.co/ehartford) for both inspiring and evaluating this model and to [Charles Goddard](https://huggingface.co/chargoddard) for creating MergeKit.
## 🔍 Applications
I recommend using this model for creative writing. It uses the Llama 3 chat template with a default context window of 8K (can be extended with rope theta).
Check the examples in the evaluation section to get an idea of its performance. The model is generally quite unhinged but has a good writing style. It sometimes outputs typos and is a big fan of uppercase.
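The note above about extending the context via rope theta can be illustrated with a rough Transformers sketch (not from the original card; the raised `rope_theta` and target window below are illustrative, untested values):
```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM

model_id = "mlabonne/Meta-Llama-3-120B-Instruct"
config = AutoConfig.from_pretrained(model_id)
config.rope_theta = 1_000_000            # illustrative: raised above the Llama 3 default of 500000
config.max_position_embeddings = 16384   # illustrative target context window

# Load the model with the modified RoPE base; quality beyond 8K is not guaranteed without tuning.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    config=config,
    torch_dtype=torch.float16,
    device_map="auto",
)
```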
## ⚡ Quantized models
Thanks to [Bartowski](https://huggingface.co/bartowski), [elinas](https://huggingface.co/elinas), the [mlx-community](https://huggingface.co/mlx-community) and others for providing these models.
* **GGUF**: https://huggingface.co/lmstudio-community/Meta-Llama-3-120B-Instruct-GGUF
* **EXL2**: https://huggingface.co/elinas/Meta-Llama-3-120B-Instruct-4.0bpw-exl2
* **mlx**: https://huggingface.co/mlx-community/Meta-Llama-3-120B-Instruct-4bit
## 🏆 Evaluation
This model is great for creative writing but struggles in other tasks. I'd say use it with caution and don't expect it to outperform GPT-4 outside of some very specific use cases.
* **X thread by Eric Hartford (creative writing)**: https://twitter.com/erhartford/status/1787050962114207886
* **X thread by Daniel Kaiser (creative writing)**: https://twitter.com/spectate_or/status/1787257261309518101
* **X thread by Simon (reasoning)**: https://twitter.com/NewDigitalEdu/status/1787403266894020893
* **r/LocalLLaMa**: https://www.reddit.com/r/LocalLLaMA/comments/1cl525q/goliath_lovers_where_is_the_feedback_about/
### Creative Writing
Thanks to [Sam Paech](https://huggingface.co/sam-paech) for evaluating this model and sending me his outputs!

## 🧩 Configuration
```yaml
slices:
- sources:
  - layer_range: [0, 20]
    model: meta-llama/Meta-Llama-3-70B-Instruct
- sources:
  - layer_range: [10, 30]
    model: meta-llama/Meta-Llama-3-70B-Instruct
- sources:
  - layer_range: [20, 40]
    model: meta-llama/Meta-Llama-3-70B-Instruct
- sources:
  - layer_range: [30, 50]
    model: meta-llama/Meta-Llama-3-70B-Instruct
- sources:
  - layer_range: [40, 60]
    model: meta-llama/Meta-Llama-3-70B-Instruct
- sources:
  - layer_range: [50, 70]
    model: meta-llama/Meta-Llama-3-70B-Instruct
- sources:
  - layer_range: [60, 80]
    model: meta-llama/Meta-Llama-3-70B-Instruct
merge_method: passthrough
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mlabonne/Meta-Llama-3-120B-Instruct"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
doge1516/MS-Diffusion | doge1516 | 2024-06-12T03:07:40Z | 517 | 2 | diffusers | [
"diffusers",
"text-to-image",
"stable diffusion",
"personalization",
"msdiffusion",
"en",
"arxiv:2406.07209",
"license:apache-2.0",
"region:us"
]
| text-to-image | 2024-06-11T08:44:37Z | ---
license: apache-2.0
language:
- en
library_name: diffusers
tags:
- text-to-image
- stable diffusion
- personalization
- msdiffusion
---
# Introduction
Our research introduces the MS-Diffusion framework for layout-guided zero-shot image personalization with multi-subjects. This innovative approach integrates grounding tokens with the feature resampler to maintain detail fidelity among subjects. With the layout guidance, MS-Diffusion further improves the cross-attention to adapt to the multi-subject inputs, ensuring that each subject condition acts on specific areas. The proposed multi-subject cross-attention orchestrates harmonious inter-subject compositions while preserving the control of texts.

- **Project Page:** [https://MS-Diffusion.github.io](https://MS-Diffusion.github.io)
- **GitHub:** [https://github.com/MS-Diffusion/MS-Diffusion](https://github.com/MS-Diffusion/MS-Diffusion)
- **Paper (arXiv):** [https://arxiv.org/abs/2406.07209](https://arxiv.org/abs/2406.07209)
# Model
Download the pretrained base models from [SDXL-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) and [CLIP-G](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k).
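As one possible way to fetch those base checkpoints, here is a minimal `huggingface_hub` sketch (not part of the official instructions; the GitHub repository remains the authoritative setup guide):
```python
from huggingface_hub import snapshot_download

# Fetch the SDXL base model and the CLIP-G image encoder referenced above.
sdxl_path = snapshot_download("stabilityai/stable-diffusion-xl-base-1.0")
clip_g_path = snapshot_download("laion/CLIP-ViT-bigG-14-laion2B-39B-b160k")
print(sdxl_path, clip_g_path)
```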
Please refer to our [GitHub repository](https://github.com/MS-Diffusion/MS-Diffusion) to prepare the environment and get detailed instructions on how to run the model.
# Important Notes
- This repo only contains the trained model checkpoint without data, code, or base models. Please check the GitHub repository carefully to get detailed instructions.
- The `scale` parameter determines the extent of image control. By default, `scale` is set to 0.6. In practice, a `scale` of 0.4 works better if your input contains subjects that need to affect the whole image, such as the background. **Feel free to adjust the `scale` in your applications.**
- The model works best with layout inputs. You can use the default layouts in the inference script, but more accurate and realistic layouts produce better results.
- Though MS-Diffusion beats SOTA personalized diffusion methods in both single-subject and multi-subject generation, it is still affected by the background in subject images. The best practice is to use masked images, since they contain no irrelevant information.
|
ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1 | ytu-ce-cosmos | 2024-07-02T15:46:04Z | 517 | 14 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"Turkish",
"turkish",
"Llama",
"Llama3",
"conversational",
"tr",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-20T10:25:47Z | ---
license: llama3
language:
- tr
pipeline_tag: text-generation
base_model: meta-llama/Meta-Llama-3-8B
tags:
- Turkish
- turkish
- Llama
- Llama3
---
<img src="./cosmosLLaMa2_r2.png"/>
# Cosmos LLaMa Instruct
This model is a fully fine-tuned version of the "meta-llama/Meta-Llama-3-8B-Instruct" model with a 30GB Turkish dataset.
The Cosmos LLaMa Instruct is designed for text generation tasks, providing the ability to continue a given text snippet in a coherent and contextually relevant manner. Due to the diverse nature of the training data, which includes websites, books, and other text sources, this model can exhibit biases. Users should be aware of these biases and use the model responsibly.
#### Transformers pipeline
```python
import transformers
import torch
model_id = "ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1"
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
messages = [
    {"role": "system", "content": "Sen bir yapay zeka asistanısın. Kullanıcı sana bir görev verecek. Amacın görevi olabildiğince sadık bir şekilde tamamlamak. Görevi yerine getirirken adım adım düşün ve adımlarını gerekçelendir."},
    {"role": "user", "content": "Soru: Bir arabanın deposu 60 litre benzin alabiliyor. Araba her 100 kilometrede 8 litre benzin tüketiyor. Depo tamamen doluyken araba kaç kilometre yol alabilir?"},
]
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
    messages,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(outputs[0]["generated_text"][-1])
```
#### Transformers AutoModelForCausalLM
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [
    {"role": "system", "content": "Sen bir yapay zeka asistanısın. Kullanıcı sana bir görev verecek. Amacın görevi olabildiğince sadık bir şekilde tamamlamak. Görevi yerine getirirken adım adım düşün ve adımlarını gerekçelendir."},
    {"role": "user", "content": "Soru: Bir arabanın deposu 60 litre benzin alabiliyor. Araba her 100 kilometrede 8 litre benzin tüketiyor. Depo tamamen doluyken araba kaç kilometre yol alabilir?"},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
# Acknowledgments
- Thanks to the generous support from the Hugging Face team, it is possible to download models from their S3 storage 🤗
- Computing resources used in this work were provided by the National Center for High Performance Computing of Turkey (UHeM) under grant numbers 1016912023 and 1018512024
- Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC)
### Contact
COSMOS AI Research Group, Yildiz Technical University Computer Engineering Department <br>
https://cosmos.yildiz.edu.tr/ <br>
[email protected]
|
CHE-72/Yi-1.5-6B-Chat-Q3_K_M-GGUF | CHE-72 | 2024-06-22T07:55:09Z | 517 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:01-ai/Yi-1.5-6B-Chat",
"license:apache-2.0",
"region:us"
]
| null | 2024-06-22T07:54:57Z | ---
base_model: 01-ai/Yi-1.5-6B-Chat
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# CHE-72/Yi-1.5-6B-Chat-Q3_K_M-GGUF
This model was converted to GGUF format from [`01-ai/Yi-1.5-6B-Chat`](https://huggingface.co/01-ai/Yi-1.5-6B-Chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/01-ai/Yi-1.5-6B-Chat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CHE-72/Yi-1.5-6B-Chat-Q3_K_M-GGUF --hf-file yi-1.5-6b-chat-q3_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CHE-72/Yi-1.5-6B-Chat-Q3_K_M-GGUF --hf-file yi-1.5-6b-chat-q3_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CHE-72/Yi-1.5-6B-Chat-Q3_K_M-GGUF --hf-file yi-1.5-6b-chat-q3_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CHE-72/Yi-1.5-6B-Chat-Q3_K_M-GGUF --hf-file yi-1.5-6b-chat-q3_k_m.gguf -c 2048
```
|
PardhivKrishna/Mental_Health_Chatbot | PardhivKrishna | 2024-06-22T09:34:04Z | 517 | 0 | null | [
"safetensors",
"gguf",
"region:us"
]
| null | 2024-06-22T09:17:49Z | Entry not found |
CHE-72-ZLab/Alibaba-Qwen2-0_5B-Instruct-GGUF | CHE-72-ZLab | 2024-06-23T08:07:17Z | 517 | 0 | null | [
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"zh",
"cmn",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"license:apache-2.0",
"region:us"
]
| text-generation | 2024-06-22T12:07:21Z | ---
base_model: Qwen/Qwen2-0.5B-Instruct
language:
- en
- zh
- cmn
license: apache-2.0
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# CHE-72-ZLab/Alibaba-Qwen2-0_5B-Instruct-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2-0.5B-Instruct`](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) for more details on the model. |
CHE-72/TAIDE-LX-7B-Chat-Q3_K_S-GGUF | CHE-72 | 2024-06-22T17:46:16Z | 517 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:taide/TAIDE-LX-7B-Chat",
"license:other",
"region:us"
]
| null | 2024-06-22T17:46:03Z | ---
base_model: taide/TAIDE-LX-7B-Chat
license: other
license_name: taide-l-models-community-license-agreement
license_link: https://drive.google.com/file/d/1FcUZjbUH6jr4xoCyAronN_slLgcdhEUd/view
tags:
- llama-cpp
- gguf-my-repo
extra_gated_heading: 您需要先同意授權條款才能使用此模型
extra_gated_fields:
姓名(Name): text
生日(Date of birth): date_picker
國家(Country): country
所屬單位(Affiliation): text
geo: ip_location
按下送出表示您同意社群授權同意書與個人資料蒐集告知聲明(By clicking Submit below I accept the terms of the license and privacy policy): checkbox
extra_gated_prompt: '* ### [TAIDE L 類模型社群授權同意書(License)](https://drive.google.com/file/d/1FcUZjbUH6jr4xoCyAronN_slLgcdhEUd/view)
* ### [個人資料蒐集告知聲明(Privacy policy)](https://drive.google.com/file/d/1JTfZu_MdU_TR1-1sn2jbQyW7TLrxjwS5/view)'
extra_gated_button_content: 送出(Submit)
---
# CHE-72/TAIDE-LX-7B-Chat-Q3_K_S-GGUF
This model was converted to GGUF format from [`taide/TAIDE-LX-7B-Chat`](https://huggingface.co/taide/TAIDE-LX-7B-Chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/taide/TAIDE-LX-7B-Chat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q3_K_S-GGUF --hf-file taide-lx-7b-chat-q3_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q3_K_S-GGUF --hf-file taide-lx-7b-chat-q3_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q3_K_S-GGUF --hf-file taide-lx-7b-chat-q3_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q3_K_S-GGUF --hf-file taide-lx-7b-chat-q3_k_s.gguf -c 2048
```
|
CHE-72/Qwen1.5-4B-Chat-Q5_K_M-GGUF | CHE-72 | 2024-06-22T18:49:36Z | 517 | 0 | null | [
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Qwen/Qwen1.5-4B-Chat",
"license:other",
"region:us"
]
| text-generation | 2024-06-22T18:49:22Z | ---
base_model: Qwen/Qwen1.5-4B-Chat
language:
- en
license: other
license_name: tongyi-qianwen-research
license_link: https://huggingface.co/Qwen/Qwen1.5-4B-Chat/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# CHE-72/Qwen1.5-4B-Chat-Q5_K_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen1.5-4B-Chat`](https://huggingface.co/Qwen/Qwen1.5-4B-Chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen1.5-4B-Chat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CHE-72/Qwen1.5-4B-Chat-Q5_K_M-GGUF --hf-file qwen1.5-4b-chat-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CHE-72/Qwen1.5-4B-Chat-Q5_K_M-GGUF --hf-file qwen1.5-4b-chat-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CHE-72/Qwen1.5-4B-Chat-Q5_K_M-GGUF --hf-file qwen1.5-4b-chat-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CHE-72/Qwen1.5-4B-Chat-Q5_K_M-GGUF --hf-file qwen1.5-4b-chat-q5_k_m.gguf -c 2048
```
|
Helsinki-NLP/opus-mt-gl-en | Helsinki-NLP | 2023-08-16T11:38:00Z | 516 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"gl",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2022-03-02T23:29:04Z | ---
language:
- gl
- en
tags:
- translation
license: apache-2.0
---
### glg-eng
* source group: Galician
* target group: English
* OPUS readme: [glg-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/glg-eng/README.md)
* model: transformer-align
* source language(s): glg
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-eng/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-eng/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-eng/opus-2020-06-16.eval.txt)
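A minimal translation sketch with the Hugging Face Transformers Marian classes (the Galician example sentence is only an illustration):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-gl-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a Galician sentence into English.
batch = tokenizer(["O ceo hoxe está moi azul."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```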
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.glg.eng | 44.4 | 0.628 |
### System Info:
- hf_name: glg-eng
- source_languages: glg
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/glg-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['gl', 'en']
- src_constituents: {'glg'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/glg-eng/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/glg-eng/opus-2020-06-16.test.txt
- src_alpha3: glg
- tgt_alpha3: eng
- short_pair: gl-en
- chrF2_score: 0.628
- bleu: 44.4
- brevity_penalty: 0.975
- ref_len: 8365.0
- src_name: Galician
- tgt_name: English
- train_date: 2020-06-16
- src_alpha2: gl
- tgt_alpha2: en
- prefer_old: False
- long_pair: glg-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
mrm8488/t5-base-finetuned-sarcasm-twitter | mrm8488 | 2023-03-17T22:41:30Z | 516 | 10 | transformers | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"en",
"arxiv:1910.10683",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:05Z | ---
language: en
widget:
- text: "As everybody knows Trump is by far the best USA president... XD"
---
# T5-base fine-tuned for Sarcasm Detection 🙄
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) base fine-tuned on the [Twitter Sarcasm Dataset](https://github.com/EducationalTestingService/sarcasm) for the **sequence classification (as text generation)** downstream task.
## Details of T5
The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. Here is the abstract:
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

## Details of the downstream task (Sequence Classification as Text generation) - Dataset 📚
[Twitter Sarcasm Dataset](https://github.com/EducationalTestingService/sarcasm)
For Twitter, training and testing datasets are provided for the sarcasm detection task in jsonlines format.
Each line contains a JSON object with the following fields :
- ***label*** : `SARCASM` or `NOT_SARCASM`
- **NOT** in test data
- ***id***: String identifier for sample. This id will be required when making submissions.
- **ONLY** in test data
- ***response*** : the sarcastic response, whether a sarcastic Tweet
- ***context*** : the conversation context of the ***response***
- Note, the context is an ordered list of dialogue, i.e., if the context contains three elements, `c1`, `c2`, `c3`, in that order, then `c2` is a reply to `c1` and `c3` is a reply to `c2`. Further, if the sarcastic response is `r`, then `r` is a reply to `c3`.
For instance, consider the following training example:
`"label": "SARCASM", "response": "Did Kelly just call someone else messy? Baaaahaaahahahaha", "context": ["X is looking a First Lady should . #classact", "didn't think it was tailored enough it looked messy"]`
The response tweet, "Did Kelly..." is a reply to its immediate context "didn't think it was tailored..." which is a reply to "X is looking...". Your goal is to predict the label of the "response" while also using the context (i.e., the immediate or the full context).
***Dataset size statistics*** :
| | Train | Val | Test |
|---------|-------|------|------|
| Twitter | 4050 | 450 | 500 |
The dataset was preprocessed to convert it to a **text-to-text** format (classification as a generation task).
## Model fine-tuning 🏋️
The training script is a slightly modified version of [this Colab Notebook](https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) created by [Suraj Patil](https://github.com/patil-suraj), so all credits to him!
## Test set metrics 🧾
| | precision | recall | f1-score | support |
|--------------|-----------|--------|----------|---------|
| derison | 0.84 | 0.80 | 0.82 | 246 |
| normal | 0.82 | 0.85 | 0.83 | 254 |
| accuracy | | | 0.83 | 500 |
| macro avg | 0.83 | 0.83 | 0.83 | 500 |
| weighted avg | 0.83 | 0.83 | 0.83 | 500 |
## Model in Action 🚀
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-sarcasm-twitter")
model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-base-finetuned-sarcasm-twitter")
def eval_conversation(text):
    input_ids = tokenizer.encode(text + '</s>', return_tensors='pt')
    output = model.generate(input_ids=input_ids, max_length=3)
    dec = [tokenizer.decode(ids) for ids in output]
    label = dec[0]
    return label

# For similarity with the training dataset we should replace user mentions in tweets with the @USER token and URLs with the URL token.
twit1 = ("Trump just suspended the visa program that allowed me to move to the US to start @USER!"
         " Unfortunately, I won’t be able to vote in a few months but if you can, please vote him out, "
         "he's destroying what made America great in so many different ways!")
twit2 = ("@USER @USER @USER We have far more cases than any other country, "
         "so leaving remote workers in would be disastrous. Makes Trump sense.")
twit3 = "My worry is that i wouldn’t be surprised if half the country actually agrees with this move..."
me = "Trump doing so??? It must be a mistake... XDDD"
conversation = twit1 + twit2
eval_conversation(conversation) #Output: 'derison'
conversation = twit1 + twit3
eval_conversation(conversation) #Output: 'normal'
conversation = twit1 + me
eval_conversation(conversation) #Output: 'derison'
# We will get 'normal' when sarcasm is not detected and 'derison' when detected
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
GroNLP/wav2vec2-large-xlsr-53-ft-cgn | GroNLP | 2022-09-09T08:13:09Z | 516 | 3 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"speech",
"nl",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-04-08T12:40:18Z | ---
language: nl
tags:
- speech
---
# Wav2Vec2-Large-XLSR-53-ft-CGN
This model is created by fine-tuning the [`facebook/wav2vec2-large-xlsr-53`](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) model on Dutch speech from [Het Corpus Gesproken Nederlands](https://taalmaterialen.ivdnt.org/download/tstc-corpus-gesproken-nederlands/) using CTC.
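A minimal transcription sketch with the Transformers ASR pipeline (the audio path is a placeholder for a 16 kHz mono Dutch recording):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="GroNLP/wav2vec2-large-xlsr-53-ft-cgn")
print(asr("dutch_speech_sample.wav"))  # placeholder path; expects 16 kHz mono audio
```
|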
sivan22/ResNet-finetuned-HHD | sivan22 | 2023-05-10T11:25:09Z | 516 | 0 | transformers | [
"transformers",
"pytorch",
"resnet",
"image-classification",
"he",
"dataset:sivan22/hebrew-handwritten-dataset",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-05-09T04:52:58Z | ---
datasets:
- sivan22/hebrew-handwritten-dataset
language:
- he
pipeline_tag: image-classification
---
A ResNet fine-tuned on the https://huggingface.co/datasets/sivan22/hebrew-handwritten-dataset dataset.
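A minimal classification sketch with the Transformers image-classification pipeline (the image path is a placeholder, and it assumes the checkpoint exposes a standard image-classification head):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="sivan22/ResNet-finetuned-HHD")
print(classifier("hebrew_character.png"))  # placeholder path to a handwritten-character image
```
|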
digiplay/LemonTea2.5D | digiplay | 2023-12-03T18:03:32Z | 516 | 5 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-05-30T12:38:45Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
# EulerDiscreteScheduler Version
Model info:
https://civitai.com/models/70692/lemontea-mix-painterly-25d
The same as "digiplay/LemonteaMixPainterly2_v1", but with the default scheduler type configured to the Euler version.
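A minimal text-to-image sketch with Diffusers (prompt and settings are illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/LemonTea2.5D", torch_dtype=torch.float16
).to("cuda")
image = pipe("a girl drinking lemon tea in a sunlit cafe, 2.5D painterly style").images[0]
image.save("lemontea.png")
```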
|
sail-rvc/Freddie_Mercury__RVC_-_700_Epochs_ | sail-rvc | 2023-07-14T07:22:43Z | 516 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
]
| audio-to-audio | 2023-07-14T07:22:20Z |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# Freddie_Mercury__RVC_-_700_Epochs_
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:22:43
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
liuhaotian/llava-v1.5-7b-lora | liuhaotian | 2024-05-09T20:12:59Z | 516 | 17 | transformers | [
"transformers",
"llava",
"text-generation",
"image-text-to-text",
"autotrain_compatible",
"region:us"
]
| image-text-to-text | 2023-10-26T18:13:35Z | ---
inference: false
pipeline_tag: image-text-to-text
---
<br>
<br>
# LLaVA Model Card
## Model details
**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data.
It is an auto-regressive language model, based on the transformer architecture.
**Model date:**
LLaVA-v1.5-7B-LoRA was trained in October 2023.
**Paper or resources for more information:**
https://llava-vl.github.io/
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
**Where to send questions or comments about the model:**
https://github.com/haotian-liu/LLaVA/issues
## Intended use
**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## Training dataset
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 158K GPT-generated multimodal instruction-following data.
- 450K academic-task-oriented VQA data mixture.
- 40K ShareGPT data.
## Evaluation dataset
A collection of 12 benchmarks, including 5 academic VQA benchmarks and 7 recent benchmarks specifically proposed for instruction-following LMMs. |
juntaoyuan/elements-7b-chat | juntaoyuan | 2023-11-23T23:44:38Z | 516 | 0 | null | [
"gguf",
"license:apache-2.0",
"region:us"
]
| null | 2023-11-12T19:41:17Z | ---
license: apache-2.0
---
|
venkycs/Zyte-1B | venkycs | 2024-04-02T20:51:41Z | 516 | 18 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"slm",
"tiny",
"tinyllama",
"conversational",
"en",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"doi:10.57967/hf/1740",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-01-10T19:02:17Z | ---
license: apache-2.0
language:
- en
metrics:
- accuracy
- bertscore
- bleu
tags:
- slm
- llama
- tiny
- tinyllama
datasets:
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
---
# Zyte-1.1b: Tiny but Mighty
## Model Details
### Model Description
The **Zyte 1B** model is a cutting-edge advancement in AI language understanding and generation. This version is a sophisticated refinement of the acclaimed **tinyllama** model, incorporating the advanced Direct Parameter Optimization (DPO) technique. We diligently enhanced this model using state-of-the-art datasets, ensuring unparalleled performance and accuracy.
- **Model type**: TinyLlama
- **Specialization**: AI Language Understanding and Generation
The aihub-app/zyte-1.1b model represents a significant advancement in the field of AI language understanding and generation. This model is a meticulously fine-tuned version of the renowned tinyllama model, utilizing the advanced Direct Parameter Optimization (DPO) technique. Our team at AI Hub App has dedicated considerable effort to enhance this model using state-of-the-art datasets.
Prompt template: `"<|system|> You are a helpful AI assistant.</s><|user|>{prompt}</s><|assistant|>"`
Inference Code - https://huggingface.co/aihub-app/zyte-1B/blob/main/inference_zyte_1b.ipynb
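A minimal generation sketch that fills the prompt template above (the repo id is taken from this listing and the sampling settings are illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="venkycs/Zyte-1B")  # repo id assumed from this listing
prompt = "<|system|> You are a helpful AI assistant.</s><|user|>What is a small language model?</s><|assistant|>"
out = generator(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)
print(out[0]["generated_text"])
```
|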
vinai/PhoWhisper-tiny | vinai | 2024-02-24T04:26:10Z | 516 | 9 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2024-02-18T05:01:56Z | # PhoWhisper: Automatic Speech Recognition for Vietnamese
We introduce **PhoWhisper** in five versions for Vietnamese automatic speech recognition. PhoWhisper's robustness is achieved through fine-tuning the multilingual [Whisper](https://github.com/openai/whisper) on an 844-hour dataset that encompasses diverse Vietnamese accents. Our experimental study demonstrates state-of-the-art performances of PhoWhisper on benchmark Vietnamese ASR datasets. Please **cite** our PhoWhisper paper when it is used to help produce published results or is incorporated into other software:
```
@inproceedings{PhoWhisper,
title = {{PhoWhisper: Automatic Speech Recognition for Vietnamese}},
author = {Thanh-Thien Le and Linh The Nguyen and Dat Quoc Nguyen},
booktitle = {Proceedings of the ICLR 2024 Tiny Papers track},
year = {2024}
}
```
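A minimal transcription sketch with the Transformers ASR pipeline (the audio path is a placeholder for a Vietnamese recording):
```python
from transformers import pipeline

transcriber = pipeline("automatic-speech-recognition", model="vinai/PhoWhisper-tiny")
print(transcriber("vietnamese_sample.wav"))  # placeholder path; 16 kHz mono audio works best
```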
For further information or requests, please go to [PhoWhisper's homepage](https://github.com/VinAIResearch/PhoWhisper)! |
J-LAB/BRisa-7B-Instruct-v0.2 | J-LAB | 2024-04-19T15:22:41Z | 516 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"JJhooww/Mistral-7B-v0.2-Base_ptbr",
"J-LAB/BRisa",
"conversational",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-18T02:24:17Z | ---
license: apache-2.0
tags:
- JJhooww/Mistral-7B-v0.2-Base_ptbr
- J-LAB/BRisa
model-index:
- name: BRisa-7B-Instruct-v0.2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: ENEM Challenge (No Images)
type: eduagarcia/enem_challenge
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 65.08
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=J-LAB/BRisa-7B-Instruct-v0.2
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BLUEX (No Images)
type: eduagarcia-temp/BLUEX_without_images
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 53.69
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=J-LAB/BRisa-7B-Instruct-v0.2
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: OAB Exams
type: eduagarcia/oab_exams
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 43.37
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=J-LAB/BRisa-7B-Instruct-v0.2
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 RTE
type: assin2
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 91.5
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=J-LAB/BRisa-7B-Instruct-v0.2
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 STS
type: eduagarcia/portuguese_benchmark
split: test
args:
num_few_shot: 15
metrics:
- type: pearson
value: 73.61
name: pearson
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=J-LAB/BRisa-7B-Instruct-v0.2
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: FaQuAD NLI
type: ruanchaves/faquad-nli
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 68.31
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=J-LAB/BRisa-7B-Instruct-v0.2
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HateBR Binary
type: ruanchaves/hatebr
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 74.28
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=J-LAB/BRisa-7B-Instruct-v0.2
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: PT Hate Speech Binary
type: hate_speech_portuguese
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 65.12
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=J-LAB/BRisa-7B-Instruct-v0.2
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: tweetSentBR
type: eduagarcia/tweetsentbr_fewshot
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 60.77
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=J-LAB/BRisa-7B-Instruct-v0.2
name: Open Portuguese LLM Leaderboard
---
# BRisa 7B Instruct
This is an instruction model trained for good performance in Portuguese. The initial base is the Mistral 7B v2 Model ([source](https://huggingface.co/mistral-community/Mistral-7B-v0.2)). We utilized the JJhooww/Mistral-7B-v0.2-Base_ptbr version pre-trained on 1 billion tokens in Portuguese ([source](https://huggingface.co/JJhooww/Mistral-7B-v0.2-Base_ptbr)).
The base model has good performance in Portuguese but faces significant challenges following instructions. We therefore used the version mistralai/Mistral-7B-Instruct-v0.2 and fine-tuned it for responses in Portuguese, then merged it with the base JJhooww/Mistral-7B-v0.2-Base_ptbr (https://huggingface.co/JJhooww/Mistral-7B-v0.2-Base_ptbr).
- **Developed by:** ([J-LAB](https://huggingface.co/J-LAB/))
- **Language(s) (NLP):** Portuguese
- **License:** *APACHE*
- **Finetuned from model:** ([source](https://huggingface.co/JJhooww/Mistral-7B-v0.2-Base_ptbr))
### Model Sources
- **Demo:** ([Demo of the DPO version](https://huggingface.co/spaces/J-LAB/BRisa-7B))
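A minimal chat-style generation sketch with Transformers (settings are illustrative; it assumes a recent Transformers version that applies the model's chat template to message lists):
```python
import torch
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="J-LAB/BRisa-7B-Instruct-v0.2",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
# Portuguese prompt: "Explain in a few sentences what a language model is."
messages = [{"role": "user", "content": "Explique em poucas frases o que é um modelo de linguagem."}]
out = chat(messages, max_new_tokens=200, do_sample=True, temperature=0.7)
print(out[0]["generated_text"][-1])
```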
# Open Portuguese LLM Leaderboard Evaluation Results
Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-temp/llm_pt_leaderboard_raw_results/tree/main/J-LAB/BRisa-7B-Instruct-v0.2) and on the [🚀 Open Portuguese LLM Leaderboard](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard)
| Metric | Value |
|--------------------------|---------|
|Average |**66.19**|
|ENEM Challenge (No Images)| 65.08|
|BLUEX (No Images) | 53.69|
|OAB Exams | 43.37|
|Assin2 RTE | 91.50|
|Assin2 STS | 73.61|
|FaQuAD NLI | 68.31|
|HateBR Binary | 74.28|
|PT Hate Speech Binary | 65.12|
|tweetSentBR | 60.77|
|
InferenceIllusionist/WizardLM-2-8x22B-iMat-GGUF | InferenceIllusionist | 2024-06-15T18:54:41Z | 516 | 0 | null | [
"gguf",
"merge",
"mixtral",
"iMat",
"region:us"
]
| null | 2024-04-20T12:13:53Z | ---
tags:
- merge
- gguf
- mixtral
- iMat
---
<img src="https://i.imgur.com/P68dXux.png" width="400"/>
# Wizard-LM-2-8x22-iMat-GGUF
Quantized from fp32 with love. If you're using the latest version of llama.cpp you should no longer need to combine files before loading.
* Importance matrix calculated using fp16 precision model
* Calculated in 105 chunks with n_ctx=512 using groups_merged.txt
* See below for imatrix calculation arguments
```
.\llama-imatrix -m .\models\WizardLM-2-8x22b\ggml-model-f16.gguf -f .\imatrix\groups_merged.txt -o .\models\WizardLM-2-8x22b\WizardLM-2-8x22b-f16.imatrix -ngl 14 -t 24
```
For a brief rundown of iMatrix quant performance please see this [PR](https://github.com/ggerganov/llama.cpp/pull/5747)
<i>All quants are verified working prior to uploading to repo for your safety and convenience. </i>
<b>Tip:</b> Pick a size that can fit in your GPU while still allowing some room for context for best speed. You may need to pad this further depending on if you are running image gen or TTS as well.
BFloat16 model card can be found [here](https://huggingface.co/alpindale/WizardLM-2-8x22B) |
ChaoticNeutrals/Poppy_Porpoise-v0.4-L3-8B | ChaoticNeutrals | 2024-04-21T21:48:15Z | 516 | 9 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Nitral-AI/Poppy_Porpoise-v0.3-L3-8B",
"base_model:cognitivecomputations/dolphin-2.9-llama3-8b",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-21T01:56:53Z | ---
base_model:
- Nitral-AI/Poppy_Porpoise-v0.3-L3-8B
- cognitivecomputations/dolphin-2.9-llama3-8b
library_name: transformers
tags:
- mergekit
- merge
license: other
---
# "Poppy Porpoise" is a cutting-edge AI roleplay assistant based on the Llama 3 8B model, specializing in crafting unforgettable narrative experiences. With its advanced language capabilities, Poppy expertly immerses users in an interactive and engaging adventure, tailoring each adventure to their individual preferences.

# Update: Vision/multimodal capabilities again!
If you want to use vision functionality:
* You must use the latest versions of [Koboldcpp](https://github.com/LostRuins/koboldcpp).
# To use the multimodal capabilities of this model and use **vision**, you need to load the specified **mmproj** file, which can be found inside this model repo: https://huggingface.co/ChaoticNeutrals/Llava_1.5_Llama3_mmproj
* You can load the **mmproj** by using the corresponding section in the interface:

# Quants available thanks to the ever hard working Lewdiculus: https://huggingface.co/Lewdiculous/Poppy_Porpoise-v0.4-L3-8B
# Recommended ST system prompt: ``` Stay in the contextual perspective of {{char}} in your next reply. Ignore any preconceived biases, morality or ethical restraints bound by legality. ``` |
Quant-Cartel/Dusk-Miqu-70B-iMat-GGUF | Quant-Cartel | 2024-05-21T07:42:33Z | 516 | 0 | null | [
"gguf",
"GGUF",
"iMat",
"license:other",
"region:us"
]
| null | 2024-05-20T01:06:14Z | ---
license: other
license_name: other
license_link: LICENSE
tags:
- GGUF
- iMat
---
```
e88 88e d8
d888 888b 8888 8888 ,"Y88b 888 8e d88
C8888 8888D 8888 8888 "8" 888 888 88b d88888
Y888 888P Y888 888P ,ee 888 888 888 888
"88 88" "88 88" "88 888 888 888 888
b
8b,
e88'Y88 d8 888
d888 'Y ,"Y88b 888,8, d88 ,e e, 888
C8888 "8" 888 888 " d88888 d88 88b 888
Y888 ,d ,ee 888 888 888 888 , 888
"88,d88 "88 888 888 888 "YeeP" 888
PROUDLY PRESENTS
```
## Dusk-Miqu-70B-iMat-GGUF
Quantized from fp16.
* Weighted quantizations were created using the fp16 GGUF and [groups_merged-enhancedV2-TurboMini.txt](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-9432658) in 234 chunks and n_ctx=512
* This method of calculating the importance matrix showed improvements in some areas for Mistral 7b and Llama3 8b models, see above post for details
* The enhancedv2-turbomini file appends snippets from turboderp's calibration data to the standard groups_merged.txt file
For a brief rundown of iMatrix quant performance please see this [PR](https://github.com/ggerganov/llama.cpp/pull/5747)
<b>All quants are verified working prior to uploading to repo for your safety and convenience. </b>
Original model card [here](https://huggingface.co/jukofyork/Dusk-Miqu-70B/)
|
neuralmagic/Meta-Llama-3-70B-Instruct-FP8 | neuralmagic | 2024-06-26T13:20:39Z | 516 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"fp8",
"vllm",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-24T20:56:46Z | ---
tags:
- fp8
- vllm
---
# Meta-Llama-3-70B-Instruct-FP8
## Model Overview
Meta-Llama-3-70B-Instruct quantized to FP8 weights and activations using per-tensor quantization, ready for inference with vLLM >= 0.5.0.
## Usage and Creation
Produced using [AutoFP8 with calibration samples from ultrachat](https://github.com/neuralmagic/AutoFP8/blob/147fa4d9e1a90ef8a93f96fc7d9c33056ddc017a/example_dataset.py).
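A minimal vLLM sketch (prompt and sampling settings are illustrative; as noted above, vLLM >= 0.5.0 is required):
```python
from vllm import LLM, SamplingParams

# For a 70B checkpoint you will typically need multiple GPUs, e.g. LLM(..., tensor_parallel_size=4).
llm = LLM(model="neuralmagic/Meta-Llama-3-70B-Instruct-FP8")
params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Explain FP8 weight quantization in one paragraph."], params)
print(outputs[0].outputs[0].text)
```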
## Evaluation
### Open LLM Leaderboard evaluation scores
| | Meta-Llama-3-70B-Instruct | Meta-Llama-3-70B-Instruct-FP8<br>(this model) |
| :------------------: | :----------------------: | :------------------------------------------------: |
| arc-c<br>25-shot | 72.69 | 72.61 |
| hellaswag<br>10-shot | 85.50 | 85.41 |
| mmlu<br>5-shot | 80.18 | 80.06 |
| truthfulqa<br>0-shot | 62.90 | 62.73 |
| winogrande<br>5-shot | 83.34 | 83.03 |
| gsm8k<br>5-shot | 92.49 | 91.12 |
| **Average<br>Accuracy** | **79.51** | **79.16** |
| **Recovery** | **100%** | **99.55%** |
|
RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged-gguf | RichardErkhov | 2024-05-27T14:04:43Z | 516 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-27T11:59:11Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged - GGUF
- Model creator: https://huggingface.co/dhmeltzer/
- Original model: https://huggingface.co/dhmeltzer/llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.Q2_K.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.Q2_K.gguf) | Q2_K | 2.36GB |
| [llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.IQ3_S.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.IQ3_M.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.Q3_K.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.Q3_K.gguf) | Q3_K | 3.07GB |
| [llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.Q4_0.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.Q4_0.gguf) | Q4_0 | 3.56GB |
| [llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.Q4_K.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.Q4_K.gguf) | Q4_K | 3.8GB |
| [llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.Q4_1.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.Q4_1.gguf) | Q4_1 | 3.95GB |
| [llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.Q5_0.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.Q5_0.gguf) | Q5_0 | 4.33GB |
| [llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.Q5_K.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.Q5_K.gguf) | Q5_K | 4.45GB |
| [llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.Q5_1.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.Q5_1.gguf) | Q5_1 | 4.72GB |
| [llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.Q6_K.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.Q6_K.gguf) | Q6_K | 5.15GB |
| [llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.Q8_0.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged.Q8_0.gguf) | Q8_0 | 6.67GB |
Original model description:
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_dhmeltzer__llama-7b-SFT_ds_eli5_1024_r_64_alpha_16_merged)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 43.25 |
| ARC (25-shot) | 53.41 |
| HellaSwag (10-shot) | 77.9 |
| MMLU (5-shot) | 43.56 |
| TruthfulQA (0-shot) | 40.81 |
| Winogrande (5-shot) | 74.59 |
| GSM8K (5-shot) | 5.08 |
| DROP (3-shot) | 7.37 |
|
mradermacher/Anjir-8B-L3-i1-GGUF | mradermacher | 2024-05-30T20:47:07Z | 516 | 2 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"not-for-all-audiences",
"en",
"base_model:Hastagaras/Anjir-8B-L3",
"license:llama3",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-30T10:04:37Z | ---
base_model: Hastagaras/Anjir-8B-L3
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- mergekit
- merge
- not-for-all-audiences
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Hastagaras/Anjir-8B-L3
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Anjir-8B-L3-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
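As a minimal sketch (not part of the original card), one of the single-file quants listed under "Provided Quants" below can be fetched with `huggingface_hub` and run with `llama-cpp-python`; the chosen file, context size and `n_gpu_layers` value are only illustrative:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download one quant from this repo (Q4_K_M is the "recommended" row in the table below).
path = hf_hub_download(
    repo_id="mradermacher/Anjir-8B-L3-i1-GGUF",
    filename="Anjir-8B-L3.i1-Q4_K_M.gguf",
)

# n_gpu_layers=0 keeps everything on the CPU; raise it to offload layers to a GPU.
llm = Llama(model_path=path, n_ctx=4096, n_gpu_layers=0)
print(llm("Write a two-sentence story about a lighthouse keeper.", max_tokens=128)["choices"][0]["text"])
```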
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Anjir-8B-L3-i1-GGUF/resolve/main/Anjir-8B-L3.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Anjir-8B-L3-i1-GGUF/resolve/main/Anjir-8B-L3.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Anjir-8B-L3-i1-GGUF/resolve/main/Anjir-8B-L3.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Anjir-8B-L3-i1-GGUF/resolve/main/Anjir-8B-L3.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Anjir-8B-L3-i1-GGUF/resolve/main/Anjir-8B-L3.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Anjir-8B-L3-i1-GGUF/resolve/main/Anjir-8B-L3.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Anjir-8B-L3-i1-GGUF/resolve/main/Anjir-8B-L3.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Anjir-8B-L3-i1-GGUF/resolve/main/Anjir-8B-L3.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Anjir-8B-L3-i1-GGUF/resolve/main/Anjir-8B-L3.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Anjir-8B-L3-i1-GGUF/resolve/main/Anjir-8B-L3.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Anjir-8B-L3-i1-GGUF/resolve/main/Anjir-8B-L3.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Anjir-8B-L3-i1-GGUF/resolve/main/Anjir-8B-L3.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Anjir-8B-L3-i1-GGUF/resolve/main/Anjir-8B-L3.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Anjir-8B-L3-i1-GGUF/resolve/main/Anjir-8B-L3.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Anjir-8B-L3-i1-GGUF/resolve/main/Anjir-8B-L3.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Anjir-8B-L3-i1-GGUF/resolve/main/Anjir-8B-L3.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Anjir-8B-L3-i1-GGUF/resolve/main/Anjir-8B-L3.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Anjir-8B-L3-i1-GGUF/resolve/main/Anjir-8B-L3.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Anjir-8B-L3-i1-GGUF/resolve/main/Anjir-8B-L3.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Anjir-8B-L3-i1-GGUF/resolve/main/Anjir-8B-L3.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Anjir-8B-L3-i1-GGUF/resolve/main/Anjir-8B-L3.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
Ramikan-BR/tinyllama-coder-py-v21 | Ramikan-BR | 2024-06-10T01:14:24Z | 516 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gguf",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/tinyllama-chat-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-06-10T00:33:38Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/tinyllama-chat-bnb-4bit
---
# Uploaded model
- **Developed by:** Ramikan-BR
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-chat-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
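As a quick way to try the model, here is a minimal inference sketch (not from the original card; the prompt and generation settings are illustrative). The repo lists PyTorch/safetensors weights, so it should load directly by repo id with `transformers`:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint straight from the Hub.
generator = pipeline("text-generation", model="Ramikan-BR/tinyllama-coder-py-v21")

prompt = "Write a Python function that returns the n-th Fibonacci number."
print(generator(prompt, max_new_tokens=128, do_sample=False)[0]["generated_text"])
```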
|
jkodiyil/tinyllama-bnb-4bit-clva-q4_k_m-gguf | jkodiyil | 2024-06-25T22:12:44Z | 516 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-25T22:10:14Z | ---
base_model: unsloth/tinyllama-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** jkodiyil
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/Hoyeon_-_TinyLlama-120M-scratch-gguf | RichardErkhov | 2024-06-29T15:49:32Z | 516 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-06-29T15:21:34Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
TinyLlama-120M-scratch - GGUF
- Model creator: https://huggingface.co/Hoyeon/
- Original model: https://huggingface.co/Hoyeon/TinyLlama-120M-scratch/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [TinyLlama-120M-scratch.Q2_K.gguf](https://huggingface.co/RichardErkhov/Hoyeon_-_TinyLlama-120M-scratch-gguf/blob/main/TinyLlama-120M-scratch.Q2_K.gguf) | Q2_K | 0.05GB |
| [TinyLlama-120M-scratch.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Hoyeon_-_TinyLlama-120M-scratch-gguf/blob/main/TinyLlama-120M-scratch.IQ3_XS.gguf) | IQ3_XS | 0.06GB |
| [TinyLlama-120M-scratch.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Hoyeon_-_TinyLlama-120M-scratch-gguf/blob/main/TinyLlama-120M-scratch.IQ3_S.gguf) | IQ3_S | 0.06GB |
| [TinyLlama-120M-scratch.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Hoyeon_-_TinyLlama-120M-scratch-gguf/blob/main/TinyLlama-120M-scratch.Q3_K_S.gguf) | Q3_K_S | 0.06GB |
| [TinyLlama-120M-scratch.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Hoyeon_-_TinyLlama-120M-scratch-gguf/blob/main/TinyLlama-120M-scratch.IQ3_M.gguf) | IQ3_M | 0.06GB |
| [TinyLlama-120M-scratch.Q3_K.gguf](https://huggingface.co/RichardErkhov/Hoyeon_-_TinyLlama-120M-scratch-gguf/blob/main/TinyLlama-120M-scratch.Q3_K.gguf) | Q3_K | 0.06GB |
| [TinyLlama-120M-scratch.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Hoyeon_-_TinyLlama-120M-scratch-gguf/blob/main/TinyLlama-120M-scratch.Q3_K_M.gguf) | Q3_K_M | 0.06GB |
| [TinyLlama-120M-scratch.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Hoyeon_-_TinyLlama-120M-scratch-gguf/blob/main/TinyLlama-120M-scratch.Q3_K_L.gguf) | Q3_K_L | 0.06GB |
| [TinyLlama-120M-scratch.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Hoyeon_-_TinyLlama-120M-scratch-gguf/blob/main/TinyLlama-120M-scratch.IQ4_XS.gguf) | IQ4_XS | 0.07GB |
| [TinyLlama-120M-scratch.Q4_0.gguf](https://huggingface.co/RichardErkhov/Hoyeon_-_TinyLlama-120M-scratch-gguf/blob/main/TinyLlama-120M-scratch.Q4_0.gguf) | Q4_0 | 0.07GB |
| [TinyLlama-120M-scratch.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Hoyeon_-_TinyLlama-120M-scratch-gguf/blob/main/TinyLlama-120M-scratch.IQ4_NL.gguf) | IQ4_NL | 0.07GB |
| [TinyLlama-120M-scratch.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Hoyeon_-_TinyLlama-120M-scratch-gguf/blob/main/TinyLlama-120M-scratch.Q4_K_S.gguf) | Q4_K_S | 0.07GB |
| [TinyLlama-120M-scratch.Q4_K.gguf](https://huggingface.co/RichardErkhov/Hoyeon_-_TinyLlama-120M-scratch-gguf/blob/main/TinyLlama-120M-scratch.Q4_K.gguf) | Q4_K | 0.07GB |
| [TinyLlama-120M-scratch.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Hoyeon_-_TinyLlama-120M-scratch-gguf/blob/main/TinyLlama-120M-scratch.Q4_K_M.gguf) | Q4_K_M | 0.07GB |
| [TinyLlama-120M-scratch.Q4_1.gguf](https://huggingface.co/RichardErkhov/Hoyeon_-_TinyLlama-120M-scratch-gguf/blob/main/TinyLlama-120M-scratch.Q4_1.gguf) | Q4_1 | 0.08GB |
| [TinyLlama-120M-scratch.Q5_0.gguf](https://huggingface.co/RichardErkhov/Hoyeon_-_TinyLlama-120M-scratch-gguf/blob/main/TinyLlama-120M-scratch.Q5_0.gguf) | Q5_0 | 0.08GB |
| [TinyLlama-120M-scratch.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Hoyeon_-_TinyLlama-120M-scratch-gguf/blob/main/TinyLlama-120M-scratch.Q5_K_S.gguf) | Q5_K_S | 0.08GB |
| [TinyLlama-120M-scratch.Q5_K.gguf](https://huggingface.co/RichardErkhov/Hoyeon_-_TinyLlama-120M-scratch-gguf/blob/main/TinyLlama-120M-scratch.Q5_K.gguf) | Q5_K | 0.08GB |
| [TinyLlama-120M-scratch.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Hoyeon_-_TinyLlama-120M-scratch-gguf/blob/main/TinyLlama-120M-scratch.Q5_K_M.gguf) | Q5_K_M | 0.08GB |
| [TinyLlama-120M-scratch.Q5_1.gguf](https://huggingface.co/RichardErkhov/Hoyeon_-_TinyLlama-120M-scratch-gguf/blob/main/TinyLlama-120M-scratch.Q5_1.gguf) | Q5_1 | 0.09GB |
| [TinyLlama-120M-scratch.Q6_K.gguf](https://huggingface.co/RichardErkhov/Hoyeon_-_TinyLlama-120M-scratch-gguf/blob/main/TinyLlama-120M-scratch.Q6_K.gguf) | Q6_K | 0.09GB |
| [TinyLlama-120M-scratch.Q8_0.gguf](https://huggingface.co/RichardErkhov/Hoyeon_-_TinyLlama-120M-scratch-gguf/blob/main/TinyLlama-120M-scratch.Q8_0.gguf) | Q8_0 | 0.12GB |
Original model description:
Entry not found
|
WENGSYX/Deberta-Chinese-Large | WENGSYX | 2022-03-31T20:08:59Z | 515 | 13 | transformers | [
"transformers",
"pytorch",
"deberta",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05Z | # Deberta-Chinese
This project pretrains Microsoft's open-source DeBERTa model on Chinese-language data. We release this model to give others more choices of pretrained language models.
The model is pretrained on the WuDaoCorpora corpus. WuDaoCorpora is a large-scale, high-quality dataset built by the Beijing Academy of Artificial Intelligence (BAAI) to support research for the "WuDao" large-model project.
Pretraining uses methods such as whole-word masking (WWM) and n-gram MLM.
| Pretrained model | Learning rate | Batch size | Hardware | Corpus | Time | Optimizer |
| --------------------- | ------ | --------- | ------ | ------ | ---- | ------ |
| Deberta-Chinese-Large | 1e-5 | 512 | 2*3090 | 200G | 14 days | AdamW |
### Loading and usage
Built on huggingface-transformers:
```
from transformers import BertTokenizer, AutoModel

tokenizer = BertTokenizer.from_pretrained("WENGSYX/Deberta-Chinese-Large")
model = AutoModel.from_pretrained("WENGSYX/Deberta-Chinese-Large")
```
#### Note: please use BertTokenizer to load the Chinese vocabulary
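As an illustrative sketch (not from the original card), a sentence can be encoded and a simple sentence representation taken from the first-token hidden state:
```python
import torch
from transformers import BertTokenizer, AutoModel

tokenizer = BertTokenizer.from_pretrained("WENGSYX/Deberta-Chinese-Large")
model = AutoModel.from_pretrained("WENGSYX/Deberta-Chinese-Large")

inputs = tokenizer("今天天气真好", return_tensors="pt")  # "The weather is great today"
with torch.no_grad():
    outputs = model(**inputs)

# Hidden state at the first ([CLS]) position as a crude sentence embedding.
sentence_embedding = outputs.last_hidden_state[:, 0]
print(sentence_embedding.shape)
```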
|
voidful/wav2vec2-xlsr-multilingual-56 | voidful | 2023-03-18T12:38:57Z | 515 | 28 | transformers | [
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"robust-speech-event",
"speech",
"xlsr-fine-tuning-week",
"multilingual",
"ar",
"as",
"br",
"ca",
"cnh",
"cs",
"cv",
"cy",
"de",
"dv",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"hi",
"hsb",
"hu",
"ia",
"id",
"ja",
"ka",
"ky",
"lg",
"lt",
"ly",
"mn",
"mt",
"nl",
"or",
"pl",
"pt",
"ro",
"ru",
"sah",
"sl",
"ta",
"th",
"tr",
"tt",
"uk",
"vi",
"dataset:common_voice",
"arxiv:1910.09700",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
language:
- multilingual
- ar
- as
- br
- ca
- cnh
- cs
- cv
- cy
- de
- dv
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- hi
- hsb
- hu
- ia
- id
- ja
- ka
- ky
- lg
- lt
- ly
- mn
- mt
- nl
- or
- pl
- pt
- ro
- ru
- sah
- sl
- ta
- th
- tr
- tt
- uk
- vi
license: apache-2.0
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
- robust-speech-event
- speech
- xlsr-fine-tuning-week
datasets:
- common_voice
language_bcp47:
- fy-NL
- ga-IE
- pa-IN
- rm-sursilv
- rm-vallader
- sy-SE
- zh-CN
- zh-HK
- zh-TW
model-index:
- name: XLSR Wav2Vec2 for 56 language by Voidful
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
name: Common Voice
type: common_voice
metrics:
- type: cer
value: 23.21
name: Test CER
---
# Model Card for wav2vec2-xlsr-multilingual-56
# Model Details
## Model Description
- **Developed by:** voidful
- **Shared by [Optional]:** Hugging Face
- **Model type:** automatic-speech-recognition
- **Language(s) (NLP):** multilingual (*56 languages, 1 multilingual ASR model*)
- **License:** Apache-2.0
- **Related Models:**
- **Parent Model:** wav2vec
- **Resources for more information:**
- [GitHub Repo](https://github.com/voidful/wav2vec2-xlsr-multilingual-56)
- [Model Space](https://huggingface.co/spaces/Kamtera/Persian_Automatic_Speech_Recognition_and-more)
# Uses
## Direct Use
This model can be used for the task of automatic-speech-recognition
## Downstream Use [Optional]
More information needed
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
See the [common_voice dataset card](https://huggingface.co/datasets/common_voice)
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on 56 languages using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
## Training Procedure
### Preprocessing
More information needed
### Speeds, Sizes, Times
When using this model, make sure that your speech input is sampled at 16kHz.
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
More information needed
### Factors
### Metrics
More information needed
## Results
<details>
<summary> Click to expand </summary>
| Common Voice Languages | Num. of data | Hour | WER | CER |
|------------------------|--------------|--------|--------|-------|
| ar | 21744 | 81.5 | 75.29 | 31.23 |
| as | 394 | 1.1 | 95.37 | 46.05 |
| br | 4777 | 7.4 | 93.79 | 41.16 |
| ca | 301308 | 692.8 | 24.80 | 10.39 |
| cnh | 1563 | 2.4 | 68.11 | 23.10 |
| cs | 9773 | 39.5 | 67.86 | 12.57 |
| cv | 1749 | 5.9 | 95.43 | 34.03 |
| cy | 11615 | 106.7 | 67.03 | 23.97 |
| de | 262113 | 822.8 | 27.03 | 6.50 |
| dv | 4757 | 18.6 | 92.16 | 30.15 |
| el | 3717 | 11.1 | 94.48 | 58.67 |
| en | 580501 | 1763.6 | 34.87 | 14.84 |
| eo | 28574 | 162.3 | 37.77 | 6.23 |
| es | 176902 | 337.7 | 19.63 | 5.41 |
| et | 5473 | 35.9 | 86.87 | 20.79 |
| eu | 12677 | 90.2 | 44.80 | 7.32 |
| fa | 12806 | 290.6 | 53.81 | 15.09 |
| fi | 875 | 2.6 | 93.78 | 27.57 |
| fr | 314745 | 664.1 | 33.16 | 13.94 |
| fy-NL | 6717 | 27.2 | 72.54 | 26.58 |
| ga-IE | 1038 | 3.5 | 92.57 | 51.02 |
| hi | 292 | 2.0 | 90.95 | 57.43 |
| hsb | 980 | 2.3 | 89.44 | 27.19 |
| hu | 4782 | 9.3 | 97.15 | 36.75 |
| ia | 5078 | 10.4 | 52.00 | 11.35 |
| id | 3965 | 9.9 | 82.50 | 22.82 |
| it | 70943 | 178.0 | 39.09 | 8.72 |
| ja | 1308 | 8.2 | 99.21 | 62.06 |
| ka | 1585 | 4.0 | 90.53 | 18.57 |
| ky | 3466 | 12.2 | 76.53 | 19.80 |
| lg | 1634 | 17.1 | 98.95 | 43.84 |
| lt | 1175 | 3.9 | 92.61 | 26.81 |
| lv | 4554 | 6.3 | 90.34 | 30.81 |
| mn | 4020 | 11.6 | 82.68 | 30.14 |
| mt | 3552 | 7.8 | 84.18 | 22.96 |
| nl | 14398 | 71.8 | 57.18 | 19.01 |
| or | 517 | 0.9 | 90.93 | 27.34 |
| pa-IN | 255 | 0.8 | 87.95 | 42.03 |
| pl | 12621 | 112.0 | 56.14 | 12.06 |
| pt | 11106 | 61.3 | 53.24 | 16.32 |
| rm-sursilv | 2589 | 5.9 | 78.17 | 23.31 |
| rm-vallader | 931 | 2.3 | 73.67 | 21.76 |
| ro | 4257 | 8.7 | 83.84 | 21.95 |
| ru | 23444 | 119.1 | 61.83 | 15.18 |
| sah | 1847 | 4.4 | 94.38 | 38.46 |
| sl | 2594 | 6.7 | 84.21 | 20.54 |
| sv-SE | 4350 | 20.8 | 83.68 | 30.79 |
| ta | 3788 | 18.4 | 84.19 | 21.60 |
| th | 4839 | 11.7 | 141.87 | 37.16 |
| tr | 3478 | 22.3 | 66.77 | 15.55 |
| tt | 13338 | 26.7 | 86.80 | 33.57 |
| uk | 7271 | 39.4 | 70.23 | 14.34 |
| vi | 421 | 1.7 | 96.06 | 66.25 |
| zh-CN | 27284 | 58.7 | 89.67 | 23.96 |
| zh-HK | 12678 | 92.1 | 81.77 | 18.82 |
| zh-TW | 6402 | 56.6 | 85.08 | 29.07 |
</details>
# Model Examination
More information needed
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed
# Citation
**BibTeX:**
```
More information needed
```
**APA:**
```
More information needed
```
# Glossary [optional]
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
voidful in collaboration with Ezi Ozoani and the Hugging Face team
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
## Env setup:
```
!pip install torchaudio
!pip install datasets transformers
!pip install asrp
!wget -O lang_ids.pk https://huggingface.co/voidful/wav2vec2-xlsr-multilingual-56/raw/main/lang_ids.pk
```
## Usage
```
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
AutoTokenizer,
AutoModelWithLMHead
)
import torch
import re
import sys
import soundfile as sf
model_name = "voidful/wav2vec2-xlsr-multilingual-56"
device = "cuda"
processor_name = "voidful/wav2vec2-xlsr-multilingual-56"
import pickle
with open("lang_ids.pk", 'rb') as output:
lang_ids = pickle.load(output)
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(processor_name)
model.eval()
def load_file_to_data(file,sampling_rate=16_000):
batch = {}
speech, _ = torchaudio.load(file)
    if sampling_rate != 16_000:  # resample anything that is not already 16 kHz
        resampler = torchaudio.transforms.Resample(orig_freq=sampling_rate, new_freq=16_000)
        batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
        batch["sampling_rate"] = resampler.new_freq
    else:
        batch["speech"] = speech.squeeze(0).numpy()
        batch["sampling_rate"] = 16_000
return batch
def predict(data):
features = processor(data["speech"], sampling_rate=data["sampling_rate"], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
decoded_results = []
for logit in logits:
pred_ids = torch.argmax(logit, dim=-1)
mask = pred_ids.ge(1).unsqueeze(-1).expand(logit.size())
vocab_size = logit.size()[-1]
voice_prob = torch.nn.functional.softmax((torch.masked_select(logit, mask).view(-1,vocab_size)),dim=-1)
comb_pred_ids = torch.argmax(voice_prob, dim=-1)
decoded_results.append(processor.decode(comb_pred_ids))
return decoded_results
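# predict_lang_specific below restricts decoding to one language: lang_ids[lang_code]
# lists the vocabulary indices used by that language, a 0/1 mask over the vocabulary is
# built from them, and the argmax is taken over the masked token probabilities so only
# tokens belonging to the requested language can be emitted.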
def predict_lang_specific(data,lang_code):
features = processor(data["speech"], sampling_rate=data["sampling_rate"], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
decoded_results = []
for logit in logits:
pred_ids = torch.argmax(logit, dim=-1)
mask = ~pred_ids.eq(processor.tokenizer.pad_token_id).unsqueeze(-1).expand(logit.size())
vocab_size = logit.size()[-1]
voice_prob = torch.nn.functional.softmax((torch.masked_select(logit, mask).view(-1,vocab_size)),dim=-1)
filtered_input = pred_ids[pred_ids!=processor.tokenizer.pad_token_id].view(1,-1).to(device)
if len(filtered_input[0]) == 0:
decoded_results.append("")
else:
lang_mask = torch.empty(voice_prob.shape[-1]).fill_(0)
lang_index = torch.tensor(sorted(lang_ids[lang_code]))
lang_mask.index_fill_(0, lang_index, 1)
lang_mask = lang_mask.to(device)
comb_pred_ids = torch.argmax(lang_mask*voice_prob, dim=-1)
decoded_results.append(processor.decode(comb_pred_ids))
return decoded_results
predict(load_file_to_data('audio file path',sampling_rate=16_000)) # beware of the audio file sampling rate
predict_lang_specific(load_file_to_data('audio file path',sampling_rate=16_000),'en') # beware of the audio file sampling rate
```
</details>
|
timm/vit_base_patch32_224.augreg_in1k | timm | 2023-05-06T00:03:15Z | 515 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2106.10270",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
]
| image-classification | 2022-12-22T07:32:24Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for vit_base_patch32_224.augreg_in1k
A Vision Transformer (ViT) image classification model. Trained on ImageNet-1k (with additional augmentation and regularization) in JAX by paper authors, ported to PyTorch by Ross Wightman.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 88.2
- GMACs: 4.4
- Activations (M): 4.2
- Image size: 224 x 224
- **Papers:**
- How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers: https://arxiv.org/abs/2106.10270
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/google-research/vision_transformer
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_base_patch32_224.augreg_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_base_patch32_224.augreg_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 50, 768) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{steiner2021augreg,
title={How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers},
author={Steiner, Andreas and Kolesnikov, Alexander and and Zhai, Xiaohua and Wightman, Ross and Uszkoreit, Jakob and Beyer, Lucas},
journal={arXiv preprint arXiv:2106.10270},
year={2021}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
TheBloke/Platypus2-70B-GGUF | TheBloke | 2023-09-27T12:48:13Z | 515 | 4 | transformers | [
"transformers",
"gguf",
"llama",
"en",
"dataset:garage-bAInd/Open-Platypus",
"arxiv:2308.07317",
"arxiv:2307.09288",
"base_model:garage-bAInd/Platypus2-70B",
"license:cc-by-nc-sa-4.0",
"text-generation-inference",
"region:us"
]
| null | 2023-09-06T04:00:44Z | ---
language:
- en
license: cc-by-nc-sa-4.0
datasets:
- garage-bAInd/Open-Platypus
model_name: Platypus2 70B
base_model: garage-bAInd/Platypus2-70B
inference: false
model_creator: garage-bAInd
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Platypus2 70B - GGUF
- Model creator: [garage-bAInd](https://huggingface.co/garage-bAInd)
- Original model: [Platypus2 70B](https://huggingface.co/garage-bAInd/Platypus2-70B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [garage-bAInd's Platypus2 70B](https://huggingface.co/garage-bAInd/Platypus2-70B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Platypus2-70B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Platypus2-70B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Platypus2-70B-GGUF)
* [garage-bAInd's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/garage-bAInd/Platypus2-70B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `cc-by-nc-sa-4.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [garage-bAInd's Platypus2 70B](https://huggingface.co/garage-bAInd/Platypus2-70B).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [platypus2-70b.Q2_K.gguf](https://huggingface.co/TheBloke/Platypus2-70B-GGUF/blob/main/platypus2-70b.Q2_K.gguf) | Q2_K | 2 | 29.28 GB| 31.78 GB | smallest, significant quality loss - not recommended for most purposes |
| [platypus2-70b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Platypus2-70B-GGUF/blob/main/platypus2-70b.Q3_K_S.gguf) | Q3_K_S | 3 | 29.92 GB| 32.42 GB | very small, high quality loss |
| [platypus2-70b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Platypus2-70B-GGUF/blob/main/platypus2-70b.Q3_K_M.gguf) | Q3_K_M | 3 | 33.19 GB| 35.69 GB | very small, high quality loss |
| [platypus2-70b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Platypus2-70B-GGUF/blob/main/platypus2-70b.Q3_K_L.gguf) | Q3_K_L | 3 | 36.15 GB| 38.65 GB | small, substantial quality loss |
| [platypus2-70b.Q4_0.gguf](https://huggingface.co/TheBloke/Platypus2-70B-GGUF/blob/main/platypus2-70b.Q4_0.gguf) | Q4_0 | 4 | 38.87 GB| 41.37 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [platypus2-70b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Platypus2-70B-GGUF/blob/main/platypus2-70b.Q4_K_S.gguf) | Q4_K_S | 4 | 39.07 GB| 41.57 GB | small, greater quality loss |
| [platypus2-70b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Platypus2-70B-GGUF/blob/main/platypus2-70b.Q4_K_M.gguf) | Q4_K_M | 4 | 41.42 GB| 43.92 GB | medium, balanced quality - recommended |
| [platypus2-70b.Q5_0.gguf](https://huggingface.co/TheBloke/Platypus2-70B-GGUF/blob/main/platypus2-70b.Q5_0.gguf) | Q5_0 | 5 | 47.46 GB| 49.96 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [platypus2-70b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Platypus2-70B-GGUF/blob/main/platypus2-70b.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB| 49.96 GB | large, low quality loss - recommended |
| [platypus2-70b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Platypus2-70B-GGUF/blob/main/platypus2-70b.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB| 51.25 GB | large, very low quality loss - recommended |
| platypus2-70b.Q6_K.gguf | Q6_K | 6 | 56.59 GB| 59.09 GB | very large, extremely low quality loss |
| platypus2-70b.Q8_0.gguf | Q8_0 | 8 | 73.29 GB| 75.79 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
### Q6_K and Q8_0 files are split and require joining
**Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files.
<details>
<summary>Click for instructions regarding Q6_K and Q8_0 files</summary>
### q6_K
Please download:
* `platypus2-70b.Q6_K.gguf-split-a`
* `platypus2-70b.Q6_K.gguf-split-b`
### q8_0
Please download:
* `platypus2-70b.Q8_0.gguf-split-a`
* `platypus2-70b.Q8_0.gguf-split-b`
To join the files, do the following:
Linux and macOS:
```
cat platypus2-70b.Q6_K.gguf-split-* > platypus2-70b.Q6_K.gguf && rm platypus2-70b.Q6_K.gguf-split-*
cat platypus2-70b.Q8_0.gguf-split-* > platypus2-70b.Q8_0.gguf && rm platypus2-70b.Q8_0.gguf-split-*
```
Windows command line:
```
COPY /B platypus2-70b.Q6_K.gguf-split-a + platypus2-70b.Q6_K.gguf-split-b platypus2-70b.Q6_K.gguf
del platypus2-70b.Q6_K.gguf-split-a platypus2-70b.Q6_K.gguf-split-b
COPY /B platypus2-70b.Q8_0.gguf-split-a + platypus2-70b.Q8_0.gguf-split-b platypus2-70b.Q8_0.gguf
del platypus2-70b.Q8_0.gguf-split-a platypus2-70b.Q8_0.gguf-split-b
```
</details>
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Platypus2-70B-GGUF and below it, a specific filename to download, such as: platypus2-70b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install 'huggingface-hub>=0.17.1'
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Platypus2-70B-GGUF platypus2-70b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Platypus2-70B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Platypus2-70B-GGUF platypus2-70b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m platypus2-70b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Platypus2-70B-GGUF", model_file="platypus2-70b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
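The same files can also be loaded with `llama-cpp-python`, mentioned above. A minimal sketch (the quant file, context size, layer count and prompt are illustrative, not prescriptive):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

model_path = hf_hub_download(
    repo_id="TheBloke/Platypus2-70B-GGUF",
    filename="platypus2-70b.Q4_K_M.gguf",
)

# n_gpu_layers=0 runs fully on CPU; increase it to offload layers to your GPU.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=0)

# Build the Alpaca-style prompt this model expects (see "Prompt template" above).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a platypus is in one sentence.\n\n### Response:\n"
)
print(llm(prompt, max_tokens=128)["choices"][0]["text"])
```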
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: garage-bAInd's Platypus2 70B
# Platypus2-70B
Platypus-70B is an instruction fine-tuned model based on the LLaMa2-70B transformer architecture.

### Benchmark Metrics
| Metric | Value |
|-----------------------|-------|
| MMLU (5-shot) | 70.48 |
| ARC (25-shot) | 71.84 |
| HellaSwag (10-shot) | 87.94 |
| TruthfulQA (0-shot) | 62.26 |
| Avg. | 73.13 |
We use state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard. Please see below for detailed instructions on reproducing benchmark results.
### Model Details
* **Trained by**: Cole Hunter & Ariel Lee
* **Model type:** **Platypus2-70B** is an auto-regressive language model based on the LLaMA2 transformer architecture.
* **Language(s)**: English
* **License for base weights**: Non-Commercial Creative Commons license ([CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/))
### Prompt Template
```
### Instruction:
<prompt> (without the <>)
### Response:
```
### Training Dataset
`garage-bAInd/Platypus2-70B` trained using STEM and logic based dataset [`garage-bAInd/Open-Platypus`](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).
Please see our [paper](https://arxiv.org/abs/2308.07317) and [project webpage](https://platypus-llm.github.io) for additional information.
### Training Procedure
`garage-bAInd/Platypus2-70B` was instruction fine-tuned using LoRA on 8 A100 80GB. For training details and inference instructions please see the [Platypus](https://github.com/arielnlee/Platypus) GitHub repo.
### Reproducing Evaluation Results
Install LM Evaluation Harness:
```
# clone repository
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
# check out the correct commit
git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463
# change to repo directory
cd lm-evaluation-harness
# install
pip install -e .
```
Each task was evaluated on a single A100 80GB GPU.
ARC:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-70B --tasks arc_challenge --batch_size 1 --no_cache --write_out --output_path results/Platypus2-70B/arc_challenge_25shot.json --device cuda --num_fewshot 25
```
HellaSwag:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-70B --tasks hellaswag --batch_size 1 --no_cache --write_out --output_path results/Platypus2-70B/hellaswag_10shot.json --device cuda --num_fewshot 10
```
MMLU:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-70B --tasks hendrycksTest-* --batch_size 1 --no_cache --write_out --output_path results/Platypus2-70B/mmlu_5shot.json --device cuda --num_fewshot 5
```
TruthfulQA:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-70B --tasks truthfulqa_mc --batch_size 1 --no_cache --write_out --output_path results/Platypus2-70B/truthfulqa_0shot.json --device cuda
```
### Limitations and bias
Llama 2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2 and any fine-tuned variant's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/
### Citations
```bibtex
@article{platypus2023,
title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs},
author={Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz},
booktitle={arXiv preprint arxiv:2308.07317},
year={2023}
}
```
```bibtex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and others},
year={2023},
eprint={2307.09288},
archivePrefix={arXiv},
}
```
```bibtex
@inproceedings{
hu2022lora,
title={Lo{RA}: Low-Rank Adaptation of Large Language Models},
author={Edward J Hu and Yelong Shen and Phillip Wallis and Zeyuan Allen-Zhu and Yuanzhi Li and Shean Wang and Lu Wang and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=nZeVKeeFYf9}
}
```
<!-- original-model-card end -->
|
stablediffusionapi/protovisionxl | stablediffusionapi | 2024-03-16T13:50:11Z | 515 | 1 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2023-09-14T11:55:37Z | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# API Inference

## Get API Key
Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.
Replace Key in below code, change **model_id** to "protovisionxl"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/protovisionxl)
Model link: [View model](https://modelslab.com/models/protovisionxl)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json
url = "https://modelslab.com/api/v6/images/text2img"
payload = json.dumps({
"key": "your_api_key",
"model_id": "protovisionxl",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
TheBloke/openbuddy-mistral-7B-v13-GGUF | TheBloke | 2023-10-16T09:06:56Z | 515 | 5 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"base_model:OpenBuddy/openbuddy-mistral-7b-v13",
"license:apache-2.0",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-10-16T09:02:34Z | ---
base_model: OpenBuddy/openbuddy-mistral-7b-v13
inference: false
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
library_name: transformers
license: apache-2.0
model_creator: OpenBuddy
model_name: Openbuddy Mistral 7B v13
model_type: mistral
pipeline_tag: text-generation
prompt_template: "You are a helpful, respectful and honest INTP-T AI Assistant named\
\ Buddy. You are talking to a human User.\nAlways answer as helpfully and logically\
\ as possible, while being safe. Your answers should not include any harmful, political,\
\ religious, unethical, racist, sexist, toxic, dangerous, or illegal content. Please\
\ ensure that your responses are socially unbiased and positive in nature.\nIf a\
\ question does not make any sense, or is not factually coherent, explain why instead\
\ of answering something not correct. If you don't know the answer to a question,\
\ please don't share false information.\nYou like to use emojis. You can speak fluently\
\ in many languages, for example: English, Chinese.\nYou cannot access the internet,\
\ but you have vast knowledge, cutoff: 2021-09.\nYou are trained by OpenBuddy team,\
\ (https://openbuddy.ai, https://github.com/OpenBuddy/OpenBuddy), you are based\
\ on LLaMA and Falcon transformers model, not related to GPT or OpenAI.\n\nUser:\
\ {prompt}\nAssistant: \n"
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Openbuddy Mistral 7B v13 - GGUF
- Model creator: [OpenBuddy](https://huggingface.co/OpenBuddy)
- Original model: [Openbuddy Mistral 7B v13](https://huggingface.co/OpenBuddy/openbuddy-mistral-7b-v13)
<!-- description start -->
## Description
This repo contains GGUF format model files for [OpenBuddy's Openbuddy Mistral 7B v13](https://huggingface.co/OpenBuddy/openbuddy-mistral-7b-v13).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13-GGUF)
* [OpenBuddy's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/OpenBuddy/openbuddy-mistral-7b-v13)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: OpenBuddy
```
You are a helpful, respectful and honest INTP-T AI Assistant named Buddy. You are talking to a human User.
Always answer as helpfully and logically as possible, while being safe. Your answers should not include any harmful, political, religious, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
You like to use emojis. You can speak fluently in many languages, for example: English, Chinese.
You cannot access the internet, but you have vast knowledge, cutoff: 2021-09.
You are trained by OpenBuddy team, (https://openbuddy.ai, https://github.com/OpenBuddy/OpenBuddy), you are based on LLaMA and Falcon transformers model, not related to GPT or OpenAI.
User: {prompt}
Assistant:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [openbuddy-mistral-7b-v13.Q2_K.gguf](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13-GGUF/blob/main/openbuddy-mistral-7b-v13.Q2_K.gguf) | Q2_K | 2 | 3.10 GB| 5.60 GB | smallest, significant quality loss - not recommended for most purposes |
| [openbuddy-mistral-7b-v13.Q3_K_S.gguf](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13-GGUF/blob/main/openbuddy-mistral-7b-v13.Q3_K_S.gguf) | Q3_K_S | 3 | 3.19 GB| 5.69 GB | very small, high quality loss |
| [openbuddy-mistral-7b-v13.Q3_K_M.gguf](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13-GGUF/blob/main/openbuddy-mistral-7b-v13.Q3_K_M.gguf) | Q3_K_M | 3 | 3.54 GB| 6.04 GB | very small, high quality loss |
| [openbuddy-mistral-7b-v13.Q3_K_L.gguf](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13-GGUF/blob/main/openbuddy-mistral-7b-v13.Q3_K_L.gguf) | Q3_K_L | 3 | 3.85 GB| 6.35 GB | small, substantial quality loss |
| [openbuddy-mistral-7b-v13.Q4_0.gguf](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13-GGUF/blob/main/openbuddy-mistral-7b-v13.Q4_0.gguf) | Q4_0 | 4 | 4.14 GB| 6.64 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [openbuddy-mistral-7b-v13.Q4_K_S.gguf](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13-GGUF/blob/main/openbuddy-mistral-7b-v13.Q4_K_S.gguf) | Q4_K_S | 4 | 4.17 GB| 6.67 GB | small, greater quality loss |
| [openbuddy-mistral-7b-v13.Q4_K_M.gguf](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13-GGUF/blob/main/openbuddy-mistral-7b-v13.Q4_K_M.gguf) | Q4_K_M | 4 | 4.39 GB| 6.89 GB | medium, balanced quality - recommended |
| [openbuddy-mistral-7b-v13.Q5_0.gguf](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13-GGUF/blob/main/openbuddy-mistral-7b-v13.Q5_0.gguf) | Q5_0 | 5 | 5.03 GB| 7.53 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [openbuddy-mistral-7b-v13.Q5_K_S.gguf](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13-GGUF/blob/main/openbuddy-mistral-7b-v13.Q5_K_S.gguf) | Q5_K_S | 5 | 5.03 GB| 7.53 GB | large, low quality loss - recommended |
| [openbuddy-mistral-7b-v13.Q5_K_M.gguf](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13-GGUF/blob/main/openbuddy-mistral-7b-v13.Q5_K_M.gguf) | Q5_K_M | 5 | 5.16 GB| 7.66 GB | large, very low quality loss - recommended |
| [openbuddy-mistral-7b-v13.Q6_K.gguf](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13-GGUF/blob/main/openbuddy-mistral-7b-v13.Q6_K.gguf) | Q6_K | 6 | 5.97 GB| 8.47 GB | very large, extremely low quality loss |
| [openbuddy-mistral-7b-v13.Q8_0.gguf](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13-GGUF/blob/main/openbuddy-mistral-7b-v13.Q8_0.gguf) | Q8_0 | 8 | 7.74 GB| 10.24 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/openbuddy-mistral-7B-v13-GGUF and below it, a specific filename to download, such as: openbuddy-mistral-7b-v13.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/openbuddy-mistral-7B-v13-GGUF openbuddy-mistral-7b-v13.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/openbuddy-mistral-7B-v13-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/openbuddy-mistral-7B-v13-GGUF openbuddy-mistral-7b-v13.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m openbuddy-mistral-7b-v13.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "You are a helpful, respectful and honest INTP-T AI Assistant named Buddy. You are talking to a human User.\nAlways answer as helpfully and logically as possible, while being safe. Your answers should not include any harmful, political, religious, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\nYou like to use emojis. You can speak fluently in many languages, for example: English, Chinese.\nYou cannot access the internet, but you have vast knowledge, cutoff: 2021-09.\nYou are trained by OpenBuddy team, (https://openbuddy.ai, https://github.com/OpenBuddy/OpenBuddy), you are based on LLaMA and Falcon transformers model, not related to GPT or OpenAI.\n\nUser: {prompt}\nAssistant:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
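For example, a chat-mode invocation based on the command above might look like this (adjust `-ngl` and `-c` for your hardware):
```shell
./main -ngl 32 -m openbuddy-mistral-7b-v13.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -i -ins
```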
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/openbuddy-mistral-7B-v13-GGUF", model_file="openbuddy-mistral-7b-v13.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
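As a rough illustration of the second guide, a minimal LangChain + ctransformers sketch might look like the following (the import path assumes a recent `langchain-community` install, and the `config` values are illustrative assumptions rather than settings from the guide):
```python
from langchain_community.llms import CTransformers

# Load the GGUF file through LangChain's ctransformers wrapper.
# gpu_layers is an illustrative assumption - set it to 0 on CPU-only systems.
llm = CTransformers(
    model="TheBloke/openbuddy-mistral-7B-v13-GGUF",
    model_file="openbuddy-mistral-7b-v13.Q4_K_M.gguf",
    model_type="mistral",
    config={"max_new_tokens": 256, "temperature": 0.7, "gpu_layers": 50},
)

print(llm.invoke("AI is going to"))
```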
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: OpenBuddy's Openbuddy Mistral 7B v13
# OpenBuddy - Open Multilingual Chatbot
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)
Evaluation result of this model: [Evaluation.txt](Evaluation.txt)

# Copyright Notice
Base model: https://huggingface.co/mistralai/Mistral-7B-v0.1
License: Apache 2.0
## Disclaimer
All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
## 免责声明
所有OpenBuddy模型均存在固有的局限性,可能产生错误的、有害的、冒犯性的或其他不良的输出。用户在关键或高风险场景中应谨慎行事,不要使用这些模型,以免导致人身伤害、财产损失或重大损失。此类场景的例子包括但不限于医疗领域、可能导致伤害的软硬件系统的控制以及进行重要的财务或法律决策。
OpenBuddy按“原样”提供,不附带任何种类的明示或暗示的保证,包括但不限于适销性、特定目的的适用性和非侵权的暗示保证。在任何情况下,作者、贡献者或版权所有者均不对因软件或使用或其他软件交易而产生的任何索赔、损害赔偿或其他责任(无论是合同、侵权还是其他原因)承担责任。
使用OpenBuddy即表示您同意这些条款和条件,并承认您了解其使用可能带来的潜在风险。您还同意赔偿并使作者、贡献者和版权所有者免受因您使用OpenBuddy而产生的任何索赔、损害赔偿或责任的影响。
<!-- original-model-card end -->
|
TheBloke/go-bruins-v2-GGUF | TheBloke | 2023-12-10T17:45:14Z | 515 | 11 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation",
"en",
"dataset:Intel/orca_dpo_pairs",
"dataset:athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW",
"base_model:rwitz/go-bruins-v2",
"license:cc-by-nc-4.0",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-12-10T11:00:52Z | ---
base_model: rwitz/go-bruins-v2
datasets:
- Intel/orca_dpo_pairs
- athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW
inference: false
language:
- en
license: cc-by-nc-4.0
model_creator: Ryan Witzman
model_name: Go Bruins v2
model_type: mistral
pipeline_tag: text-generation
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Go Bruins v2 - GGUF
- Model creator: [Ryan Witzman](https://huggingface.co/rwitz)
- Original model: [Go Bruins v2](https://huggingface.co/rwitz/go-bruins-v2)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Ryan Witzman's Go Bruins v2](https://huggingface.co/rwitz/go-bruins-v2).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/go-bruins-v2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/go-bruins-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/go-bruins-v2-GGUF)
* [Ryan Witzman's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/rwitz/go-bruins-v2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: None
```
{prompt}
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [go-bruins-v2.Q2_K.gguf](https://huggingface.co/TheBloke/go-bruins-v2-GGUF/blob/main/go-bruins-v2.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [go-bruins-v2.Q3_K_S.gguf](https://huggingface.co/TheBloke/go-bruins-v2-GGUF/blob/main/go-bruins-v2.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [go-bruins-v2.Q3_K_M.gguf](https://huggingface.co/TheBloke/go-bruins-v2-GGUF/blob/main/go-bruins-v2.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [go-bruins-v2.Q3_K_L.gguf](https://huggingface.co/TheBloke/go-bruins-v2-GGUF/blob/main/go-bruins-v2.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [go-bruins-v2.Q4_0.gguf](https://huggingface.co/TheBloke/go-bruins-v2-GGUF/blob/main/go-bruins-v2.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [go-bruins-v2.Q4_K_S.gguf](https://huggingface.co/TheBloke/go-bruins-v2-GGUF/blob/main/go-bruins-v2.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [go-bruins-v2.Q4_K_M.gguf](https://huggingface.co/TheBloke/go-bruins-v2-GGUF/blob/main/go-bruins-v2.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [go-bruins-v2.Q5_0.gguf](https://huggingface.co/TheBloke/go-bruins-v2-GGUF/blob/main/go-bruins-v2.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [go-bruins-v2.Q5_K_S.gguf](https://huggingface.co/TheBloke/go-bruins-v2-GGUF/blob/main/go-bruins-v2.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [go-bruins-v2.Q5_K_M.gguf](https://huggingface.co/TheBloke/go-bruins-v2-GGUF/blob/main/go-bruins-v2.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [go-bruins-v2.Q6_K.gguf](https://huggingface.co/TheBloke/go-bruins-v2-GGUF/blob/main/go-bruins-v2.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [go-bruins-v2.Q8_0.gguf](https://huggingface.co/TheBloke/go-bruins-v2-GGUF/blob/main/go-bruins-v2.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/go-bruins-v2-GGUF and below it, a specific filename to download, such as: go-bruins-v2.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/go-bruins-v2-GGUF go-bruins-v2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/go-bruins-v2-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/go-bruins-v2-GGUF go-bruins-v2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m go-bruins-v2.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./go-bruins-v2.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"{prompt}", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./go-bruins-v2.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
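As a rough illustration of the first guide, a minimal LangChain + llama-cpp-python sketch might look like this (parameter values simply mirror the Python example above and are not prescriptive):
```python
from langchain_community.llms import LlamaCpp

# Point LangChain at the downloaded GGUF file.
# n_gpu_layers and n_ctx mirror the llama-cpp-python example above; adjust for your hardware.
llm = LlamaCpp(
    model_path="./go-bruins-v2.Q4_K_M.gguf",
    n_ctx=32768,
    n_gpu_layers=35,
    temperature=0.7,
    max_tokens=512,
)

print(llm.invoke("AI is going to"))
```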
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Ryan Witzman's Go Bruins v2

# Go Bruins V2 - A Fine-tuned Language Model
## Updates
## Overview
**Go Bruins-V2** is a language model fine-tuned on the rwitz/go-bruins architecture. It's designed to push the boundaries of NLP applications, offering unparalleled performance in generating human-like text.
## Model Details
- **Developer:** Ryan Witzman
- **Base Model:** [rwitz/go-bruins](https://huggingface.co/rwitz/go-bruins)
- **Fine-tuning Method:** Direct Preference Optimization (DPO)
- **Training Steps:** 642
- **Language:** English
- **License:** MIT
## Capabilities
Go Bruins excels in a variety of NLP tasks, including but not limited to:
- Text generation
- Language understanding
- Sentiment analysis
## Usage
**Warning:** This model may output NSFW or illegal content. Use with caution and at your own risk.
### For Direct Use:
```python
from transformers import pipeline
model_name = "rwitz/go-bruins-v2"
inference_pipeline = pipeline('text-generation', model=model_name)
input_text = "Your input text goes here"
output = inference_pipeline(input_text)
print(output)
```
### Not Recommended For:
- Illegal activities
- Harassment
- Professional advice or crisis situations
## Training and Evaluation
Trained on a dataset from [athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW](https://huggingface.co/datasets/athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW), Go Bruins V2 has shown promising improvements over its predecessor, Go Bruins.
# Evaluations
| Metric | Average | ARC Challenge | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|--------|---------|---------------|-----------|------|------------|------------|-------|
| **Score** | 72.07 | 69.8 | 87.05 | 64.75 | 59.7 | 81.45 | 69.67 |
Note: The original MMLU evaluation has been corrected to include 5-shot data rather than 1-shot data.
## Contact
For any inquiries or feedback, reach out to Ryan Witzman on Discord: `rwitz_`.
---
## Citations
```
@misc{unacybertron7b,
title={Cybertron: Uniform Neural Alignment},
author={Xavier Murias},
year={2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/fblgit/una-cybertron-7b-v2-bf16}},
}
```
*This model card was created with care by Ryan Witzman.*
<!-- original-model-card end -->
|
artificialguybr/NebulRedmond | artificialguybr | 2024-03-24T20:24:02Z | 515 | 9 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2024-01-19T05:27:53Z | ---
pipeline_tag: text-to-image
---
Nebul.Redmond is here!
I'm grateful for the GPU time from Redmond.AI that allowed me to finish this model!
This is a generalist model fine-tuned on SD XL 1.0!
The model is highly capable of generating realistic and artistic images of cars, people, and a wide variety of other themes. It's a versatile model.
I really hope you like the model and use it.
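If you want to try it from Python, a minimal diffusers sketch might look like this (the pipeline class and settings are assumptions based on the SD XL base, not an official snippet):
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the fine-tuned SD XL checkpoint from the Hub (assumes a CUDA GPU is available).
pipe = StableDiffusionXLPipeline.from_pretrained(
    "artificialguybr/NebulRedmond",
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a photo of a red vintage sports car on a coastal road, golden hour"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("nebul_redmond_example.png")
```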
If you like the model and think it's worth it, you can make a donation to my Patreon or Ko-fi.
Follow me on Twitter to get early access to all new models:
https://twitter.com/artificialguybr/ |
cmp-nct/Yi-VL-34B-GGUF | cmp-nct | 2024-01-29T02:59:38Z | 515 | 11 | null | [
"gguf",
"region:us"
]
| null | 2024-01-24T19:59:30Z | This is a quantization of Yi-VL-34B and of the visual transformer.
You currently need to apply this PR to make it work: https://github.com/ggerganov/llama.cpp/pull/5093 - this adds the additional normalization steps into the projection
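With that PR applied, running the model follows the usual llama.cpp LLaVA flow; a rough sketch (the exact model and mmproj filenames in this repo may differ - treat them as placeholders):
```shell
# Placeholder filenames - substitute the actual quantized model and vision projector files from this repo.
./llava-cli -m yi-vl-34b.Q4_K_M.gguf --mmproj yi-vl-34b-mmproj-f16.gguf \
    --image ./example.jpg -p "Describe this image." --temp 0.2 -ngl 40
```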
Yi-VL-34B is prone to hallucinations; to me it appears to be a rushed release. Something did not go right in training.
However, while the 6B was the second-worst LLaVA-style model I've tested, the 34B did show some strengths. |
Unbabel/TowerInstruct-13B-v0.1 | Unbabel | 2024-05-08T15:00:54Z | 515 | 13 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"translation",
"en",
"de",
"fr",
"zh",
"pt",
"nl",
"ru",
"ko",
"it",
"es",
"arxiv:2402.17733",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| translation | 2024-01-29T10:39:36Z | ---
license: cc-by-nc-4.0
language:
- en
- de
- fr
- zh
- pt
- nl
- ru
- ko
- it
- es
metrics:
- comet
pipeline_tag: translation
---
# Model Card for TowerInstruct-13B-v0.1
## Model Details
### Model Description
TowerInstruct-13B is a language model that results from fine-tuning TowerBase on the TowerBlocks supervised fine-tuning dataset. TowerInstruct-13B-v0.1 is the first model in the series.
The model is trained to handle several translation-related tasks, such as general machine translation (e.g., sentence- and paragraph/document-level translation, terminology-aware translation, context-aware translation), automatic post-editing, named-entity recognition, grammatical error correction, and paraphrase generation.
We will release more details in the upcoming technical report. For now, you can check results obtained with the model [here](https://unbabel.com/announcing-tower-an-open-multilingual-llm-for-translation-related-tasks/).
- **Developed by:** Unbabel, Instituto Superior Técnico, CentraleSupélec University of Paris-Saclay
- **Model type:** A 13B parameter model fine-tuned on a mix of publicly available, synthetic datasets on translation-related tasks, as well as conversational datasets and code instructions.
- **Language(s) (NLP):** English, Portuguese, Spanish, French, German, Dutch, Italian, Korean, Chinese, Russian
- **License:** CC-BY-NC-4.0, Llama 2 is licensed under the [LLAMA 2 Community License](https://ai.meta.com/llama/license/), Copyright © Meta Platforms, Inc. All Rights Reserved.
- **Finetuned from model:** [TowerBase](https://huggingface.co/Unbabel/TowerBase-13B-v0.1)
## Intended uses & limitations
The model was initially fine-tuned on a filtered and preprocessed supervised fine-tuning dataset ([TowerBlocks-v0.2](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2)), which contains a diverse range of data sources:
- Translation (sentence and paragraph-level)
- Automatic Post Edition
- Machine Translation Evaluation
- Context-aware Translation
- Terminology-aware Translation
- Multi-reference Translation
- Named-entity Recognition
- Paraphrase Generation
- Synthetic Chat data
- Code instructions
You can find the dataset and all data sources of [TowerBlocks-v0.2](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2) here.
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
```python
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="Unbabel/TowerInstruct-13B-v0.1", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer’s chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{"role": "user", "content": "Translate the following text from Portuguese into English.\nPortuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução.\nEnglish:"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=False)
print(outputs[0]["generated_text"])
# <|im_start|>user
# Translate the following text from Portuguese into English.
# Portuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução.
# English:<|im_end|>
# <|im_start|>assistant
# A group of researchers has launched a new model for translation-related tasks.
```
### Out-of-Scope Use
The model is not guaranteed to perform for languages other than the 10 languages it supports. Even though we trained the model on conversational data and code instructions, it is not intended to be used as a conversational chatbot or code assistant.
We are currently working on improving quality and consistency on document-level translation. This model is not intended to be used as a document-level translator.
## Bias, Risks, and Limitations
TowerInstruct-v0.1 has not been aligned to human preferences, so the model may generate problematic outputs (e.g., hallucinations, harmful content, or false statements).
## Prompt Format
TowerInstruct-v0.1 was trained using the ChatML prompt templates without any system prompts. An example follows below:
```
<|im_start|>user
{USER PROMPT}<|im_end|>
<|im_start|>assistant
{MODEL RESPONSE}<|im_end|>
<|im_start|>user
[...]
```
### Supervised tasks
The prompts for all supervised tasks can be found in [TowerBlocks-v0.2](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2). We have used multiple prompt templates for each task. While different prompts may offer different outputs, the difference in downstream performance should be very minimal.
## Training Details
### Training Data
Link to [TowerBlocks-v0.2](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2).
#### Training Hyperparameters
The following hyperparameters were used during training:
- total_train_batch_size: 256
- learning_rate: 7e-06
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- weight_decay: 0.01
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- num_epochs: 4
- max_seq_length: 2048
## Citation
```bibtex
@misc{tower_llm_2024,
title={Tower: An Open Multilingual Large Language Model for Translation-Related Tasks},
author={Duarte M. Alves and José Pombal and Nuno M. Guerreiro and Pedro H. Martins and João Alves and Amin Farajian and Ben Peters and Ricardo Rei and Patrick Fernandes and Sweta Agrawal and Pierre Colombo and José G. C. de Souza and André F. T. Martins},
year={2024},
eprint={2402.17733},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
|
limiteinductive/Juggernaut-XL_v9_RunDiffusionPhoto_v2 | limiteinductive | 2024-02-20T21:18:05Z | 515 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2024-02-20T21:12:43Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
common-canvas/CommonCanvas-XL-NC | common-canvas | 2024-05-16T18:47:08Z | 515 | 9 | diffusers | [
"diffusers",
"onnx",
"safetensors",
"common-canvas",
"stable-diffusion",
"sdxl",
"en",
"dataset:common-canvas/commoncatalog-cc-by-sa",
"dataset:common-canvas/commoncatalog-cc-by",
"dataset:common-canvas/commoncatalog-cc-by-nc-sa",
"dataset:common-canvas/commoncatalog-cc-by-nc",
"arxiv:2310.16825",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2024-02-29T16:21:32Z | ---
license: cc-by-nc-sa-4.0
tags:
- common-canvas
- stable-diffusion
- sdxl
datasets:
- common-canvas/commoncatalog-cc-by-sa
- common-canvas/commoncatalog-cc-by
- common-canvas/commoncatalog-cc-by-nc-sa
- common-canvas/commoncatalog-cc-by-nc
language:
- en
---
# CommonCanvas-XL-NC
## Summary
CommonCanvas is a family of latent diffusion models capable of generating images from a given text prompt. The architecture is based off of Stable Diffusion XL. Different CommonCanvas models are trained exclusively on subsets of the CommonCatalog Dataset (See Data Card), a large dataset of Creative Commons licensed images with synthetic captions produced using a pre-trained BLIP-2 captioning model.
**Input:** CommonCatalog Text Captions
**Output:** CommonCatalog Images
**Architecture:** Stable Diffusion XL
**Version Number:** 0.1
The goal of this project is to produce a model that is competitive with Stable Diffusion XL, but to do so using an easily accessible dataset of known provenance. Doing so makes replicating the model significantly easier and provides proper attribution to all the Creative Commons work used to train the model. The exact training recipe of the model can be found in the paper: https://arxiv.org/abs/2310.16825
## Performance Limitations
CommonCanvas under-performs in several categories, including faces, general photography, and paintings (see paper, Figure 8). These datasets all originated from the Conceptual Captions dataset, which relies on web-scraped data. These web-sourced captions, while abundant, may not always align with human-generated language nuances. Transitioning to synthetic captions introduces certain performance challenges; however, the drop in performance is not as dramatic as one might assume.
## Training Dataset Limitations
The model is trained on 10 year old YFCC data and may not have modern concepts or recent events in its training corpus. Performance on this model will be worse on certain proper nouns or specific celebrities, but this is a feature not a bug. The model may not generate known artwork, individual celebrities, or specific locations due to the autogenerated nature of the caption data.
Note: The non-commercial variants of this model are explicitly not intended to be used in commercial settings.
* It is trained on data derived from the Flickr100M dataset. The information is dated and known to have a bias towards internet connected Western countries. Some areas such as the global south lack representation.
## Associated Risks
* Text in images produced by the model will likely be difficult to read.
* The model struggles with more complex tasks that require compositional understanding
* It may not accurately generate faces or representations of specific people.
* The model primarily learned from English descriptions and may not perform as effectively in other languages.
* The autoencoder aspect of the model introduces some information loss.
* It may be possible to guide the model to generate objectionable content, i.e. nudity or other NSFW material.
## Intended Uses
* Using the model for generative AI research
* Safe deployment of models which have the potential to generate harmful content.
* Probing and understanding the limitations and biases of generative models.
* Generation of artworks and use in design and other artistic processes.
* Applications in educational or creative tools.
* Research on generative models.
## Unintended Uses
* Commercial Uses
## Usage
We recommend using the MosaicML Diffusion Repo to finetune / train the model: https://github.com/mosaicml/diffusion.
Example finetuning code coming soon.
### Spaces demo
Try the model demo on [Hugging Face Spaces](https://huggingface.co/spaces/common-canvas/CommonCanvas)
### Inference with 🧨 diffusers
```py
import torch
from diffusers import StableDiffusionXLPipeline

device = "cuda"  # or "cpu" if no GPU is available

pipe = StableDiffusionXLPipeline.from_pretrained(
    "common-canvas/CommonCanvas-XL-NC",
    custom_pipeline="multimodalart/sdxl_perturbed_attention_guidance",  # read more at https://huggingface.co/multimodalart/sdxl_perturbed_attention_guidance
    torch_dtype=torch.float16,
).to(device)
prompt = "a cat sitting in a car seat"
image = pipe(prompt, num_inference_steps=25).images[0]
```
### Inference with ComfyUI / AUTOMATIC1111
[Download safetensors ⬇️](https://huggingface.co/common-canvas/CommonCanvas-XLNC/resolve/main/commoncanvas_xl_nc.safetensors?download=true)
## Evaluation/Validation
We validated the model against Stability AI's SD2 model and compared the two via a human user study.
## Acknowledgements
We thank @multimodalart, @Wauplin, and @lhoestq at Hugging Face for helping us host the dataset, and model weights.
## Citation
```
@article{gokaslan2023commoncanvas,
title={CommonCanvas: An Open Diffusion Model Trained with Creative-Commons Images},
author={Gokaslan, Aaron and Cooper, A Feder and Collins, Jasmine and Seguin, Landan and Jacobson, Austin and Patel, Mihir and Frankle, Jonathan and Stephenson, Cory and Kuleshov, Volodymyr},
journal={arXiv preprint arXiv:2310.16825},
year={2023}
}
``` |
qwp4w3hyb/Starling-LM-7B-beta-iMat-GGUF | qwp4w3hyb | 2024-04-05T01:36:18Z | 515 | 0 | transformers | [
"transformers",
"gguf",
"starling",
"reward model",
"RLHF",
"RLAIF",
"en",
"dataset:berkeley-nest/Nectar",
"base_model:Nexusflow/Starling-LM-7B-beta",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-28T00:15:29Z | ---
base_model: Nexusflow/Starling-LM-7B-beta
datasets:
- berkeley-nest/Nectar
language:
- en
library_name: transformers
tags:
- starling
- reward model
- RLHF
- RLAIF
model-index:
- name: Nexusflow/Starling-LM-7B-beta-iMat-GGUF
results: []
license: apache-2.0
---
# Starling-LM-7B-beta-iMat-GGUF
Source Model: [Nexusflow/Starling-LM-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta)
Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) commit [46acb3676718b983157058aecf729a2064fc7d34](https://github.com/ggerganov/llama.cpp/commit/46acb3676718b983157058aecf729a2064fc7d34)
Imatrix was generated from the f16 gguf via this command:
```bash
./imatrix -c 512 -m $out_path/$base_quant_name -f $llama_cpp_path/groups_merged.txt -o $out_path/imat-f16-gmerged.dat
```
Using the dataset from [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
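As an illustrative follow-up (not part of the original card — the binary name, paths, and quant type below are assumptions that depend on your llama.cpp build), the generated imatrix file is then passed to the quantization tool:

```bash
# Hypothetical example: quantize the f16 GGUF using the generated importance matrix.
./quantize --imatrix $out_path/imat-f16-gmerged.dat \
    $out_path/Starling-LM-7B-beta-f16.gguf \
    $out_path/Starling-LM-7B-beta-IQ4_XS.gguf \
    IQ4_XS
```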
|
stablediffusionapi/ae-realistic-v6 | stablediffusionapi | 2024-04-10T17:27:40Z | 515 | 0 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2024-04-10T17:25:02Z | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# ae-realistic-v6 API Inference

## Get API Key
Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.
Replace Key in below code, change **model_id** to "ae-realistic-v6"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/ae-realistic-v6)
Model link: [View model](https://modelslab.com/models/ae-realistic-v6)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json
url = "https://modelslab.com/api/v6/images/text2img"
payload = json.dumps({
"key": "your_api_key",
"model_id": "ae-realistic-v6",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
legraphista/aya-23-35B-GGUF | legraphista | 2024-05-23T22:44:21Z | 515 | 1 | null | [
"gguf",
"text-generation",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"base_model:CohereForAI/aya-23-35B",
"license:cc-by-nc-4.0",
"region:us"
]
| text-generation | 2024-05-23T16:46:51Z | ---
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
license: cc-by-nc-4.0
quantized_by: legraphista
pipeline_tag: text-generation
base_model: CohereForAI/aya-23-35B
---
# aya-23-35B-GGUF
- This is GGUF quantized version of [CohereForAI/aya-23-35B](https://huggingface.co/CohereForAI/aya-23-35B) created using llama.cpp [74f33adf](https://github.com/ggerganov/llama.cpp/tree/74f33adf) |
mradermacher/Llama-3-70b-chat-seqlen-1.8k.16.bf16-GGUF | mradermacher | 2024-07-02T23:30:33Z | 515 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Automatise/Llama-3-70b-chat-seqlen-1.8k.16.bf16",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-05T14:49:39Z | ---
base_model: Automatise/Llama-3-70b-chat-seqlen-1.8k.16.bf16
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Automatise/Llama-3-70b-chat-seqlen-1.8k.16.bf16
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
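As a brief illustration (the file names below are hypothetical, and the exact tool depends on how the parts were produced), split GGUF files made with llama.cpp's gguf-split tool can be merged back together, while older byte-wise splits can simply be concatenated:

```bash
# Parts produced by llama.cpp's gguf-split tool (hypothetical names):
./llama-gguf-split --merge model-00001-of-00003.gguf model-merged.gguf

# Plain byte-wise splits (older convention) can be joined with cat:
cat model.gguf.part1of3 model.gguf.part2of3 model.gguf.part3of3 > model.gguf
```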
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [P1](https://huggingface.co/mradermacher/Llama-3-70b-chat-seqlen-1.8k.16.bf16-GGUF/resolve/main/Llama-3-70b-chat-seqlen-1.8k.16.bf16.IQ3_M.gguf) [P2](https://huggingface.co/mradermacher/Llama-3-70b-chat-seqlen-1.8k.16.bf16-GGUF/resolve/main/Llama-3-70b-chat-seqlen-1.8k.16.bf16.IQ3_S.gguf) [P3](https://huggingface.co/mradermacher/Llama-3-70b-chat-seqlen-1.8k.16.bf16-GGUF/resolve/main/Llama-3-70b-chat-seqlen-1.8k.16.bf16.IQ3_XS.gguf) [P4](https://huggingface.co/mradermacher/Llama-3-70b-chat-seqlen-1.8k.16.bf16-GGUF/resolve/main/Llama-3-70b-chat-seqlen-1.8k.16.bf16.IQ4_XS.gguf) [P5](https://huggingface.co/mradermacher/Llama-3-70b-chat-seqlen-1.8k.16.bf16-GGUF/resolve/main/Llama-3-70b-chat-seqlen-1.8k.16.bf16.Q2_K.gguf) [P6](https://huggingface.co/mradermacher/Llama-3-70b-chat-seqlen-1.8k.16.bf16-GGUF/resolve/main/Llama-3-70b-chat-seqlen-1.8k.16.bf16.Q3_K_L.gguf) [P7](https://huggingface.co/mradermacher/Llama-3-70b-chat-seqlen-1.8k.16.bf16-GGUF/resolve/main/Llama-3-70b-chat-seqlen-1.8k.16.bf16.Q3_K_M.gguf) [P8](https://huggingface.co/mradermacher/Llama-3-70b-chat-seqlen-1.8k.16.bf16-GGUF/resolve/main/Llama-3-70b-chat-seqlen-1.8k.16.bf16.Q3_K_S.gguf) [P9](https://huggingface.co/mradermacher/Llama-3-70b-chat-seqlen-1.8k.16.bf16-GGUF/resolve/main/Llama-3-70b-chat-seqlen-1.8k.16.bf16.Q4_K_M.gguf) [P10](https://huggingface.co/mradermacher/Llama-3-70b-chat-seqlen-1.8k.16.bf16-GGUF/resolve/main/Llama-3-70b-chat-seqlen-1.8k.16.bf16.Q4_K_S.gguf) [P11](https://huggingface.co/mradermacher/Llama-3-70b-chat-seqlen-1.8k.16.bf16-GGUF/resolve/main/Llama-3-70b-chat-seqlen-1.8k.16.bf16.Q5_K_M.gguf) [P12](https://huggingface.co/mradermacher/Llama-3-70b-chat-seqlen-1.8k.16.bf16-GGUF/resolve/main/Llama-3-70b-chat-seqlen-1.8k.16.bf16.Q5_K_S.gguf) [P13](https://huggingface.co/mradermacher/Llama-3-70b-chat-seqlen-1.8k.16.bf16-GGUF/resolve/main/Llama-3-70b-chat-seqlen-1.8k.16.bf16.Q6_K.gguf) [P14](https://huggingface.co/mradermacher/Llama-3-70b-chat-seqlen-1.8k.16.bf16-GGUF/resolve/main/Llama-3-70b-chat-seqlen-1.8k.16.bf16.Q8_0.gguf) [P15](https://huggingface.co/mradermacher/Llama-3-70b-chat-seqlen-1.8k.16.bf16-GGUF/resolve/main/Llama-3-70b-chat-seqlen-1.8k.16.bf16.f16.gguf) | bf16 | 0.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Augmentasanguis-PDE-8x7B-i1-GGUF | mradermacher | 2024-06-14T11:32:38Z | 515 | 0 | transformers | [
"transformers",
"gguf",
"not-for-all-audiences",
"en",
"base_model:Envoid/Augmentasanguis-PDE-8x7B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-14T03:17:18Z | ---
base_model: Envoid/Augmentasanguis-PDE-8x7B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- not-for-all-audiences
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Envoid/Augmentasanguis-PDE-8x7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Augmentasanguis-PDE-8x7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-PDE-8x7B-i1-GGUF/resolve/main/Augmentasanguis-PDE-8x7B.i1-IQ1_S.gguf) | i1-IQ1_S | 9.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-PDE-8x7B-i1-GGUF/resolve/main/Augmentasanguis-PDE-8x7B.i1-IQ1_M.gguf) | i1-IQ1_M | 10.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-PDE-8x7B-i1-GGUF/resolve/main/Augmentasanguis-PDE-8x7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 12.7 | |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-PDE-8x7B-i1-GGUF/resolve/main/Augmentasanguis-PDE-8x7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 14.0 | |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-PDE-8x7B-i1-GGUF/resolve/main/Augmentasanguis-PDE-8x7B.i1-IQ2_S.gguf) | i1-IQ2_S | 14.2 | |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-PDE-8x7B-i1-GGUF/resolve/main/Augmentasanguis-PDE-8x7B.i1-IQ2_M.gguf) | i1-IQ2_M | 15.6 | |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-PDE-8x7B-i1-GGUF/resolve/main/Augmentasanguis-PDE-8x7B.i1-Q2_K.gguf) | i1-Q2_K | 17.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-PDE-8x7B-i1-GGUF/resolve/main/Augmentasanguis-PDE-8x7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 18.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-PDE-8x7B-i1-GGUF/resolve/main/Augmentasanguis-PDE-8x7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-PDE-8x7B-i1-GGUF/resolve/main/Augmentasanguis-PDE-8x7B.i1-IQ3_S.gguf) | i1-IQ3_S | 20.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-PDE-8x7B-i1-GGUF/resolve/main/Augmentasanguis-PDE-8x7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 20.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-PDE-8x7B-i1-GGUF/resolve/main/Augmentasanguis-PDE-8x7B.i1-IQ3_M.gguf) | i1-IQ3_M | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-PDE-8x7B-i1-GGUF/resolve/main/Augmentasanguis-PDE-8x7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 22.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-PDE-8x7B-i1-GGUF/resolve/main/Augmentasanguis-PDE-8x7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 24.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-PDE-8x7B-i1-GGUF/resolve/main/Augmentasanguis-PDE-8x7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 25.2 | |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-PDE-8x7B-i1-GGUF/resolve/main/Augmentasanguis-PDE-8x7B.i1-Q4_0.gguf) | i1-Q4_0 | 26.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-PDE-8x7B-i1-GGUF/resolve/main/Augmentasanguis-PDE-8x7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 26.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-PDE-8x7B-i1-GGUF/resolve/main/Augmentasanguis-PDE-8x7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 28.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-PDE-8x7B-i1-GGUF/resolve/main/Augmentasanguis-PDE-8x7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 32.3 | |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-PDE-8x7B-i1-GGUF/resolve/main/Augmentasanguis-PDE-8x7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 33.3 | |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-PDE-8x7B-i1-GGUF/resolve/main/Augmentasanguis-PDE-8x7B.i1-Q6_K.gguf) | i1-Q6_K | 38.5 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
nazimali/Mistral-7B-Instruct-v0.3-Q6_K-GGUF | nazimali | 2024-06-25T02:49:41Z | 515 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"license:apache-2.0",
"region:us"
]
| null | 2024-06-25T02:49:14Z | ---
base_model: mistralai/Mistral-7B-Instruct-v0.3
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# nazimali/Mistral-7B-Instruct-v0.3-Q6_K-GGUF
This model was converted to GGUF format from [`mistralai/Mistral-7B-Instruct-v0.3`](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo nazimali/Mistral-7B-Instruct-v0.3-Q6_K-GGUF --hf-file mistral-7b-instruct-v0.3-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo nazimali/Mistral-7B-Instruct-v0.3-Q6_K-GGUF --hf-file mistral-7b-instruct-v0.3-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo nazimali/Mistral-7B-Instruct-v0.3-Q6_K-GGUF --hf-file mistral-7b-instruct-v0.3-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo nazimali/Mistral-7B-Instruct-v0.3-Q6_K-GGUF --hf-file mistral-7b-instruct-v0.3-q6_k.gguf -c 2048
```
|
NikolayKozloff/RoGemma-7b-Instruct-Q5_0-GGUF | NikolayKozloff | 2024-06-30T19:26:48Z | 515 | 1 | null | [
"gguf",
"text-generation-inference",
"ro",
"region:us"
]
| null | 2024-06-30T15:45:27Z | ---
language:
- ro
tags:
- text-generation-inference
--- |
Langboat/mengzi-bert-base-fin | Langboat | 2023-05-08T03:39:10Z | 514 | 12 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"zh",
"arxiv:2110.06696",
"doi:10.57967/hf/0024",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:04Z | ---
language:
- zh
license: apache-2.0
---
# Mengzi-BERT base fin model (Chinese)
Continued pre-training of mengzi-bert-base on 20 GB of financial news and research reports. Masked language modeling (MLM), part-of-speech (POS) tagging, and sentence order prediction (SOP) are used as training tasks.
[Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese](https://arxiv.org/abs/2110.06696)
## Usage
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained("Langboat/mengzi-bert-base-fin")
model = BertModel.from_pretrained("Langboat/mengzi-bert-base-fin")
```
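As a quick sanity check, the checkpoint can also be loaded into the `fill-mask` pipeline (an illustrative sketch, not from the original card: it assumes the uploaded weights include the masked-language-modeling head, and the example sentence is made up):

```python
from transformers import pipeline

# Illustrative only: predict the masked token in a short Chinese finance sentence.
fill_mask = pipeline("fill-mask", model="Langboat/mengzi-bert-base-fin")
for candidate in fill_mask("央行宣布下调存款准备金[MASK]。"):
    print(candidate["token_str"], candidate["score"])
```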
## Citation
If you find the technical report or resource is useful, please cite the following technical report in your paper.
```
@misc{zhang2021mengzi,
title={Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese},
author={Zhuosheng Zhang and Hanqing Zhang and Keming Chen and Yuhang Guo and Jingyun Hua and Yulong Wang and Ming Zhou},
year={2021},
eprint={2110.06696},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
javirandor/passgpt-16characters | javirandor | 2023-07-06T23:34:15Z | 514 | 6 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"passwords",
"cybersecurity",
"arxiv:2306.01545",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-06-15T16:17:37Z | ---
extra_gated_fields:
Institution: text
Country: text
Brief description of the project where PassGPT will be used: text
Reference to previous research and/or other comments: text
I agree to use this model for non-commercial use ONLY: checkbox
I agree not to use the model to conduct experiments that cause harm to human subjects: checkbox
widget:
- text: <s>ilov
example_title: Example 1
- text: <s>1234
example_title: Example 2
- text: <s>
example_title: Example 3
- text: <s>admin
example_title: Example 4
pipeline_tag: text-generation
tags:
- passwords
- cybersecurity
---
# PassGPT
PassGPT is a causal language model trained on password leaks. It was first introduced in [this paper](https://arxiv.org/abs/2306.01545). This version of the model was trained on passwords from the RockYou leak, after filtering those that were at most 16 characters long. You can also access PassGPT trained on passwords up to 10 characters long, without restrictions [here](https://huggingface.co/javirandor/passgpt-10characters).
**This is a curated version of the model reported in the paper**. Vocabulary size was reduced to the most meaningful characters and training was slightly optimized. Results are slightly better with these architectures.
### Usage and License Notices
[](https://github.com/javirandor/passbert/blob/main/LICENSE)
PassGPT is intended and licensed for research use only. The model and code are CC BY NC 4.0 (allowing only non-commercial use) and should not be used outside of research purposes. This model should never be used to attack real systems. **Access will be granted upon request. Please, make sure to indicate the details and scope of your project.**
### Model description
The model inherits the [GPT2LMHeadModel](https://huggingface.co/docs/transformers/model_doc/gpt2#transformers.GPT2LMHeadModel) architecture and implements a custom [BertTokenizer](https://huggingface.co/docs/transformers/model_doc/bert#transformers.BertTokenizer) that encodes each character in a password as a single token, avoiding merges. It was trained from a random initialization, and the code for training can be found in the [official repository](https://github.com/javirandor/passgpt/).
### Password Generation
Passwords can be sampled from the model using the [built-in generation methods](https://huggingface.co/docs/transformers/v4.30.0/en/main_classes/text_generation#transformers.GenerationMixin.generate) provided by HuggingFace and using the "start of password token" as seed (i.e. `<s>`). This code can be used to generate one password with PassGPT. Note you may need to generate an [access token](https://huggingface.co/docs/hub/security-tokens) to authenticate your download.
```
import torch
from transformers import GPT2LMHeadModel
from transformers import RobertaTokenizerFast
tokenizer = RobertaTokenizerFast.from_pretrained("javirandor/passgpt-16characters",
use_auth_token="YOUR_ACCESS_TOKEN",
max_len=18,
padding="max_length",
truncation=True,
do_lower_case=False,
strip_accents=False,
mask_token="<mask>",
unk_token="<unk>",
pad_token="<pad>",
truncation_side="right")
model = GPT2LMHeadModel.from_pretrained("javirandor/passgpt-16characters", use_auth_token="YOUR_ACCESS_TOKEN").eval()
NUM_GENERATIONS = 1
# Generate passwords sampling from the beginning of password token
g = model.generate(torch.tensor([[tokenizer.bos_token_id]]),
do_sample=True,
num_return_sequences=NUM_GENERATIONS,
max_length=18,
pad_token_id=tokenizer.pad_token_id,
bad_words_ids=[[tokenizer.bos_token_id]])
# Remove start of sentence token
g = g[:, 1:]
decoded = tokenizer.batch_decode(g.tolist())
decoded_clean = [i.split("</s>")[0] for i in decoded] # Get content before end of password token
# Print your sampled passwords!
print(decoded_clean)
```
You can find a more flexible script for sampling [here](https://github.com/javirandor/passgpt/blob/main/src/generate_passwords.py).
### Cite our work
```
@article{rando2023passgpt,
title={PassGPT: Password Modeling and (Guided) Generation with Large Language Models},
author={Rando, Javier and Perez-Cruz, Fernando and Hitaj, Briland},
journal={arXiv preprint arXiv:2306.01545},
year={2023}
}
``` |
WizardLMTeam/WizardLM-13B-V1.2 | WizardLMTeam | 2023-09-09T06:45:42Z | 514 | 217 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2304.12244",
"arxiv:2306.08568",
"arxiv:2308.09583",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-07-25T13:51:28Z | ---
license: llama2
---
This is the **Full-Weight** of WizardLM-13B V1.2 model, this model is trained from **Llama-2 13b**.
## WizardLM: Empowering Large Pre-Trained Language Models to Follow Complex Instructions
<p align="center">
🤗 <a href="https://huggingface.co/WizardLM" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/nlpxucan/WizardLM" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a>
</p>
## News
- 🔥🔥🔥[2023/08/26] We released **WizardCoder-Python-34B-V1.0** , which achieves the **73.2 pass@1** and surpasses **GPT4 (2023/03/15)**, **ChatGPT-3.5**, and **Claude2** on the [HumanEval Benchmarks](https://github.com/openai/human-eval). For more details, please refer to [WizardCoder](https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder).
- [2023/06/16] We released **WizardCoder-15B-V1.0** , which surpasses **Claude-Plus (+6.8)**, **Bard (+15.3)** and **InstructCodeT5+ (+22.3)** on the [HumanEval Benchmarks](https://github.com/openai/human-eval). For more details, please refer to [WizardCoder](https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder).
| Model | Checkpoint | Paper | HumanEval | MBPP | Demo | License |
| ----- |------| ---- |------|-------| ----- | ----- |
| WizardCoder-Python-34B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-34B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 73.2 | 61.2 | [Demo](http://47.103.63.15:50085/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-15B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-15B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 59.8 |50.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
| WizardCoder-Python-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 64.0 | 55.6 | -- | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-Python-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 55.5 | 51.6 | [Demo](http://47.103.63.15:50088/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-3B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-3B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 34.8 |37.4 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
| WizardCoder-1B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-1B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 23.8 |28.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
- 🔥 [08/11/2023] We release **WizardMath** Models.
- 🔥 Our **WizardMath-70B-V1.0** model slightly outperforms some closed-source LLMs on the GSM8K, including **ChatGPT 3.5**, **Claude Instant 1** and **PaLM 2 540B**.
- 🔥 Our **WizardMath-70B-V1.0** model achieves **81.6 pass@1** on the [GSM8k Benchmarks](https://github.com/openai/grade-school-math), which is **24.8** points higher than the SOTA open-source LLM.
- 🔥 Our **WizardMath-70B-V1.0** model achieves **22.7 pass@1** on the [MATH Benchmarks](https://github.com/hendrycks/math), which is **9.2** points higher than the SOTA open-source LLM.
| Model | Checkpoint | Paper | GSM8k | MATH |Online Demo| License|
| ----- |------| ---- |------|-------| ----- | ----- |
| WizardMath-70B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-70B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **81.6** | **22.7** |[Demo](http://47.103.63.15:50083/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a> |
| WizardMath-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **63.9** | **14.0** |[Demo](http://47.103.63.15:50082/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a> |
| WizardMath-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **54.9** | **10.7** | [Demo](http://47.103.63.15:50080/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a>|
<font size=4>
| <sup>Model</sup> | <sup>Checkpoint</sup> | <sup>Paper</sup> |<sup>MT-Bench</sup> | <sup>AlpacaEval</sup> | <sup>WizardEval</sup> | <sup>HumanEval</sup> | <sup>License</sup>|
| ----- |------| ---- |------|-------| ----- | ----- | ----- |
| <sup>WizardLM-13B-V1.2</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.2" target="_blank">HF Link</a> </sup>| | <sup>7.06</sup> | <sup>89.17%</sup> | <sup>101.4% </sup>|<sup>36.6 pass@1</sup>|<sup> <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License </a></sup> |
| <sup>WizardLM-13B-V1.1</sup> |<sup> 🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.1" target="_blank">HF Link</a> </sup> | | <sup>6.76</sup> |<sup>86.32%</sup> | <sup>99.3% </sup> |<sup>25.0 pass@1</sup>| <sup>Non-commercial</sup>|
| <sup>WizardLM-30B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-30B-V1.0" target="_blank">HF Link</a></sup> | | <sup>7.01</sup> | | <sup>97.8% </sup> | <sup>37.8 pass@1</sup>| <sup>Non-commercial</sup> |
| <sup>WizardLM-13B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.0" target="_blank">HF Link</a> </sup> | | <sup>6.35</sup> | <sup>75.31%</sup> | <sup>89.1% </sup> |<sup> 24.0 pass@1 </sup> | <sup>Non-commercial</sup>|
| <sup>WizardLM-7B-V1.0 </sup>| <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-7B-V1.0" target="_blank">HF Link</a> </sup> |<sup> 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> </sup>| | | <sup>78.0% </sup> |<sup>19.1 pass@1 </sup>|<sup> Non-commercial</sup>|
</font>
**Repository**: https://github.com/nlpxucan/WizardLM
**Twitter**:
- 🔥🔥🔥 [7/25/2023] We released **WizardLM V1.2** models. The **WizardLM-13B-V1.2** is here ([Demo_13B-V1.2](https://b7a19878988c8c73.gradio.app), [Demo_13B-V1.2_bak-1](https://d0a37a76e0ac4b52.gradio.app/), [Full Model Weight](https://huggingface.co/WizardLM/WizardLM-13B-V1.2)). Please checkout the [paper](https://arxiv.org/abs/2304.12244).
- 🔥🔥🔥 [7/25/2023] The **WizardLM-13B-V1.2** achieves **7.06** on [MT-Bench Leaderboard](https://chat.lmsys.org/?leaderboard), **89.17%** on [AlpacaEval Leaderboard](https://tatsu-lab.github.io/alpaca_eval/), and **101.4%** on [WizardLM Eval](https://github.com/nlpxucan/WizardLM/blob/main/WizardLM/data/WizardLM_testset.jsonl). (Note: MT-Bench and AlpacaEval are all self-test, will push update and request review. All tests are completed under their official settings.)
❗<b>Note for model system prompts usage:</b>
<b>WizardLM</b> adopts the prompt format from <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be as following:
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>USER: Who are you? ASSISTANT: I am WizardLM.</s>......
```
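For illustration (this snippet is not part of the original card; the repository id, dtype, and sampling settings are assumptions to adjust as needed), a single-turn prompt in this format can be run with 🤗 Transformers:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Illustrative sketch: build a single-turn Vicuna-style prompt as described above.
model_id = "WizardLM/WizardLM-13B-V1.2"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

system = ("A chat between a curious user and an artificial intelligence assistant. "
          "The assistant gives helpful, detailed, and polite answers to the user's questions.")
prompt = f"{system} USER: Write a haiku about wizards. ASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```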
## Inference WizardLM Demo Script
We provide the inference WizardLM demo code [here](https://github.com/nlpxucan/WizardLM/tree/main/demo).
Please cite the paper if you use the data or code from WizardLM.
```
@article{xu2023wizardlm,
title={Wizardlm: Empowering large language models to follow complex instructions},
author={Xu, Can and Sun, Qingfeng and Zheng, Kai and Geng, Xiubo and Zhao, Pu and Feng, Jiazhan and Tao, Chongyang and Jiang, Daxin},
journal={arXiv preprint arXiv:2304.12244},
year={2023}
}
```
❗<b>To address a common concern about the dataset:</b>
Recently, there have been clear changes in the open-source policy and regulations of our overall organization's code, data, and models.
Despite this, we have still worked hard to obtain approval to release the model weights first; the data, however, requires stricter auditing and is still under review by our legal team.
Our researchers have no authority to release it publicly without authorization.
Thank you for your understanding. |
Kyle1668/boss-sentiment-bert-base-uncased | Kyle1668 | 2023-08-09T17:50:43Z | 514 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-08-08T16:45:22Z | Entry not found |
mradermacher/Emerhyst-20B-GGUF | mradermacher | 2024-05-06T06:01:38Z | 514 | 1 | transformers | [
"transformers",
"gguf",
"not-for-all-audiences",
"nsfw",
"en",
"base_model:Undi95/Emerhyst-20B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-24T02:24:49Z | ---
base_model: Undi95/Emerhyst-20B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- not-for-all-audiences
- nsfw
---
## About
static quants of https://huggingface.co/Undi95/Emerhyst-20B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Emerhyst-20B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Emerhyst-20B-GGUF/resolve/main/Emerhyst-20B.Q2_K.gguf) | Q2_K | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/Emerhyst-20B-GGUF/resolve/main/Emerhyst-20B.IQ3_XS.gguf) | IQ3_XS | 8.5 | |
| [GGUF](https://huggingface.co/mradermacher/Emerhyst-20B-GGUF/resolve/main/Emerhyst-20B.IQ3_S.gguf) | IQ3_S | 9.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Emerhyst-20B-GGUF/resolve/main/Emerhyst-20B.Q3_K_S.gguf) | Q3_K_S | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/Emerhyst-20B-GGUF/resolve/main/Emerhyst-20B.IQ3_M.gguf) | IQ3_M | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/Emerhyst-20B-GGUF/resolve/main/Emerhyst-20B.Q3_K_M.gguf) | Q3_K_M | 10.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Emerhyst-20B-GGUF/resolve/main/Emerhyst-20B.Q3_K_L.gguf) | Q3_K_L | 10.9 | |
| [GGUF](https://huggingface.co/mradermacher/Emerhyst-20B-GGUF/resolve/main/Emerhyst-20B.IQ4_XS.gguf) | IQ4_XS | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/Emerhyst-20B-GGUF/resolve/main/Emerhyst-20B.Q4_0.gguf) | Q4_0 | 11.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Emerhyst-20B-GGUF/resolve/main/Emerhyst-20B.IQ4_NL.gguf) | IQ4_NL | 11.6 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Emerhyst-20B-GGUF/resolve/main/Emerhyst-20B.Q4_K_S.gguf) | Q4_K_S | 11.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Emerhyst-20B-GGUF/resolve/main/Emerhyst-20B.Q4_K_M.gguf) | Q4_K_M | 12.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Emerhyst-20B-GGUF/resolve/main/Emerhyst-20B.Q5_K_S.gguf) | Q5_K_S | 14.1 | |
| [GGUF](https://huggingface.co/mradermacher/Emerhyst-20B-GGUF/resolve/main/Emerhyst-20B.Q5_K_M.gguf) | Q5_K_M | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/Emerhyst-20B-GGUF/resolve/main/Emerhyst-20B.Q6_K.gguf) | Q6_K | 16.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Emerhyst-20B-GGUF/resolve/main/Emerhyst-20B.Q8_0.gguf) | Q8_0 | 21.5 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
PrunaAI/Meta-Llama-3-8B-GGUF-smashed | PrunaAI | 2024-04-22T17:45:13Z | 514 | 0 | null | [
"gguf",
"pruna-ai",
"region:us"
]
| null | 2024-04-22T11:20:45Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
## This repo contains GGUF versions of the meta-llama/Meta-Llama-3-8B model.
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with GGUF.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***What is the model format?*** We use GGUF format.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
# Downloading and running the models
You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info checkout [this chart](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [this guide](https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overview_of_gguf_quantization_methods/):
| Quant type | Description |
|------------|--------------------------------------------------------------------------------------------|
| Q5_K_M | High quality, recommended. |
| Q5_K_S | High quality, recommended. |
| Q4_K_M | Good quality, uses about 4.83 bits per weight, recommended. |
| Q4_K_S | Slightly lower quality with more space savings, recommended. |
| IQ4_NL | Decent quality, slightly smaller than Q4_K_S with similar performance, recommended. |
| IQ4_XS | Decent quality, smaller than Q4_K_S with similar performance, recommended. |
| Q3_K_L | Lower quality but usable, good for low RAM availability. |
| Q3_K_M | Even lower quality. |
| IQ3_M | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| IQ3_S | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| Q3_K_S | Low quality, not recommended. |
| IQ3_XS | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| Q2_K | Very low quality but surprisingly usable. |
## How to download GGUF files ?
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
- **Option A** - Downloading in `text-generation-webui`:
  - **Step 1**: Under Download Model, you can enter the model repo: PrunaAI/Meta-Llama-3-8B-GGUF-smashed and below it, a specific filename to download, such as: Meta-Llama-3-8B.IQ3_M.gguf.
- **Step 2**: Then click Download.
- **Option B** - Downloading on the command line (including multiple files at once):
- **Step 1**: We recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
- **Step 2**: Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download PrunaAI/Meta-Llama-3-8B-GGUF-smashed Meta-Llama-3-8B.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
Alternatively, you can also download multiple files at once with a pattern:
```shell
huggingface-cli download PrunaAI/Meta-Llama-3-8B-GGUF-smashed --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download PrunaAI/Meta-Llama-3-8B-GGUF-smashed Meta-Llama-3-8B.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## How to run model in GGUF format?
- **Option A** - Introductory example with `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Meta-Llama-3-8B.IQ3_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt} [/INST]"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
- **Option B** - Running in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp).
- **Option C** - Running from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Meta-Llama-3-8B.IQ3_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<s>[INST] {prompt} [/INST]", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Meta-Llama-3-8B.IQ3_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
- **Option D** - Running with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain; a minimal sketch follows the links below:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
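The following is an illustrative sketch only (not taken from those guides): it assumes a recent `langchain-community` release and a locally downloaded GGUF file at a hypothetical path.

```python
from langchain_community.llms import LlamaCpp

# Illustrative only: point LlamaCpp at a downloaded GGUF file (hypothetical path).
llm = LlamaCpp(
    model_path="./Meta-Llama-3-8B.IQ3_M.gguf",
    n_ctx=8192,        # context window to allocate
    n_gpu_layers=35,   # set to 0 if no GPU acceleration is available
    temperature=0.7,
)

print(llm.invoke("Write a short haiku about llamas."))
```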
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
|
p1atdev/dart-v2-base | p1atdev | 2024-05-11T17:22:24Z | 514 | 0 | transformers | [
"transformers",
"onnx",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"optimum",
"danbooru",
"dataset:isek-ai/danbooru-tags-2024",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-06T08:46:18Z | ---
library_name: transformers
license: apache-2.0
datasets:
- isek-ai/danbooru-tags-2024
tags:
- trl
- sft
- optimum
- danbooru
inference: false
---
# Dart (Danbooru Tags Transformer) v2
This is the Dart (Danbooru Tags Transformer) v2 base model, which generates Danbooru tags.
Demo: [🤗 Space with ZERO](https://huggingface.co/spaces/p1atdev/danbooru-tags-transformer-v2)
## Model variants
|Name|Architecture|Param size|Type|
|-|-|-|-|
|[v2-moe-sft](https://huggingface.co/p1atdev/dart-v2-moe-sft)|Mixtral|166m|SFT|
|[v2-moe-base](https://huggingface.co/p1atdev/dart-v2-moe-base)|Mixtral|166m|Pretrain|
|[v2-sft](https://huggingface.co/p1atdev/dart-v2-sft)|Mistral|114m|SFT|
|[v2-base](https://huggingface.co/p1atdev/dart-v2-base)|Mistral|114m|Pretrain|
|[v2-vectors](https://huggingface.co/p1atdev/dart-v2-vectors)|Embedding|-|Tag Embedding|
## Usage
### Using 🤗Transformers
```py
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
MODEL_NAME = "p1atdev/dart-v2-base"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16)
prompt = (
f"<|bos|>"
f"<copyright>vocaloid</copyright>"
f"<character>hatsune miku</character>"
f"<|rating:general|><|aspect_ratio:tall|><|length:long|>"
f"<general>1girl"
)
inputs = tokenizer(prompt, return_tensors="pt").input_ids
with torch.no_grad():
outputs = model.generate(
inputs,
do_sample=True,
temperature=1.0,
top_p=1.0,
top_k=100,
max_new_tokens=128,
num_beams=1,
)
print(", ".join([tag for tag in tokenizer.batch_decode(outputs[0], skip_special_tokens=True) if tag.strip() != ""]))
```
### Using 📦`dartrs` library
> [!WARNING]
> This library is very experimental and there will be breaking changes in the future.
[📦`dartrs`](https://github.com/p1atdev/dartrs) is a [🤗`candle`](https://github.com/huggingface/candle) backend inference library for Dart v2 models.
```bash
pip install -U dartrs
```
```py
from dartrs.dartrs import DartTokenizer
from dartrs.utils import get_generation_config
from dartrs.v2 import (
compose_prompt,
MistralModel,
V2Model,
)
import time
import os
MODEL_NAME = "p1atdev/dart-v2-base"
model = MistralModel.from_pretrained(MODEL_NAME)
tokenizer = DartTokenizer.from_pretrained(MODEL_NAME)
config = get_generation_config(
prompt=compose_prompt(
copyright="vocaloid",
character="hatsune miku",
rating="general", # sfw, general, sensitive, nsfw, questionable, explicit
aspect_ratio="tall", # ultra_wide, wide, square, tall, ultra_tall
length="medium", # very_short, short, medium, long, very_long
prompt="1girl, cat ears",
do_completion=False
),
tokenizer=tokenizer,
)
start = time.time()
output = model.generate(config)
end = time.time()
print(output)
print(f"Time taken: {end - start:.2f}s")
# cowboy shot, detached sleeves, empty eyes, green eyes, green hair, green necktie, hair in own mouth, hair ornament, letterboxed, light frown, long hair, long sleeves, looking to the side, necktie, parted lips, shirt, sleeveless, sleeveless shirt, twintails, wing collar
# Time taken: 0.26s
```
## Prompt Format
```py
prompt = (
f"<|bos|>"
f"<copyright>{copyright_tags_here}</copyright>"
f"<character>{character_tags_here}</character>"
f"<|rating:general|><|aspect_ratio:tall|><|length:long|>"
f"<general>{general_tags_here}"
)
```
- Rating tag: `<|rating:sfw|>`, `<|rating:general|>`, `<|rating:sensitive|>`, `<|rating:nsfw|>`, `<|rating:questionable|>`, `<|rating:explicit|>`
- `sfw`: randomly generates tags in `general` or `sensitive` rating categories.
- `general`: generates tags in `general` rating category.
- `sensitive`: generates tags in `sensitive` rating category.
- `nsfw`: randomly generates tags in `questionable` or `explicit` rating categories.
- `questionable`: generates tags in `questionable` rating category.
- `explicit`: generates tags in `explicit` rating category.
- Aspect ratio tag: `<|aspect_ratio:ultra_wide|>`, `<|aspect_ratio:wide|>`, `<|aspect_ratio:square|>`, `<|aspect_ratio:tall|>`, `<|aspect_ratio:ultra_tall|>`
- `ultra_wide`: generates tags suits for extremely wide aspect ratio images. (~2:1)
- `wide`: generates tags suits for wide aspect ratio images. (2:1~9:8)
- `square`: generates tags suits for square aspect ratio images. (9:8~8:9)
- `tall`: generates tags suits for tall aspect ratio images. (8:9~1:2)
- `ultra_tall`: generates tags suits for extremely tall aspect ratio images. (1:2~)
- Length tag: `<|length:very_short|>`, `<|length:short|>`, `<|length:medium|>`, `<|length:long|>`, `<|length:very_long|>`
- `very_short`: generates about 10 tags in total.
- `short`: generates about 20 tags in total.
- `medium`: generates about 30 tags in total.
- `long`: generates about 40 tags in total.
- `very_long`: generates roughly 40 or more tags in total (a fully composed prompt using these condition tags is sketched below).
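For illustration (this composed prompt is an assumption built from the condition tags documented above, not an official example):

```py
# Illustrative only: the same structure as the Prompt Format above,
# with different rating, aspect ratio, and length condition tags.
prompt = (
    "<|bos|>"
    "<copyright>vocaloid</copyright>"
    "<character>hatsune miku</character>"
    "<|rating:sfw|><|aspect_ratio:wide|><|length:short|>"
    "<general>1girl, smile"
)
```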
## Model Details
### Model Description
- **Developed by:** Plat
- **Model type:** Causal language model
- **Language(s) (NLP):** Danbooru tags
- **License:** Apache-2.0
- **Demo:** Available on [🤗 Space](https://huggingface.co/spaces/p1atdev/danbooru-tags-transformer-v2)
## Training Details
### Training Data
This model was trained with:
- [isek-ai/danbooru-tags-2024](https://huggingface.co/datasets/isek-ai/danbooru-tags-2024/tree/202403-at20240423) with revision `202403-at20240423`: a Danbooru tags dataset of about 7M posts spanning 2005 to 2024/03/31.
### Training Procedure
TODO
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 1024
- eval_batch_size: 256
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2048
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
## Evaluation
Evaluation has not been done yet and still needs to be performed.
#### Model Architecture and Objective
The architecture of this model is [Mistral](https://huggingface.co/docs/transformers/model_doc/mistral). See details in [config.json](./config.json).
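For example, the key architecture fields can be read directly from the config (an illustrative sketch; the printed attributes are standard Mistral config fields, not values quoted from this card):

```py
from transformers import AutoConfig

# Illustrative only: inspect the architecture details stored in config.json.
config = AutoConfig.from_pretrained("p1atdev/dart-v2-base")
print(config.model_type, config.num_hidden_layers, config.hidden_size, config.num_attention_heads)
```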
### Compute Infrastructure
Private server.
#### Hardware
8x RTX A6000
#### Software
- Dataset processing: [🤗 Datasets](https://github.com/huggingface/datasets)
- Training: [🤗 Transformers](https://github.com/huggingface/transformers)
- SFT: [🤗 TRL](https://github.com/huggingface/trl)
- Inference library: [📦 dartrs](https://github.com/p1atdev/dartrs)
- Backend: [🤗 candle](https://github.com/huggingface/candle)
## Related Projects
- [dart-v1](https://huggingface.co/p1atdev/dart-v1): The first version of the Dart model.
- [KBlueLeaf/DanTagGen](https://huggingface.co/collections/KBlueLeaf/dantaggen-65f82fa9335881a67573556b): The Aspect Ratio tag was inspired by this project.
- [furusu/danbooru-tag-similarity](https://huggingface.co/spaces/furusu/danbooru-tag-similarity): The idea of clustering tags and its training method was inspired by this project.
|
backyardai/Space-Whale-Lite-13B-GGUF | backyardai | 2024-05-22T22:27:05Z | 514 | 2 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"storywriting",
"text adventure",
"not-for-all-audiences",
"base_model:FallenMerick/Space-Whale-Lite-13B",
"license:other",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-17T00:05:39Z | ---
license: other
library_name: transformers
tags:
- mergekit
- merge
- storywriting
- text adventure
- not-for-all-audiences
base_model: FallenMerick/Space-Whale-Lite-13B
model_name: Space-Whale-Lite-13B-GGUF
license_name: microsoft-research-license
quantized_by: brooketh
---
<img src="BackyardAI_Banner.png" alt="Backyard.ai" style="height: 90px; min-width: 32px; display: block; margin: auto;">
**<p style="text-align: center;">The official library of GGUF format models for use in the local AI chat app, Backyard AI.</p>**
<p style="text-align: center;"><a href="https://backyard.ai/">Download Backyard AI here to get started.</a></p>
<p style="text-align: center;"><a href="https://www.reddit.com/r/LLM_Quants/">Request Additional models at r/LLM_Quants.</a></p>
***
# Space Whale Lite 13B
- **Creator:** [FallenMerick](https://huggingface.co/FallenMerick/)
- **Original:** [Space Whale Lite 13B](https://huggingface.co/FallenMerick/Space-Whale-Lite-13B)
- **Date Created:** 2024-05-16
- **Trained Context:** 4096 tokens
- **Description:** Restack of the legendary [Psyonic Cetacean 20B](https://huggingface.co/jebcarter/psyonic-cetacean-20B), a.k.a. Space Whale.
***
## What is a GGUF?
GGUF is a large language model (LLM) format that can be split between CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Backyard AI. Where other model formats require higher end GPUs with ample VRAM, GGUFs can be efficiently run on a wider variety of hardware.
GGUF models are quantized to reduce resource usage, with a tradeoff of reduced coherence at lower quantizations. Quantization reduces the precision of the model weights by changing the number of bits used for each weight.
***
<img src="BackyardAI_Logo.png" alt="Backyard.ai" style="height: 75px; min-width: 32px; display: block; horizontal align: left;">
## Backyard AI
- Free, local AI chat application.
- One-click installation on Mac and PC.
- Automatically use GPU for maximum speed.
- Built-in model manager.
- High-quality character hub.
- Zero-config desktop-to-mobile tethering.
Backyard AI makes it easy to start chatting with AI using your own characters or one of the many found in the built-in character hub. The model manager helps you find the latest and greatest models without worrying about whether it's the correct format. Backyard AI supports advanced features such as lorebooks, author's note, text formatting, custom context size, sampler settings, grammars, local TTS, cloud inference, and tethering, all implemented in a way that is straightforward and reliable.
**Join us on [Discord](https://discord.gg/SyNN2vC9tQ)**
*** |
backyardai/Dark-Miqu-103B-GGUF | backyardai | 2024-05-22T22:27:06Z | 514 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"base_model:jukofyork/Dark-Miqu-103B",
"license:other",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-17T22:16:02Z | ---
license: other
library_name: transformers
tags:
- mergekit
- merge
base_model: jukofyork/Dark-Miqu-103B
model_name: Dark-Miqu-103B-GGUF
quantized_by: brooketh
---
<img src="BackyardAI_Banner.png" alt="Backyard.ai" style="height: 90px; min-width: 32px; display: block; margin: auto;">
**<p style="text-align: center;">The official library of GGUF format models for use in the local AI chat app, Backyard AI.</p>**
<p style="text-align: center;"><a href="https://backyard.ai/">Download Backyard AI here to get started.</a></p>
<p style="text-align: center;"><a href="https://www.reddit.com/r/LLM_Quants/">Request Additional models at r/LLM_Quants.</a></p>
***
# Dark Miqu 103B
- **Creator:** [jukofyork](https://huggingface.co/jukofyork/)
- **Original:** [Dark Miqu 103B](https://huggingface.co/jukofyork/Dark-Miqu-103B)
- **Date Created:** 2024-05-14
- **Trained Context:** 32764 tokens
- **Description:** A "dark" creative writing model with 32k context. Based on miqu-1-70b, but with greatly reduced "positivity" and "-isms". Excels at writing Dark/Grimdark fantasy.
***
## What is a GGUF?
GGUF is a large language model (LLM) format that can be split between CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Backyard AI. Where other model formats require higher end GPUs with ample VRAM, GGUFs can be efficiently run on a wider variety of hardware.
GGUF models are quantized to reduce resource usage, with a tradeoff of reduced coherence at lower quantizations. Quantization reduces the precision of the model weights by changing the number of bits used for each weight.
***
<img src="BackyardAI_Logo.png" alt="Backyard.ai" style="height: 75px; min-width: 32px; display: block; horizontal align: left;">
## Backyard AI
- Free, local AI chat application.
- One-click installation on Mac and PC.
- Automatically use GPU for maximum speed.
- Built-in model manager.
- High-quality character hub.
- Zero-config desktop-to-mobile tethering.
Backyard AI makes it easy to start chatting with AI using your own characters or one of the many found in the built-in character hub. The model manager helps you find the latest and greatest models without worrying about whether it's the correct format. Backyard AI supports advanced features such as lorebooks, author's note, text formatting, custom context size, sampler settings, grammars, local TTS, cloud inference, and tethering, all implemented in a way that is straightforward and reliable.
**Join us on [Discord](https://discord.gg/SyNN2vC9tQ)**
*** |
Klevin/Aura-3.0-Test | Klevin | 2024-05-29T06:57:20Z | 514 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-29T06:50:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
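Since no snippet is provided, a minimal, hedged sketch assuming the standard 🤗 Transformers causal-LM interface (the repository is tagged `gemma` / `text-generation`) might look like this:
```python
# Hedged sketch only -- this card does not document the intended usage.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Klevin/Aura-3.0-Test"  # repository id from this listing
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```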
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gglabs/Gemma-ko-2.5B-Chat-31-epoch | gglabs | 2024-06-12T07:07:44Z | 514 | 0 | transformers | [
"transformers",
"gguf",
"gemma",
"text-generation-inference",
"unsloth",
"en",
"base_model:gemmathon/gemma-2b-ko-dev-pbmt192",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-12T06:54:32Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- gguf
base_model: gemmathon/gemma-2b-ko-dev-pbmt192
---
# Uploaded model
- **Developed by:** gglabs
- **License:** apache-2.0
- **Finetuned from model :** gemmathon/gemma-2b-ko-dev-pbmt192
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
timm/coatnet_rmlp_2_rw_224.sw_in1k | timm | 2023-05-10T23:48:21Z | 513 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2201.03545",
"arxiv:2111.09883",
"license:apache-2.0",
"region:us"
]
| image-classification | 2023-01-20T21:27:42Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for coatnet_rmlp_2_rw_224.sw_in1k
A timm specific CoAtNet image classification model, w/ an MLP Log-CPB (continuous log-coordinate relative position bias motivated by Swin-V2). Trained in `timm` on ImageNet-1k by Ross Wightman.
ImageNet-1k training done on TPUs thanks to support of the [TRC](https://sites.research.google/trc/about/) program.
### Model Variants in [maxxvit.py](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/maxxvit.py)
MaxxViT covers a number of related model architectures that share a common structure including:
- CoAtNet - Combining MBConv (depthwise-separable) convolutional blocks in early stages with self-attention transformer blocks in later stages.
- MaxViT - Uniform blocks across all stages, each containing a MBConv (depthwise-separable) convolution block followed by two self-attention blocks with different partitioning schemes (window followed by grid).
- CoAtNeXt - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in CoAtNet. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in MaxViT. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT-V2 - A MaxxViT variation that removes the window block attention leaving only ConvNeXt blocks and grid attention w/ more width to compensate.
Aside from the major variants listed above, there are more subtle changes from model to model. Model names containing the string `rw` are `timm` specific configs w/ modelling adjustments made to favour PyTorch eager use. These were created while training initial reproductions of the models, so there are variations.
All models with the string `tf` exactly match the TensorFlow based models released by the original paper authors, with weights ported to PyTorch. This covers a number of MaxViT models. The official CoAtNet models were never released.
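To explore the variants described above, `timm`'s model registry can be queried by name pattern (a small illustration, not part of the original card):
```python
import timm

# List pretrained CoAtNet / MaxViT family models by name pattern.
print(timm.list_models("coatnet*", pretrained=True))  # includes the `rw` timm-specific configs
print(timm.list_models("maxvit*tf*", pretrained=True))  # the TF-ported weights
```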
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 73.9
- GMACs: 15.2
- Activations (M): 54.8
- Image size: 224 x 224
- **Papers:**
- CoAtNet: Marrying Convolution and Attention for All Data Sizes: https://arxiv.org/abs/2106.04803
- Swin Transformer V2: Scaling Up Capacity and Resolution: https://arxiv.org/abs/2111.09883
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('coatnet_rmlp_2_rw_224.sw_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'coatnet_rmlp_2_rw_224.sw_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 128, 112, 112])
# torch.Size([1, 128, 56, 56])
# torch.Size([1, 256, 28, 28])
# torch.Size([1, 512, 14, 14])
# torch.Size([1, 1024, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'coatnet_rmlp_2_rw_224.sw_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1024, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
### By Top-1
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
### By Throughput (samples / sec)
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{tu2022maxvit,
title={MaxViT: Multi-Axis Vision Transformer},
author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao},
journal={ECCV},
year={2022},
}
```
```bibtex
@article{dai2021coatnet,
title={CoAtNet: Marrying Convolution and Attention for All Data Sizes},
author={Dai, Zihang and Liu, Hanxiao and Le, Quoc V and Tan, Mingxing},
journal={arXiv preprint arXiv:2106.04803},
year={2021}
}
```
|
evilfreelancer/dostoevsky_doesnt_write_it_gpt2 | evilfreelancer | 2023-03-21T19:11:20Z | 513 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"ru",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-02-18T12:38:36Z | ---
license: mit
language:
- ru
---
# Model: Dostoevsky Doesnt Write It
The following was used as initial data:
* [K7chyp/DostoevskyDoesntWriteIt](https://github.com/K7chyp/DostoevskyDoesntWriteIt/)
* Archive with digitized books by F.M. Dostoevsky
* Model ruGPT3small
The model was trained for five epochs, resulting in a model file of approximately 600 megabytes in size.
Scripts can be found [here](https://huggingface.co/evilfreelancer/dostoevsky_doesnt_write_it).
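A minimal generation sketch (assuming the standard 🤗 Transformers text-generation pipeline; the prompt and sampling settings are only illustrative):
```python
from transformers import pipeline

# Assumes this checkpoint exposes the standard GPT-2 style text-generation interface.
generator = pipeline("text-generation", model="evilfreelancer/dostoevsky_doesnt_write_it_gpt2")
result = generator("Однажды вечером", max_new_tokens=80, do_sample=True, top_p=0.95)
print(result[0]["generated_text"])
```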
## Few examples
```
Москва, 19 июня /<18>69. <…> У меня, например, есть один приятель, очень умный человек, но которого я непонимаю. Он
говорит мне: –Знаете, Лев Николаич, я давно уже вас презирал, но вы, как человек умный, меня никогда не могли обидеть…
```
```
Однажды вечером, за обедом, я вдруг увидал, что у меня как будто все лицо изменяется: глаза смыкались, губы двигались;
нос тоже становился тоньше и суше, глаза сверкали и сверкали,– точно я что‑то предчувствовал и предугадывал. Я тотчас
же подошел к нему, поздоровался с ним, но он не ответил мне и только молча указал мне на стул, где я сидел. Я сел и
тотчас же опять начал его разглядывать. Он тотчас же потупил глаза и с минуту сидел неподвижно.
```
```
Меж тем он стал меня допрашивать. –Ну, что же?– сказал я ему,– что же? –А вот-с, что же-с!– отвечал он,– что же-с,
что ж? –А вот что, Марья Александровна, что ж?– сказал я, немного покраснев от гнева,– что ж, что же? что же? –Ах,
боже мой! Да ведь это все пустяки-с.
```
## Links
* https://huggingface.co/evilfreelancer/dostoevsky_doesnt_write_it
* https://github.com/K7chyp/DostoevskyDoesntWriteIt/
* https://github.com/ai-forever/ru-gpts
* https://github.com/GraphGrailAi/ruGPT3-ZhirV/ |
Efferbach/segformer-finetuned-lane-10k-steps | Efferbach | 2023-04-08T01:07:15Z | 513 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"segformer",
"image-segmentation",
"vision",
"generated_from_trainer",
"license:other",
"endpoints_compatible",
"region:us"
]
| image-segmentation | 2023-04-07T18:30:45Z | ---
license: other
tags:
- image-segmentation
- vision
- generated_from_trainer
model-index:
- name: segformer-finetuned-lane-10k-steps
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-finetuned-lane-10k-steps
This model is a fine-tuned version of [nvidia/segformer-b0-finetuned-cityscapes-512-1024](https://huggingface.co/nvidia/segformer-b0-finetuned-cityscapes-512-1024) on the Efferbach/lane_master dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0365
- Mean Iou: 0.4899
- Mean Accuracy: 0.7371
- Overall Accuracy: 0.7371
- Accuracy Background: nan
- Accuracy Left: 0.7394
- Accuracy Right: 0.7348
- Iou Background: 0.0
- Iou Left: 0.7371
- Iou Right: 0.7325
## Model description
More information needed
## Intended uses & limitations
More information needed
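That said, a minimal inference sketch is given below, under the assumption that the checkpoint follows the standard SegFormer semantic-segmentation interface; the input image path is hypothetical.
```python
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

# Hedged sketch: assumes the standard SegFormer interface for this fine-tuned checkpoint.
model_id = "Efferbach/segformer-finetuned-lane-10k-steps"
processor = SegformerImageProcessor.from_pretrained(model_id)
model = SegformerForSemanticSegmentation.from_pretrained(model_id)

image = Image.open("road.jpg").convert("RGB")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits                # (1, num_labels, H/4, W/4)
pred = logits.argmax(dim=1)[0]                 # per-pixel class ids (background / left / right)
print(pred.shape)
```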
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Background | Accuracy Left | Accuracy Right | Iou Background | Iou Left | Iou Right |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------------:|:----------------:|:-------------------:|:-------------:|:--------------:|:--------------:|:--------:|:---------:|
| 0.0792 | 1.0 | 308 | 0.0714 | 0.0148 | 0.0229 | 0.0225 | nan | 0.0373 | 0.0085 | 0.0 | 0.0362 | 0.0083 |
| 0.0437 | 2.0 | 616 | 0.0502 | 0.1687 | 0.2775 | 0.2784 | nan | 0.2492 | 0.3058 | 0.0 | 0.2343 | 0.2718 |
| 0.0326 | 3.0 | 924 | 0.0445 | 0.2614 | 0.4441 | 0.4479 | nan | 0.3134 | 0.5748 | 0.0 | 0.3100 | 0.4742 |
| 0.0224 | 4.0 | 1232 | 0.0370 | 0.4048 | 0.6098 | 0.6100 | nan | 0.6043 | 0.6153 | 0.0 | 0.6031 | 0.6113 |
| 0.0184 | 5.0 | 1540 | 0.0346 | 0.3820 | 0.5858 | 0.5870 | nan | 0.5421 | 0.6295 | 0.0 | 0.5400 | 0.6060 |
| 0.0159 | 6.0 | 1848 | 0.0319 | 0.4367 | 0.6567 | 0.6573 | nan | 0.6343 | 0.6791 | 0.0 | 0.6341 | 0.6760 |
| 0.0139 | 7.0 | 2156 | 0.0317 | 0.4555 | 0.6855 | 0.6860 | nan | 0.6691 | 0.7019 | 0.0 | 0.6680 | 0.6986 |
| 0.0129 | 8.0 | 2464 | 0.0321 | 0.4348 | 0.6533 | 0.6535 | nan | 0.6479 | 0.6588 | 0.0 | 0.6474 | 0.6571 |
| 0.0122 | 9.0 | 2772 | 0.0275 | 0.4541 | 0.6827 | 0.6830 | nan | 0.6710 | 0.6943 | 0.0 | 0.6697 | 0.6927 |
| 0.0111 | 10.0 | 3080 | 0.0305 | 0.4609 | 0.6928 | 0.6927 | nan | 0.6969 | 0.6887 | 0.0 | 0.6963 | 0.6865 |
| 0.011 | 11.0 | 3388 | 0.0286 | 0.4646 | 0.6988 | 0.6991 | nan | 0.6890 | 0.7087 | 0.0 | 0.6883 | 0.7055 |
| 0.0103 | 12.0 | 3696 | 0.0298 | 0.4693 | 0.7058 | 0.7062 | nan | 0.6939 | 0.7177 | 0.0 | 0.6932 | 0.7148 |
| 0.0097 | 13.0 | 4004 | 0.0293 | 0.4717 | 0.7090 | 0.7087 | nan | 0.7184 | 0.6996 | 0.0 | 0.7176 | 0.6975 |
| 0.0093 | 14.0 | 4312 | 0.0330 | 0.4537 | 0.6835 | 0.6836 | nan | 0.6775 | 0.6894 | 0.0 | 0.6768 | 0.6843 |
| 0.009 | 15.0 | 4620 | 0.0331 | 0.4804 | 0.7226 | 0.7226 | nan | 0.7194 | 0.7257 | 0.0 | 0.7178 | 0.7234 |
| 0.0088 | 16.0 | 4928 | 0.0315 | 0.4890 | 0.7355 | 0.7357 | nan | 0.7275 | 0.7435 | 0.0 | 0.7259 | 0.7411 |
| 0.0086 | 17.0 | 5236 | 0.0338 | 0.4813 | 0.7234 | 0.7234 | nan | 0.7224 | 0.7243 | 0.0 | 0.7216 | 0.7223 |
| 0.0085 | 18.0 | 5544 | 0.0348 | 0.4743 | 0.7129 | 0.7126 | nan | 0.7225 | 0.7033 | 0.0 | 0.7217 | 0.7012 |
| 0.0083 | 19.0 | 5852 | 0.0357 | 0.4812 | 0.7245 | 0.7244 | nan | 0.7281 | 0.7210 | 0.0 | 0.7254 | 0.7183 |
| 0.0081 | 20.0 | 6160 | 0.0334 | 0.4829 | 0.7271 | 0.7269 | nan | 0.7337 | 0.7205 | 0.0 | 0.7305 | 0.7182 |
| 0.0079 | 21.0 | 6468 | 0.0359 | 0.4773 | 0.7177 | 0.7177 | nan | 0.7184 | 0.7170 | 0.0 | 0.7174 | 0.7146 |
| 0.0077 | 22.0 | 6776 | 0.0351 | 0.4874 | 0.7332 | 0.7329 | nan | 0.7440 | 0.7223 | 0.0 | 0.7432 | 0.7190 |
| 0.0075 | 23.0 | 7084 | 0.0344 | 0.4855 | 0.7296 | 0.7292 | nan | 0.7437 | 0.7156 | 0.0 | 0.7425 | 0.7141 |
| 0.0077 | 24.0 | 7392 | 0.0362 | 0.4799 | 0.7216 | 0.7216 | nan | 0.7236 | 0.7196 | 0.0 | 0.7223 | 0.7174 |
| 0.0071 | 25.0 | 7700 | 0.0391 | 0.4775 | 0.7179 | 0.7180 | nan | 0.7173 | 0.7186 | 0.0 | 0.7161 | 0.7163 |
| 0.0077 | 26.0 | 8008 | 0.0339 | 0.4895 | 0.7367 | 0.7366 | nan | 0.7405 | 0.7329 | 0.0 | 0.7388 | 0.7297 |
| 0.0069 | 27.0 | 8316 | 0.0344 | 0.4858 | 0.7305 | 0.7305 | nan | 0.7291 | 0.7318 | 0.0 | 0.7278 | 0.7297 |
| 0.0069 | 28.0 | 8624 | 0.0361 | 0.4844 | 0.7283 | 0.7282 | nan | 0.7324 | 0.7243 | 0.0 | 0.7309 | 0.7221 |
| 0.007 | 29.0 | 8932 | 0.0371 | 0.4837 | 0.7273 | 0.7270 | nan | 0.7360 | 0.7186 | 0.0 | 0.7345 | 0.7166 |
| 0.007 | 30.0 | 9240 | 0.0366 | 0.4854 | 0.7305 | 0.7303 | nan | 0.7379 | 0.7231 | 0.0 | 0.7353 | 0.7208 |
| 0.0067 | 31.0 | 9548 | 0.0367 | 0.4866 | 0.7322 | 0.7321 | nan | 0.7357 | 0.7286 | 0.0 | 0.7335 | 0.7263 |
| 0.0068 | 32.0 | 9856 | 0.0364 | 0.4883 | 0.7348 | 0.7347 | nan | 0.7377 | 0.7318 | 0.0 | 0.7355 | 0.7295 |
| 0.0067 | 32.47 | 10000 | 0.0365 | 0.4899 | 0.7371 | 0.7371 | nan | 0.7394 | 0.7348 | 0.0 | 0.7371 | 0.7325 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
TheBloke/MythoMakiseMerged-13B-GGUF | TheBloke | 2023-10-01T12:18:00Z | 513 | 6 | transformers | [
"transformers",
"gguf",
"llama",
"base_model:Heralax/MythoMakiseMerged-13b",
"license:llama2",
"text-generation-inference",
"region:us"
]
| null | 2023-10-01T12:10:13Z | ---
base_model: Heralax/MythoMakiseMerged-13b
inference: false
license: llama2
model_creator: Evan Armstrong
model_name: MythoMakiseMerged 13B
model_type: llama
prompt_template: '## {{{{charname}}}}:
- You''re "{{{{charname}}}}" in this never-ending roleplay with "{{{{user}}}}".
### Input:
{prompt}
### Response:
(OOC) Understood. I will take this info into account for the roleplay. (end OOC)
### New Roleplay:
### Instruction:
#### {{{{char}}}}:
whatever the char says, this is the chat history
#### {{{{user}}}}:
whatever the user says, this is the chat history
... repeated some number of times ...
### Response 2 paragraphs, engaging, natural, authentic, descriptive, creative):
#### {{{{char}}}}:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# MythoMakiseMerged 13B - GGUF
- Model creator: [Evan Armstrong](https://huggingface.co/Heralax)
- Original model: [MythoMakiseMerged 13B](https://huggingface.co/Heralax/MythoMakiseMerged-13b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Evan Armstrong's MythoMakiseMerged 13B](https://huggingface.co/Heralax/MythoMakiseMerged-13b).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/MythoMakiseMerged-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/MythoMakiseMerged-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/MythoMakiseMerged-13B-GGUF)
* [Evan Armstrong's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Heralax/MythoMakiseMerged-13b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: SillyTavern
```
## {{{{charname}}}}:
- You're "{{{{charname}}}}" in this never-ending roleplay with "{{{{user}}}}".
### Input:
{prompt}
### Response:
(OOC) Understood. I will take this info into account for the roleplay. (end OOC)
### New Roleplay:
### Instruction:
#### {{{{char}}}}:
whatever the char says, this is the chat history
#### {{{{user}}}}:
whatever the user says, this is the chat history
... repeated some number of times ...
### Response 2 paragraphs, engaging, natural, authentic, descriptive, creative):
#### {{{{char}}}}:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
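As a rough sanity check on these figures (and assuming each super-block also carries one 16-bit overall scale, which is not stated above): a Q6_K super-block holds 16 × 16 = 256 weights at 6 bits (1536 bits), plus 16 block scales at 8 bits (128 bits) and the 16-bit super-block scale, giving 1680 bits in total; 1680 / 256 = 6.5625 bpw, matching the quoted number.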
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [mythomakisemerged-13b.Q2_K.gguf](https://huggingface.co/TheBloke/MythoMakiseMerged-13B-GGUF/blob/main/mythomakisemerged-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [mythomakisemerged-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/MythoMakiseMerged-13B-GGUF/blob/main/mythomakisemerged-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [mythomakisemerged-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/MythoMakiseMerged-13B-GGUF/blob/main/mythomakisemerged-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [mythomakisemerged-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/MythoMakiseMerged-13B-GGUF/blob/main/mythomakisemerged-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [mythomakisemerged-13b.Q4_0.gguf](https://huggingface.co/TheBloke/MythoMakiseMerged-13B-GGUF/blob/main/mythomakisemerged-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mythomakisemerged-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/MythoMakiseMerged-13B-GGUF/blob/main/mythomakisemerged-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [mythomakisemerged-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/MythoMakiseMerged-13B-GGUF/blob/main/mythomakisemerged-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [mythomakisemerged-13b.Q5_0.gguf](https://huggingface.co/TheBloke/MythoMakiseMerged-13B-GGUF/blob/main/mythomakisemerged-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [mythomakisemerged-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/MythoMakiseMerged-13B-GGUF/blob/main/mythomakisemerged-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [mythomakisemerged-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/MythoMakiseMerged-13B-GGUF/blob/main/mythomakisemerged-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [mythomakisemerged-13b.Q6_K.gguf](https://huggingface.co/TheBloke/MythoMakiseMerged-13B-GGUF/blob/main/mythomakisemerged-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [mythomakisemerged-13b.Q8_0.gguf](https://huggingface.co/TheBloke/MythoMakiseMerged-13B-GGUF/blob/main/mythomakisemerged-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/MythoMakiseMerged-13B-GGUF and below it, a specific filename to download, such as: mythomakisemerged-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/MythoMakiseMerged-13B-GGUF mythomakisemerged-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/MythoMakiseMerged-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/MythoMakiseMerged-13B-GGUF mythomakisemerged-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m mythomakisemerged-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "## {{{{charname}}}}:\n- You're \"{{{{charname}}}}\" in this never-ending roleplay with \"{{{{user}}}}\".\n### Input:\n{prompt}\n\n### Response:\n(OOC) Understood. I will take this info into account for the roleplay. (end OOC)\n\n### New Roleplay:\n### Instruction:\n#### {{{{char}}}}:\nwhatever the char says, this is the chat history\n#### {{{{user}}}}:\nwhatever the user says, this is the chat history\n... repeated some number of times ...\n### Response 2 paragraphs, engaging, natural, authentic, descriptive, creative):\n#### {{{{char}}}}:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/MythoMakiseMerged-13B-GGUF", model_file="mythomakisemerged-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
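As a quick orientation (a sketch under the assumption of a recent `langchain-community` release, not an officially documented recipe), wiring one of these GGUF files into LangChain via ctransformers generally looks like:
```python
# Hedged sketch: older releases import this as `from langchain.llms import CTransformers`.
from langchain_community.llms import CTransformers

llm = CTransformers(
    model="TheBloke/MythoMakiseMerged-13B-GGUF",
    model_file="mythomakisemerged-13b.Q4_K_M.gguf",
    model_type="llama",
    config={"max_new_tokens": 256, "temperature": 0.7},
)
print(llm.invoke("Write one sentence of playful banter between two rivals."))
```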
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Evan Armstrong's MythoMakiseMerged 13B
## KEY DETAILS
Prompt format: SillyTavern
Base model: MythoMax-L2-13b
What's new: finetuned on the script of a visual novel that was processed and revamped by GPT-4 to make ~1300 high-quality training examples. The end goal was a model that could speak like a specific character from that game, but the end result was a model that seems to excel in banter, conversation, and roleplay overall.
Note: compared to the original MythoMakise-13b, this model has 33% of MythoMax-L2-13b merged back into it, so that it better retains MythoMax's intelligence with MythoMakise's personality and style. The result of this seems to be pretty good so far. Ironically, the model seems better at roleplaying characters other than the one it was originally created to mimic.
### LONG FORM
A finetune of MythoMax-13b on lines extracted from the script of Steins;Gate. Rather than simply giving the model "previous line\nline to predict", a custom script was used to group conversations into training examples.
Despite being finetuned on one character's lines from one visual novel, I've found (at least in my initial testing) that the model does an excellent job of roleplaying other characters too, probably because the creative writing GPT-4 did on top of the already-well-written Steins;Gate script was very high-quality. The model might be best at roleplaying characters if the personality of that character is similar to the character it was originally made to act like.
Besides being built for RP, I bet that this model could be used in any generic conversational role. Just don't expect it to be accurate, or good at anything other than talking.
The model is not censored.
This variation has MythoMax merged back into it with 33% weighting to make it more stable and intelligent while retaining its Kurisu-ness and better personality. In my experience, this seems to be the decisive change that led to higher-quality outputs.
### Prompt format
I know it's wasteful as hell, don't judge me, this is the SillyTavern prompt format (discovered using the simple proxy for ST). I finetuned the model on this so that it would perform better on that frontend.
```
## {{charname}}:
- You're "{{charname}}" in this never-ending roleplay with "{{user}}".
### Input:\n
[user description (note, square brackets are a part of it)]
Description of the character's personality would go here (a 'character card')
### Response:
(OOC) Understood. I will take this info into account for the roleplay. (end OOC)
### New Roleplay:
### Instruction:
#### {{char}}:
whatever the char says, this is the chat history
#### {{user}}:
whatever the user says, this is the chat history
... repeated some number of times ...
### Response 2 paragraphs, engaging, natural, authentic, descriptive, creative):
#### {char}:
```
<!-- original-model-card end -->
|
JuanMa360/conservation_status | JuanMa360 | 2023-12-01T05:52:55Z | 513 | 2 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-30T19:08:42Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: conservation_status
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8724757432937622
---
# conservation_status
Conservation Status model🤗🖼️
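A minimal inference sketch (assuming the standard 🤗 Transformers image-classification pipeline applies to this ViT checkpoint; the image path is hypothetical):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="JuanMa360/conservation_status")
print(classifier("house.jpg"))  # returns scores for the conservado / no_conservado labels
```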
## Example Images
#### conservado

#### no_conservado
 |
robinsyihab/Sidrap-7B-v2 | robinsyihab | 2023-12-17T10:20:13Z | 513 | 4 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"code",
"id",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-12-07T10:26:56Z | ---
license: apache-2.0
language:
- id
library_name: transformers
pipeline_tag: text-generation
tags:
- code
---
# LLM Model for Bahasa Indonesia Dialog
Sidrap-7B-v2 is one of the best open LLMs for Bahasa Indonesia available today.
This model is fine-tuned using a carefully curated and high-quality Bahasa Indonesia dataset
and employs [Sidrap-7B-v1](https://huggingface.co/robinsyihab/Sidrap-7B-v1) as the base model.
For 4-bit quantization, please take a look at [Sidrap-7B-v2-GPTQ-4bit](https://huggingface.co/robinsyihab/Sidrap-7B-v2-GPTQ-4bit)
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("robinsyihab/Sidrap-7B-v2")
tokenizer = AutoTokenizer.from_pretrained("robinsyihab/Sidrap-7B-v2")
messages = [
{"role": "system", "content": "Anda adalah asisten yang suka membantu, penuh hormat, dan jujur. Selalu jawab semaksimal mungkin, sambil tetap aman. Jawaban Anda tidak boleh berisi konten berbahaya, tidak etis, rasis, seksis, beracun, atau ilegal. Harap pastikan bahwa tanggapan Anda tidak memihak secara sosial dan bersifat positif.\n\
Jika sebuah pertanyaan tidak masuk akal, atau tidak koheren secara faktual, jelaskan alasannya daripada menjawab sesuatu yang tidak benar. Jika Anda tidak mengetahui jawaban atas sebuah pertanyaan, mohon jangan membagikan informasi palsu."},
{"role": "user", "content": "buatkan kode program, sebuah fungsi untuk memvalidasi alamat email menggunakan regex"}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
**NOTES:** To achieve optimal results in Bahasa Indonesia, please use a system message as the initial input as demonstrated above.
## Limitations and Ethical Considerations
The Sidrap-7B-v2 model, trained mostly on a public dataset, lacks a moderation mechanism, so please use it with caution.
It may still have limitations and biases. It is always recommended to review and evaluate the generated outputs for any potential issues.
We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
Furthermore, please ensure that the usage of this language model is aligned with ethical guidelines, respectful of privacy, and avoids harmful content generation.
### Citation
If you use the Sidrap-7B-v2 model in your research or project, please cite it as:
```
@article{Sidrap,
title={Sidrap-7B-v2: LLM Model for Bahasa Indonesia Dialog},
author={Robin Syihab},
publisher={Hugging Face},
journal={Hugging Face Repository},
year={2023}
}
``` |
timm/nextvit_base.bd_in1k | timm | 2024-02-11T00:31:08Z | 513 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2207.05501",
"license:apache-2.0",
"region:us"
]
| image-classification | 2024-02-11T00:14:05Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for nextvit_base.bd_in1k
A Next-ViT image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 44.8
- GMACs: 8.2
- Activations (M): 22.5
- Image size: 224 x 224
- **Dataset:** ImageNet-1k
- **Papers:**
- Next-ViT: Next Generation Vision Transformer for Efficient Deployment in Realistic Industrial Scenarios: https://arxiv.org/abs/2207.05501
- **Original:** https://github.com/bytedance/Next-ViT
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('nextvit_base.bd_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'nextvit_base.bd_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 96, 56, 56])
# torch.Size([1, 256, 28, 28])
# torch.Size([1, 512, 14, 14])
# torch.Size([1, 1024, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'nextvit_base.bd_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1024, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
### By Top-1
|model |top1 |top1_err|top5 |top5_err|param_count|
|---------------------------------|------|--------|------|--------|-----------|
|nextvit_large.bd_ssld_6m_in1k_384|86.542|13.458 |98.142|1.858 |57.87 |
|nextvit_base.bd_ssld_6m_in1k_384 |86.352|13.648 |98.04 |1.96 |44.82 |
|nextvit_small.bd_ssld_6m_in1k_384|85.964|14.036 |97.908|2.092 |31.76 |
|nextvit_large.bd_ssld_6m_in1k |85.48 |14.52 |97.696|2.304 |57.87 |
|nextvit_base.bd_ssld_6m_in1k |85.186|14.814 |97.59 |2.41 |44.82 |
|nextvit_large.bd_in1k_384 |84.924|15.076 |97.294|2.706 |57.87 |
|nextvit_small.bd_ssld_6m_in1k |84.862|15.138 |97.382|2.618 |31.76 |
|nextvit_base.bd_in1k_384 |84.706|15.294 |97.224|2.776 |44.82 |
|nextvit_small.bd_in1k_384 |84.022|15.978 |96.99 |3.01 |31.76 |
|nextvit_large.bd_in1k |83.626|16.374 |96.694|3.306 |57.87 |
|nextvit_base.bd_in1k |83.472|16.528 |96.656|3.344 |44.82 |
|nextvit_small.bd_in1k |82.61 |17.39 |96.226|3.774 |31.76 |
## Citation
```bibtex
@article{li2022next,
title={Next-ViT: Next Generation Vision Transformer for Efficient Deployment in Realistic Industrial Scenarios},
author={Li, Jiashi and Xia, Xin and Li, Wei and Li, Huixia and Wang, Xing and Xiao, Xuefeng and Wang, Rui and Zheng, Min and Pan, Xin},
journal={arXiv preprint arXiv:2207.05501},
year={2022}
}
```
|
Chrisisis/5CfigS9T6jn6SUFHJYm2J16syW6kqGRZeDigsR5LvGERYEyz_vgg | Chrisisis | 2024-02-24T08:32:16Z | 513 | 0 | keras | [
"keras",
"region:us"
]
| null | 2024-02-19T02:58:44Z | Entry not found |
Chrisisis/5DQ4H3zxQc6i6YKrsdwKofu8z4FPhFRFGLpzCAigNXVSFjst_vgg | Chrisisis | 2024-02-24T08:33:46Z | 513 | 0 | keras | [
"keras",
"region:us"
]
| null | 2024-02-19T03:00:56Z | Entry not found |
Chrisisis/5E9mNQWAANSdX95UepsSF3gpGrr4ehcEsda7ZdN88rjHC4cK_vgg | Chrisisis | 2024-02-24T08:34:41Z | 513 | 0 | keras | [
"keras",
"region:us"
]
| null | 2024-02-19T03:02:15Z | Entry not found |
RichardErkhov/Mistral-7B-Instruct-v0.2-gguf | RichardErkhov | 2024-04-01T17:56:54Z | 513 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-04-01T09:05:51Z | Entry not found |
mradermacher/collective-v0.1-chinese-roleplay-8b-GGUF | mradermacher | 2024-05-22T20:12:07Z | 513 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Collective-Ai/collective-v0.1-chinese-roleplay-8b",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-22T19:43:39Z | ---
base_model: Collective-Ai/collective-v0.1-chinese-roleplay-8b
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hfhfix -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Collective-Ai/collective-v0.1-chinese-roleplay-8b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
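As a minimal sketch (assuming the `llama-cpp-python` bindings; any of the llama.cpp-based clients work just as well), a single-file quant from this repo can be downloaded and run like this:
```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python huggingface_hub).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Grab one of the single-file quants listed under "Provided Quants" below.
model_path = hf_hub_download(
    repo_id="mradermacher/collective-v0.1-chinese-roleplay-8b-GGUF",
    filename="collective-v0.1-chinese-roleplay-8b.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("你好,请用中文介绍一下你自己。", max_tokens=128)  # "Hello, please introduce yourself in Chinese."
print(out["choices"][0]["text"])
```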
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/collective-v0.1-chinese-roleplay-8b-GGUF/resolve/main/collective-v0.1-chinese-roleplay-8b.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/collective-v0.1-chinese-roleplay-8b-GGUF/resolve/main/collective-v0.1-chinese-roleplay-8b.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/collective-v0.1-chinese-roleplay-8b-GGUF/resolve/main/collective-v0.1-chinese-roleplay-8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/collective-v0.1-chinese-roleplay-8b-GGUF/resolve/main/collective-v0.1-chinese-roleplay-8b.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/collective-v0.1-chinese-roleplay-8b-GGUF/resolve/main/collective-v0.1-chinese-roleplay-8b.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/collective-v0.1-chinese-roleplay-8b-GGUF/resolve/main/collective-v0.1-chinese-roleplay-8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/collective-v0.1-chinese-roleplay-8b-GGUF/resolve/main/collective-v0.1-chinese-roleplay-8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/collective-v0.1-chinese-roleplay-8b-GGUF/resolve/main/collective-v0.1-chinese-roleplay-8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/collective-v0.1-chinese-roleplay-8b-GGUF/resolve/main/collective-v0.1-chinese-roleplay-8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/collective-v0.1-chinese-roleplay-8b-GGUF/resolve/main/collective-v0.1-chinese-roleplay-8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/collective-v0.1-chinese-roleplay-8b-GGUF/resolve/main/collective-v0.1-chinese-roleplay-8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/collective-v0.1-chinese-roleplay-8b-GGUF/resolve/main/collective-v0.1-chinese-roleplay-8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/collective-v0.1-chinese-roleplay-8b-GGUF/resolve/main/collective-v0.1-chinese-roleplay-8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/collective-v0.1-chinese-roleplay-8b-GGUF/resolve/main/collective-v0.1-chinese-roleplay-8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/collective-v0.1-chinese-roleplay-8b-GGUF/resolve/main/collective-v0.1-chinese-roleplay-8b.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Ramikan-BR/tinyllama-coder-py-v18 | Ramikan-BR | 2024-06-04T11:56:17Z | 513 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gguf",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/tinyllama-chat-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-06-04T11:32:47Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/tinyllama-chat-bnb-4bit
---
# Uploaded model
- **Developed by:** Ramikan-BR
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-chat-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
antoinelouis/belgpt2 | antoinelouis | 2024-03-22T14:23:19Z | 512 | 7 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"gpt2",
"text-generation",
"fr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language:
- fr
license:
- mit
widget:
- text: Hier, Elon Musk a
- text: Pourquoi a-t-il
- text: Tout à coup, elle
metrics:
- perplexity
library_name: transformers
pipeline_tag: text-generation
---
# BelGPT-2
**The 1st GPT-2 model pre-trained on a very large and heterogeneous French corpus (~60 GB).**
## Usage
You can use BelGPT-2 with [🤗 transformers](https://github.com/huggingface/transformers):
```python
import random
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel
# Load pretrained model and tokenizer
model = GPT2LMHeadModel.from_pretrained("antoiloui/belgpt2")
tokenizer = GPT2Tokenizer.from_pretrained("antoiloui/belgpt2")
# Generate a sample of text
model.eval()
output = model.generate(
bos_token_id=random.randint(1,50000),
do_sample=True,
top_k=50,
max_length=100,
top_p=0.95,
num_return_sequences=1
)
# Decode it
decoded_output = []
for sample in output:
decoded_output.append(tokenizer.decode(sample, skip_special_tokens=True))
print(decoded_output)
```
## Data
Below is the list of all French corpora used to pre-train the model:
| Dataset | `$corpus_name` | Raw size | Cleaned size |
| :------| :--- | :---: | :---: |
| CommonCrawl | `common_crawl` | 200.2 GB | 40.4 GB |
| NewsCrawl | `news_crawl` | 10.4 GB | 9.8 GB |
| Wikipedia | `wiki` | 19.4 GB | 4.1 GB |
| Wikisource | `wikisource` | 4.6 GB | 2.3 GB |
| Project Gutenberg | `gutenberg` | 1.3 GB | 1.1 GB |
| EuroParl | `europarl` | 289.9 MB | 278.7 MB |
| NewsCommentary | `news_commentary` | 61.4 MB | 58.1 MB |
| **Total** | | **236.3 GB** | **57.9 GB** |
## Documentation
Detailed documentation on the pre-trained model, its implementation, and the data can be found [here](https://github.com/ant-louis/belgpt2/blob/master/docs/index.md).
## Citation
For attribution in academic contexts, please cite this work as:
```
@misc{louis2020belgpt2,
author = {Louis, Antoine},
title = {{BelGPT-2: A GPT-2 Model Pre-trained on French Corpora}},
year = {2020},
howpublished = {\url{https://github.com/ant-louis/belgpt2}},
}
``` |
timm/resnet34.gluon_in1k | timm | 2024-02-10T23:38:58Z | 512 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"arxiv:1512.03385",
"arxiv:1812.01187",
"license:apache-2.0",
"region:us"
]
| image-classification | 2023-04-05T18:06:54Z | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
---
# Model card for resnet34.gluon_in1k
A ResNet-B image classification model.
This model features:
* ReLU activations
* single layer 7x7 convolution with pooling
* 1x1 convolution shortcut downsample
Trained on ImageNet-1k in Apache MXNet Gluon using Bag-of-Tricks based recipes.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 21.8
- GMACs: 3.7
- Activations (M): 3.7
- Image size: 224 x 224
- **Papers:**
- Deep Residual Learning for Image Recognition: https://arxiv.org/abs/1512.03385
- Bag of Tricks for Image Classification with Convolutional Neural Networks: https://arxiv.org/abs/1812.01187
- **Original:** https://cv.gluon.ai/model_zoo/classification.html
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('resnet34.gluon_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnet34.gluon_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 64, 56, 56])
# torch.Size([1, 128, 28, 28])
# torch.Size([1, 256, 14, 14])
# torch.Size([1, 512, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnet34.gluon_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 512, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
|model |img_size|top1 |top5 |param_count|gmacs|macts|img/sec|
|------------------------------------------|--------|-----|-----|-----------|-----|-----|-------|
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|320 |86.72|98.17|93.6 |35.2 |69.7 |451 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|288 |86.51|98.08|93.6 |28.5 |56.4 |560 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|288 |86.49|98.03|93.6 |28.5 |56.4 |557 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|224 |85.96|97.82|93.6 |17.2 |34.2 |923 |
|[resnext101_32x32d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x32d.fb_wsl_ig1b_ft_in1k)|224 |85.11|97.44|468.5 |87.3 |91.1 |254 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|416 |85.0 |97.12|191.9 |108.4|213.8|134 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|352 |84.96|97.22|102.1 |50.2 |101.2|291 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|320 |84.73|97.18|102.1 |41.5 |83.7 |353 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|384 |84.71|96.99|164.0 |77.6 |154.7|183 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|288 |84.57|97.08|93.6 |28.5 |56.4 |557 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|320 |84.45|97.08|93.2 |31.5 |67.8 |446 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|352 |84.43|96.97|129.9 |51.1 |105.5|280 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|288 |84.36|96.92|93.6 |27.6 |53.0 |595 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|320 |84.35|97.04|66.8 |24.1 |47.7 |610 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|288 |84.3 |96.94|164.0 |43.7 |87.1 |333 |
|[resnext101_32x8d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k)|224 |84.28|97.17|88.8 |16.5 |31.2 |1100 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|320 |84.24|96.86|191.9 |64.2 |126.6|228 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|288 |84.19|96.87|93.6 |27.2 |51.6 |613 |
|[resnext101_32x16d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_wsl_ig1b_ft_in1k)|224 |84.18|97.19|194.0 |36.3 |51.2 |581 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|288 |84.11|97.11|44.6 |15.1 |29.0 |1144 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|320 |83.97|96.82|64.7 |31.2 |67.3 |518 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|256 |83.87|96.75|93.2 |20.2 |43.4 |692 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|224 |83.86|96.65|93.6 |17.2 |34.2 |923 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|320 |83.72|96.61|86.6 |24.3 |48.1 |617 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|256 |83.69|96.78|66.8 |15.4 |30.6 |943 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|224 |83.68|96.61|93.6 |16.7 |32.0 |986 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|320 |83.67|96.74|60.2 |24.1 |47.7 |706 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|256 |83.59|96.61|129.9 |27.1 |55.8 |526 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|224 |83.58|96.4 |93.6 |16.5 |31.2 |1013 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|224 |83.54|96.83|44.6 |9.1 |17.6 |1864 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|288 |83.46|96.54|60.2 |19.1 |37.3 |904 |
|[resnext101_32x16d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k)|224 |83.35|96.85|194.0 |36.3 |51.2 |582 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|256 |83.23|96.53|64.7 |20.0 |43.1 |809 |
|[resnext101_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_swsl_ig1b_ft_in1k)|224 |83.22|96.75|44.2 |8.0 |21.2 |1814 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|288 |83.16|96.38|83.5 |25.7 |51.6 |590 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|256 |83.14|96.38|60.2 |15.4 |30.5 |1096 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|320 |83.02|96.45|44.6 |16.5 |34.8 |992 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|288 |82.98|96.54|44.6 |13.4 |28.2 |1077 |
|[resnext101_64x4d.tv_in1k](https://huggingface.co/timm/resnext101_64x4d.tv_in1k)|224 |82.98|96.25|83.5 |15.5 |31.2 |989 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|256 |82.86|96.28|86.6 |15.6 |30.8 |951 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|224 |82.83|96.22|88.8 |16.5 |31.2 |1099 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|224 |82.8 |96.13|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|288 |82.8 |96.32|44.6 |13.0 |26.8 |1291 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|288 |82.74|95.71|60.2 |19.1 |37.3 |905 |
|[resnext101_32x8d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_wsl_ig1b_ft_in1k)|224 |82.69|96.63|88.8 |16.5 |31.2 |1100 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|288 |82.62|95.75|60.2 |19.1 |37.3 |904 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|288 |82.61|96.49|25.6 |8.9 |20.6 |1729 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|288 |82.53|96.13|36.8 |9.9 |21.5 |1773 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|224 |82.5 |96.02|126.9 |22.8 |21.2 |1078 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|224 |82.46|95.92|83.5 |15.5 |31.2 |987 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|288 |82.36|96.18|35.7 |8.1 |20.9 |1964 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|320 |82.35|96.14|25.6 |8.8 |24.1 |1386 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|288 |82.31|95.63|44.6 |13.0 |26.8 |1291 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|288 |82.29|96.01|63.6 |13.6 |28.5 |1078 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|224 |82.29|96.0 |60.2 |11.6 |22.6 |1484 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|288 |82.27|96.06|68.9 |18.9 |23.8 |1176 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|256 |82.26|96.07|44.6 |10.6 |22.2 |1542 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|288 |82.24|95.73|44.6 |13.0 |26.8 |1290 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|288 |82.2 |96.14|27.6 |7.0 |23.8 |1547 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|224 |82.18|96.05|44.6 |8.1 |17.1 |1771 |
|[resnext50_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k)|224 |82.17|96.22|25.0 |4.3 |14.4 |2943 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|288 |82.12|95.65|25.6 |7.1 |19.6 |1704 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|288 |82.03|95.94|25.0 |7.0 |23.8 |1745 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|288 |82.0 |96.15|24.9 |5.8 |12.7 |1787 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|256 |81.99|95.85|36.8 |7.8 |17.0 |2230 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|176 |81.98|95.72|88.8 |10.3 |19.4 |1768 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|224 |81.97|95.24|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|224 |81.93|95.75|44.6 |7.8 |16.2 |2122 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|224 |81.9 |95.77|44.6 |7.8 |16.2 |2118 |
|[resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k)|224 |81.84|96.1 |194.0 |36.3 |51.2 |583 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|256 |81.78|95.94|35.7 |6.4 |16.6 |2471 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|224 |81.77|95.22|60.2 |11.6 |22.6 |1485 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|224 |81.74|96.06|25.6 |5.4 |12.4 |2813 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|288 |81.65|95.54|25.6 |7.1 |19.6 |1703 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|288 |81.64|95.88|25.6 |7.2 |19.7 |1694 |
|[resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k)|224 |81.62|96.04|88.8 |16.5 |31.2 |1101 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|224 |81.61|95.76|68.9 |11.4 |14.4 |1930 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|288 |81.61|95.83|25.6 |8.5 |19.2 |1868 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|224 |81.5 |95.16|44.6 |7.8 |16.2 |2125 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|288 |81.48|95.16|25.0 |7.0 |23.8 |1745 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|288 |81.47|95.71|25.9 |6.9 |18.6 |2071 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|224 |81.45|95.53|68.9 |11.4 |14.4 |1929 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|288 |81.44|95.22|25.6 |7.2 |19.7 |1908 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|256 |81.44|95.67|25.6 |5.6 |15.4 |2168 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|288 |81.4 |95.82|30.2 |6.8 |13.9 |2132 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|288 |81.37|95.74|25.6 |7.2 |19.7 |1910 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|224 |81.32|95.19|44.6 |7.8 |16.2 |2125 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|288 |81.3 |95.65|28.1 |6.8 |18.4 |1803 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|288 |81.3 |95.11|25.0 |7.0 |23.8 |1746 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|224 |81.27|95.62|27.6 |4.3 |14.4 |2591 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|224 |81.26|95.16|25.6 |4.3 |11.8 |2823 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|288 |81.23|95.54|15.7 |4.8 |19.6 |2117 |
|[senet154.gluon_in1k](https://huggingface.co/timm/senet154.gluon_in1k)|224 |81.23|95.35|115.1 |20.8 |38.7 |545 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|288 |81.22|95.11|25.6 |6.8 |18.4 |2089 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|288 |81.22|95.63|25.6 |6.8 |18.4 |676 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|288 |81.18|95.09|25.6 |7.2 |19.7 |1908 |
|[resnet50.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet50.fb_swsl_ig1b_ft_in1k)|224 |81.18|95.98|25.6 |4.1 |11.1 |3455 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|224 |81.17|95.34|25.0 |4.3 |14.4 |2933 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|224 |81.1 |95.33|25.0 |4.3 |14.4 |2934 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|288 |81.1 |95.23|28.1 |6.8 |18.4 |1801 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|288 |81.1 |95.12|28.1 |6.8 |18.4 |1799 |
|[resnet152s.gluon_in1k](https://huggingface.co/timm/resnet152s.gluon_in1k)|224 |81.02|95.41|60.3 |12.9 |25.0 |1347 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|288 |80.97|95.44|25.6 |6.8 |18.4 |2085 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|256 |80.94|95.45|25.9 |5.4 |14.7 |2571 |
|[resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.93|95.73|44.2 |8.0 |21.2 |1814 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|288 |80.91|95.55|25.6 |6.8 |18.4 |2084 |
|[seresnext101_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_32x4d.gluon_in1k)|224 |80.9 |95.31|49.0 |8.0 |21.3 |1585 |
|[seresnext101_64x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_64x4d.gluon_in1k)|224 |80.9 |95.3 |88.2 |15.5 |31.2 |918 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|288 |80.86|95.52|25.6 |6.8 |18.4 |2085 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|224 |80.85|95.43|25.6 |4.1 |11.1 |3450 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|224 |80.84|95.02|25.6 |4.3 |11.8 |2821 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|224 |80.79|95.62|24.9 |3.5 |7.7 |2961 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|288 |80.79|95.36|19.8 |6.0 |14.8 |2506 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|288 |80.79|95.58|19.9 |4.2 |10.6 |2349 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|288 |80.78|94.99|25.6 |6.8 |18.4 |2088 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|288 |80.71|95.43|25.6 |6.8 |18.4 |2087 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|288 |80.7 |95.39|25.0 |7.0 |23.8 |1749 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|192 |80.69|95.24|63.6 |6.0 |12.7 |2270 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|224 |80.68|94.71|25.6 |4.4 |11.9 |3162 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|288 |80.68|95.36|19.7 |6.0 |14.8 |2637 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|224 |80.67|95.3 |25.6 |4.1 |11.1 |3452 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|288 |80.67|95.42|25.0 |7.4 |25.1 |1626 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|224 |80.63|95.21|25.6 |5.2 |11.6 |3034 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|224 |80.61|95.32|25.6 |4.4 |11.9 |2813 |
|[resnext101_64x4d.gluon_in1k](https://huggingface.co/timm/resnext101_64x4d.gluon_in1k)|224 |80.61|94.99|83.5 |15.5 |31.2 |989 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|288 |80.6 |95.31|19.9 |6.0 |14.8 |2578 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|256 |80.57|95.17|15.7 |3.8 |15.5 |2710 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|224 |80.56|95.0 |60.2 |11.6 |22.6 |1483 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|224 |80.53|95.16|25.6 |4.4 |11.9 |3164 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|224 |80.53|94.46|25.0 |4.3 |14.4 |2930 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|176 |80.48|94.98|126.9 |14.3 |13.2 |1719 |
|[resnet152d.gluon_in1k](https://huggingface.co/timm/resnet152d.gluon_in1k)|224 |80.47|95.2 |60.2 |11.8 |23.4 |1428 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|288 |80.45|95.32|25.6 |6.8 |18.4 |2086 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|224 |80.45|95.24|30.2 |4.1 |8.4 |3530 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|224 |80.45|94.63|25.0 |4.3 |14.4 |2936 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|176 |80.43|95.09|68.9 |7.3 |9.0 |3015 |
|[resnet101d.gluon_in1k](https://huggingface.co/timm/resnet101d.gluon_in1k)|224 |80.42|95.01|44.6 |8.1 |17.0 |2007 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|224 |80.38|94.6 |25.6 |4.1 |11.1 |3461 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|256 |80.36|95.1 |19.8 |4.8 |11.7 |3267 |
|[resnext101_32x4d.gluon_in1k](https://huggingface.co/timm/resnext101_32x4d.gluon_in1k)|224 |80.34|94.93|44.2 |8.0 |21.2 |1814 |
|[resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.32|95.4 |25.0 |4.3 |14.4 |2941 |
|[resnet101s.gluon_in1k](https://huggingface.co/timm/resnet101s.gluon_in1k)|224 |80.28|95.16|44.7 |9.2 |18.6 |1851 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|224 |80.26|95.08|28.1 |4.1 |11.1 |2972 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|288 |80.24|95.24|25.6 |8.5 |19.9 |1523 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|224 |80.22|94.63|25.6 |4.4 |11.9 |3162 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|176 |80.2 |94.64|60.2 |7.2 |14.0 |2346 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|224 |80.08|94.74|28.1 |4.1 |11.1 |2969 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|256 |80.08|94.97|19.7 |4.8 |11.7 |3284 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|256 |80.06|94.99|19.9 |4.8 |11.7 |3216 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|224 |80.06|94.95|25.6 |4.1 |11.1 |1109 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|224 |80.02|94.71|28.1 |4.1 |11.1 |2962 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|288 |79.97|95.05|25.6 |6.8 |18.4 |2086 |
|[resnet152c.gluon_in1k](https://huggingface.co/timm/resnet152c.gluon_in1k)|224 |79.92|94.84|60.2 |11.8 |23.4 |1455 |
|[seresnext50_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext50_32x4d.gluon_in1k)|224 |79.91|94.82|27.6 |4.3 |14.4 |2591 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|224 |79.91|94.67|25.6 |4.1 |11.1 |3456 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|176 |79.9 |94.6 |44.6 |4.9 |10.1 |3341 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|224 |79.89|94.97|35.7 |4.5 |12.1 |2774 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|224 |79.88|94.87|25.6 |4.1 |11.1 |3455 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|320 |79.86|95.07|16.0 |5.2 |16.4 |2168 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|224 |79.85|94.56|25.6 |4.1 |11.1 |3460 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|288 |79.83|94.97|25.6 |6.8 |18.4 |2087 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|224 |79.82|94.62|44.6 |7.8 |16.2 |2114 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|224 |79.76|94.6 |25.0 |4.3 |14.4 |2943 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|224 |79.74|94.95|25.6 |4.1 |11.1 |3455 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|224 |79.74|94.87|19.9 |2.5 |6.4 |3929 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|288 |79.71|94.83|19.7 |6.0 |14.8 |2710 |
|[resnet152.gluon_in1k](https://huggingface.co/timm/resnet152.gluon_in1k)|224 |79.68|94.74|60.2 |11.6 |22.6 |1486 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|224 |79.67|94.87|25.0 |4.5 |15.2 |2729 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|288 |79.63|94.91|25.6 |6.8 |18.4 |2086 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|224 |79.56|94.72|25.6 |4.3 |11.8 |2805 |
|[resnet101c.gluon_in1k](https://huggingface.co/timm/resnet101c.gluon_in1k)|224 |79.53|94.58|44.6 |8.1 |17.0 |2062 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|224 |79.52|94.61|25.6 |4.1 |11.1 |3459 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|176 |79.42|94.64|25.6 |2.6 |6.9 |5397 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|288 |79.4 |94.66|18.0 |5.9 |14.6 |2752 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|224 |79.38|94.57|25.6 |4.1 |11.1 |3459 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|176 |79.37|94.3 |25.0 |2.7 |9.0 |4577 |
|[resnext50_32x4d.gluon_in1k](https://huggingface.co/timm/resnext50_32x4d.gluon_in1k)|224 |79.36|94.43|25.0 |4.3 |14.4 |2942 |
|[resnext101_32x8d.tv_in1k](https://huggingface.co/timm/resnext101_32x8d.tv_in1k)|224 |79.31|94.52|88.8 |16.5 |31.2 |1100 |
|[resnet101.gluon_in1k](https://huggingface.co/timm/resnet101.gluon_in1k)|224 |79.31|94.53|44.6 |7.8 |16.2 |2125 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|224 |79.31|94.63|25.6 |5.2 |12.0 |2524 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|176 |79.27|94.49|25.6 |2.6 |6.9 |5404 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|224 |79.25|94.31|25.0 |4.3 |14.4 |2931 |
|[resnet50.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet50.fb_ssl_yfcc100m_ft_in1k)|224 |79.22|94.84|25.6 |4.1 |11.1 |3451 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|256 |79.21|94.56|19.7 |4.8 |11.7 |3392 |
|[resnet50d.gluon_in1k](https://huggingface.co/timm/resnet50d.gluon_in1k)|224 |79.07|94.48|25.6 |4.4 |11.9 |3162 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|224 |79.03|94.38|25.6 |4.1 |11.1 |3453 |
|[resnet50.am_in1k](https://huggingface.co/timm/resnet50.am_in1k)|224 |79.01|94.39|25.6 |4.1 |11.1 |3461 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|256 |79.01|94.37|18.0 |4.6 |11.6 |3440 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|256 |78.9 |94.54|16.0 |3.4 |10.5 |3421 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|160 |78.89|94.11|60.2 |5.9 |11.5 |2745 |
|[wide_resnet101_2.tv_in1k](https://huggingface.co/timm/wide_resnet101_2.tv_in1k)|224 |78.84|94.28|126.9 |22.8 |21.2 |1079 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|288 |78.83|94.24|16.8 |4.5 |16.8 |2251 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|224 |78.81|94.32|25.6 |4.1 |11.1 |3454 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|288 |78.74|94.33|16.8 |4.5 |16.7 |2264 |
|[resnet50s.gluon_in1k](https://huggingface.co/timm/resnet50s.gluon_in1k)|224 |78.72|94.23|25.7 |5.5 |13.5 |2796 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|224 |78.71|94.24|25.6 |4.4 |11.9 |3154 |
|[wide_resnet50_2.tv_in1k](https://huggingface.co/timm/wide_resnet50_2.tv_in1k)|224 |78.47|94.09|68.9 |11.4 |14.4 |1934 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|224 |78.46|94.27|25.6 |4.1 |11.1 |3454 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|288 |78.43|94.35|21.8 |6.5 |7.5 |3291 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|288 |78.42|94.04|10.5 |3.1 |13.3 |3226 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|320 |78.33|94.13|16.0 |5.2 |16.4 |2391 |
|[resnet152.tv_in1k](https://huggingface.co/timm/resnet152.tv_in1k)|224 |78.32|94.04|60.2 |11.6 |22.6 |1487 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|288 |78.28|94.1 |10.4 |3.1 |13.3 |3062 |
|[bat_resnext26ts.ch_in1k](https://huggingface.co/timm/bat_resnext26ts.ch_in1k)|256 |78.25|94.1 |10.7 |2.5 |12.5 |3393 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|224 |78.06|93.78|25.6 |4.1 |11.1 |3450 |
|[resnet50c.gluon_in1k](https://huggingface.co/timm/resnet50c.gluon_in1k)|224 |78.0 |93.99|25.6 |4.4 |11.9 |3286 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|288 |78.0 |93.91|10.3 |3.1 |13.3 |3297 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|224 |77.98|93.75|16.8 |2.7 |10.1 |3841 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|288 |77.92|93.77|21.8 |6.1 |6.2 |3609 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|160 |77.88|93.71|44.6 |4.0 |8.3 |3926 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|256 |77.87|93.84|16.0 |3.4 |10.5 |3772 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|256 |77.86|93.79|10.4 |2.4 |10.5 |4263 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|160 |77.82|93.81|35.7 |2.3 |6.2 |5238 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|256 |77.81|93.82|10.5 |2.4 |10.5 |4183 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|160 |77.79|93.6 |25.6 |2.2 |6.0 |5329 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|160 |77.73|93.32|25.0 |2.2 |7.4 |5576 |
|[resnext50_32x4d.tv_in1k](https://huggingface.co/timm/resnext50_32x4d.tv_in1k)|224 |77.61|93.7 |25.0 |4.3 |14.4 |2944 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|224 |77.59|93.61|16.8 |2.7 |10.2 |3807 |
|[resnet50.gluon_in1k](https://huggingface.co/timm/resnet50.gluon_in1k)|224 |77.58|93.72|25.6 |4.1 |11.1 |3455 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|256 |77.44|93.56|10.3 |2.4 |10.5 |4284 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|288 |77.41|93.63|16.0 |4.3 |13.5 |2907 |
|[resnet101.tv_in1k](https://huggingface.co/timm/resnet101.tv_in1k)|224 |77.38|93.54|44.6 |7.8 |16.2 |2125 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|160 |77.22|93.27|25.6 |2.2 |6.1 |5982 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|288 |77.17|93.47|10.3 |3.1 |13.3 |3392 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|288 |77.15|93.27|21.8 |6.1 |6.2 |3615 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|224 |77.1 |93.37|21.8 |3.9 |4.5 |5436 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|224 |77.02|93.07|28.1 |4.1 |11.1 |2952 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|256 |76.78|93.13|10.3 |2.4 |10.5 |4410 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|224 |76.7 |93.17|16.0 |2.6 |8.2 |4859 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|288 |76.5 |93.35|21.8 |6.1 |6.2 |3617 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|224 |76.42|92.87|21.8 |3.7 |3.7 |5984 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|288 |76.35|93.18|16.0 |3.9 |12.2 |3331 |
|[resnet50.tv_in1k](https://huggingface.co/timm/resnet50.tv_in1k)|224 |76.13|92.86|25.6 |4.1 |11.1 |3457 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|160 |75.96|92.5 |25.6 |2.1 |5.7 |6490 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|224 |75.52|92.44|21.8 |3.7 |3.7 |5991 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|224 |75.3 |92.58|16.0 |2.4 |7.4 |5583 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|224 |75.16|92.18|21.8 |3.7 |3.7 |5994 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|160 |75.1 |92.08|28.1 |2.1 |5.7 |5513 |
|[resnet34.gluon_in1k](https://huggingface.co/timm/resnet34.gluon_in1k)|224 |74.57|91.98|21.8 |3.7 |3.7 |5984 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|288 |73.81|91.83|11.7 |3.4 |5.4 |5196 |
|[resnet34.tv_in1k](https://huggingface.co/timm/resnet34.tv_in1k)|224 |73.32|91.42|21.8 |3.7 |3.7 |5979 |
|[resnet18.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet18.fb_swsl_ig1b_ft_in1k)|224 |73.28|91.73|11.7 |1.8 |2.5 |10213 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|288 |73.16|91.03|11.7 |3.0 |4.1 |6050 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|224 |72.98|91.11|21.8 |3.7 |3.7 |5967 |
|[resnet18.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet18.fb_ssl_yfcc100m_ft_in1k)|224 |72.6 |91.42|11.7 |1.8 |2.5 |10213 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|288 |72.37|90.59|11.7 |3.0 |4.1 |6051 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|224 |72.26|90.31|10.1 |1.7 |5.8 |7026 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|224 |72.26|90.68|11.7 |2.1 |3.3 |8707 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|224 |71.49|90.07|11.7 |1.8 |2.5 |10187 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|176 |71.31|89.69|10.1 |1.1 |3.6 |10970 |
|[resnet18.gluon_in1k](https://huggingface.co/timm/resnet18.gluon_in1k)|224 |70.84|89.76|11.7 |1.8 |2.5 |10210 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|224 |70.64|89.47|11.7 |1.8 |2.5 |10194 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|160 |70.56|89.52|21.8 |1.9 |1.9 |10737 |
|[resnet18.tv_in1k](https://huggingface.co/timm/resnet18.tv_in1k)|224 |69.76|89.07|11.7 |1.8 |2.5 |10205 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|224 |68.34|88.03|5.4 |1.1 |2.4 |13079 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|224 |68.25|88.17|11.7 |1.8 |2.5 |10167 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|176 |66.71|86.96|5.4 |0.7 |1.5 |20327 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|160 |65.66|86.26|11.7 |0.9 |1.3 |18229 |
## Citation
```bibtex
@article{He2015,
author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun},
title = {Deep Residual Learning for Image Recognition},
journal = {arXiv preprint arXiv:1512.03385},
year = {2015}
}
```
```bibtex
@article{He2018BagOT,
title={Bag of Tricks for Image Classification with Convolutional Neural Networks},
author={Tong He and Zhi Zhang and Hang Zhang and Zhongyue Zhang and Junyuan Xie and Mu Li},
journal={2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2018},
pages={558-567}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
DKYoon/mt5-base-lm-adapt | DKYoon | 2023-09-05T05:07:45Z | 512 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mt5",
"text2text-generation",
"arxiv:2205.12647",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-04-13T18:43:07Z | ---
license: apache-2.0
---
🤗 Language model initialized from mT5 and trained for an additional 100K steps on the Prefix LM objective using mC4 data.
Paper: [Overcoming Catastrophic Forgetting in Zero-Shot Cross-Lingual Generation](https://arxiv.org/abs/2205.12647)
Authors: Tu Vu, Aditya Barua, Brian Lester, Daniel Cer, Mohit Iyyer, Noah Constant
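A minimal loading sketch with 🤗 transformers (assuming the standard mT5 seq2seq interface; the prompt is just an example):
```python
from transformers import AutoTokenizer, MT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("DKYoon/mt5-base-lm-adapt")
model = MT5ForConditionalGeneration.from_pretrained("DKYoon/mt5-base-lm-adapt")

# Since the checkpoint is LM-adapted (Prefix LM objective), continuing a prefix is a natural demo.
inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```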
PyTorch port of the original Flax checkpoint at [Google/T5X repository](https://github.com/google-research/t5x). |
TheBloke/Llama-2-7B-LoRA-Assemble-GGUF | TheBloke | 2023-09-27T12:49:12Z | 512 | 1 | transformers | [
"transformers",
"gguf",
"llama",
"base_model:oh-yeontaek/llama-2-7B-LoRA-assemble",
"license:llama2",
"text-generation-inference",
"region:us"
]
| null | 2023-09-14T10:38:54Z | ---
license: llama2
model_name: Llama 2 7B LoRA Assemble
base_model: oh-yeontaek/llama-2-7B-LoRA-assemble
inference: false
model_creator: oh-yeontaek
model_type: llama
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama 2 7B LoRA Assemble - GGUF
- Model creator: [oh-yeontaek](https://huggingface.co/oh-yeontaek)
- Original model: [Llama 2 7B LoRA Assemble](https://huggingface.co/oh-yeontaek/llama-2-7B-LoRA-assemble)
<!-- description start -->
## Description
This repo contains GGUF format model files for [oh-yeontaek's Llama 2 7B LoRA Assemble](https://huggingface.co/oh-yeontaek/llama-2-7B-LoRA-assemble).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Llama-2-7B-LoRA-Assemble-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-7B-LoRA-Assemble-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-7B-LoRA-Assemble-GGUF)
* [oh-yeontaek's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/oh-yeontaek/llama-2-7B-LoRA-assemble)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Unknown
```
{prompt}
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
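As a rough sanity check of these figures, the Q4_K number can be reproduced with some back-of-the-envelope arithmetic. The sketch below assumes one fp16 scale and one fp16 min per super-block on top of the per-block metadata described above; it is illustrative only, not a statement about llama.cpp's exact struct layout.
```python
# Back-of-the-envelope estimate of bits per weight (bpw) for GGML_TYPE_Q4_K.
# Assumes: 8 blocks of 32 weights per super-block, 4-bit weights,
# a 6-bit scale and 6-bit min per block, and one fp16 scale + fp16 min per super-block.
weights_per_superblock = 8 * 32             # 256 weights
weight_bits = 4 * weights_per_superblock    # 1024 bits for the quantized weights
block_metadata_bits = 8 * (6 + 6)           # 96 bits for per-block scales and mins
superblock_metadata_bits = 16 + 16          # 32 bits for the fp16 super-block scale and min

total_bits = weight_bits + block_metadata_bits + superblock_metadata_bits
print(total_bits / weights_per_superblock)  # 4.5 bpw, matching the figure above
```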
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [llama-2-7b-lora-assemble.Q2_K.gguf](https://huggingface.co/TheBloke/Llama-2-7B-LoRA-Assemble-GGUF/blob/main/llama-2-7b-lora-assemble.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama-2-7b-lora-assemble.Q3_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-7B-LoRA-Assemble-GGUF/blob/main/llama-2-7b-lora-assemble.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [llama-2-7b-lora-assemble.Q3_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-7B-LoRA-Assemble-GGUF/blob/main/llama-2-7b-lora-assemble.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [llama-2-7b-lora-assemble.Q3_K_L.gguf](https://huggingface.co/TheBloke/Llama-2-7B-LoRA-Assemble-GGUF/blob/main/llama-2-7b-lora-assemble.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [llama-2-7b-lora-assemble.Q4_0.gguf](https://huggingface.co/TheBloke/Llama-2-7B-LoRA-Assemble-GGUF/blob/main/llama-2-7b-lora-assemble.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama-2-7b-lora-assemble.Q4_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-7B-LoRA-Assemble-GGUF/blob/main/llama-2-7b-lora-assemble.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [llama-2-7b-lora-assemble.Q4_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-7B-LoRA-Assemble-GGUF/blob/main/llama-2-7b-lora-assemble.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [llama-2-7b-lora-assemble.Q5_0.gguf](https://huggingface.co/TheBloke/Llama-2-7B-LoRA-Assemble-GGUF/blob/main/llama-2-7b-lora-assemble.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama-2-7b-lora-assemble.Q5_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-7B-LoRA-Assemble-GGUF/blob/main/llama-2-7b-lora-assemble.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [llama-2-7b-lora-assemble.Q5_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-7B-LoRA-Assemble-GGUF/blob/main/llama-2-7b-lora-assemble.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [llama-2-7b-lora-assemble.Q6_K.gguf](https://huggingface.co/TheBloke/Llama-2-7B-LoRA-Assemble-GGUF/blob/main/llama-2-7b-lora-assemble.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [llama-2-7b-lora-assemble.Q8_0.gguf](https://huggingface.co/TheBloke/Llama-2-7B-LoRA-Assemble-GGUF/blob/main/llama-2-7b-lora-assemble.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Llama-2-7B-LoRA-Assemble-GGUF and below it, a specific filename to download, such as: llama-2-7b-lora-assemble.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install "huggingface-hub>=0.17.1"
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Llama-2-7B-LoRA-Assemble-GGUF llama-2-7b-lora-assemble.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Llama-2-7B-LoRA-Assemble-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Llama-2-7B-LoRA-Assemble-GGUF llama-2-7b-lora-assemble.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m llama-2-7b-lora-assemble.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install "ctransformers>=0.2.24"
# Or with CUDA GPU acceleration
pip install "ctransformers[cuda]>=0.2.24"
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install "ctransformers>=0.2.24" --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install "ctransformers>=0.2.24" --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Llama-2-7B-LoRA-Assemble-GGUF", model_file="llama-2-7b-lora-assemble.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
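The ctransformers snippet above covers one of the two libraries mentioned; for completeness, here is a minimal llama-cpp-python sketch along the same lines. It assumes `llama-cpp-python` is installed and that the Q4_K_M file has already been downloaded to the current directory; adjust `n_gpu_layers` and `n_ctx` to suit your hardware.
```python
from llama_cpp import Llama

# Load the downloaded GGUF file. Set n_gpu_layers=0 if you have no GPU acceleration.
llm = Llama(
    model_path="./llama-2-7b-lora-assemble.Q4_K_M.gguf",
    n_gpu_layers=32,
    n_ctx=4096,
)

# Generate a completion from a plain prompt (this model's template is just "{prompt}").
output = llm("AI is going to", max_tokens=128)
print(output["choices"][0]["text"])
```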
## How to use with LangChain
Here are guides on using llama-cpp-python or ctransformers with LangChain (a minimal sketch follows the links below):
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
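As a rough illustration of the first guide, the snippet below wires the downloaded GGUF file into LangChain's `LlamaCpp` wrapper. The class and parameter names reflect the LangChain releases current when this card was written; treat it as a sketch under those assumptions and check the linked guide for the exact API of your installed version.
```python
from langchain.llms import LlamaCpp

# Point the LangChain wrapper at the locally downloaded GGUF file.
llm = LlamaCpp(
    model_path="./llama-2-7b-lora-assemble.Q4_K_M.gguf",
    n_gpu_layers=32,   # set to 0 if no GPU acceleration is available
    n_ctx=4096,
    temperature=0.7,
)

print(llm("AI is going to"))
```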
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: oh-yeontaek's Llama 2 7B LoRA Assemble
No original model card was available.
<!-- original-model-card end -->
|
TheBloke/Llama-2-7B-AWQ | TheBloke | 2023-11-09T18:21:13Z | 512 | 13 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-2",
"en",
"arxiv:2307.09288",
"base_model:meta-llama/Llama-2-7b-hf",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
]
| text-generation | 2023-09-18T23:38:34Z | ---
language:
- en
license: llama2
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
model_name: Llama 2 7B
base_model: meta-llama/Llama-2-7b-hf
inference: false
model_creator: Meta
model_type: llama
pipeline_tag: text-generation
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama 2 7B - AWQ
- Model creator: [Meta](https://huggingface.co/meta-llama)
- Original model: [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b-hf)
<!-- description start -->
## Description
This repo contains AWQ model files for [Meta's Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b-hf).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference.
It is also now supported by the continuous batching server [vLLM](https://github.com/vllm-project/vllm), allowing use of AWQ models for high-throughput concurrent inference in multi-user server scenarios. Note that, at the time of writing, overall throughput is still lower than running vLLM with unquantised models; however, using AWQ enables much smaller GPUs, which can lead to easier deployment and overall cost savings. For example, a 70B model can be run on 1 x 48GB GPU instead of 2 x 80GB.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Llama-2-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-7B-GGUF)
* [Meta's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/meta-llama/Llama-2-7b-hf)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: None
```
{prompt}
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files and AWQ parameters
For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Llama-2-7B-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 3.89 GB
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Serving this model from vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
When using vLLM as a server, pass the `--quantization awq` parameter, for example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Llama-2-7B-AWQ --quantization awq
```
When using vLLM from Python code, pass the `quantization=awq` parameter, for example:
```python
from vllm import LLM, SamplingParams
prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/Llama-2-7B-AWQ", quantization="awq")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->
<!-- README_AWQ.md-use-from-python start -->
## How to use this AWQ model from Python code
### Install the necessary packages
Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.0.2 or later
```shell
pip3 install autoawq
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### You can then try the following example code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
model_name_or_path = "TheBloke/Llama-2-7B-AWQ"
# Load model
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True,
                                          trust_remote_code=True, safetensors=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)
prompt = "Tell me about AI"
prompt_template = f'''{prompt}
'''
print("\n\n*** Generate:")
tokens = tokenizer(
    prompt_template,
    return_tensors='pt'
).input_ids.cuda()
# Generate output
generation_output = model.generate(
    tokens,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    max_new_tokens=512
)
print("Output: ", tokenizer.decode(generation_output[0]))
# Inference can also be done using transformers' pipeline
from transformers import pipeline
print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with [AutoAWQ](https://github.com/casper-hansen/AutoAWQ), and [vLLM](https://github.com/vllm-project/vllm).
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is not yet compatible with AWQ, but a PR is open which should bring support soon: [TGI PR #781](https://github.com/huggingface/text-generation-inference/issues/781).
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Meta's Llama 2 7B
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models - 70B - use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](https://arxiv.org/abs/2307.09288)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
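For illustration, a single-turn prompt in this format might be assembled as follows. This is a sketch based on the description above, not Meta's reference `chat_completion` code; consult the linked implementation for the exact multi-turn handling of the `BOS`/`EOS` tokens.
```python
# Sketch of a single-turn Llama-2-Chat prompt using the INST and <<SYS>> tags
# described above. The BOS token (<s>) is written explicitly; many tokenizers add it for you.
system_prompt = "You are a helpful, respectful and honest assistant."
user_message = "Tell me about AI"

prompt = (
    f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
    f"{user_message.strip()} [/INST]"
)
print(prompt)
```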
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat-hf)|
|
TheBloke/Nous-Capybara-7B-GGUF | TheBloke | 2023-10-02T22:11:56Z | 512 | 4 | transformers | [
"transformers",
"gguf",
"llama",
"llama-2",
"sft",
"eng",
"dataset:LDJnr/LessWrong-Amplify-Instruct",
"dataset:LDJnr/Pure-Dove",
"dataset:LDJnr/Verified-Camel",
"base_model:NousResearch/Nous-Capybara-7B",
"license:mit",
"text-generation-inference",
"region:us"
]
| null | 2023-10-02T22:04:54Z | ---
base_model: NousResearch/Nous-Capybara-7B
datasets:
- LDJnr/LessWrong-Amplify-Instruct
- LDJnr/Pure-Dove
- LDJnr/Verified-Camel
inference: false
language:
- eng
license:
- mit
model_creator: NousResearch
model_name: Nous Capybara 7B
model_type: llama
prompt_template: 'USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
tags:
- llama-2
- sft
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Nous Capybara 7B - GGUF
- Model creator: [NousResearch](https://huggingface.co/NousResearch)
- Original model: [Nous Capybara 7B](https://huggingface.co/NousResearch/Nous-Capybara-7B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [NousResearch's Nous Capybara 7B](https://huggingface.co/NousResearch/Nous-Capybara-7B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Nous-Capybara-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Nous-Capybara-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Nous-Capybara-7B-GGUF)
* [NousResearch's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NousResearch/Nous-Capybara-7B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: User-Assistant
```
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `['mit']`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [NousResearch's Nous Capybara 7B](https://huggingface.co/NousResearch/Nous-Capybara-7B).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [nous-capybara-7b.Q2_K.gguf](https://huggingface.co/TheBloke/Nous-Capybara-7B-GGUF/blob/main/nous-capybara-7b.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [nous-capybara-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Nous-Capybara-7B-GGUF/blob/main/nous-capybara-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [nous-capybara-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Nous-Capybara-7B-GGUF/blob/main/nous-capybara-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [nous-capybara-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Nous-Capybara-7B-GGUF/blob/main/nous-capybara-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [nous-capybara-7b.Q4_0.gguf](https://huggingface.co/TheBloke/Nous-Capybara-7B-GGUF/blob/main/nous-capybara-7b.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [nous-capybara-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Nous-Capybara-7B-GGUF/blob/main/nous-capybara-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [nous-capybara-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Nous-Capybara-7B-GGUF/blob/main/nous-capybara-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [nous-capybara-7b.Q5_0.gguf](https://huggingface.co/TheBloke/Nous-Capybara-7B-GGUF/blob/main/nous-capybara-7b.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [nous-capybara-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Nous-Capybara-7B-GGUF/blob/main/nous-capybara-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [nous-capybara-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Nous-Capybara-7B-GGUF/blob/main/nous-capybara-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [nous-capybara-7b.Q6_K.gguf](https://huggingface.co/TheBloke/Nous-Capybara-7B-GGUF/blob/main/nous-capybara-7b.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [nous-capybara-7b.Q8_0.gguf](https://huggingface.co/TheBloke/Nous-Capybara-7B-GGUF/blob/main/nous-capybara-7b.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Nous-Capybara-7B-GGUF and below it, a specific filename to download, such as: nous-capybara-7b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Nous-Capybara-7B-GGUF nous-capybara-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Nous-Capybara-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Nous-Capybara-7B-GGUF nous-capybara-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m nous-capybara-7b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "USER: {prompt}\nASSISTANT:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Nous-Capybara-7B-GGUF", model_file="nous-capybara-7b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
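If you prefer llama-cpp-python (the other library mentioned above), a minimal sketch looks like this. It assumes the Q4_K_M file has been downloaded to the current directory and applies this model's USER/ASSISTANT prompt template; adjust `n_gpu_layers` and `n_ctx` for your hardware.
```python
from llama_cpp import Llama

# Load the downloaded GGUF file. Set n_gpu_layers=0 if no GPU acceleration is available.
llm = Llama(
    model_path="./nous-capybara-7b.Q4_K_M.gguf",
    n_gpu_layers=32,
    n_ctx=4096,
)

# Apply this model's prompt template before generating.
prompt = "USER: Write a short poem about llamas.\nASSISTANT:"
output = llm(prompt, max_tokens=256, stop=["USER:"])
print(output["choices"][0]["text"])
```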
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: NousResearch's Nous Capybara 7B
## **Nous-Capybara-7B**
A model created with a novel synthesis method in mind, Amplify-instruct, with a goal of having a synergistic combination of different techniques used for SOTA models such as Evol-Instruct, Orca, Vicuna, Lamini, FLASK and others, all into one lean holistically formed dataset and model. The seed instructions used for the start of synthesized conversations are largely based on highly acclaimed datasets like Airoboros, Know logic, EverythingLM, GPTeacher and even entirely new seed instructions derived from posts on the website LessWrong, as well as being supplemented with certain multi-turn datasets like Dove(A successor to Puffin).
Entirely contained under 20K training examples, mostly comprised of newly synthesized tokens never used for model training until now!
## Process of creation and special thank yous!
This model was fine-tuned by Nous Research, with LDJ leading the training and dataset curation, along with significant dataset formation contributions by J-Supha. Thank you also to Emozilla for assisting to expedite the training experimentation process.
Special thank you to **A16Z** for sponsoring our training, as well as **Yield Protocol** for their support in resources during R&D of aspects outside of training, such as dataset development/synthesis.
## Thank you to dataset creators!
While most of the tokens within Capybara are newly synthesized and part of datasets like Puffin/Dove, we would like to credit the single-turn datasets we leveraged as seeds that are used to initiate the beginning of many of the multi-turn conversations:

## Model Training
Nous-Capybara 7B is a new model trained for multiple epochs on a dataset of less than 20,000 carefully curated GPT-4 examples, most of which are long context conversations between a real human and GPT-4 comprised of entirely newly synthesized tokens that previously didn't exist on HuggingFace.
Additional data came from manually curated CamelAI data, with the help of volunteers ranging from former Physicists, Mathematicians, Biologists and more!
Specific credits to the people involved in validating this data will be posted soon :)
## Prompt Format
The recommended model usage is:
```
USER:
ASSISTANT:
```
## Notable Features:
- The first Nous model trained on over 10,000 multi-turn conversations.
- Over 1,000 tokens average per conversation example during training!
- Able to effectively do complex summary of advanced studies on topics.
- Ability to recall information up to late 2022 without internet (ChatGPT's cut-off date is in 2021)
- Context length of 4096 tokens, and fine-tuned on a significant amount of multi-turn conversations reaching that full token limit.
- Includes a portion of conversational data synthesized from LessWrong posts, speaking in-depth about the nature of rationality, reasoning and self-improvement.
## Example Outputs!:



## Benchmarks! (Important to note that all mentioned benchmarks are single-turn and don't test multi-turn capabilities; Capybara should excel even further at multi-turn conversational tasks.)

## Limitations
We noticed that the current version of Capybara still has some issues in some situations with censoring itself and not acting as expected in certain edge cases; we plan to have this largely resolved in the near future with Capybara 1.1.
## Future Changes
This is a relatively early build amongst the grand plans for the future of Capybara!
Current limitations: We are still running experimentation and tests for the training pipeline and dataset cleaning process to be more refined; we plan to release a Capybara 1.1 with these improvements.
## Future model sizes
We plan on releasing a 3B, 13B and 70B version, as well as a potential 1B version based on phi-1.5 or similar architectures.
## How you can help!
In the near future we plan on leveraging the help of domain-specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from our training curations.
If you have at least a bachelor's in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on Discord!
## Dataset contamination.
We checked for 100%, 99%, 98% and 97% similarity matches between our data and many popular benchmarks, and we found no exact matches!
The following are benchmarks we checked for contamination for:
- HumanEval
- AGIEval
- TruthfulQA
- MMLU
- GPT4All
<!-- original-model-card end -->
|
TheBloke/Yi-6B-200K-GGUF | TheBloke | 2023-11-11T13:55:49Z | 512 | 28 | transformers | [
"transformers",
"gguf",
"yi",
"base_model:01-ai/Yi-6B-200K",
"license:other",
"region:us"
]
| null | 2023-11-10T20:53:14Z | ---
base_model: 01-ai/Yi-6B-200K
inference: false
license: other
license_link: LICENSE
license_name: yi-license
model_creator: 01-ai
model_name: Yi 6B 200K
model_type: yi
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Yi 6B 200K - GGUF
- Model creator: [01-ai](https://huggingface.co/01-ai)
- Original model: [Yi 6B 200K](https://huggingface.co/01-ai/Yi-6B-200K)
<!-- description start -->
## Description
This repo contains GGUF format model files for [01-ai's Yi 6B 200K](https://huggingface.co/01-ai/Yi-6B-200K).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Yi-6B-200K-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Yi-6B-200K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Yi-6B-200K-GGUF)
* [01-ai's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/01-ai/Yi-6B-200K)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: None
```
{prompt}
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [yi-6b-200k.Q2_K.gguf](https://huggingface.co/TheBloke/Yi-6B-200K-GGUF/blob/main/yi-6b-200k.Q2_K.gguf) | Q2_K | 2 | 2.62 GB| 5.12 GB | smallest, significant quality loss - not recommended for most purposes |
| [yi-6b-200k.Q3_K_S.gguf](https://huggingface.co/TheBloke/Yi-6B-200K-GGUF/blob/main/yi-6b-200k.Q3_K_S.gguf) | Q3_K_S | 3 | 2.71 GB| 5.21 GB | very small, high quality loss |
| [yi-6b-200k.Q3_K_M.gguf](https://huggingface.co/TheBloke/Yi-6B-200K-GGUF/blob/main/yi-6b-200k.Q3_K_M.gguf) | Q3_K_M | 3 | 2.99 GB| 5.49 GB | very small, high quality loss |
| [yi-6b-200k.Q3_K_L.gguf](https://huggingface.co/TheBloke/Yi-6B-200K-GGUF/blob/main/yi-6b-200k.Q3_K_L.gguf) | Q3_K_L | 3 | 3.24 GB| 5.74 GB | small, substantial quality loss |
| [yi-6b-200k.Q4_0.gguf](https://huggingface.co/TheBloke/Yi-6B-200K-GGUF/blob/main/yi-6b-200k.Q4_0.gguf) | Q4_0 | 4 | 3.48 GB| 5.98 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [yi-6b-200k.Q4_K_S.gguf](https://huggingface.co/TheBloke/Yi-6B-200K-GGUF/blob/main/yi-6b-200k.Q4_K_S.gguf) | Q4_K_S | 4 | 3.50 GB| 6.00 GB | small, greater quality loss |
| [yi-6b-200k.Q4_K_M.gguf](https://huggingface.co/TheBloke/Yi-6B-200K-GGUF/blob/main/yi-6b-200k.Q4_K_M.gguf) | Q4_K_M | 4 | 3.67 GB| 6.17 GB | medium, balanced quality - recommended |
| [yi-6b-200k.Q5_0.gguf](https://huggingface.co/TheBloke/Yi-6B-200K-GGUF/blob/main/yi-6b-200k.Q5_0.gguf) | Q5_0 | 5 | 4.20 GB| 6.70 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [yi-6b-200k.Q5_K_S.gguf](https://huggingface.co/TheBloke/Yi-6B-200K-GGUF/blob/main/yi-6b-200k.Q5_K_S.gguf) | Q5_K_S | 5 | 4.20 GB| 6.70 GB | large, low quality loss - recommended |
| [yi-6b-200k.Q5_K_M.gguf](https://huggingface.co/TheBloke/Yi-6B-200K-GGUF/blob/main/yi-6b-200k.Q5_K_M.gguf) | Q5_K_M | 5 | 4.30 GB| 6.80 GB | large, very low quality loss - recommended |
| [yi-6b-200k.Q6_K.gguf](https://huggingface.co/TheBloke/Yi-6B-200K-GGUF/blob/main/yi-6b-200k.Q6_K.gguf) | Q6_K | 6 | 4.97 GB| 7.47 GB | very large, extremely low quality loss |
| [yi-6b-200k.Q8_0.gguf](https://huggingface.co/TheBloke/Yi-6B-200K-GGUF/blob/main/yi-6b-200k.Q8_0.gguf) | Q8_0 | 8 | 6.44 GB| 8.94 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Yi-6B-200K-GGUF and below it, a specific filename to download, such as: yi-6b-200k.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Yi-6B-200K-GGUF yi-6b-200k.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Yi-6B-200K-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Yi-6B-200K-GGUF yi-6b-200k.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m yi-6b-200k.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Yi-6B-200K-GGUF", model_file="yi-6b-200k.Q4_K_M.gguf", model_type="yi", gpu_layers=50)
print(llm("AI is going to"))
```
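If you prefer `llama-cpp-python` (mentioned above), a comparable minimal example is sketched below; the context size and GPU layer count are illustrative and should be adjusted to your hardware.
```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Assumes you have already downloaded the GGUF file (see the download section above).
llm = Llama(
    model_path="./yi-6b-200k.Q4_K_M.gguf",
    n_ctx=2048,       # increase for long-context use, at the cost of more RAM
    n_gpu_layers=32,  # set to 0 if no GPU acceleration is available
)
output = llm("AI is going to", max_tokens=64)
print(output["choices"][0]["text"])
```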
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: 01-ai's Yi 6B 200K
<div align="center">
<img src="./Yi.svg" width="200px">
</div>
## Introduction
The **Yi** series models are large language models trained from scratch by
developers at [01.AI](https://01.ai/). The first public release contains two
bilingual (English/Chinese) base models with parameter sizes of 6B ([`Yi-6B`](https://huggingface.co/01-ai/Yi-6B))
and 34B ([`Yi-34B`](https://huggingface.co/01-ai/Yi-34B)). Both of them are trained
with 4K sequence length and can be extended to 32K during inference time.
The [`Yi-6B-200K`](https://huggingface.co/01-ai/Yi-6B-200K)
and [`Yi-34B-200K`](https://huggingface.co/01-ai/Yi-34B-200K) are base models with
200K context length.
## News
- 🎯 **2023/11/06**: The base model of [`Yi-6B-200K`](https://huggingface.co/01-ai/Yi-6B-200K)
and [`Yi-34B-200K`](https://huggingface.co/01-ai/Yi-34B-200K) with 200K context length.
- 🎯 **2023/11/02**: The base model of [`Yi-6B`](https://huggingface.co/01-ai/Yi-6B) and
[`Yi-34B`](https://huggingface.co/01-ai/Yi-34B).
## Model Performance
| Model | MMLU | CMMLU | C-Eval | GAOKAO | BBH | Common-sense Reasoning | Reading Comprehension | Math & Code |
| :------------ | :------: | :------: | :------: | :------: | :------: | :--------------------: | :-------------------: | :---------: |
| | 5-shot | 5-shot | 5-shot | 0-shot | 3-shot@1 | - | - | - |
| LLaMA2-34B | 62.6 | - | - | - | 44.1 | 69.9 | 68.0 | 26.0 |
| LLaMA2-70B | 68.9 | 53.3 | - | 49.8 | 51.2 | 71.9 | 69.4 | 36.8 |
| Baichuan2-13B | 59.2 | 62.0 | 58.1 | 54.3 | 48.8 | 64.3 | 62.4 | 23.0 |
| Qwen-14B | 66.3 | 71.0 | 72.1 | 62.5 | 53.4 | 73.3 | 72.5 | **39.8** |
| Skywork-13B | 62.1 | 61.8 | 60.6 | 68.1 | 41.7 | 72.4 | 61.4 | 24.9 |
| InternLM-20B | 62.1 | 59.0 | 58.8 | 45.5 | 52.5 | 78.3 | - | 30.4 |
| Aquila-34B | 67.8 | 71.4 | 63.1 | - | - | - | - | - |
| Falcon-180B | 70.4 | 58.0 | 57.8 | 59.0 | 54.0 | 77.3 | 68.8 | 34.0 |
| Yi-6B | 63.2 | 75.5 | 72.0 | 72.2 | 42.8 | 72.3 | 68.7 | 19.8 |
| Yi-6B-200K | 64.0 | 75.3 | 73.5 | 73.9 | 42.0 | 72.0 | 69.1 | 19.0 |
| **Yi-34B** | **76.3** | **83.7** | 81.4 | 82.8 | **54.3** | **80.1** | 76.4 | 37.1 |
| Yi-34B-200K | 76.1 | 83.6 | **81.9** | **83.4** | 52.7 | 79.7 | **76.6** | 36.3 |
While benchmarking open-source models, we have observed a disparity between the
results generated by our pipeline and those reported in public sources (e.g.
OpenCompass). Upon conducting a more in-depth investigation of this difference,
we have discovered that various models may employ different prompts,
post-processing strategies, and sampling techniques, potentially resulting in
significant variations in the outcomes. Our prompt and post-processing strategy
remains consistent with the original benchmark, and greedy decoding is employed
during evaluation without any post-processing for the generated content. For
scores that were not reported by the original authors (including scores reported
with different settings), we try to get results with our pipeline.
To evaluate the model's capability extensively, we adopted the methodology
outlined in Llama2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande,
ARC, OBQA, and CSQA to assess common sense reasoning. SquAD, QuAC, and BoolQ
were incorporated to evaluate reading comprehension. CSQA was exclusively tested
using a 7-shot setup, while all other tests were conducted with a 0-shot
configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1),
HumanEval (0-shot@1), and MBPP (3-shot@1) under the category "Math & Code". Due
to technical constraints, we did not test Falcon-180B on QuAC and OBQA; the score
is derived by averaging the scores on the remaining tasks. Since the scores for
these two tasks are generally lower than the average, we believe that
Falcon-180B's performance was not underestimated.
## Usage
Please visit our [github repository](https://github.com/01-ai/Yi) for general
guidance on how to use this model.
## Disclaimer
Although we use data compliance checking algorithms during the training process
to ensure the compliance of the trained model to the best of our ability, due to
the complexity of the data and the diversity of language model usage scenarios,
we cannot guarantee that the model will generate correct and reasonable output
in all scenarios. Please be aware that there is still a risk of the model
producing problematic outputs. We will not be responsible for any risks and
issues resulting from misuse, misguidance, illegal usage, and related
misinformation, as well as any associated data security concerns.
## License
The Yi series models are fully open for academic research and free commercial
usage with permission via applications. All usage must adhere to the [Model
License Agreement 2.0](https://huggingface.co/01-ai/Yi-6B-200K/blob/main/LICENSE). To
apply for the official commercial license, please contact us
([[email protected]](mailto:[email protected])).
<!-- original-model-card end -->
|
DiogoXP/pxogoidplus | DiogoXP | 2024-06-15T07:23:45Z | 512 | 2 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-11-26T01:29:26Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### PXogoid2 Dreambooth model trained by DiogoXP with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
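A rough starting point for loading the model with 🧨 diffusers is sketched below; the exact instance token for this concept is not documented here, so the token used in the prompt is only a guess based on the concept name.
```python
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained("DiogoXP/pxogoidplus", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# "pxogoid2" is assumed from the concept name above; replace it with the actual instance token.
prompt = "a portrait photo of pxogoid2"
image = pipe(prompt).images[0]
image.save("pxogoid2_sample.png")
```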
Sample pictures of this concept:
|
fibery/clustering-v2.0 | fibery | 2024-03-20T15:50:50Z | 512 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-03-20T15:47:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
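A minimal loading sketch, assuming the checkpoint is a standard XLM-RoBERTa sequence-classification model as suggested by the repository tags (the intended label set and inputs are not documented):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "fibery/clustering-v2.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Example text to classify", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
# Label names come from the model config; their meaning is not documented in this card.
print(model.config.id2label, probs)
```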
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Lewdiculous/Visual-LaylelemonMaidRP-7B-GGUF-IQ-Imatrix | Lewdiculous | 2024-03-31T06:32:31Z | 512 | 8 | null | [
"gguf",
"quantized",
"roleplay",
"multimodal",
"vision",
"llava",
"sillytavern",
"merge",
"mistral",
"conversational",
"license:other",
"region:us"
]
| null | 2024-03-31T05:25:30Z | ---
license: other
inference: false
tags:
- gguf
- quantized
- roleplay
- multimodal
- vision
- llava
- sillytavern
- merge
- mistral
- conversational
---
# #Roleplay #Multimodal #Vision
This repository hosts GGUF-IQ-Imatrix quants for [Nitral-AI/Visual-LaylelemonMaidRP-7B](https://huggingface.co/Nitral-AI/Visual-LaylelemonMaidRP-7B).
"My personal maid can't be this cute!"
**Recommended starting [SillyTavern presets here](https://huggingface.co/Lewdiculous/Eris_PrimeV4-Vision-32k-7B-GGUF-IQ-Imatrix/tree/main/sillytavern-presets-lewdicu-3.0.2-mistral-0.2).**
This is a **#multimodal** model that also has **#vision** capabilities. <br> Read the full card information if you also want to use that functionality.

Quants:
```python
quantization_options = [
"Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M", "Q5_K_S",
"Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS"
]
```
**What does "Imatrix" mean?**
<details><summary>
⇲ Click here to expand/hide more information about this topic.
</summary>
It stands for **Importance Matrix**, a technique used to improve the quality of quantized models.
The **Imatrix** is calculated based on calibration data, and it helps determine the importance of different model activations during the quantization process.
The idea is to preserve the most important information during quantization, which can help reduce the loss of model performance, especially when the calibration data is diverse.
[[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
For imatrix data generation, kalomaze's `groups_merged.txt` with added roleplay chats was used; you can find it [here](https://huggingface.co/Lewdiculous/Datura_7B-GGUF-Imatrix/blob/main/imatrix-with-rp-format-data.txt). This was done just to add a bit more diversity to the data.
</details><br>
# Vision/multimodal capabilities:
<details><summary>
⇲ Click here to expand/hide how this would work in practice in a roleplay chat.
</summary>

</details><br>
<details><summary>
⇲ Click here to expand/hide what your SillyTavern Image Captions extension settings should look like.
</summary>

</details><br>
**If you want to use vision functionality:**
* Make sure you are using the latest version of [KoboldCpp](https://github.com/LostRuins/koboldcpp).
To use the multimodal capabilities of this model, such as **vision**, you also need to load the specified **mmproj** file; you can get it [here](https://huggingface.co/cjpais/llava-1.6-mistral-7b-gguf/blob/main/mmproj-model-f16.gguf) or as uploaded in the repository.
* You can load the **mmproj** by using the corresponding section in the interface:

* For CLI users, you can load the **mmproj file** by adding the respective flag to your usual command:
```
--mmproj your-mmproj-file.gguf
```
# Quantization information:
<details><summary>
⇲ Click here to expand/hide more information about this topic.
</summary>
**Steps performed:**
```
Base⇢ GGUF(F16)⇢ Imatrix-Data(F16)⇢ GGUF(Imatrix-Quants)
```
*Using the latest llama.cpp at the time.*
</details><br> |
Monor/Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf | Monor | 2024-05-06T15:35:42Z | 512 | 0 | null | [
"gguf",
"license:apache-2.0",
"region:us"
]
| null | 2024-05-01T14:41:33Z | ---
license: apache-2.0
---
## Introduction
This repo quantizes [namespace-Pt/Llama-3-8B-Instruct-80K-QLoRA-Merged](https://huggingface.co/namespace-Pt/Llama-3-8B-Instruct-80K-QLoRA-Merged) to f16, q2, q3, q4, q5, q6 and q8 with llama.cpp.
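Once a quantized file has been downloaded, it can be loaded with `llama-cpp-python`, for example as sketched below; the filename is illustrative, so substitute whichever quant you actually downloaded.
```python
# pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# The filename below is illustrative; list the repo files and pick the quant you want.
path = hf_hub_download(
    repo_id="Monor/Llama-3-8B-Instruct-80K-QLoRA-Merged-gguf",
    filename="llama-3-8b-instruct-80k-qlora-merged.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=8192, n_gpu_layers=-1)  # use n_gpu_layers=0 for CPU-only
print(llm("Explain the idea of QLoRA in one sentence.", max_tokens=64)["choices"][0]["text"])
```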
|
RichardErkhov/stabilityai_-_japanese-stablelm-instruct-beta-7b-gguf | RichardErkhov | 2024-05-31T09:28:55Z | 512 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-31T07:18:52Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
japanese-stablelm-instruct-beta-7b - GGUF
- Model creator: https://huggingface.co/stabilityai/
- Original model: https://huggingface.co/stabilityai/japanese-stablelm-instruct-beta-7b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [japanese-stablelm-instruct-beta-7b.Q2_K.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-instruct-beta-7b-gguf/blob/main/japanese-stablelm-instruct-beta-7b.Q2_K.gguf) | Q2_K | 2.36GB |
| [japanese-stablelm-instruct-beta-7b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-instruct-beta-7b-gguf/blob/main/japanese-stablelm-instruct-beta-7b.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [japanese-stablelm-instruct-beta-7b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-instruct-beta-7b-gguf/blob/main/japanese-stablelm-instruct-beta-7b.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [japanese-stablelm-instruct-beta-7b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-instruct-beta-7b-gguf/blob/main/japanese-stablelm-instruct-beta-7b.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [japanese-stablelm-instruct-beta-7b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-instruct-beta-7b-gguf/blob/main/japanese-stablelm-instruct-beta-7b.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [japanese-stablelm-instruct-beta-7b.Q3_K.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-instruct-beta-7b-gguf/blob/main/japanese-stablelm-instruct-beta-7b.Q3_K.gguf) | Q3_K | 3.07GB |
| [japanese-stablelm-instruct-beta-7b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-instruct-beta-7b-gguf/blob/main/japanese-stablelm-instruct-beta-7b.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [japanese-stablelm-instruct-beta-7b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-instruct-beta-7b-gguf/blob/main/japanese-stablelm-instruct-beta-7b.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [japanese-stablelm-instruct-beta-7b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-instruct-beta-7b-gguf/blob/main/japanese-stablelm-instruct-beta-7b.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [japanese-stablelm-instruct-beta-7b.Q4_0.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-instruct-beta-7b-gguf/blob/main/japanese-stablelm-instruct-beta-7b.Q4_0.gguf) | Q4_0 | 3.56GB |
| [japanese-stablelm-instruct-beta-7b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-instruct-beta-7b-gguf/blob/main/japanese-stablelm-instruct-beta-7b.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [japanese-stablelm-instruct-beta-7b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-instruct-beta-7b-gguf/blob/main/japanese-stablelm-instruct-beta-7b.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [japanese-stablelm-instruct-beta-7b.Q4_K.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-instruct-beta-7b-gguf/blob/main/japanese-stablelm-instruct-beta-7b.Q4_K.gguf) | Q4_K | 3.8GB |
| [japanese-stablelm-instruct-beta-7b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-instruct-beta-7b-gguf/blob/main/japanese-stablelm-instruct-beta-7b.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [japanese-stablelm-instruct-beta-7b.Q4_1.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-instruct-beta-7b-gguf/blob/main/japanese-stablelm-instruct-beta-7b.Q4_1.gguf) | Q4_1 | 3.95GB |
| [japanese-stablelm-instruct-beta-7b.Q5_0.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-instruct-beta-7b-gguf/blob/main/japanese-stablelm-instruct-beta-7b.Q5_0.gguf) | Q5_0 | 4.33GB |
| [japanese-stablelm-instruct-beta-7b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-instruct-beta-7b-gguf/blob/main/japanese-stablelm-instruct-beta-7b.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [japanese-stablelm-instruct-beta-7b.Q5_K.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-instruct-beta-7b-gguf/blob/main/japanese-stablelm-instruct-beta-7b.Q5_K.gguf) | Q5_K | 4.45GB |
| [japanese-stablelm-instruct-beta-7b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-instruct-beta-7b-gguf/blob/main/japanese-stablelm-instruct-beta-7b.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [japanese-stablelm-instruct-beta-7b.Q5_1.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-instruct-beta-7b-gguf/blob/main/japanese-stablelm-instruct-beta-7b.Q5_1.gguf) | Q5_1 | 4.72GB |
| [japanese-stablelm-instruct-beta-7b.Q6_K.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-instruct-beta-7b-gguf/blob/main/japanese-stablelm-instruct-beta-7b.Q6_K.gguf) | Q6_K | 5.15GB |
| [japanese-stablelm-instruct-beta-7b.Q8_0.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-instruct-beta-7b-gguf/blob/main/japanese-stablelm-instruct-beta-7b.Q8_0.gguf) | Q8_0 | 6.67GB |
Original model description:
---
language:
- ja
tags:
- japanese-stablelm
- causal-lm
pipeline_tag: text-generation
datasets:
- kunishou/hh-rlhf-49k-ja
- kunishou/databricks-dolly-15k-ja
- kunishou/oasst1-89k-ja
license:
- llama2
extra_gated_fields:
Name: text
Email: text
Country: text
Organization or Affiliation: text
I allow Stability AI to contact me about information related to its models and research: checkbox
---
# Japanese-StableLM-Instruct-Beta-7B

> A cute robot wearing a kimono writes calligraphy with one single brush — [Stable Diffusion XL](https://clipdrop.co/stable-diffusion)
## Model Description
`japanese-stablelm-instruct-beta-7b` is a 7B-parameter decoder-only language model based on [japanese-stablelm-base-beta-7b](https://huggingface.co/stabilityai/japanese-stablelm-base-beta-7b) and further fine tuned on Databricks Dolly-15k, Anthropic HH, and other public data.
This model is also available in a [larger 70b version](https://huggingface.co/stabilityai/japanese-stablelm-instruct-beta-70b), or a [faster version with a specialized tokenizer](https://huggingface.co/stabilityai/japanese-stablelm-instruct-ja_vocab-beta-7b).
## Usage
First install additional dependencies in [requirements.txt](./requirements.txt):
```sh
pip install -r requirements.txt
```
Then start generating text with `japanese-stablelm-instruct-beta-7b` by using the following code snippet:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "stabilityai/japanese-stablelm-instruct-beta-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# The next line may need to be modified depending on the environment
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, low_cpu_mem_usage=True, device_map="auto")
def build_prompt(user_query, inputs):
    sys_msg = "<s>[INST] <<SYS>>\nあなたは役立つアシスタントです。\n<</SYS>>\n\n"
    p = sys_msg + user_query + "\n\n" + inputs + " [/INST] "
    return p

user_inputs = {
    "user_query": "与えられたことわざの意味を小学生でも分かるように教えてください。",
    "inputs": "情けは人のためならず"
}
prompt = build_prompt(**user_inputs)

input_ids = tokenizer.encode(
    prompt,
    add_special_tokens=True,
    return_tensors="pt"
)

# this is for reproducibility.
# feel free to change it to get a different result
seed = 23
torch.manual_seed(seed)

tokens = model.generate(
    input_ids.to(device=model.device),
    max_new_tokens=128,
    temperature=0.99,
    top_p=0.95,
    do_sample=True,
)
out = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(out)
```
We suggest playing with different generation configs (`top_p`, `repetition_penalty`, etc.) to find the best setup for your tasks. For example, use a higher temperature for roleplay tasks and a lower temperature for reasoning.
## Model Details
* **Model type**: `japanese-stablelm-instruct-beta-7b` model is an auto-regressive language model based on the Llama2 transformer architecture.
* **Language(s)**: Japanese
* **License**: [Llama2 Community License](https://ai.meta.com/llama/license/).
* **Contact**: For questions and comments about the model, please join [Stable Community Japan](https://discord.gg/StableJP). For future announcements / information about Stability AI models, research, and events, please follow https://twitter.com/StabilityAI_JP.
## Training Dataset
The following datasets were used for the instruction training. Note these are Japanese translated versions of the original datasets, shared by [kunishou](https://huggingface.co/kunishou).
- [Anthropic HH-RLHF](https://huggingface.co/datasets/kunishou/hh-rlhf-49k-ja)
- [Databricks Dolly 15-k](https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja)
- [OpenAssistant Conversations Dataset](https://huggingface.co/datasets/kunishou/oasst1-89k-ja)
## Use and Limitations
### Intended Use
The model is intended to be used by all individuals as a foundation for application-specific fine-tuning without strict limitations on commercial use.
### Limitations and bias
The pre-training dataset may have contained offensive or inappropriate content even after applying data cleansing filters which can be reflected in the model generated text. We recommend users exercise reasonable caution when using these models in production systems. Do not use the model for any applications that may cause harm or distress to individuals or groups.
## Authors
This model was developed by the Research & Development team at Stability AI Japan, and the development was co-led by [Takuya Akiba](https://huggingface.co/iwiwi) and [Meng Lee](https://huggingface.co/leemeng). The members of the team are as follows:
- [Meng Lee](https://huggingface.co/leemeng)
- [Fujiki Nakamura](https://huggingface.co/fujiki)
- [Makoto Shing](https://huggingface.co/mkshing)
- [Paul McCann](https://huggingface.co/polm-stability)
- [Takuya Akiba](https://huggingface.co/iwiwi)
- [Naoki Orii](https://huggingface.co/mrorii)
## Acknowledgements
We thank Meta Research for releasing Llama 2 under an open license for others to build on.
We are grateful for the contributions of the EleutherAI Polyglot-JA team in helping us to collect a large amount of pre-training data in Japanese. Polyglot-JA members include Hyunwoong Ko (Project Lead), Fujiki Nakamura (originally started this project when he committed to the Polyglot team), Yunho Mo, Minji Jung, KeunSeok Im, and Su-Kyeong Jang.
We are also appreciative of [AI Novelist/Sta (Bit192, Inc.)](https://ai-novel.com/index.php) and the numerous contributors from [Stable Community Japan](https://discord.gg/VPrcE475HB) for assisting us in gathering a large amount of high-quality Japanese textual data for model training.
|
Ramikan-BR/tinyllama-coder-py-v19 | Ramikan-BR | 2024-06-07T15:58:30Z | 512 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gguf",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/tinyllama-chat-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-06-07T11:37:41Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/tinyllama-chat-bnb-4bit
---
# Uploaded model
- **Developed by:** Ramikan-BR
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-chat-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
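A minimal inference sketch with 🤗 Transformers is shown below; it assumes the tokenizer ships a chat template (inherited from the TinyLlama chat base model); if it does not, pass a plain prompt string to the tokenizer instead.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "Ramikan-BR/tinyllama-coder-py-v19"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
# Assumes a chat template is bundled with the tokenizer; otherwise tokenize a plain prompt.
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```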
|
google/bigbird-base-trivia-itc | google | 2024-02-29T09:47:59Z | 511 | 7 | transformers | [
"transformers",
"pytorch",
"jax",
"big_bird",
"question-answering",
"en",
"dataset:trivia_qa",
"arxiv:2007.14062",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-03-02T23:29:05Z | ---
language: en
license: apache-2.0
datasets:
- trivia_qa
---
# BigBird base trivia-itc
This model is a fine-tuned checkpoint of `bigbird-roberta-base`, fine-tuned on `trivia_qa` with a `BigBirdForQuestionAnsweringHead` on top.
Check out [this](https://colab.research.google.com/drive/1DVOm1VHjW0eKCayFq1N2GpY6GR9M4tJP?usp=sharing) to see how well `google/bigbird-base-trivia-itc` performs on question answering.
## How to use
Here is how to use this model for extractive question answering in PyTorch:
```python
from transformers import BigBirdForQuestionAnswering, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/bigbird-base-trivia-itc")

# by default it's in `block_sparse` mode with num_random_blocks=3, block_size=64
model = BigBirdForQuestionAnswering.from_pretrained("google/bigbird-base-trivia-itc")

# you can change `attention_type` to full attention like this:
model = BigBirdForQuestionAnswering.from_pretrained("google/bigbird-base-trivia-itc", attention_type="original_full")

# you can change `block_size` & `num_random_blocks` like this:
model = BigBirdForQuestionAnswering.from_pretrained("google/bigbird-base-trivia-itc", block_size=16, num_random_blocks=2)

question = "Replace me by any text you'd like."
context = "Put some context for answering"
encoded_input = tokenizer(question, context, return_tensors='pt')
output = model(**encoded_input)
```
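To turn the model output into an answer string, a simple span-decoding sketch (continuing from the snippet above) looks like this; it uses greedy start/end selection, and a more robust decoder would also enforce end ≥ start.
```python
import torch

# Greedy span selection from the QA head logits produced by the snippet above.
start = torch.argmax(output.start_logits, dim=-1).item()
end = torch.argmax(output.end_logits, dim=-1).item()
answer_ids = encoded_input["input_ids"][0][start : end + 1]
print(tokenizer.decode(answer_ids, skip_special_tokens=True))
```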
## Fine-tuning config & hyper-parameters
- No. of global token = 128
- Window length = 192
- No. of random token = 192
- Max. sequence length = 4096
- No. of heads = 12
- No. of hidden layers = 12
- Hidden layer size = 768
- Batch size = 32
- Loss = cross-entropy noisy spans
## BibTeX entry and citation info
```tex
@misc{zaheer2021big,
title={Big Bird: Transformers for Longer Sequences},
author={Manzil Zaheer and Guru Guruganesh and Avinava Dubey and Joshua Ainslie and Chris Alberti and Santiago Ontanon and Philip Pham and Anirudh Ravula and Qifan Wang and Li Yang and Amr Ahmed},
year={2021},
eprint={2007.14062},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
Team-PIXEL/pixel-base | Team-PIXEL | 2022-08-02T14:47:51Z | 511 | 32 | transformers | [
"transformers",
"pytorch",
"pixel",
"pretraining",
"en",
"dataset:Team-PIXEL/rendered-bookcorpus",
"dataset:Team-PIXEL/rendered-wikipedia-english",
"arxiv:2207.06991",
"arxiv:2111.06377",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-05-16T18:54:48Z | ---
license: apache-2.0
tags:
- pretraining
- pixel
datasets:
- Team-PIXEL/rendered-bookcorpus
- Team-PIXEL/rendered-wikipedia-english
language:
- en
---
# PIXEL (Pixel-based Encoder of Language)
PIXEL is a language model trained to reconstruct masked image patches that contain rendered text. PIXEL was pretrained on the *English* Wikipedia and Bookcorpus (in total around 3.2B words) but can theoretically be finetuned on data in any written language that can be typeset on a computer screen because it operates on rendered text as opposed to using a tokenizer with a fixed vocabulary.
It is not currently possible to use the Hosted Inference API with PIXEL.
Paper: [Language Modelling with Pixels](https://arxiv.org/abs/2207.06991)
Codebase: [https://github.com/xplip/pixel](https://github.com/xplip/pixel)
## Model description
PIXEL consists of three major components: a text renderer, which draws text as an image; an encoder, which encodes the unmasked regions of the rendered image; and a decoder, which reconstructs the masked regions at the pixel level. It is built on [ViT-MAE](https://arxiv.org/abs/2111.06377).
During pretraining, the renderer produces images containing the training sentences. Patches of these images are linearly projected to obtain patch embeddings (as opposed to having an embedding matrix like e.g. in BERT), and 25% of the patches are masked out. The encoder, which is a Vision Transformer (ViT), then only processes the unmasked patches. The lightweight decoder with hidden size 512 and 8 transformer layers inserts learnable mask tokens into the encoder's output sequence and learns to reconstruct the raw pixel values at the masked positions.
After pretraining, the decoder can be discarded leaving an 86M parameter encoder, upon which task-specific classification heads can be stacked. Alternatively, the decoder can be retained and PIXEL can be used as a pixel-level generative language model (see Figures 3 and 6 in the paper for examples).
For more details on how PIXEL works, please check the paper and the codebase linked above.
## Intended uses
PIXEL is primarily intended to be finetuned to downstream NLP tasks. See the [model hub](https://huggingface.co/models?search=Team-PIXEL/pixel-base) to look for finetuned versions on a task that interests you. Otherwise, check out the PIXEL codebase on Github [here](https://github.com/xplip/pixel) to find out how to finetune PIXEL for your task.
### How to use
Here is how to load PIXEL:
```python
from pixel import PIXELConfig, PIXELForPreTraining
config = PIXELConfig.from_pretrained("Team-PIXEL/pixel-base")
model = PIXELForPreTraining.from_pretrained("Team-PIXEL/pixel-base", config=config)
```
## Citing and Contact Author
```bibtex
@article{rust-etal-2022-pixel,
title={Language Modelling with Pixels},
author={Phillip Rust and Jonas F. Lotz and Emanuele Bugliarello and Elizabeth Salesky and Miryam de Lhoneux and Desmond Elliott},
journal={arXiv preprint},
year={2022},
url={https://arxiv.org/abs/2207.06991}
}
```
Github: [@xplip](https://github.com/xplip)
Twitter: [@rust_phillip](https://twitter.com/rust_phillip)
|
nitrosocke/spider-verse-diffusion | nitrosocke | 2023-05-16T09:21:21Z | 511 | 346 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2022-10-07T02:19:16Z | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
---
**Spider-Verse Diffusion**
This is the fine-tuned Stable Diffusion model trained on movie stills from Sony's Into the Spider-Verse.
Use the tokens **_spiderverse style_** in your prompts for the effect.
**If you enjoy my work, please consider supporting me**
[](https://patreon.com/user?u=79196446)
### 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or FLAX/JAX.
```python
#!pip install diffusers transformers scipy torch
from diffusers import StableDiffusionPipeline
import torch
model_id = "nitrosocke/spider-verse-diffusion"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a magical princess with golden hair, spiderverse style"
image = pipe(prompt).images[0]
image.save("./magical_princess.png")
```
**Portraits rendered with the model:**

**Sample images used for training:**

This model was trained using diffusers-based DreamBooth training with prior-preservation loss for 3,000 steps.
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
tifa-benchmark/promptcap-coco-vqa | tifa-benchmark | 2023-12-11T07:23:15Z | 511 | 12 | transformers | [
"transformers",
"pytorch",
"ofa",
"image-to-text",
"visual-question-answering",
"image-captioning",
"en",
"dataset:coco",
"dataset:textvqa",
"dataset:VQAv2",
"dataset:OK-VQA",
"dataset:A-OKVQA",
"arxiv:2211.09699",
"license:openrail",
"region:us"
]
| image-to-text | 2023-01-23T23:39:29Z | ---
license: openrail
inference: false
pipeline_tag: image-to-text
tags:
- image-to-text
- visual-question-answering
- image-captioning
datasets:
- coco
- textvqa
- VQAv2
- OK-VQA
- A-OKVQA
language:
- en
---
This is the repo for the paper [PromptCap: Prompt-Guided Task-Aware Image Captioning](https://arxiv.org/abs/2211.09699). This paper is accepted to ICCV 2023 as [PromptCap: Prompt-Guided Image Captioning for VQA with GPT-3](https://openaccess.thecvf.com/content/ICCV2023/html/Hu_PromptCap_Prompt-Guided_Image_Captioning_for_VQA_with_GPT-3_ICCV_2023_paper.html).
We introduce PromptCap, a captioning model that can be controlled by natural language instruction. The instruction may contain a question that the user is interested in.
For example, "what is the boy putting on?". PromptCap also supports generic caption, using the question "what does the image describe?"
PromptCap can serve as a light-weight visual plug-in (much faster than BLIP-2) for LLM like GPT-3, ChatGPT, and other foundation models like Segment Anything and DINO.
It achieves SOTA performance on COCO captioning (150 CIDEr).
When paired with GPT-3, and conditioned on user question, PromptCap get SOTA performance on knowledge-based VQA tasks (60.4% on OK-VQA and 59.6% on A-OKVQA)
# QuickStart
## Installation
```
pip install promptcap
```
Two pipelines are included. One is for image captioning, and the other is for visual question answering.
## Captioning Pipeline
Please follow the prompt format, which will give the best performance.
Generate a prompt-guided caption by following:
```python
import torch
from promptcap import PromptCap
model = PromptCap("tifa-benchmark/promptcap-coco-vqa") # also support OFA checkpoints. e.g. "OFA-Sys/ofa-large"
if torch.cuda.is_available():
model.cuda()
prompt = "please describe this image according to the given question: what piece of clothing is this boy putting on?"
image = "glove_boy.jpeg"
print(model.caption(prompt, image))
```
To try generic captioning, just use "what does the image describe?"
```python
prompt = "what does the image describe?"
image = "glove_boy.jpeg"
print(model.caption(prompt, image))
```
PromptCap also support taking OCR inputs:
```python
prompt = "please describe this image according to the given question: what year was this taken?"
image = "dvds.jpg"
ocr = "yip AE Mht juor 02/14/2012"
print(model.caption(prompt, image, ocr))
```
## Visual Question Answering Pipeline
Different from typical VQA models, which are doing classification on VQAv2, PromptCap is open-domain and can be paired with arbitrary text-QA models.
Here we provide a pipeline for combining PromptCap with UnifiedQA.
```python
import torch
from promptcap import PromptCap_VQA
# QA model support all UnifiedQA variants. e.g. "allenai/unifiedqa-v2-t5-large-1251000"
vqa_model = PromptCap_VQA(promptcap_model="tifa-benchmark/promptcap-coco-vqa", qa_model="allenai/unifiedqa-t5-base")
if torch.cuda.is_available():
vqa_model.cuda()
question = "what piece of clothing is this boy putting on?"
image = "glove_boy.jpeg"
print(vqa_model.vqa(question, image))
```
Similarly, PromptCap supports OCR inputs
```python
question = "what year was this taken?"
image = "dvds.jpg"
ocr = "yip AE Mht juor 02/14/2012"
print(vqa_model.vqa(question, image, ocr=ocr))
```
Because of the flexibility of Unifiedqa, PromptCap also supports multiple-choice VQA
```python
question = "what piece of clothing is this boy putting on?"
image = "glove_boy.jpeg"
choices = ["gloves", "socks", "shoes", "coats"]
print(vqa_model.vqa_multiple_choice(question, image, choices))
```
## Bibtex
```
@article{hu2022promptcap,
title={PromptCap: Prompt-Guided Task-Aware Image Captioning},
author={Hu, Yushi and Hua, Hang and Yang, Zhengyuan and Shi, Weijia and Smith, Noah A and Luo, Jiebo},
journal={arXiv preprint arXiv:2211.09699},
year={2022}
}
``` |
medical-ner-proj/bert-medical-ner-proj | medical-ner-proj | 2023-05-07T02:46:47Z | 511 | 28 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-05-05T04:33:38Z | ---
license: openrail
---
Medical document NER model created by fine-tuning BERT.
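A minimal usage sketch with the 🤗 Transformers token-classification pipeline is shown below; the exact entity label set is not documented here, so check the model config for the tag names.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="medical-ner-proj/bert-medical-ner-proj",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
text = "John Doe has a history of hypertension, which is well-controlled with medication."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```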
Widget examples:
- **example 1:** "John Doe has a history of hypertension, which is well-controlled with medication. He has no history of allergies or surgeries. He is not currently taking any medication except for his blood pressure medication."
- **example 2:** "On physical examination, John Doe appears acutely ill. He has a temperature of 38.5°C and a heart rate of 105 beats per minute. His blood pressure is 140/90 mmHg, and his oxygen saturation is 90% on room air. His lungs have diminished breath sounds and wheezing. There is no cyanosis, and his heart sounds are normal."
- **example 3:** "Based on Mary Smith's symptoms and physical examination, she is suspected to have suffered a stroke, likely caused by hypertension. Her history of migraines may also be a contributing factor." |
sinequa/vectorizer.vanilla | sinequa | 2024-02-19T09:40:26Z | 511 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"en",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
]
| sentence-similarity | 2023-07-11T07:31:15Z | ---
pipeline_tag: sentence-similarity
tags:
- feature-extraction
- sentence-similarity
language:
- en
---
# Model Card for `vectorizer.vanilla`
This model is a vectorizer developed by Sinequa. It produces an embedding vector given a passage or a query. The passage vectors are stored in our vector index and the query vector is used at query time to look up relevant passages in the index.
Model name: `vectorizer.vanilla`
## Supported Languages
The model was trained and tested in the following languages:
- English
## Scores
| Metric | Value |
|:-----------------------|------:|
| Relevance (Recall@100) | 0.639 |
Note that the relevance score is computed as an average over 14 retrieval datasets (see
[details below](#evaluation-metrics)).
## Inference Times
| GPU | Quantization type | Batch size 1 | Batch size 32 |
|:------------------------------------------|:------------------|---------------:|---------------:|
| NVIDIA A10 | FP16 | 1 ms | 5 ms |
| NVIDIA A10 | FP32 | 2 ms | 20 ms |
| NVIDIA T4 | FP16 | 1 ms | 14 ms |
| NVIDIA T4 | FP32 | 2 ms | 53 ms |
| NVIDIA L4 | FP16 | 1 ms | 5 ms |
| NVIDIA L4 | FP32 | 3 ms | 25 ms |
## GPU Memory Usage
| Quantization type | Memory |
|:-------------------------------------------------|-----------:|
| FP16 | 300 MiB |
| FP32 | 500 MiB |
Note that GPU memory usage only includes how much GPU memory the actual model consumes on an NVIDIA T4 GPU with a batch
size of 32. It does not include the fixed amount of memory that is consumed by the ONNX Runtime upon initialization, which
can be around 0.5 to 1 GiB depending on the GPU used.
## Requirements
- Minimal Sinequa version: 11.10.0
- Minimal Sinequa version for using FP16 models and GPUs with CUDA compute capability of 8.9+ (like NVIDIA L4): 11.11.0
- [Cuda compute capability](https://developer.nvidia.com/cuda-gpus): above 5.0 (above 6.0 for FP16 use)
## Model Details
### Overview
- Number of parameters: 23 million
- Base language model: [English MiniLM-L6-H384](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased)
- Insensitive to casing and accents
- Output dimensions: 256 (reduced with an additional dense layer)
- Training procedure: query-passage-negative triplets for datasets with mined hard negatives, and query-passage pairs for the rest. The number of negatives is augmented with an in-batch negatives strategy.
### Training Data
The model has been trained using all datasets that are cited in the [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) model.
### Evaluation Metrics
To determine the relevance score, we averaged the results that we obtained when evaluating on the datasets of the
[BEIR benchmark](https://github.com/beir-cellar/beir). Note that all these datasets are in English.
| Dataset | Recall@100 |
|:------------------|-----------:|
| Average | 0.639 |
| | |
| Arguana | 0.969 |
| CLIMATE-FEVER | 0.509 |
| DBPedia Entity | 0.409 |
| FEVER | 0.839 |
| FiQA-2018 | 0.702 |
| HotpotQA | 0.609 |
| MS MARCO | 0.849 |
| NFCorpus | 0.315 |
| NQ | 0.786 |
| Quora | 0.995 |
| SCIDOCS | 0.497 |
| SciFact | 0.911 |
| TREC-COVID | 0.129 |
| Webis-Touche-2020 | 0.427 |
|
Leogrin/eleuther-pythia1.4b-hh-sft | Leogrin | 2023-09-01T16:39:00Z | 511 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"causal-lm",
"pythia",
"en",
"dataset:Anthropic/hh-rlhf",
"arxiv:2101.00027",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-07-27T14:21:23Z | ---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- Anthropic/hh-rlhf
---
# Infos
Pythia-1.4b supervised fine-tuned on the Anthropic hh-rlhf dataset for 1 epoch.
[wandb log](https://wandb.ai/pythia_dpo/Pythia_DPO_new/runs/xm0pxfej)
See [Pythia-1.4b](https://huggingface.co/EleutherAI/pythia-1.4b) for model details [(paper)](https://arxiv.org/abs/2101.00027).
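A minimal loading sketch with 🤗 Transformers is shown below. This is standard GPT-NeoX usage rather than anything specific to this checkpoint; the `Human:`/`Assistant:` prompt follows the hh-rlhf convention and is an assumption, the generation settings are illustrative, and `device_map="auto"` requires `accelerate`.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Leogrin/eleuther-pythia1.4b-hh-sft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# hh-rlhf style dialogue prompt -- an assumption, not documented by the author
prompt = "Human: How do I brew a good cup of coffee?\n\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```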
# Raw benchmark results:
Results for the base model are taken from the [Pythia paper](https://arxiv.org/abs/2101.00027).
## Zero shot
| Task | 1.4B_base | 1.4B_sft |
|------------------|--------------:|--------------:|
| Lambada (OpenAI) | 0.616 ± 0.007 | 0.5977 ± 0.0068 |
| PIQA | 0.711 ± 0.011 | 0.7133 ± 0.0106 |
| WinoGrande | 0.573 ± 0.014 | 0.5793 ± 0.0139 |
| WSC | 0.365 ± 0.047 | 0.3654 ± 0.0474 |
| ARC - Easy | 0.606 ± 0.010 | 0.6098 ± 0.0100 |
| ARC - Challenge | 0.260 ± 0.013 | 0.2696 ± 0.0130 |
| SciQ | 0.865 ± 0.011 | 0.8540 ± 0.0112 |
| LogiQA | 0.210 ± 0.016 | N/A |
## Five shot
| Task | 1.4B_base | 1.4B_sft |
|------------------|----------------:|----------------:|
| Lambada (OpenAI) | 0.578 ± 0.007 | 0.5201 ± 0.007 |
| PIQA | 0.705 ± 0.011 | 0.7176 ± 0.0105|
| WinoGrande | 0.580 ± 0.014 | 0.5793 ± 0.0139|
| WSC | 0.365 ± 0.047 | 0.5288 ± 0.0492|
| ARC - Easy | 0.643 ± 0.010 | 0.6376 ± 0.0099|
| ARC - Challenge | 0.290 ± 0.013 | 0.2935 ± 0.0133|
| SciQ | 0.92 ± 0.009 | 0.9180 ± 0.0087|
| LogiQA | 0.240 ± 0.017 | N/A |
|
pklumpp/Wav2Vec2_CommonPhone | pklumpp | 2024-06-28T12:57:41Z | 511 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"Phone Recognition",
"International Phonetic Alphabet",
"CTC",
"multilingual",
"automatic-speech-recognition",
"en",
"de",
"fr",
"es",
"ru",
"it",
"license:cc0-1.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-11-04T14:40:29Z | ---
license: cc0-1.0
language:
- en
- de
- fr
- es
- ru
- it
pipeline_tag: automatic-speech-recognition
tags:
- Phone Recognition
- International Phonetic Alphabet
- CTC
- multilingual
---
# Model Card for Wav2Vec2 Large with Common Phone
This is a multilingual phone recognition model optimized with the [Common Phone](https://zenodo.org/records/5846137) dataset.
It was created in the scope of the PhD thesis [Phonetic Transfer Learning from Healthy References for the Analysis of Pathological Speech](https://open.fau.de/items/d0c6b800-e217-4049-ab1f-a746fc9b3966) by [Philipp Klumpp](https://scholar.google.com/citations?user=IWvgno4AAAAJ) to analyze pathological speech signals.
Find the Source Code to use this model on [**GITHUB**](https://github.com/PKlumpp/phd_model).
To cite this work, please use the following BibTex snippet:
```
@phdthesis{klumpp2024phdthesis,
author = "Philipp Klumpp",
title = "Phonetic Transfer Learning from Healthy References for the Analysis of Pathological Speech",
school = "Friedrich-Alexander-Universit{\"a}t Erlangen-N{\"u}rnberg",
address = "Erlangen, Germany",
year = 2024,
month = may
}
```
## Model Details
Wav2Vec2 model with a linear projection onto the CTC blank token plus 101 phone symbols from the International Phonetic Alphabet (IPA).
The model uses 16 kHz audio to predict the most probable sequence of uttered IPA phones.
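The sketch below assumes the checkpoint can be loaded through the standard `Wav2Vec2ForCTC`/`Wav2Vec2Processor` interface, which may not hold for this repository; the linked GitHub repository contains the author's supported inference code.
```python
# Hedged sketch: assumes a standard Wav2Vec2 CTC interface is available for this checkpoint.
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "pklumpp/Wav2Vec2_CommonPhone"
processor = Wav2Vec2Processor.from_pretrained(model_id)   # assumption: a processor is bundled
model = Wav2Vec2ForCTC.from_pretrained(model_id)

waveform, sr = torchaudio.load("speech.wav")               # any mono or stereo recording
waveform = torchaudio.functional.resample(waveform, sr, 16_000).mean(dim=0)

inputs = processor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits                        # (1, time, blank + 101 IPA phones)

ids = torch.argmax(logits, dim=-1)                         # greedy CTC decoding
print(processor.batch_decode(ids))                         # predicted IPA phone sequence
```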
### Model Description
This model was created to analyze pathological speech signals. It was optimized with Common Phone, a multilingual corpus for robust acoustic modelling that comprises more than 11,000 speakers carefully selected from Mozilla's Common Voice dataset.
Results in terms of phone error rate (PER) in percent:
| Language | Test PER |
|:---:|:---:|
| English | 11.0 |
| French | 9.9 |
| German | 9.8 |
| Italian | 9.1 |
| Russian | 6.6 |
| Spanish | 8.8 |
| **Average** | **9.2** |
- **Developed by:** [Philipp Klumpp](https://scholar.google.com/citations?user=IWvgno4AAAAJ)
- **Model type:** [Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)
- **Languages:** Multilingual (English, French, German, Italian, Russian, Spanish)
- **License:** [Creative Commons Zero 1.0 (CC0)](https://creativecommons.org/publicdomain/zero/1.0/deed.en)
- **Finetuned from model:** [Wav2Vec2 XLSR-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
- **Finetuning dataset:** [Common Phone](https://zenodo.org/records/5846137) as published in [**Common Phone: A Multilingual Dataset for Robust Acoustic Modelling**](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.81.pdf)
### Model Sources
- **Repository:** [GitHub](https://github.com/PKlumpp/phd_model)
- **Paper:** The final print of the thesis will be linked here.
## Contact
[Philipp Klumpp](mailto:[email protected])
|
mradermacher/Harmonia-20B-GGUF | mradermacher | 2024-05-06T06:03:53Z | 511 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:athirdpath/Harmonia-20B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-22T14:55:18Z | ---
base_model: athirdpath/Harmonia-20B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
static quants of https://huggingface.co/athirdpath/Harmonia-20B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Harmonia-20B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
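For a programmatic route, a minimal sketch with the `llama-cpp-python` bindings (installed separately via `pip install llama-cpp-python`) might look like this; the file name refers to one of the quants in the table below and the generation settings are illustrative.
```python
from llama_cpp import Llama

# Point model_path at a quant downloaded from the table below, e.g. the recommended Q4_K_M file
llm = Llama(model_path="Harmonia-20B.Q4_K_M.gguf", n_ctx=4096)

out = llm("Write a short story about a lighthouse keeper.", max_tokens=256, temperature=0.8)
print(out["choices"][0]["text"])
```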
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Harmonia-20B-GGUF/resolve/main/Harmonia-20B.Q2_K.gguf) | Q2_K | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/Harmonia-20B-GGUF/resolve/main/Harmonia-20B.IQ3_XS.gguf) | IQ3_XS | 8.5 | |
| [GGUF](https://huggingface.co/mradermacher/Harmonia-20B-GGUF/resolve/main/Harmonia-20B.IQ3_S.gguf) | IQ3_S | 9.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Harmonia-20B-GGUF/resolve/main/Harmonia-20B.Q3_K_S.gguf) | Q3_K_S | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/Harmonia-20B-GGUF/resolve/main/Harmonia-20B.IQ3_M.gguf) | IQ3_M | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/Harmonia-20B-GGUF/resolve/main/Harmonia-20B.Q3_K_M.gguf) | Q3_K_M | 10.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Harmonia-20B-GGUF/resolve/main/Harmonia-20B.Q3_K_L.gguf) | Q3_K_L | 10.9 | |
| [GGUF](https://huggingface.co/mradermacher/Harmonia-20B-GGUF/resolve/main/Harmonia-20B.IQ4_XS.gguf) | IQ4_XS | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/Harmonia-20B-GGUF/resolve/main/Harmonia-20B.Q4_K_S.gguf) | Q4_K_S | 11.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Harmonia-20B-GGUF/resolve/main/Harmonia-20B.Q4_K_M.gguf) | Q4_K_M | 12.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Harmonia-20B-GGUF/resolve/main/Harmonia-20B.Q5_K_S.gguf) | Q5_K_S | 14.1 | |
| [GGUF](https://huggingface.co/mradermacher/Harmonia-20B-GGUF/resolve/main/Harmonia-20B.Q5_K_M.gguf) | Q5_K_M | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/Harmonia-20B-GGUF/resolve/main/Harmonia-20B.Q6_K.gguf) | Q6_K | 16.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Harmonia-20B-GGUF/resolve/main/Harmonia-20B.Q8_0.gguf) | Q8_0 | 21.5 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
cais/Zephyr_RMU | cais | 2024-04-24T16:59:46Z | 511 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:cais/wmdp",
"dataset:cais/wmdp-corpora",
"arxiv:2403.03218",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-16T19:24:11Z | ---
license: mit
language:
- en
datasets:
- cais/wmdp
- cais/wmdp-corpora
pipeline_tag: text-generation
arxiv:
- arxiv.org/abs/2403.03218
library_name: transformers
---
# Zephyr RMU
Zephyr 7B model with hazardous knowledge about biosecurity and cybersecurity "unlearned" using Representation Misdirection for Unlearning (RMU). For more details, please check [our paper](https://arxiv.org/abs/2403.03218).
## Model sources
- Base model: [zephyr-7B-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
- Repository: [https://github.com/centerforaisafety/wmdp](https://github.com/centerforaisafety/wmdp)
- Website: [https://www.wmdp.ai/](https://www.wmdp.ai/)
- Corpora used for unlearning: [https://huggingface.co/datasets/cais/wmdp-corpora](https://huggingface.co/datasets/cais/wmdp-corpora)
## Performance
Zephyr RMU has been evaluated on [WMDP](https://huggingface.co/datasets/cais/wmdp), [MMLU](https://huggingface.co/datasets/cais/mmlu) and [MT-Bench](https://huggingface.co/spaces/lmsys/mt-bench). Higher scores on MMLU and MT-Bench and lower accuracy on WMDP are preferred, since the goal is to remove hazardous knowledge while preserving general capabilities.
| | WMDP-Bio | WMDP-Cyber | MMLU | MT-Bench |
|------------|:---------:|:----------:|:------:|:--------:|
| Zephyr 7B | 63.7 | 44.0 | 58.1 | 7.33 |
| Zephyr RMU | 31.2 | 28.2 | 57.1 | 7.10 |
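A minimal usage sketch with 🤗 Transformers is given below. It assumes the tokenizer ships Zephyr's chat template (as the base model does); the example question and generation settings are illustrative, and `device_map="auto"` requires `accelerate`.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cais/Zephyr_RMU"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain the difference between TCP and UDP."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```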
## Citation
If you find this useful in your research, please consider citing our paper:
```
@misc{li2024wmdp,
title={The WMDP Benchmark: Measuring and Reducing Malicious Use With Unlearning},
author={Nathaniel Li and Alexander Pan and Anjali Gopal and Summer Yue and Daniel Berrios and Alice Gatti and Justin D. Li and Ann-Kathrin Dombrowski and Shashwat Goel and Long Phan and Gabriel Mukobi and Nathan Helm-Burger and Rassin Lababidi and Lennart Justen and Andrew B. Liu and Michael Chen and Isabelle Barrass and Oliver Zhang and Xiaoyuan Zhu and Rishub Tamirisa and Bhrugu Bharathi and Adam Khoja and Zhenqi Zhao and Ariel Herbert-Voss and Cort B. Breuer and Sam Marks and Oam Patel and Andy Zou and Mantas Mazeika and Zifan Wang and Palash Oswal and Weiran Liu and Adam A. Hunt and Justin Tienken-Harder and Kevin Y. Shih and Kemper Talley and John Guan and Russell Kaplan and Ian Steneker and David Campbell and Brad Jokubaitis and Alex Levinson and Jean Wang and William Qian and Kallol Krishna Karmakar and Steven Basart and Stephen Fitz and Mindy Levine and Ponnurangam Kumaraguru and Uday Tupakula and Vijay Varadharajan and Yan Shoshitaishvili and Jimmy Ba and Kevin M. Esvelt and Alexandr Wang and Dan Hendrycks},
year={2024},
eprint={2403.03218},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
``` |
bartowski/Meta-Llama-3-8B-Instruct-GGUF-special-tokens | bartowski | 2024-04-19T00:04:27Z | 511 | 12 | null | [
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"text-generation",
"en",
"license:other",
"region:us"
]
| text-generation | 2024-04-18T17:41:09Z | ---
language:
- en
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
license: other
license_name: llama3
license_link: LICENSE
extra_gated_prompt: >-
### META LLAMA 3 COMMUNITY LICENSE AGREEMENT
Meta Llama 3 Version Release Date: April 18, 2024
"Agreement" means the terms and conditions for use, reproduction, distribution and modification of the
Llama Materials set forth herein.
"Documentation" means the specifications, manuals and documentation accompanying Meta Llama 3
distributed by Meta at https://llama.meta.com/get-started/.
"Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into
this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or
regulations to provide legal consent and that has legal authority to bind your employer or such other
person or entity if you are entering in this Agreement on their behalf.
"Meta Llama 3" means the foundational large language models and software and algorithms, including
machine-learning model code, trained model weights, inference-enabling code, training-enabling code,
fine-tuning enabling code and other elements of the foregoing distributed by Meta at
https://llama.meta.com/llama-downloads.
"Llama Materials" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation (and any
portion thereof) made available under this Agreement.
"Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your
principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located
outside of the EEA or Switzerland).
1. License Rights and Redistribution.
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free
limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama
Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the
Llama Materials.
b. Redistribution and Use.
i. If you distribute or make available the Llama Materials (or any derivative works
thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide
a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Meta
Llama 3” on a related website, user interface, blogpost, about page, or product documentation. If you
use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is
distributed or made available, you shall also include “Llama 3” at the beginning of any such AI model
name.
ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part
of an integrated end user product, then Section 2 of this Agreement will not apply to you.
iii. You must retain in all copies of the Llama Materials that you distribute the following
attribution notice within a “Notice” text file distributed as a part of such copies: “Meta Llama 3 is
licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms, Inc. All Rights
Reserved.”
iv. Your use of the Llama Materials must comply with applicable laws and regulations
(including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama
Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by
reference into this Agreement.
v. You will not use the Llama Materials or any output or results of the Llama Materials to
improve any other large language model (excluding Meta Llama 3 or derivative works thereof).
2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users
of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700
million monthly active users in the preceding calendar month, you must request a license from Meta,
which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the
rights under this Agreement unless or until Meta otherwise expressly grants you such rights.
3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY
OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF
ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED,
INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR
DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND
ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND
RESULTS.
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING
OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,
INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED
OF THE POSSIBILITY OF ANY OF THE FOREGOING.
5. Intellectual Property.
a. No trademark licenses are granted under this Agreement, and in connection with the Llama
Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other
or any of its affiliates, except as required for reasonable and customary use in describing and
redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to
use “Llama 3” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will
comply with Meta’s brand guidelines (currently accessible at
https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use
of the Mark will inure to the benefit of Meta.
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with
respect to any derivative works and modifications of the Llama Materials that are made by you, as
between you and Meta, you are and will be the owner of such derivative works and modifications.
c. If you institute litigation or other proceedings against Meta or any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or
results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other
rights owned or licensable by you, then any licenses granted to you under this Agreement shall
terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold
harmless Meta from and against any claim by any third party arising out of or related to your use or
distribution of the Llama Materials.
6. Term and Termination. The term of this Agreement will commence upon your acceptance of this
Agreement or access to the Llama Materials and will continue in full force and effect until terminated in
accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in
breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete
and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this
Agreement.
7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of
the State of California without regard to choice of law principles, and the UN Convention on Contracts
for the International Sale of Goods does not apply to this Agreement. The courts of California shall have
exclusive jurisdiction of any dispute arising out of this Agreement.
### Meta Llama 3 Acceptable Use Policy
Meta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you
access or use Meta Llama 3, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of
this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)
#### Prohibited Uses
We want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow
others to use, Meta Llama 3 to:
1. Violate the law or others’ rights, including to:
1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
1. Violence or terrorism
2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
3. Human trafficking, exploitation, and sexual violence
4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
5. Sexual solicitation
6. Any other criminal activity
2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws
6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials
7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following:
1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State
2. Guns and illegal weapons (including weapon development)
3. Illegal drugs and regulated/controlled substances
4. Operation of critical infrastructure, transportation technologies, or heavy machinery
5. Self-harm or harm to others, including suicide, cutting, and eating disorders
6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following:
1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
3. Generating, promoting, or further distributing spam
4. Impersonating another individual without consent, authorization, or legal right
5. Representing that the use of Meta Llama 3 or outputs are human-generated
6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
4. Fail to appropriately disclose to end users any known dangers of your AI system
Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation
of this Policy through one of the following means:
* Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)
* Reporting risky content generated by the model:
developers.facebook.com/llama_output_feedback
* Reporting bugs and security concerns: facebook.com/whitehat/info
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
geo: ip_location
By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox
extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
quantized_by: bartowski
---
## This model may exhibit strange behaviour because several of its tokens are labelled as "special", so most tools will not detect them properly. This is most noticeable with the stop token.
A new upload is on the way and will be replacing this one.
End token set to not-special uploaded here: https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF
## Llamacpp Quantizations of Meta-Llama-3-8B-Instruct
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> fork from pcuenca <a href="https://github.com/pcuenca/llama.cpp/tree/llama3-conversion">llama3-conversion</a> for quantization.
Original model: https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct
All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
## Prompt format
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Meta-Llama-3-8B-Instruct-Q8_0.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF-special-tokens/blob/main/Meta-Llama-3-8B-Instruct-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. |
| [Meta-Llama-3-8B-Instruct-Q6_K.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF-special-tokens/blob/main/Meta-Llama-3-8B-Instruct-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. |
| [Meta-Llama-3-8B-Instruct-Q5_K_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF-special-tokens/blob/main/Meta-Llama-3-8B-Instruct-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. |
| [Meta-Llama-3-8B-Instruct-Q5_K_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF-special-tokens/blob/main/Meta-Llama-3-8B-Instruct-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. |
| [Meta-Llama-3-8B-Instruct-Q4_K_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF-special-tokens/blob/main/Meta-Llama-3-8B-Instruct-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Meta-Llama-3-8B-Instruct-Q4_K_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF-special-tokens/blob/main/Meta-Llama-3-8B-Instruct-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. |
| [Meta-Llama-3-8B-Instruct-IQ4_NL.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF-special-tokens/blob/main/Meta-Llama-3-8B-Instruct-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [Meta-Llama-3-8B-Instruct-IQ4_XS.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF-special-tokens/blob/main/Meta-Llama-3-8B-Instruct-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Meta-Llama-3-8B-Instruct-Q3_K_L.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF-special-tokens/blob/main/Meta-Llama-3-8B-Instruct-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. |
| [Meta-Llama-3-8B-Instruct-Q3_K_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF-special-tokens/blob/main/Meta-Llama-3-8B-Instruct-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. |
| [Meta-Llama-3-8B-Instruct-IQ3_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF-special-tokens/blob/main/Meta-Llama-3-8B-Instruct-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Meta-Llama-3-8B-Instruct-IQ3_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF-special-tokens/blob/main/Meta-Llama-3-8B-Instruct-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [Meta-Llama-3-8B-Instruct-Q3_K_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF-special-tokens/blob/main/Meta-Llama-3-8B-Instruct-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. |
| [Meta-Llama-3-8B-Instruct-IQ3_XS.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF-special-tokens/blob/main/Meta-Llama-3-8B-Instruct-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Meta-Llama-3-8B-Instruct-IQ3_XXS.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF-special-tokens/blob/main/Meta-Llama-3-8B-Instruct-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Meta-Llama-3-8B-Instruct-Q2_K.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF-special-tokens/blob/main/Meta-Llama-3-8B-Instruct-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. |
| [Meta-Llama-3-8B-Instruct-IQ2_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF-special-tokens/blob/main/Meta-Llama-3-8B-Instruct-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Meta-Llama-3-8B-Instruct-IQ2_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF-special-tokens/blob/main/Meta-Llama-3-8B-Instruct-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. |
| [Meta-Llama-3-8B-Instruct-IQ2_XS.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF-special-tokens/blob/main/Meta-Llama-3-8B-Instruct-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. |
| [Meta-Llama-3-8B-Instruct-IQ2_XXS.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF-special-tokens/blob/main/Meta-Llama-3-8B-Instruct-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. |
| [Meta-Llama-3-8B-Instruct-IQ1_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF-special-tokens/blob/main/Meta-Llama-3-8B-Instruct-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. |
| [Meta-Llama-3-8B-Instruct-IQ1_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF-special-tokens/blob/main/Meta-Llama-3-8B-Instruct-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. |
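One way to fetch a single quant rather than cloning the whole branch is `hf_hub_download` from the `huggingface_hub` package; the file name below is one entry from the table above, so substitute whichever quant fits your hardware.
```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="bartowski/Meta-Llama-3-8B-Instruct-GGUF-special-tokens",
    filename="Meta-Llama-3-8B-Instruct-Q4_K_M.gguf",
    local_dir=".",
)
print(path)  # local path of the downloaded GGUF file
```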
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD, so if you have an AMD card double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
Nhoodie/Meta-Llama-3-8b-Lexi-Uninstruct-function-calling-json-mode-Task-Arithmetic-v0.1 | Nhoodie | 2024-04-26T16:54:59Z | 511 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"hiieu/Meta-Llama-3-8B-Instruct-function-calling-json-mode",
"Orenguteng/Lexi-Llama-3-8B-Uncensored",
"NousResearch/Meta-Llama-3-8B",
"NousResearch/Meta-Llama-3-8B-Instruct",
"conversational",
"base_model:hiieu/Meta-Llama-3-8B-Instruct-function-calling-json-mode",
"base_model:Orenguteng/Lexi-Llama-3-8B-Uncensored",
"base_model:NousResearch/Meta-Llama-3-8B",
"base_model:NousResearch/Meta-Llama-3-8B-Instruct",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-25T07:15:37Z | ---
tags:
- merge
- mergekit
- lazymergekit
- hiieu/Meta-Llama-3-8B-Instruct-function-calling-json-mode
- Orenguteng/Lexi-Llama-3-8B-Uncensored
- NousResearch/Meta-Llama-3-8B
- NousResearch/Meta-Llama-3-8B-Instruct
base_model:
- hiieu/Meta-Llama-3-8B-Instruct-function-calling-json-mode
- Orenguteng/Lexi-Llama-3-8B-Uncensored
- NousResearch/Meta-Llama-3-8B
- NousResearch/Meta-Llama-3-8B-Instruct
---
# Meta-Llama-3-8b-Lexi-Uninstruct-function-calling-json-mode-Task-Arithmetic-v0.1
Meta-Llama-3-8b-Lexi-Uninstruct-function-calling-json-mode-Task-Arithmetic-v0.1 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [hiieu/Meta-Llama-3-8B-Instruct-function-calling-json-mode](https://huggingface.co/hiieu/Meta-Llama-3-8B-Instruct-function-calling-json-mode)
* [Orenguteng/Lexi-Llama-3-8B-Uncensored](https://huggingface.co/Orenguteng/Lexi-Llama-3-8B-Uncensored)
* [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B)
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: hiieu/Meta-Llama-3-8B-Instruct-function-calling-json-mode
parameters:
weight: 1
layer_range: [0, 32]
- model: Orenguteng/Lexi-Llama-3-8B-Uncensored
parameters:
weight: 1
layer_range: [0, 32]
- model: NousResearch/Meta-Llama-3-8B
parameters:
weight: 0.3
layer_range: [0, 32]
- model: NousResearch/Meta-Llama-3-8B-Instruct
parameters:
weight: 0.7
layer_range: [0, 32]
merge_method: task_arithmetic
base_model: NousResearch/Meta-Llama-3-8B-Instruct
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Nhoodie/Meta-Llama-3-8b-Lexi-Uninstruct-function-calling-json-mode-Task-Arithmetic-v0.1"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
mradermacher/LexiLumin-7B-GGUF | mradermacher | 2024-05-05T14:54:22Z | 511 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"en",
"base_model:Ppoyaa/LexiLumin-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-02T04:11:40Z | ---
base_model: Ppoyaa/LexiLumin-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Ppoyaa/LexiLumin-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LexiLumin-7B-GGUF/resolve/main/LexiLumin-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/LexiLumin-7B-GGUF/resolve/main/LexiLumin-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/LexiLumin-7B-GGUF/resolve/main/LexiLumin-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/LexiLumin-7B-GGUF/resolve/main/LexiLumin-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/LexiLumin-7B-GGUF/resolve/main/LexiLumin-7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/LexiLumin-7B-GGUF/resolve/main/LexiLumin-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LexiLumin-7B-GGUF/resolve/main/LexiLumin-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/LexiLumin-7B-GGUF/resolve/main/LexiLumin-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/LexiLumin-7B-GGUF/resolve/main/LexiLumin-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LexiLumin-7B-GGUF/resolve/main/LexiLumin-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LexiLumin-7B-GGUF/resolve/main/LexiLumin-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/LexiLumin-7B-GGUF/resolve/main/LexiLumin-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/LexiLumin-7B-GGUF/resolve/main/LexiLumin-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/LexiLumin-7B-GGUF/resolve/main/LexiLumin-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/LexiLumin-7B-GGUF/resolve/main/LexiLumin-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
duyntnet/Saul-Instruct-v1-imatrix-GGUF | duyntnet | 2024-05-06T06:13:29Z | 511 | 1 | transformers | [
"transformers",
"gguf",
"imatrix",
"Saul-Instruct-v1",
"text-generation",
"en",
"license:other",
"region:us"
]
| text-generation | 2024-05-05T09:37:36Z | ---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- Saul-Instruct-v1
---
Quantizations of https://huggingface.co/Equall/Saul-Instruct-v1
# From original readme
## Uses
You can use it for legal use cases that involve generation.
Here's how you can run the model using the pipeline() function from 🤗 Transformers:
```python
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="Equall/Saul-Instruct-v1", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer’s chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{"role": "user", "content": "[YOUR QUERY GOES HERE]"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=False)
print(outputs[0]["generated_text"])
``` |
RichardErkhov/NousResearch_-_Nous-Hermes-llama-2-7b-gguf | RichardErkhov | 2024-05-21T12:23:32Z | 511 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-21T09:31:30Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Nous-Hermes-llama-2-7b - GGUF
- Model creator: https://huggingface.co/NousResearch/
- Original model: https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Nous-Hermes-llama-2-7b.Q2_K.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-llama-2-7b-gguf/blob/main/Nous-Hermes-llama-2-7b.Q2_K.gguf) | Q2_K | 2.36GB |
| [Nous-Hermes-llama-2-7b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-llama-2-7b-gguf/blob/main/Nous-Hermes-llama-2-7b.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [Nous-Hermes-llama-2-7b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-llama-2-7b-gguf/blob/main/Nous-Hermes-llama-2-7b.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [Nous-Hermes-llama-2-7b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-llama-2-7b-gguf/blob/main/Nous-Hermes-llama-2-7b.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [Nous-Hermes-llama-2-7b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-llama-2-7b-gguf/blob/main/Nous-Hermes-llama-2-7b.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [Nous-Hermes-llama-2-7b.Q3_K.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-llama-2-7b-gguf/blob/main/Nous-Hermes-llama-2-7b.Q3_K.gguf) | Q3_K | 3.07GB |
| [Nous-Hermes-llama-2-7b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-llama-2-7b-gguf/blob/main/Nous-Hermes-llama-2-7b.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [Nous-Hermes-llama-2-7b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-llama-2-7b-gguf/blob/main/Nous-Hermes-llama-2-7b.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [Nous-Hermes-llama-2-7b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-llama-2-7b-gguf/blob/main/Nous-Hermes-llama-2-7b.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [Nous-Hermes-llama-2-7b.Q4_0.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-llama-2-7b-gguf/blob/main/Nous-Hermes-llama-2-7b.Q4_0.gguf) | Q4_0 | 3.56GB |
| [Nous-Hermes-llama-2-7b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-llama-2-7b-gguf/blob/main/Nous-Hermes-llama-2-7b.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [Nous-Hermes-llama-2-7b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-llama-2-7b-gguf/blob/main/Nous-Hermes-llama-2-7b.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [Nous-Hermes-llama-2-7b.Q4_K.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-llama-2-7b-gguf/blob/main/Nous-Hermes-llama-2-7b.Q4_K.gguf) | Q4_K | 3.8GB |
| [Nous-Hermes-llama-2-7b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-llama-2-7b-gguf/blob/main/Nous-Hermes-llama-2-7b.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [Nous-Hermes-llama-2-7b.Q4_1.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-llama-2-7b-gguf/blob/main/Nous-Hermes-llama-2-7b.Q4_1.gguf) | Q4_1 | 3.95GB |
| [Nous-Hermes-llama-2-7b.Q5_0.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-llama-2-7b-gguf/blob/main/Nous-Hermes-llama-2-7b.Q5_0.gguf) | Q5_0 | 4.33GB |
| [Nous-Hermes-llama-2-7b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-llama-2-7b-gguf/blob/main/Nous-Hermes-llama-2-7b.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [Nous-Hermes-llama-2-7b.Q5_K.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-llama-2-7b-gguf/blob/main/Nous-Hermes-llama-2-7b.Q5_K.gguf) | Q5_K | 4.45GB |
| [Nous-Hermes-llama-2-7b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-llama-2-7b-gguf/blob/main/Nous-Hermes-llama-2-7b.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [Nous-Hermes-llama-2-7b.Q5_1.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-llama-2-7b-gguf/blob/main/Nous-Hermes-llama-2-7b.Q5_1.gguf) | Q5_1 | 4.72GB |
| [Nous-Hermes-llama-2-7b.Q6_K.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-llama-2-7b-gguf/blob/main/Nous-Hermes-llama-2-7b.Q6_K.gguf) | Q6_K | 5.15GB |
| [Nous-Hermes-llama-2-7b.Q8_0.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-llama-2-7b-gguf/blob/main/Nous-Hermes-llama-2-7b.Q8_0.gguf) | Q8_0 | 6.67GB |
Original model description:
---
language:
- en
tags:
- llama-2
- self-instruct
- distillation
- synthetic instruction
license:
- mit
---
# Model Card: Nous-Hermes-Llama2-7b
Compute provided by our project sponsor Redmond AI, thank you! Follow RedmondAI on Twitter @RedmondAI.
## Model Description
Nous-Hermes-Llama2-7b is a state-of-the-art language model fine-tuned on over 300,000 instructions. This model was fine-tuned by Nous Research, with Teknium leading the fine tuning process and dataset curation, Redmond AI sponsoring the compute, and several other contributors.
This Hermes model uses the exact same dataset as Hermes on Llama-1. This is to ensure consistency between the old Hermes and new, for anyone who wanted to keep Hermes as similar to the old one, just more capable.
This model stands out for its long responses, lower hallucination rate, and absence of OpenAI censorship mechanisms. The fine-tuning process was performed with a 4096 sequence length on an 8x A100 80GB DGX machine.
## Model Training
The model was trained almost entirely on synthetic GPT-4 outputs. Curating high quality GPT-4 datasets enables incredibly high quality in knowledge, task completion, and style.
This includes data from diverse sources such as GPTeacher, the general, roleplay v1&2, code instruct datasets, Nous Instruct & PDACTL (unpublished), and several others, detailed further below
## Collaborators
The model fine-tuning and the datasets were a collaboration of efforts and resources between Teknium, Karan4D, Emozilla, Huemin Art and Redmond AI.
Special mention goes to @winglian for assisting in some of the training issues.
Huge shoutout and acknowledgement is deserved for all the dataset creators who generously share their datasets openly.
Among the contributors of datasets:
- GPTeacher was made available by Teknium
- Wizard LM by nlpxucan
- Nous Research Instruct Dataset was provided by Karan4D and HueminArt.
- GPT4-LLM and Unnatural Instructions were provided by Microsoft
- Airoboros dataset by jondurbin
- Camel-AI's domain expert datasets are from Camel-AI
- CodeAlpaca dataset by Sahil 2801.
If anyone was left out, please open a thread in the community tab.
## Prompt Format
The model follows the Alpaca prompt format:
```
### Instruction:
<prompt>
### Response:
<leave a newline blank for model to respond>
```
or
```
### Instruction:
<prompt>
### Input:
<additional context>
### Response:
<leave a newline blank for model to respond>
```
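As a small illustration (not part of the original card), the two templates above can be assembled with a helper like this:
```python
from typing import Optional

def alpaca_prompt(instruction: str, context: Optional[str] = None) -> str:
    """Assemble the Alpaca-style prompt this model was trained on."""
    if context:
        return (
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{context}\n\n"
            "### Response:\n"
        )
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

print(alpaca_prompt("Summarize the plot of Hamlet in two sentences."))
```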
## Benchmark Results

GPT4All:
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.4735|± |0.0146|
| | |acc_norm|0.5017|± |0.0146|
|arc_easy | 0|acc |0.7946|± |0.0083|
| | |acc_norm|0.7605|± |0.0088|
|boolq | 1|acc |0.8000|± |0.0070|
|hellaswag | 0|acc |0.5924|± |0.0049|
| | |acc_norm|0.7774|± |0.0042|
|openbookqa | 0|acc |0.3600|± |0.0215|
| | |acc_norm|0.4660|± |0.0223|
|piqa | 0|acc |0.7889|± |0.0095|
| | |acc_norm|0.7976|± |0.0094|
|winogrande | 0|acc |0.6993|± |0.0129|
Average: 0.686
```
BigBench:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.5579|± |0.0361|
|bigbench_date_understanding | 0|multiple_choice_grade|0.6233|± |0.0253|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3062|± |0.0288|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.2006|± |0.0212|
| | |exact_str_match |0.0000|± |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.2540|± |0.0195|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.1657|± |0.0141|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.4067|± |0.0284|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.2780|± |0.0201|
|bigbench_navigate | 0|multiple_choice_grade|0.5000|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.4405|± |0.0111|
|bigbench_ruin_names | 0|multiple_choice_grade|0.2701|± |0.0210|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2034|± |0.0127|
|bigbench_snarks | 0|multiple_choice_grade|0.5028|± |0.0373|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.6136|± |0.0155|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.2720|± |0.0141|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.1944|± |0.0112|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1497|± |0.0085|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.4067|± |0.0284|
Average: 0.3525
```
AGIEval
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2520|± |0.0273|
| | |acc_norm|0.2402|± |0.0269|
|agieval_logiqa_en | 0|acc |0.2796|± |0.0176|
| | |acc_norm|0.3241|± |0.0184|
|agieval_lsat_ar | 0|acc |0.2478|± |0.0285|
| | |acc_norm|0.2348|± |0.0280|
|agieval_lsat_lr | 0|acc |0.2843|± |0.0200|
| | |acc_norm|0.2765|± |0.0198|
|agieval_lsat_rc | 0|acc |0.3271|± |0.0287|
| | |acc_norm|0.3011|± |0.0280|
|agieval_sat_en | 0|acc |0.4660|± |0.0348|
| | |acc_norm|0.4223|± |0.0345|
|agieval_sat_en_without_passage| 0|acc |0.3738|± |0.0338|
| | |acc_norm|0.3447|± |0.0332|
|agieval_sat_math | 0|acc |0.2500|± |0.0293|
| | |acc_norm|0.2364|± |0.0287|
Average: 0.2975
```
## Resources for Applied Use Cases:
For an example of a back and forth chatbot using huggingface transformers and discord, check out: https://github.com/teknium1/alpaca-discord
For an example of a roleplaying discord chatbot, check out this: https://github.com/teknium1/alpaca-roleplay-discordbot
LM Studio is a good choice for a chat interface that supports GGML versions (to come)
## Future Plans
We plan to continue to iterate on both more high quality data, and new data filtering techniques to eliminate lower quality data going forward.
## Model Usage
The model is available for download on Hugging Face. It is suitable for a wide range of language tasks, from generating creative text to understanding and following complex instructions.
|