modelId (string, 5–122 chars) | author (string, 2–42 chars) | last_modified (unknown) | downloads (int64, 0–738M) | likes (int64, 0–11k) | library_name (245 classes) | tags (sequence, 1–4.05k) | pipeline_tag (48 classes) | createdAt (unknown) | card (string, 1–901k chars) |
---|---|---|---|---|---|---|---|---|---|
mradermacher/llama-3-8b-chat-music-v2-GGUF | mradermacher | "2024-06-16T22:58:39Z" | 3,254 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:wstcpyt1988/llama-3-8b-chat-music-v2",
"endpoints_compatible",
"region:us"
] | null | "2024-06-16T20:59:46Z" | ---
base_model: wstcpyt1988/llama-3-8b-chat-music-v2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/wstcpyt1988/llama-3-8b-chat-music-v2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
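As a minimal sketch (assuming the `llama-cpp-python` bindings, installed with `pip install llama-cpp-python`; any llama.cpp-compatible runtime works just as well), one of the quant files from the table below can be loaded and queried like this:
```python
# Minimal sketch using llama-cpp-python (an assumption; any llama.cpp-compatible
# runtime can load these files). Point model_path at the quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3-8b-chat-music-v2.Q4_K_M.gguf",  # e.g. the Q4_K_M file from the table below
    n_ctx=4096,  # context window; reduce if you run out of memory
)

out = llm("Explain the circle of fifths in two sentences.", max_tokens=128)
print(out["choices"][0]["text"])
```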
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants.)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-chat-music-v2-GGUF/resolve/main/llama-3-8b-chat-music-v2.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-chat-music-v2-GGUF/resolve/main/llama-3-8b-chat-music-v2.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-chat-music-v2-GGUF/resolve/main/llama-3-8b-chat-music-v2.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-chat-music-v2-GGUF/resolve/main/llama-3-8b-chat-music-v2.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-chat-music-v2-GGUF/resolve/main/llama-3-8b-chat-music-v2.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-chat-music-v2-GGUF/resolve/main/llama-3-8b-chat-music-v2.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-chat-music-v2-GGUF/resolve/main/llama-3-8b-chat-music-v2.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-chat-music-v2-GGUF/resolve/main/llama-3-8b-chat-music-v2.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-chat-music-v2-GGUF/resolve/main/llama-3-8b-chat-music-v2.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-chat-music-v2-GGUF/resolve/main/llama-3-8b-chat-music-v2.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-chat-music-v2-GGUF/resolve/main/llama-3-8b-chat-music-v2.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-chat-music-v2-GGUF/resolve/main/llama-3-8b-chat-music-v2.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-chat-music-v2-GGUF/resolve/main/llama-3-8b-chat-music-v2.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-chat-music-v2-GGUF/resolve/main/llama-3-8b-chat-music-v2.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-chat-music-v2-GGUF/resolve/main/llama-3-8b-chat-music-v2.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
timm/vit_large_patch32_384.orig_in21k_ft_in1k | timm | "2023-05-06T00:26:10Z" | 3,252 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:imagenet-21k",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
] | image-classification | "2022-12-22T07:51:06Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- imagenet-21k
---
# Model card for vit_large_patch32_384.orig_in21k_ft_in1k
A Vision Transformer (ViT) image classification model. Trained on ImageNet-21k and fine-tuned on ImageNet-1k in JAX by paper authors, ported to PyTorch by Ross Wightman.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 306.6
- GMACs: 44.3
- Activations (M): 32.2
- Image size: 384 x 384
- **Papers:**
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-21k
- **Original:** https://github.com/google-research/vision_transformer
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_large_patch32_384.orig_in21k_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_large_patch32_384.orig_in21k_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 145, 1024) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
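The pooled embeddings can be compared directly, for example with cosine similarity. A minimal sketch follows (the second URL is a placeholder assumption; substitute an image of your own):
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
import torch.nn.functional as F

model = timm.create_model('vit_large_patch32_384.orig_in21k_ft_in1k', pretrained=True, num_classes=0)
model = model.eval()
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

def embed(url):
    # download an image and return its pooled (1, num_features) embedding
    img = Image.open(urlopen(url))
    with torch.no_grad():
        return model(transforms(img).unsqueeze(0))

url = 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
emb_a = embed(url)
emb_b = embed(url)  # placeholder: use a second image URL of your choice
print(F.cosine_similarity(emb_a, emb_b).item())  # 1.0 here because both embeddings are identical
```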
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
uukuguy/speechless-codellama-orca-airoboros-13b-0.10e | uukuguy | "2023-09-04T10:17:20Z" | 3,252 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-2",
"en",
"dataset:garage-bAInd/Open-Platypus",
"arxiv:2308.12950",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-09-04T09:49:05Z" | ---
language:
- en
library_name: transformers
pipeline_tag: text-generation
datasets:
- garage-bAInd/Open-Platypus
tags:
- llama-2
license: llama2
---
# speechless-codellama-orca-airoboros-13b
Fine-tuned from codellama/CodeLlama-13b-hf on the Orca and Airoboros datasets.
| Metric | Value |
| --- | --- |
| ARC | |
| HellaSwag | |
| MMLU | |
| TruthfulQA | |
| Average | |
# **Code Llama**
Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the base 13B version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding. Links to other models can be found in the index at the bottom.
| | Base Model | Python | Instruct |
| --- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- |
| 7B | [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) | [codellama/CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf) | [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) |
| 13B | [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf) | [codellama/CodeLlama-13b-Python-hf](https://huggingface.co/codellama/CodeLlama-13b-Python-hf) | [codellama/CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) |
| 34B | [codellama/CodeLlama-34b-hf](https://huggingface.co/codellama/CodeLlama-34b-hf) | [codellama/CodeLlama-34b-Python-hf](https://huggingface.co/codellama/CodeLlama-34b-Python-hf) | [codellama/CodeLlama-34b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-34b-Instruct-hf) |
## Model Use
To use this model, please make sure to install transformers from `main` until the next version is released:
```bash
pip install git+https://github.com/huggingface/transformers.git@main accelerate
```
Model capabilities:
- [x] Code completion.
- [x] Infilling.
- [ ] Instructions / chat.
- [ ] Python specialist.
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "codellama/CodeLlama-13b-hf"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
sequences = pipeline(
'import socket\n\ndef ping_exponential_backoff(host: str):',
do_sample=True,
top_k=10,
temperature=0.1,
top_p=0.95,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
max_length=200,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Model Details
*Note: Use of this model is governed by the Meta license.* Meta developed and publicly released the Code Llama family of large language models (LLMs).
**Model Developers** Meta
**Variations** Code Llama comes in three model sizes, and three variants:
* Code Llama: base models designed for general code synthesis and understanding
* Code Llama - Python: designed specifically for Python
* Code Llama - Instruct: for instruction following and safer deployment
All variants are available in sizes of 7B, 13B and 34B parameters.
**This repository contains the base version of the 13B parameters model.**
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Code Llama is an auto-regressive language model that uses an optimized transformer architecture.
**Model Dates** Code Llama and its variants have been trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)" or its [arXiv page](https://arxiv.org/abs/2308.12950).
## Intended Use
**Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications.
**Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants.
## Hardware and Software
**Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed on Meta’s Research Super Cluster.
**Carbon Footprint** In aggregate, training all 9 Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta’s sustainability program.
## Training Data
All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details).
## Evaluation Results
See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.
## Ethical Considerations and Limitations
Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-user-guide](https://ai.meta.com/llama/responsible-user-guide).
|
NousResearch/CodeLlama-34b-hf | NousResearch | "2023-08-24T17:57:35Z" | 3,250 | 2 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"custom_code",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-08-24T17:51:05Z" | Entry not found |
jiaowobaba02/stable-diffusion-v2-1-GGUF | jiaowobaba02 | "2024-01-16T14:37:10Z" | 3,250 | 11 | null | [
"gguf",
"art",
"stable-diffusion",
"text-to-image",
"region:us"
] | text-to-image | "2024-01-16T12:53:54Z" | ---
pipeline_tag: text-to-image
tags:
- art
- stable-diffusion
---
# Stable-diffusion-GGUF
This repository contains files quantized to q8_0, q5_0, q5_1, and q4_1.
To run these models, download the code from [this page](https://github.com/leejet/stable-diffusion.cpp) or run this command:
```
git clone --recursive https://github.com/leejet/stable-diffusion.cpp.git
```
Then compile it following the instructions on the GitHub page. Finally, run
```
./sd -m '/model/stable_diffusion-ema-pruned-v2-1_768.q8_0.gguf' -p "a lovely cat" -s -1
```
Then you can view the generated `output.png`. |
mradermacher/MyAlee-Mistral-Instruct-v2-32k-v3-merged-GGUF | mradermacher | "2024-06-05T14:14:05Z" | 3,250 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:arcee-ai/MyAlee-Mistral-Instruct-v2-32k-v3-merged",
"endpoints_compatible",
"region:us"
] | null | "2024-06-05T12:17:53Z" | ---
base_model: arcee-ai/MyAlee-Mistral-Instruct-v2-32k-v3-merged
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/arcee-ai/MyAlee-Mistral-Instruct-v2-32k-v3-merged
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/MyAlee-Mistral-Instruct-v2-32k-v3-merged-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants.)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MyAlee-Mistral-Instruct-v2-32k-v3-merged-GGUF/resolve/main/MyAlee-Mistral-Instruct-v2-32k-v3-merged.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/MyAlee-Mistral-Instruct-v2-32k-v3-merged-GGUF/resolve/main/MyAlee-Mistral-Instruct-v2-32k-v3-merged.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/MyAlee-Mistral-Instruct-v2-32k-v3-merged-GGUF/resolve/main/MyAlee-Mistral-Instruct-v2-32k-v3-merged.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/MyAlee-Mistral-Instruct-v2-32k-v3-merged-GGUF/resolve/main/MyAlee-Mistral-Instruct-v2-32k-v3-merged.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MyAlee-Mistral-Instruct-v2-32k-v3-merged-GGUF/resolve/main/MyAlee-Mistral-Instruct-v2-32k-v3-merged.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/MyAlee-Mistral-Instruct-v2-32k-v3-merged-GGUF/resolve/main/MyAlee-Mistral-Instruct-v2-32k-v3-merged.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MyAlee-Mistral-Instruct-v2-32k-v3-merged-GGUF/resolve/main/MyAlee-Mistral-Instruct-v2-32k-v3-merged.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/MyAlee-Mistral-Instruct-v2-32k-v3-merged-GGUF/resolve/main/MyAlee-Mistral-Instruct-v2-32k-v3-merged.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/MyAlee-Mistral-Instruct-v2-32k-v3-merged-GGUF/resolve/main/MyAlee-Mistral-Instruct-v2-32k-v3-merged.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MyAlee-Mistral-Instruct-v2-32k-v3-merged-GGUF/resolve/main/MyAlee-Mistral-Instruct-v2-32k-v3-merged.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MyAlee-Mistral-Instruct-v2-32k-v3-merged-GGUF/resolve/main/MyAlee-Mistral-Instruct-v2-32k-v3-merged.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/MyAlee-Mistral-Instruct-v2-32k-v3-merged-GGUF/resolve/main/MyAlee-Mistral-Instruct-v2-32k-v3-merged.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/MyAlee-Mistral-Instruct-v2-32k-v3-merged-GGUF/resolve/main/MyAlee-Mistral-Instruct-v2-32k-v3-merged.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MyAlee-Mistral-Instruct-v2-32k-v3-merged-GGUF/resolve/main/MyAlee-Mistral-Instruct-v2-32k-v3-merged.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MyAlee-Mistral-Instruct-v2-32k-v3-merged-GGUF/resolve/main/MyAlee-Mistral-Instruct-v2-32k-v3-merged.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
liddlefish/privacy_embedding_rag_10k_base_final | liddlefish | "2024-06-10T06:39:48Z" | 3,250 | 1 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"mteb",
"en",
"arxiv:2401.03462",
"arxiv:2312.15503",
"arxiv:2311.13534",
"arxiv:2310.07554",
"arxiv:2309.07597",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | feature-extraction | "2024-06-10T06:39:16Z" | ---
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- mteb
model-index:
- name: bge-base-en-v1.5
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 76.14925373134328
- type: ap
value: 39.32336517995478
- type: f1
value: 70.16902252611425
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 93.386825
- type: ap
value: 90.21276917991995
- type: f1
value: 93.37741030006174
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 48.846000000000004
- type: f1
value: 48.14646269778261
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.754000000000005
- type: map_at_10
value: 55.761
- type: map_at_100
value: 56.330999999999996
- type: map_at_1000
value: 56.333999999999996
- type: map_at_3
value: 51.92
- type: map_at_5
value: 54.010999999999996
- type: mrr_at_1
value: 41.181
- type: mrr_at_10
value: 55.967999999999996
- type: mrr_at_100
value: 56.538
- type: mrr_at_1000
value: 56.542
- type: mrr_at_3
value: 51.980000000000004
- type: mrr_at_5
value: 54.208999999999996
- type: ndcg_at_1
value: 40.754000000000005
- type: ndcg_at_10
value: 63.605000000000004
- type: ndcg_at_100
value: 66.05199999999999
- type: ndcg_at_1000
value: 66.12
- type: ndcg_at_3
value: 55.708
- type: ndcg_at_5
value: 59.452000000000005
- type: precision_at_1
value: 40.754000000000005
- type: precision_at_10
value: 8.841000000000001
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 22.238
- type: precision_at_5
value: 15.149000000000001
- type: recall_at_1
value: 40.754000000000005
- type: recall_at_10
value: 88.407
- type: recall_at_100
value: 99.14699999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 66.714
- type: recall_at_5
value: 75.747
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 48.74884539679369
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 42.8075893810716
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 62.128470519187736
- type: mrr
value: 74.28065778481289
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 89.24629081484655
- type: cos_sim_spearman
value: 86.93752309911496
- type: euclidean_pearson
value: 87.58589628573816
- type: euclidean_spearman
value: 88.05622328825284
- type: manhattan_pearson
value: 87.5594959805773
- type: manhattan_spearman
value: 88.19658793233961
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 86.9512987012987
- type: f1
value: 86.92515357973708
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 39.10263762928872
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 36.69711517426737
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.327
- type: map_at_10
value: 44.099
- type: map_at_100
value: 45.525
- type: map_at_1000
value: 45.641999999999996
- type: map_at_3
value: 40.47
- type: map_at_5
value: 42.36
- type: mrr_at_1
value: 39.199
- type: mrr_at_10
value: 49.651
- type: mrr_at_100
value: 50.29
- type: mrr_at_1000
value: 50.329
- type: mrr_at_3
value: 46.924
- type: mrr_at_5
value: 48.548
- type: ndcg_at_1
value: 39.199
- type: ndcg_at_10
value: 50.773
- type: ndcg_at_100
value: 55.67999999999999
- type: ndcg_at_1000
value: 57.495
- type: ndcg_at_3
value: 45.513999999999996
- type: ndcg_at_5
value: 47.703
- type: precision_at_1
value: 39.199
- type: precision_at_10
value: 9.914000000000001
- type: precision_at_100
value: 1.5310000000000001
- type: precision_at_1000
value: 0.198
- type: precision_at_3
value: 21.984
- type: precision_at_5
value: 15.737000000000002
- type: recall_at_1
value: 32.327
- type: recall_at_10
value: 63.743
- type: recall_at_100
value: 84.538
- type: recall_at_1000
value: 96.089
- type: recall_at_3
value: 48.065000000000005
- type: recall_at_5
value: 54.519
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.671
- type: map_at_10
value: 42.954
- type: map_at_100
value: 44.151
- type: map_at_1000
value: 44.287
- type: map_at_3
value: 39.912
- type: map_at_5
value: 41.798
- type: mrr_at_1
value: 41.465
- type: mrr_at_10
value: 49.351
- type: mrr_at_100
value: 49.980000000000004
- type: mrr_at_1000
value: 50.016000000000005
- type: mrr_at_3
value: 47.144000000000005
- type: mrr_at_5
value: 48.592999999999996
- type: ndcg_at_1
value: 41.465
- type: ndcg_at_10
value: 48.565999999999995
- type: ndcg_at_100
value: 52.76499999999999
- type: ndcg_at_1000
value: 54.749
- type: ndcg_at_3
value: 44.57
- type: ndcg_at_5
value: 46.759
- type: precision_at_1
value: 41.465
- type: precision_at_10
value: 9.107999999999999
- type: precision_at_100
value: 1.433
- type: precision_at_1000
value: 0.191
- type: precision_at_3
value: 21.423000000000002
- type: precision_at_5
value: 15.414
- type: recall_at_1
value: 32.671
- type: recall_at_10
value: 57.738
- type: recall_at_100
value: 75.86500000000001
- type: recall_at_1000
value: 88.36
- type: recall_at_3
value: 45.626
- type: recall_at_5
value: 51.812000000000005
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 41.185
- type: map_at_10
value: 53.929
- type: map_at_100
value: 54.92
- type: map_at_1000
value: 54.967999999999996
- type: map_at_3
value: 50.70400000000001
- type: map_at_5
value: 52.673
- type: mrr_at_1
value: 47.398
- type: mrr_at_10
value: 57.303000000000004
- type: mrr_at_100
value: 57.959
- type: mrr_at_1000
value: 57.985
- type: mrr_at_3
value: 54.932
- type: mrr_at_5
value: 56.464999999999996
- type: ndcg_at_1
value: 47.398
- type: ndcg_at_10
value: 59.653
- type: ndcg_at_100
value: 63.627
- type: ndcg_at_1000
value: 64.596
- type: ndcg_at_3
value: 54.455
- type: ndcg_at_5
value: 57.245000000000005
- type: precision_at_1
value: 47.398
- type: precision_at_10
value: 9.524000000000001
- type: precision_at_100
value: 1.243
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 24.389
- type: precision_at_5
value: 16.752
- type: recall_at_1
value: 41.185
- type: recall_at_10
value: 73.193
- type: recall_at_100
value: 90.357
- type: recall_at_1000
value: 97.253
- type: recall_at_3
value: 59.199999999999996
- type: recall_at_5
value: 66.118
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.27
- type: map_at_10
value: 36.223
- type: map_at_100
value: 37.218
- type: map_at_1000
value: 37.293
- type: map_at_3
value: 33.503
- type: map_at_5
value: 35.097
- type: mrr_at_1
value: 29.492
- type: mrr_at_10
value: 38.352000000000004
- type: mrr_at_100
value: 39.188
- type: mrr_at_1000
value: 39.247
- type: mrr_at_3
value: 35.876000000000005
- type: mrr_at_5
value: 37.401
- type: ndcg_at_1
value: 29.492
- type: ndcg_at_10
value: 41.239
- type: ndcg_at_100
value: 46.066
- type: ndcg_at_1000
value: 47.992000000000004
- type: ndcg_at_3
value: 36.11
- type: ndcg_at_5
value: 38.772
- type: precision_at_1
value: 29.492
- type: precision_at_10
value: 6.260000000000001
- type: precision_at_100
value: 0.914
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 15.104000000000001
- type: precision_at_5
value: 10.644
- type: recall_at_1
value: 27.27
- type: recall_at_10
value: 54.589
- type: recall_at_100
value: 76.70700000000001
- type: recall_at_1000
value: 91.158
- type: recall_at_3
value: 40.974
- type: recall_at_5
value: 47.327000000000005
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.848
- type: map_at_10
value: 26.207
- type: map_at_100
value: 27.478
- type: map_at_1000
value: 27.602
- type: map_at_3
value: 23.405
- type: map_at_5
value: 24.98
- type: mrr_at_1
value: 21.891
- type: mrr_at_10
value: 31.041999999999998
- type: mrr_at_100
value: 32.092
- type: mrr_at_1000
value: 32.151999999999994
- type: mrr_at_3
value: 28.358
- type: mrr_at_5
value: 29.969
- type: ndcg_at_1
value: 21.891
- type: ndcg_at_10
value: 31.585
- type: ndcg_at_100
value: 37.531
- type: ndcg_at_1000
value: 40.256
- type: ndcg_at_3
value: 26.508
- type: ndcg_at_5
value: 28.894
- type: precision_at_1
value: 21.891
- type: precision_at_10
value: 5.795999999999999
- type: precision_at_100
value: 0.9990000000000001
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_3
value: 12.769
- type: precision_at_5
value: 9.279
- type: recall_at_1
value: 17.848
- type: recall_at_10
value: 43.452
- type: recall_at_100
value: 69.216
- type: recall_at_1000
value: 88.102
- type: recall_at_3
value: 29.18
- type: recall_at_5
value: 35.347
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.94
- type: map_at_10
value: 41.248000000000005
- type: map_at_100
value: 42.495
- type: map_at_1000
value: 42.602000000000004
- type: map_at_3
value: 37.939
- type: map_at_5
value: 39.924
- type: mrr_at_1
value: 37.824999999999996
- type: mrr_at_10
value: 47.041
- type: mrr_at_100
value: 47.83
- type: mrr_at_1000
value: 47.878
- type: mrr_at_3
value: 44.466
- type: mrr_at_5
value: 46.111999999999995
- type: ndcg_at_1
value: 37.824999999999996
- type: ndcg_at_10
value: 47.223
- type: ndcg_at_100
value: 52.394
- type: ndcg_at_1000
value: 54.432
- type: ndcg_at_3
value: 42.032000000000004
- type: ndcg_at_5
value: 44.772
- type: precision_at_1
value: 37.824999999999996
- type: precision_at_10
value: 8.393
- type: precision_at_100
value: 1.2890000000000001
- type: precision_at_1000
value: 0.164
- type: precision_at_3
value: 19.698
- type: precision_at_5
value: 14.013
- type: recall_at_1
value: 30.94
- type: recall_at_10
value: 59.316
- type: recall_at_100
value: 80.783
- type: recall_at_1000
value: 94.15400000000001
- type: recall_at_3
value: 44.712
- type: recall_at_5
value: 51.932
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.104
- type: map_at_10
value: 36.675999999999995
- type: map_at_100
value: 38.076
- type: map_at_1000
value: 38.189
- type: map_at_3
value: 33.733999999999995
- type: map_at_5
value: 35.287
- type: mrr_at_1
value: 33.904
- type: mrr_at_10
value: 42.55
- type: mrr_at_100
value: 43.434
- type: mrr_at_1000
value: 43.494
- type: mrr_at_3
value: 40.126
- type: mrr_at_5
value: 41.473
- type: ndcg_at_1
value: 33.904
- type: ndcg_at_10
value: 42.414
- type: ndcg_at_100
value: 48.203
- type: ndcg_at_1000
value: 50.437
- type: ndcg_at_3
value: 37.633
- type: ndcg_at_5
value: 39.67
- type: precision_at_1
value: 33.904
- type: precision_at_10
value: 7.82
- type: precision_at_100
value: 1.2409999999999999
- type: precision_at_1000
value: 0.159
- type: precision_at_3
value: 17.884
- type: precision_at_5
value: 12.648000000000001
- type: recall_at_1
value: 27.104
- type: recall_at_10
value: 53.563
- type: recall_at_100
value: 78.557
- type: recall_at_1000
value: 93.533
- type: recall_at_3
value: 39.92
- type: recall_at_5
value: 45.457
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.707749999999997
- type: map_at_10
value: 36.961
- type: map_at_100
value: 38.158833333333334
- type: map_at_1000
value: 38.270333333333326
- type: map_at_3
value: 34.07183333333334
- type: map_at_5
value: 35.69533333333334
- type: mrr_at_1
value: 32.81875
- type: mrr_at_10
value: 41.293
- type: mrr_at_100
value: 42.116499999999995
- type: mrr_at_1000
value: 42.170249999999996
- type: mrr_at_3
value: 38.83983333333333
- type: mrr_at_5
value: 40.29775
- type: ndcg_at_1
value: 32.81875
- type: ndcg_at_10
value: 42.355
- type: ndcg_at_100
value: 47.41374999999999
- type: ndcg_at_1000
value: 49.5805
- type: ndcg_at_3
value: 37.52825
- type: ndcg_at_5
value: 39.83266666666667
- type: precision_at_1
value: 32.81875
- type: precision_at_10
value: 7.382416666666666
- type: precision_at_100
value: 1.1640833333333334
- type: precision_at_1000
value: 0.15383333333333335
- type: precision_at_3
value: 17.134166666666665
- type: precision_at_5
value: 12.174833333333336
- type: recall_at_1
value: 27.707749999999997
- type: recall_at_10
value: 53.945
- type: recall_at_100
value: 76.191
- type: recall_at_1000
value: 91.101
- type: recall_at_3
value: 40.39083333333334
- type: recall_at_5
value: 46.40083333333333
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.482
- type: map_at_10
value: 33.201
- type: map_at_100
value: 34.107
- type: map_at_1000
value: 34.197
- type: map_at_3
value: 31.174000000000003
- type: map_at_5
value: 32.279
- type: mrr_at_1
value: 29.908
- type: mrr_at_10
value: 36.235
- type: mrr_at_100
value: 37.04
- type: mrr_at_1000
value: 37.105
- type: mrr_at_3
value: 34.355999999999995
- type: mrr_at_5
value: 35.382999999999996
- type: ndcg_at_1
value: 29.908
- type: ndcg_at_10
value: 37.325
- type: ndcg_at_100
value: 41.795
- type: ndcg_at_1000
value: 44.105
- type: ndcg_at_3
value: 33.555
- type: ndcg_at_5
value: 35.266999999999996
- type: precision_at_1
value: 29.908
- type: precision_at_10
value: 5.721
- type: precision_at_100
value: 0.8630000000000001
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 14.008000000000001
- type: precision_at_5
value: 9.754999999999999
- type: recall_at_1
value: 26.482
- type: recall_at_10
value: 47.072
- type: recall_at_100
value: 67.27
- type: recall_at_1000
value: 84.371
- type: recall_at_3
value: 36.65
- type: recall_at_5
value: 40.774
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.815
- type: map_at_10
value: 26.369999999999997
- type: map_at_100
value: 27.458
- type: map_at_1000
value: 27.588
- type: map_at_3
value: 23.990000000000002
- type: map_at_5
value: 25.345000000000002
- type: mrr_at_1
value: 22.953000000000003
- type: mrr_at_10
value: 30.342999999999996
- type: mrr_at_100
value: 31.241000000000003
- type: mrr_at_1000
value: 31.319000000000003
- type: mrr_at_3
value: 28.16
- type: mrr_at_5
value: 29.406
- type: ndcg_at_1
value: 22.953000000000003
- type: ndcg_at_10
value: 31.151
- type: ndcg_at_100
value: 36.309000000000005
- type: ndcg_at_1000
value: 39.227000000000004
- type: ndcg_at_3
value: 26.921
- type: ndcg_at_5
value: 28.938000000000002
- type: precision_at_1
value: 22.953000000000003
- type: precision_at_10
value: 5.602
- type: precision_at_100
value: 0.9530000000000001
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 12.606
- type: precision_at_5
value: 9.119
- type: recall_at_1
value: 18.815
- type: recall_at_10
value: 41.574
- type: recall_at_100
value: 64.84400000000001
- type: recall_at_1000
value: 85.406
- type: recall_at_3
value: 29.694
- type: recall_at_5
value: 34.935
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.840999999999998
- type: map_at_10
value: 36.797999999999995
- type: map_at_100
value: 37.993
- type: map_at_1000
value: 38.086999999999996
- type: map_at_3
value: 34.050999999999995
- type: map_at_5
value: 35.379
- type: mrr_at_1
value: 32.649
- type: mrr_at_10
value: 41.025
- type: mrr_at_100
value: 41.878
- type: mrr_at_1000
value: 41.929
- type: mrr_at_3
value: 38.573
- type: mrr_at_5
value: 39.715
- type: ndcg_at_1
value: 32.649
- type: ndcg_at_10
value: 42.142
- type: ndcg_at_100
value: 47.558
- type: ndcg_at_1000
value: 49.643
- type: ndcg_at_3
value: 37.12
- type: ndcg_at_5
value: 38.983000000000004
- type: precision_at_1
value: 32.649
- type: precision_at_10
value: 7.08
- type: precision_at_100
value: 1.1039999999999999
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 16.698
- type: precision_at_5
value: 11.511000000000001
- type: recall_at_1
value: 27.840999999999998
- type: recall_at_10
value: 54.245
- type: recall_at_100
value: 77.947
- type: recall_at_1000
value: 92.36999999999999
- type: recall_at_3
value: 40.146
- type: recall_at_5
value: 44.951
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.529000000000003
- type: map_at_10
value: 35.010000000000005
- type: map_at_100
value: 36.647
- type: map_at_1000
value: 36.857
- type: map_at_3
value: 31.968000000000004
- type: map_at_5
value: 33.554
- type: mrr_at_1
value: 31.818
- type: mrr_at_10
value: 39.550999999999995
- type: mrr_at_100
value: 40.54
- type: mrr_at_1000
value: 40.596
- type: mrr_at_3
value: 36.726
- type: mrr_at_5
value: 38.416
- type: ndcg_at_1
value: 31.818
- type: ndcg_at_10
value: 40.675
- type: ndcg_at_100
value: 46.548
- type: ndcg_at_1000
value: 49.126
- type: ndcg_at_3
value: 35.829
- type: ndcg_at_5
value: 38.0
- type: precision_at_1
value: 31.818
- type: precision_at_10
value: 7.826
- type: precision_at_100
value: 1.538
- type: precision_at_1000
value: 0.24
- type: precision_at_3
value: 16.601
- type: precision_at_5
value: 12.095
- type: recall_at_1
value: 26.529000000000003
- type: recall_at_10
value: 51.03
- type: recall_at_100
value: 77.556
- type: recall_at_1000
value: 93.804
- type: recall_at_3
value: 36.986000000000004
- type: recall_at_5
value: 43.096000000000004
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.480999999999998
- type: map_at_10
value: 30.817
- type: map_at_100
value: 31.838
- type: map_at_1000
value: 31.932
- type: map_at_3
value: 28.011999999999997
- type: map_at_5
value: 29.668
- type: mrr_at_1
value: 25.323
- type: mrr_at_10
value: 33.072
- type: mrr_at_100
value: 33.926
- type: mrr_at_1000
value: 33.993
- type: mrr_at_3
value: 30.436999999999998
- type: mrr_at_5
value: 32.092
- type: ndcg_at_1
value: 25.323
- type: ndcg_at_10
value: 35.514
- type: ndcg_at_100
value: 40.489000000000004
- type: ndcg_at_1000
value: 42.908
- type: ndcg_at_3
value: 30.092000000000002
- type: ndcg_at_5
value: 32.989000000000004
- type: precision_at_1
value: 25.323
- type: precision_at_10
value: 5.545
- type: precision_at_100
value: 0.861
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 12.446
- type: precision_at_5
value: 9.131
- type: recall_at_1
value: 23.480999999999998
- type: recall_at_10
value: 47.825
- type: recall_at_100
value: 70.652
- type: recall_at_1000
value: 88.612
- type: recall_at_3
value: 33.537
- type: recall_at_5
value: 40.542
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 13.333999999999998
- type: map_at_10
value: 22.524
- type: map_at_100
value: 24.506
- type: map_at_1000
value: 24.715
- type: map_at_3
value: 19.022
- type: map_at_5
value: 20.693
- type: mrr_at_1
value: 29.186
- type: mrr_at_10
value: 41.22
- type: mrr_at_100
value: 42.16
- type: mrr_at_1000
value: 42.192
- type: mrr_at_3
value: 38.013000000000005
- type: mrr_at_5
value: 39.704
- type: ndcg_at_1
value: 29.186
- type: ndcg_at_10
value: 31.167
- type: ndcg_at_100
value: 38.879000000000005
- type: ndcg_at_1000
value: 42.376000000000005
- type: ndcg_at_3
value: 25.817
- type: ndcg_at_5
value: 27.377000000000002
- type: precision_at_1
value: 29.186
- type: precision_at_10
value: 9.693999999999999
- type: precision_at_100
value: 1.8030000000000002
- type: precision_at_1000
value: 0.246
- type: precision_at_3
value: 19.11
- type: precision_at_5
value: 14.344999999999999
- type: recall_at_1
value: 13.333999999999998
- type: recall_at_10
value: 37.092000000000006
- type: recall_at_100
value: 63.651
- type: recall_at_1000
value: 83.05
- type: recall_at_3
value: 23.74
- type: recall_at_5
value: 28.655
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.151
- type: map_at_10
value: 19.653000000000002
- type: map_at_100
value: 28.053
- type: map_at_1000
value: 29.709000000000003
- type: map_at_3
value: 14.191
- type: map_at_5
value: 16.456
- type: mrr_at_1
value: 66.25
- type: mrr_at_10
value: 74.4
- type: mrr_at_100
value: 74.715
- type: mrr_at_1000
value: 74.726
- type: mrr_at_3
value: 72.417
- type: mrr_at_5
value: 73.667
- type: ndcg_at_1
value: 54.25
- type: ndcg_at_10
value: 40.77
- type: ndcg_at_100
value: 46.359
- type: ndcg_at_1000
value: 54.193000000000005
- type: ndcg_at_3
value: 44.832
- type: ndcg_at_5
value: 42.63
- type: precision_at_1
value: 66.25
- type: precision_at_10
value: 32.175
- type: precision_at_100
value: 10.668
- type: precision_at_1000
value: 2.067
- type: precision_at_3
value: 47.667
- type: precision_at_5
value: 41.3
- type: recall_at_1
value: 9.151
- type: recall_at_10
value: 25.003999999999998
- type: recall_at_100
value: 52.976
- type: recall_at_1000
value: 78.315
- type: recall_at_3
value: 15.487
- type: recall_at_5
value: 18.999
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 51.89999999999999
- type: f1
value: 46.47777925067403
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 73.706
- type: map_at_10
value: 82.423
- type: map_at_100
value: 82.67999999999999
- type: map_at_1000
value: 82.694
- type: map_at_3
value: 81.328
- type: map_at_5
value: 82.001
- type: mrr_at_1
value: 79.613
- type: mrr_at_10
value: 87.07000000000001
- type: mrr_at_100
value: 87.169
- type: mrr_at_1000
value: 87.17
- type: mrr_at_3
value: 86.404
- type: mrr_at_5
value: 86.856
- type: ndcg_at_1
value: 79.613
- type: ndcg_at_10
value: 86.289
- type: ndcg_at_100
value: 87.201
- type: ndcg_at_1000
value: 87.428
- type: ndcg_at_3
value: 84.625
- type: ndcg_at_5
value: 85.53699999999999
- type: precision_at_1
value: 79.613
- type: precision_at_10
value: 10.399
- type: precision_at_100
value: 1.1079999999999999
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 32.473
- type: precision_at_5
value: 20.132
- type: recall_at_1
value: 73.706
- type: recall_at_10
value: 93.559
- type: recall_at_100
value: 97.188
- type: recall_at_1000
value: 98.555
- type: recall_at_3
value: 88.98700000000001
- type: recall_at_5
value: 91.373
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.841
- type: map_at_10
value: 32.643
- type: map_at_100
value: 34.575
- type: map_at_1000
value: 34.736
- type: map_at_3
value: 28.317999999999998
- type: map_at_5
value: 30.964000000000002
- type: mrr_at_1
value: 39.660000000000004
- type: mrr_at_10
value: 48.620000000000005
- type: mrr_at_100
value: 49.384
- type: mrr_at_1000
value: 49.415
- type: mrr_at_3
value: 45.988
- type: mrr_at_5
value: 47.361
- type: ndcg_at_1
value: 39.660000000000004
- type: ndcg_at_10
value: 40.646
- type: ndcg_at_100
value: 47.657
- type: ndcg_at_1000
value: 50.428
- type: ndcg_at_3
value: 36.689
- type: ndcg_at_5
value: 38.211
- type: precision_at_1
value: 39.660000000000004
- type: precision_at_10
value: 11.235000000000001
- type: precision_at_100
value: 1.8530000000000002
- type: precision_at_1000
value: 0.23600000000000002
- type: precision_at_3
value: 24.587999999999997
- type: precision_at_5
value: 18.395
- type: recall_at_1
value: 19.841
- type: recall_at_10
value: 48.135
- type: recall_at_100
value: 74.224
- type: recall_at_1000
value: 90.826
- type: recall_at_3
value: 33.536
- type: recall_at_5
value: 40.311
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.358
- type: map_at_10
value: 64.497
- type: map_at_100
value: 65.362
- type: map_at_1000
value: 65.41900000000001
- type: map_at_3
value: 61.06700000000001
- type: map_at_5
value: 63.317
- type: mrr_at_1
value: 80.716
- type: mrr_at_10
value: 86.10799999999999
- type: mrr_at_100
value: 86.265
- type: mrr_at_1000
value: 86.27
- type: mrr_at_3
value: 85.271
- type: mrr_at_5
value: 85.82499999999999
- type: ndcg_at_1
value: 80.716
- type: ndcg_at_10
value: 72.597
- type: ndcg_at_100
value: 75.549
- type: ndcg_at_1000
value: 76.61
- type: ndcg_at_3
value: 67.874
- type: ndcg_at_5
value: 70.655
- type: precision_at_1
value: 80.716
- type: precision_at_10
value: 15.148
- type: precision_at_100
value: 1.745
- type: precision_at_1000
value: 0.188
- type: precision_at_3
value: 43.597
- type: precision_at_5
value: 28.351
- type: recall_at_1
value: 40.358
- type: recall_at_10
value: 75.739
- type: recall_at_100
value: 87.259
- type: recall_at_1000
value: 94.234
- type: recall_at_3
value: 65.39500000000001
- type: recall_at_5
value: 70.878
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 90.80799999999998
- type: ap
value: 86.81350378180757
- type: f1
value: 90.79901248314215
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 22.096
- type: map_at_10
value: 34.384
- type: map_at_100
value: 35.541
- type: map_at_1000
value: 35.589999999999996
- type: map_at_3
value: 30.496000000000002
- type: map_at_5
value: 32.718
- type: mrr_at_1
value: 22.750999999999998
- type: mrr_at_10
value: 35.024
- type: mrr_at_100
value: 36.125
- type: mrr_at_1000
value: 36.168
- type: mrr_at_3
value: 31.225
- type: mrr_at_5
value: 33.416000000000004
- type: ndcg_at_1
value: 22.750999999999998
- type: ndcg_at_10
value: 41.351
- type: ndcg_at_100
value: 46.92
- type: ndcg_at_1000
value: 48.111
- type: ndcg_at_3
value: 33.439
- type: ndcg_at_5
value: 37.407000000000004
- type: precision_at_1
value: 22.750999999999998
- type: precision_at_10
value: 6.564
- type: precision_at_100
value: 0.935
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.288
- type: precision_at_5
value: 10.581999999999999
- type: recall_at_1
value: 22.096
- type: recall_at_10
value: 62.771
- type: recall_at_100
value: 88.529
- type: recall_at_1000
value: 97.55
- type: recall_at_3
value: 41.245
- type: recall_at_5
value: 50.788
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 94.16780665754673
- type: f1
value: 93.96331194859894
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 76.90606475148198
- type: f1
value: 58.58344986604187
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 76.14660390047075
- type: f1
value: 74.31533923533614
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 80.16139878950908
- type: f1
value: 80.18532656824924
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 32.949880906135085
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 31.56300351524862
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.196521894371315
- type: mrr
value: 32.22644231694389
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.783
- type: map_at_10
value: 14.549000000000001
- type: map_at_100
value: 18.433
- type: map_at_1000
value: 19.949
- type: map_at_3
value: 10.936
- type: map_at_5
value: 12.514
- type: mrr_at_1
value: 47.368
- type: mrr_at_10
value: 56.42
- type: mrr_at_100
value: 56.908
- type: mrr_at_1000
value: 56.95
- type: mrr_at_3
value: 54.283
- type: mrr_at_5
value: 55.568
- type: ndcg_at_1
value: 45.666000000000004
- type: ndcg_at_10
value: 37.389
- type: ndcg_at_100
value: 34.253
- type: ndcg_at_1000
value: 43.059999999999995
- type: ndcg_at_3
value: 42.725
- type: ndcg_at_5
value: 40.193
- type: precision_at_1
value: 47.368
- type: precision_at_10
value: 27.988000000000003
- type: precision_at_100
value: 8.672
- type: precision_at_1000
value: 2.164
- type: precision_at_3
value: 40.248
- type: precision_at_5
value: 34.737
- type: recall_at_1
value: 6.783
- type: recall_at_10
value: 17.838
- type: recall_at_100
value: 33.672000000000004
- type: recall_at_1000
value: 66.166
- type: recall_at_3
value: 11.849
- type: recall_at_5
value: 14.205000000000002
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.698999999999998
- type: map_at_10
value: 46.556
- type: map_at_100
value: 47.652
- type: map_at_1000
value: 47.68
- type: map_at_3
value: 42.492000000000004
- type: map_at_5
value: 44.763999999999996
- type: mrr_at_1
value: 35.747
- type: mrr_at_10
value: 49.242999999999995
- type: mrr_at_100
value: 50.052
- type: mrr_at_1000
value: 50.068
- type: mrr_at_3
value: 45.867000000000004
- type: mrr_at_5
value: 47.778999999999996
- type: ndcg_at_1
value: 35.717999999999996
- type: ndcg_at_10
value: 54.14600000000001
- type: ndcg_at_100
value: 58.672999999999995
- type: ndcg_at_1000
value: 59.279
- type: ndcg_at_3
value: 46.407
- type: ndcg_at_5
value: 50.181
- type: precision_at_1
value: 35.717999999999996
- type: precision_at_10
value: 8.844000000000001
- type: precision_at_100
value: 1.139
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 20.993000000000002
- type: precision_at_5
value: 14.791000000000002
- type: recall_at_1
value: 31.698999999999998
- type: recall_at_10
value: 74.693
- type: recall_at_100
value: 94.15299999999999
- type: recall_at_1000
value: 98.585
- type: recall_at_3
value: 54.388999999999996
- type: recall_at_5
value: 63.08200000000001
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.283
- type: map_at_10
value: 85.24000000000001
- type: map_at_100
value: 85.882
- type: map_at_1000
value: 85.897
- type: map_at_3
value: 82.326
- type: map_at_5
value: 84.177
- type: mrr_at_1
value: 82.21000000000001
- type: mrr_at_10
value: 88.228
- type: mrr_at_100
value: 88.32
- type: mrr_at_1000
value: 88.32
- type: mrr_at_3
value: 87.323
- type: mrr_at_5
value: 87.94800000000001
- type: ndcg_at_1
value: 82.17999999999999
- type: ndcg_at_10
value: 88.9
- type: ndcg_at_100
value: 90.079
- type: ndcg_at_1000
value: 90.158
- type: ndcg_at_3
value: 86.18299999999999
- type: ndcg_at_5
value: 87.71799999999999
- type: precision_at_1
value: 82.17999999999999
- type: precision_at_10
value: 13.464
- type: precision_at_100
value: 1.533
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.693
- type: precision_at_5
value: 24.792
- type: recall_at_1
value: 71.283
- type: recall_at_10
value: 95.742
- type: recall_at_100
value: 99.67200000000001
- type: recall_at_1000
value: 99.981
- type: recall_at_3
value: 87.888
- type: recall_at_5
value: 92.24
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 56.24267063669042
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 62.88056988932578
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.903
- type: map_at_10
value: 13.202
- type: map_at_100
value: 15.5
- type: map_at_1000
value: 15.870999999999999
- type: map_at_3
value: 9.407
- type: map_at_5
value: 11.238
- type: mrr_at_1
value: 24.2
- type: mrr_at_10
value: 35.867
- type: mrr_at_100
value: 37.001
- type: mrr_at_1000
value: 37.043
- type: mrr_at_3
value: 32.5
- type: mrr_at_5
value: 34.35
- type: ndcg_at_1
value: 24.2
- type: ndcg_at_10
value: 21.731
- type: ndcg_at_100
value: 30.7
- type: ndcg_at_1000
value: 36.618
- type: ndcg_at_3
value: 20.72
- type: ndcg_at_5
value: 17.954
- type: precision_at_1
value: 24.2
- type: precision_at_10
value: 11.33
- type: precision_at_100
value: 2.4410000000000003
- type: precision_at_1000
value: 0.386
- type: precision_at_3
value: 19.667
- type: precision_at_5
value: 15.86
- type: recall_at_1
value: 4.903
- type: recall_at_10
value: 22.962
- type: recall_at_100
value: 49.563
- type: recall_at_1000
value: 78.238
- type: recall_at_3
value: 11.953
- type: recall_at_5
value: 16.067999999999998
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 84.12694254604078
- type: cos_sim_spearman
value: 80.30141815181918
- type: euclidean_pearson
value: 81.34015449877128
- type: euclidean_spearman
value: 80.13984197010849
- type: manhattan_pearson
value: 81.31767068124086
- type: manhattan_spearman
value: 80.11720513114103
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 86.13112984010417
- type: cos_sim_spearman
value: 78.03063573402875
- type: euclidean_pearson
value: 83.51928418844804
- type: euclidean_spearman
value: 78.4045235411144
- type: manhattan_pearson
value: 83.49981637388689
- type: manhattan_spearman
value: 78.4042575139372
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 82.50327987379504
- type: cos_sim_spearman
value: 84.18556767756205
- type: euclidean_pearson
value: 82.69684424327679
- type: euclidean_spearman
value: 83.5368106038335
- type: manhattan_pearson
value: 82.57967581007374
- type: manhattan_spearman
value: 83.43009053133697
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 82.50756863007814
- type: cos_sim_spearman
value: 82.27204331279108
- type: euclidean_pearson
value: 81.39535251429741
- type: euclidean_spearman
value: 81.84386626336239
- type: manhattan_pearson
value: 81.34281737280695
- type: manhattan_spearman
value: 81.81149375673166
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.8727714856726
- type: cos_sim_spearman
value: 87.95738287792312
- type: euclidean_pearson
value: 86.62920602795887
- type: euclidean_spearman
value: 87.05207355381243
- type: manhattan_pearson
value: 86.53587918472225
- type: manhattan_spearman
value: 86.95382961029586
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 83.52240359769479
- type: cos_sim_spearman
value: 85.47685776238286
- type: euclidean_pearson
value: 84.25815333483058
- type: euclidean_spearman
value: 85.27415639683198
- type: manhattan_pearson
value: 84.29127757025637
- type: manhattan_spearman
value: 85.30226224917351
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 86.42501708915708
- type: cos_sim_spearman
value: 86.42276182795041
- type: euclidean_pearson
value: 86.5408207354761
- type: euclidean_spearman
value: 85.46096321750838
- type: manhattan_pearson
value: 86.54177303026881
- type: manhattan_spearman
value: 85.50313151916117
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 64.86521089250766
- type: cos_sim_spearman
value: 65.94868540323003
- type: euclidean_pearson
value: 67.16569626533084
- type: euclidean_spearman
value: 66.37667004134917
- type: manhattan_pearson
value: 67.1482365102333
- type: manhattan_spearman
value: 66.53240122580029
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.64746265365318
- type: cos_sim_spearman
value: 86.41888825906786
- type: euclidean_pearson
value: 85.27453642725811
- type: euclidean_spearman
value: 85.94095796602544
- type: manhattan_pearson
value: 85.28643660505334
- type: manhattan_spearman
value: 85.95028003260744
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 87.48903153618527
- type: mrr
value: 96.41081503826601
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 58.594
- type: map_at_10
value: 69.296
- type: map_at_100
value: 69.782
- type: map_at_1000
value: 69.795
- type: map_at_3
value: 66.23
- type: map_at_5
value: 68.293
- type: mrr_at_1
value: 61.667
- type: mrr_at_10
value: 70.339
- type: mrr_at_100
value: 70.708
- type: mrr_at_1000
value: 70.722
- type: mrr_at_3
value: 68.0
- type: mrr_at_5
value: 69.56700000000001
- type: ndcg_at_1
value: 61.667
- type: ndcg_at_10
value: 74.039
- type: ndcg_at_100
value: 76.103
- type: ndcg_at_1000
value: 76.47800000000001
- type: ndcg_at_3
value: 68.967
- type: ndcg_at_5
value: 71.96900000000001
- type: precision_at_1
value: 61.667
- type: precision_at_10
value: 9.866999999999999
- type: precision_at_100
value: 1.097
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 27.111
- type: precision_at_5
value: 18.2
- type: recall_at_1
value: 58.594
- type: recall_at_10
value: 87.422
- type: recall_at_100
value: 96.667
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 74.217
- type: recall_at_5
value: 81.539
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.85049504950496
- type: cos_sim_ap
value: 96.33111544137081
- type: cos_sim_f1
value: 92.35443037974684
- type: cos_sim_precision
value: 93.53846153846153
- type: cos_sim_recall
value: 91.2
- type: dot_accuracy
value: 99.82376237623762
- type: dot_ap
value: 95.38082527310888
- type: dot_f1
value: 90.90909090909092
- type: dot_precision
value: 92.90187891440502
- type: dot_recall
value: 89.0
- type: euclidean_accuracy
value: 99.84851485148515
- type: euclidean_ap
value: 96.32316003996347
- type: euclidean_f1
value: 92.2071392659628
- type: euclidean_precision
value: 92.71991911021233
- type: euclidean_recall
value: 91.7
- type: manhattan_accuracy
value: 99.84851485148515
- type: manhattan_ap
value: 96.3655668249217
- type: manhattan_f1
value: 92.18356026222895
- type: manhattan_precision
value: 92.98067141403867
- type: manhattan_recall
value: 91.4
- type: max_accuracy
value: 99.85049504950496
- type: max_ap
value: 96.3655668249217
- type: max_f1
value: 92.35443037974684
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 65.94861371629051
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 35.009430451385
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 54.61164066427969
- type: mrr
value: 55.49710603938544
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.622620124907662
- type: cos_sim_spearman
value: 31.0678351356163
- type: dot_pearson
value: 30.863727693306814
- type: dot_spearman
value: 31.230306567021255
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22
- type: map_at_10
value: 2.011
- type: map_at_100
value: 10.974
- type: map_at_1000
value: 25.819
- type: map_at_3
value: 0.6649999999999999
- type: map_at_5
value: 1.076
- type: mrr_at_1
value: 86.0
- type: mrr_at_10
value: 91.8
- type: mrr_at_100
value: 91.8
- type: mrr_at_1000
value: 91.8
- type: mrr_at_3
value: 91.0
- type: mrr_at_5
value: 91.8
- type: ndcg_at_1
value: 82.0
- type: ndcg_at_10
value: 78.07300000000001
- type: ndcg_at_100
value: 58.231
- type: ndcg_at_1000
value: 51.153000000000006
- type: ndcg_at_3
value: 81.123
- type: ndcg_at_5
value: 81.059
- type: precision_at_1
value: 86.0
- type: precision_at_10
value: 83.0
- type: precision_at_100
value: 59.38
- type: precision_at_1000
value: 22.55
- type: precision_at_3
value: 87.333
- type: precision_at_5
value: 86.8
- type: recall_at_1
value: 0.22
- type: recall_at_10
value: 2.2079999999999997
- type: recall_at_100
value: 14.069
- type: recall_at_1000
value: 47.678
- type: recall_at_3
value: 0.7040000000000001
- type: recall_at_5
value: 1.161
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.809
- type: map_at_10
value: 10.394
- type: map_at_100
value: 16.598
- type: map_at_1000
value: 18.142
- type: map_at_3
value: 5.572
- type: map_at_5
value: 7.1370000000000005
- type: mrr_at_1
value: 32.653
- type: mrr_at_10
value: 46.564
- type: mrr_at_100
value: 47.469
- type: mrr_at_1000
value: 47.469
- type: mrr_at_3
value: 42.177
- type: mrr_at_5
value: 44.524
- type: ndcg_at_1
value: 30.612000000000002
- type: ndcg_at_10
value: 25.701
- type: ndcg_at_100
value: 37.532
- type: ndcg_at_1000
value: 48.757
- type: ndcg_at_3
value: 28.199999999999996
- type: ndcg_at_5
value: 25.987
- type: precision_at_1
value: 32.653
- type: precision_at_10
value: 23.469
- type: precision_at_100
value: 7.9799999999999995
- type: precision_at_1000
value: 1.5350000000000001
- type: precision_at_3
value: 29.932
- type: precision_at_5
value: 26.122
- type: recall_at_1
value: 2.809
- type: recall_at_10
value: 16.887
- type: recall_at_100
value: 48.67
- type: recall_at_1000
value: 82.89699999999999
- type: recall_at_3
value: 6.521000000000001
- type: recall_at_5
value: 9.609
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.57860000000001
- type: ap
value: 13.82629211536393
- type: f1
value: 54.59860966183956
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 59.38030560271647
- type: f1
value: 59.69685552567865
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 51.4736717043405
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.92853311080646
- type: cos_sim_ap
value: 77.67872502591382
- type: cos_sim_f1
value: 70.33941236068895
- type: cos_sim_precision
value: 67.63273258645884
- type: cos_sim_recall
value: 73.27176781002639
- type: dot_accuracy
value: 85.79603027954938
- type: dot_ap
value: 73.73786190233379
- type: dot_f1
value: 67.3437901774235
- type: dot_precision
value: 65.67201604814443
- type: dot_recall
value: 69.10290237467018
- type: euclidean_accuracy
value: 86.94045419324074
- type: euclidean_ap
value: 77.6687791535167
- type: euclidean_f1
value: 70.47209214023542
- type: euclidean_precision
value: 67.7207492094381
- type: euclidean_recall
value: 73.45646437994723
- type: manhattan_accuracy
value: 86.87488823985218
- type: manhattan_ap
value: 77.63373392430728
- type: manhattan_f1
value: 70.40920716112532
- type: manhattan_precision
value: 68.31265508684864
- type: manhattan_recall
value: 72.63852242744063
- type: max_accuracy
value: 86.94045419324074
- type: max_ap
value: 77.67872502591382
- type: max_f1
value: 70.47209214023542
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.67155664221679
- type: cos_sim_ap
value: 85.64591703003417
- type: cos_sim_f1
value: 77.59531005352656
- type: cos_sim_precision
value: 73.60967184801382
- type: cos_sim_recall
value: 82.03726516784724
- type: dot_accuracy
value: 88.41541506578181
- type: dot_ap
value: 84.6482788957769
- type: dot_f1
value: 77.04748541466657
- type: dot_precision
value: 74.02440754931176
- type: dot_recall
value: 80.3279950723745
- type: euclidean_accuracy
value: 88.63080684596576
- type: euclidean_ap
value: 85.44570045321562
- type: euclidean_f1
value: 77.28769403336106
- type: euclidean_precision
value: 72.90600040958427
- type: euclidean_recall
value: 82.22975053895904
- type: manhattan_accuracy
value: 88.59393798269105
- type: manhattan_ap
value: 85.40271361038187
- type: manhattan_f1
value: 77.17606419344392
- type: manhattan_precision
value: 72.4447747078295
- type: manhattan_recall
value: 82.5685247921158
- type: max_accuracy
value: 88.67155664221679
- type: max_ap
value: 85.64591703003417
- type: max_f1
value: 77.59531005352656
license: mit
language:
- en
---
<h1 align="center">FlagEmbedding</h1>
<h4 align="center">
<p>
<a href=#model-list>Model List</a> |
<a href=#frequently-asked-questions>FAQ</a> |
<a href=#usage>Usage</a> |
<a href="#evaluation">Evaluation</a> |
<a href="#train">Train</a> |
<a href="#contact">Contact</a> |
<a href="#citation">Citation</a> |
<a href="#license">License</a>
        </p>
</h4>
For more details please refer to our Github: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding).
If you are looking for a model that supports more languages, longer texts, and other retrieval methods, you can try using [bge-m3](https://huggingface.co/BAAI/bge-m3).
[English](README.md) | [中文](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md)
FlagEmbedding focuses on retrieval-augmented LLMs and currently consists of the following projects:
- **Long-Context LLM**: [Activation Beacon](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon)
- **Fine-tuning of LM** : [LM-Cocktail](https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail)
- **Dense Retrieval**: [BGE-M3](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3), [LLM Embedder](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_embedder), [BGE Embedding](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/baai_general_embedding)
- **Reranker Model**: [BGE Reranker](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker)
- **Benchmark**: [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB)
## News
- 1/30/2024: Release **BGE-M3**, a new member of the BGE model series! M3 stands for **M**ulti-linguality (100+ languages), **M**ulti-granularities (input length up to 8192), **M**ulti-Functionality (unification of dense, lexical, multi-vec/colbert retrieval).
It is the first embedding model which supports all three retrieval methods, achieving new SOTA on multi-lingual (MIRACL) and cross-lingual (MKQA) benchmarks.
[Technical Report](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/BGE_M3/BGE_M3.pdf) and [Code](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3). :fire:
- 1/9/2024: Release [Activation-Beacon](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon), an effective, efficient, compatible, and low-cost (training) method to extend the context length of LLM. [Technical Report](https://arxiv.org/abs/2401.03462) :fire:
- 12/24/2023: Release **LLaRA**, a LLaMA-7B-based dense retriever that achieves state-of-the-art performance on MS MARCO and BEIR. The model and code will be open-sourced; please stay tuned. [Technical Report](https://arxiv.org/abs/2312.15503) :fire:
- 11/23/2023: Release [LM-Cocktail](https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail), a method to maintain general capabilities during fine-tuning by merging multiple language models. [Technical Report](https://arxiv.org/abs/2311.13534) :fire:
- 10/12/2023: Release [LLM-Embedder](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_embedder), a unified embedding model to support diverse retrieval augmentation needs for LLMs. [Technical Report](https://arxiv.org/pdf/2310.07554.pdf)
- 09/15/2023: The [technical report](https://arxiv.org/pdf/2309.07597.pdf) and [massive training data](https://data.baai.ac.cn/details/BAAI-MTP) of BGE have been released
- 09/12/2023: New models:
- **New reranker model**: release cross-encoder models `BAAI/bge-reranker-base` and `BAAI/bge-reranker-large`, which are more powerful than the embedding models. We recommend using/fine-tuning them to re-rank the top-k documents returned by embedding models.
- **update embedding model**: release the `bge-*-v1.5` embedding models to alleviate the issue of the similarity distribution and enhance their retrieval ability without instruction.
<details>
<summary>More</summary>
<!-- ### More -->
- 09/07/2023: Update [fine-tune code](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md): Add script to mine hard negatives and support adding instruction during fine-tuning.
- 08/09/2023: BGE models are integrated into **Langchain**; you can use them like [this](#using-langchain). The C-MTEB **leaderboard** is [available](https://huggingface.co/spaces/mteb/leaderboard).
- 08/05/2023: Release base-scale and small-scale models, **best performance among the models of the same size 🤗**
- 08/02/2023: Release `bge-large-*`(short for BAAI General Embedding) Models, **rank 1st on MTEB and C-MTEB benchmark!** :tada: :tada:
- 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test datasets.
</details>
## Model List
`bge` is short for `BAAI general embedding`.
| Model | Language | | Description | query instruction for retrieval [1] |
|:-------------------------------|:--------:| :--------:| :--------:|:--------:|
| [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) | Multilingual | [Inference](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3#usage) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3) | Multi-Functionality(dense retrieval, sparse retrieval, multi-vector(colbert)), Multi-Linguality, and Multi-Granularity(8192 tokens) | |
| [BAAI/llm-embedder](https://huggingface.co/BAAI/llm-embedder) | English | [Inference](./FlagEmbedding/llm_embedder/README.md) [Fine-tune](./FlagEmbedding/llm_embedder/README.md) | a unified embedding model to support diverse retrieval augmentation needs for LLMs | See [README](./FlagEmbedding/llm_embedder/README.md) |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh-v1.5](https://huggingface.co/BAAI/bge-large-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-en` | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) |a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-zh` | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章:` |
[1\]: If you need to search for passages relevant to a query, we suggest adding the instruction to the query; in other cases, no instruction is needed, and you can just use the original query directly. In all cases, **no instruction** needs to be added to passages.
[2\]: Different from an embedding model, a reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding. To balance accuracy and time cost, a cross-encoder is widely used to re-rank the top-k documents retrieved by other, simpler models.
For example, use the bge embedding model to retrieve the top 100 relevant documents, and then use the bge reranker to re-rank those 100 documents to get the final top-3 results, as sketched below.
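A minimal sketch of this retrieve-then-rerank pipeline, assuming the FlagEmbedding package; `query` and `corpus` below are illustrative placeholders, not data from this card:
```python
import numpy as np
from FlagEmbedding import FlagModel, FlagReranker

query = "what is a panda?"
corpus = ["The giant panda is a bear species endemic to China.",
          "Paris is the capital of France."]

# 1) Retrieval: embed the query and the corpus, keep the top-k candidates by inner product.
embedder = FlagModel('BAAI/bge-large-en-v1.5',
                     query_instruction_for_retrieval="Represent this sentence for searching relevant passages: ")
q_emb = embedder.encode_queries([query])
p_emb = embedder.encode(corpus)
retrieval_scores = (q_emb @ p_emb.T)[0]
top_k = np.argsort(-retrieval_scores)[:100]          # keep the top 100 in a real corpus

# 2) Re-ranking: score each (query, passage) pair with the cross-encoder, keep the top 3.
reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True)
rerank_scores = reranker.compute_score([[query, corpus[i]] for i in top_k])
order = np.argsort(-np.array(rerank_scores))[:3]
print([corpus[top_k[i]] for i in order])
```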
All models have been uploaded to Huggingface Hub, and you can see them at https://huggingface.co/BAAI.
If you cannot open the Huggingface Hub, you can also download the models at https://model.baai.ac.cn/models.
## Frequently asked questions
<details>
<summary>1. How to fine-tune bge embedding model?</summary>
<!-- ### How to fine-tune bge embedding model? -->
Following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) to prepare data and fine-tune your model.
Some suggestions:
- Mine hard negatives following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune#hard-negatives), which can improve the retrieval performance.
- If you pre-train bge on your data, the pre-trained model cannot be directly used to calculate similarity, and it must be fine-tuned with contrastive learning before computing similarity.
- If the accuracy of the fine-tuned model is still not high, it is recommended to use/fine-tune the cross-encoder model (bge-reranker) to re-rank the top-k results. Hard negatives are also needed to fine-tune the reranker.
</details>
<details>
<summary>2. The similarity score between two dissimilar sentences is higher than 0.5</summary>
<!-- ### The similarity score between two dissimilar sentences is higher than 0.5 -->
**We suggest using bge v1.5, which alleviates the issue of the similarity distribution.**
Since we fine-tune the models by contrastive learning with a temperature of 0.01,
the similarity distribution of the current BGE model is roughly in the interval \[0.6, 1\].
So a similarity score greater than 0.5 does not indicate that the two sentences are similar.
For downstream tasks, such as passage retrieval or semantic similarity,
**what matters is the relative order of the scores, not the absolute value.**
If you need to filter similar sentences based on a similarity threshold,
please select an appropriate similarity threshold based on the similarity distribution on your own data (such as 0.8, 0.85, or even 0.9); a short code sketch follows.
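A minimal sketch of such threshold filtering, assuming bge v1.5; the 0.85 cutoff and the example sentences are illustrative assumptions only:
```python
from FlagEmbedding import FlagModel

model = FlagModel('BAAI/bge-large-en-v1.5')   # embeddings are normalized, so the dot product is the cosine similarity
pairs = [("A cat sits on the mat.", "A kitten is resting on a rug."),
         ("A cat sits on the mat.", "Stock prices fell sharply today.")]
threshold = 0.85   # choose this from the score distribution on your own data

for a, b in pairs:
    emb_a, emb_b = model.encode([a, b])
    score = float(emb_a @ emb_b)
    print(f"{score:.3f}", "similar" if score >= threshold else "not similar")
```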
</details>
<details>
<summary>3. When does the query instruction need to be used</summary>
<!-- ### When does the query instruction need to be used -->
For `bge-*-v1.5`, we improved their retrieval ability when no instruction is used.
Omitting the instruction causes only a slight degradation in retrieval performance compared with using it.
So, for convenience, you can generate embeddings without an instruction in all cases.
For a retrieval task that uses short queries to find long related documents,
it is recommended to add instructions for these short queries.
**The best method to decide whether to add instructions for queries is choosing the setting that achieves better performance on your task.**
In all cases, the documents/passages do not need to add the instruction.
</details>
## Usage
### Usage for Embedding Model
Here are some examples for using `bge` models with
[FlagEmbedding](#using-flagembedding), [Sentence-Transformers](#using-sentence-transformers), [Langchain](#using-langchain), or [Huggingface Transformers](#using-huggingface-transformers).
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
If it doesn't work for you, you can see [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md) for more methods to install FlagEmbedding.
```python
from FlagEmbedding import FlagModel
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = FlagModel('BAAI/bge-large-zh-v1.5',
query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:",
use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
embeddings_1 = model.encode(sentences_1)
embeddings_2 = model.encode(sentences_2)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
# for an s2p (short query to long passage) retrieval task, we suggest using encode_queries(), which automatically adds the instruction to each query
# the corpus in a retrieval task can still use encode() or encode_corpus(), since passages don't need the instruction
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
q_embeddings = model.encode_queries(queries)
p_embeddings = model.encode(passages)
scores = q_embeddings @ p_embeddings.T
```
For the value of the argument `query_instruction_for_retrieval`, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list).
By default, FlagModel will use all available GPUs when encoding. Please set `os.environ["CUDA_VISIBLE_DEVICES"]` to select specific GPUs.
You can also set `os.environ["CUDA_VISIBLE_DEVICES"]=""` to make all GPUs unavailable.
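For example, a minimal sketch (the device index `0` is just an illustration):
```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"   # restrict encoding to GPU 0; set to "" to force CPU

from FlagEmbedding import FlagModel
model = FlagModel('BAAI/bge-large-zh-v1.5', use_fp16=True)
embeddings = model.encode(["样例数据-1", "样例数据-2"])
print(embeddings.shape)
```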
#### Using Sentence-Transformers
You can also use the `bge` models with [sentence-transformers](https://www.SBERT.net):
```
pip install -U sentence-transformers
```
```python
from sentence_transformers import SentenceTransformer
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
embeddings_1 = model.encode(sentences_1, normalize_embeddings=True)
embeddings_2 = model.encode(sentences_2, normalize_embeddings=True)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
```
For an s2p (short query to long passage) retrieval task,
each short query should start with an instruction (see the [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list) for the instructions),
but the instruction is not needed for passages.
```python
from sentence_transformers import SentenceTransformer
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
instruction = "为这个句子生成表示以用于检索相关文章:"
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
q_embeddings = model.encode([instruction+q for q in queries], normalize_embeddings=True)
p_embeddings = model.encode(passages, normalize_embeddings=True)
scores = q_embeddings @ p_embeddings.T
```
#### Using Langchain
You can use `bge` in langchain like this:
```python
from langchain.embeddings import HuggingFaceBgeEmbeddings
model_name = "BAAI/bge-large-en-v1.5"
model_kwargs = {'device': 'cuda'}
encode_kwargs = {'normalize_embeddings': True} # set True to compute cosine similarity
model = HuggingFaceBgeEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs,
query_instruction="为这个句子生成表示以用于检索相关文章:"
)
model.query_instruction = "为这个句子生成表示以用于检索相关文章:"
```
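After construction, the object follows Langchain's standard embeddings interface; a minimal usage sketch (the example texts are illustrative):
```python
# embed_query prepends the query instruction; embed_documents leaves passages as-is.
query_vector = model.embed_query("What is a panda?")
doc_vectors = model.embed_documents(["The giant panda is a bear species endemic to China."])
print(len(query_vector), len(doc_vectors[0]))   # 1024-dimensional for the bge-large models
```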
#### Using HuggingFace Transformers
With the transformers package, you can use the model like this: First, you pass your input through the transformer model, then you select the last hidden state of the first token (i.e., [CLS]) as the sentence embedding.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Sentences we want sentence embeddings for
sentences = ["样例数据-1", "样例数据-2"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-zh-v1.5')
model = AutoModel.from_pretrained('BAAI/bge-large-zh-v1.5')
model.eval()
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# for s2p(short query to long passage) retrieval task, add an instruction to query (not add instruction for passages)
# encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = model_output[0][:, 0]
# normalize embeddings
sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:", sentence_embeddings)
```
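Since the embeddings above are L2-normalized, their inner product is the cosine similarity; for example:
```python
# Pairwise cosine similarity between the example sentences.
similarity = sentence_embeddings @ sentence_embeddings.T
print(similarity)
```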
#### Usage of the ONNX files
```python
from optimum.onnxruntime import ORTModelForFeatureExtraction # type: ignore
import torch
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-en-v1.5')
model = AutoModel.from_pretrained('BAAI/bge-large-en-v1.5', revision="refs/pr/13")
model_ort = ORTModelForFeatureExtraction.from_pretrained('BAAI/bge-large-en-v1.5', revision="refs/pr/13",file_name="onnx/model.onnx")
# Sentences we want sentence embeddings for
sentences = ["样例数据-1", "样例数据-2"]
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# for s2p(short query to long passage) retrieval task, add an instruction to query (not add instruction for passages)
# encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt')
model_output_ort = model_ort(**encoded_input)
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# model_output and model_output_ort are identical
```
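As a follow-up sketch (assuming the ONNX model returns the last hidden state in the same layout as the PyTorch model), you can apply the same CLS pooling and normalization to obtain sentence embeddings:
```python
# CLS pooling + L2 normalization on the ONNX output, mirroring the Transformers example above.
sentence_embeddings_ort = model_output_ort[0][:, 0]
sentence_embeddings_ort = torch.nn.functional.normalize(sentence_embeddings_ort, p=2, dim=1)
print("Sentence embeddings (ONNX):", sentence_embeddings_ort)
```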
#### Usage via infinity
It's also possible to deploy the ONNX files with the [infinity_emb](https://github.com/michaelfeil/infinity) pip package.
```python
import asyncio
from infinity_emb import AsyncEmbeddingEngine, EngineArgs
sentences = ["Embed this is sentence via Infinity.", "Paris is in France."]
engine = AsyncEmbeddingEngine.from_args(
    EngineArgs(model_name_or_path="BAAI/bge-large-en-v1.5", device="cpu", engine="optimum")  # or engine="torch"
)
async def main():
async with engine:
embeddings, usage = await engine.embed(sentences=sentences)
asyncio.run(main())
```
### Usage for Reranker
Different from an embedding model, a reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding.
You can get a relevance score by feeding a query and a passage to the reranker.
The reranker is optimized with a cross-entropy loss, so the relevance score is not bounded to a specific range.
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
Get relevance scores (higher scores indicate more relevance):
```python
from FlagEmbedding import FlagReranker
reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
score = reranker.compute_score(['query', 'passage'])
print(score)
scores = reranker.compute_score([['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']])
print(scores)
```
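The raw scores are unbounded logits; if you prefer values in [0, 1], one option (an illustrative post-processing step, not part of the FlagEmbedding API) is to apply a sigmoid:
```python
import math

def to_probability(score: float) -> float:
    return 1.0 / (1.0 + math.exp(-score))

print([to_probability(s) for s in scores])
```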
#### Using Huggingface transformers
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-large')
model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-large')
model.eval()
pairs = [['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']]
with torch.no_grad():
inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512)
scores = model(**inputs, return_dict=True).logits.view(-1, ).float()
print(scores)
```
## Evaluation
`baai-general-embedding` models achieve **state-of-the-art performance on both the MTEB and C-MTEB leaderboards!**
For more details and evaluation tools see our [scripts](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md).
- **MTEB**:
| Model Name | Dimension | Sequence Length | Average (56) | Retrieval (15) |Clustering (11) | Pair Classification (3) | Reranking (4) | STS (10) | Summarization (1) | Classification (12) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | **64.23** | **54.29** | 46.08 | 87.12 | 60.03 | 83.11 | 31.61 | 75.97 |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | 63.55 | 53.25 | 45.77 | 86.55 | 58.86 | 82.4 | 31.07 | 75.53 |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | 62.17 |51.68 | 43.82 | 84.92 | 58.36 | 81.59 | 30.12 | 74.14 |
| [bge-large-en](https://huggingface.co/BAAI/bge-large-en) | 1024 | 512 | 63.98 | 53.9 | 46.98 | 85.8 | 59.48 | 81.56 | 32.06 | 76.21 |
| [bge-base-en](https://huggingface.co/BAAI/bge-base-en) | 768 | 512 | 63.36 | 53.0 | 46.32 | 85.86 | 58.7 | 81.84 | 29.27 | 75.27 |
| [gte-large](https://huggingface.co/thenlper/gte-large) | 1024 | 512 | 63.13 | 52.22 | 46.84 | 85.00 | 59.13 | 83.35 | 31.66 | 73.33 |
| [gte-base](https://huggingface.co/thenlper/gte-base) | 768 | 512 | 62.39 | 51.14 | 46.2 | 84.57 | 58.61 | 82.3 | 31.17 | 73.01 |
| [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1024| 512 | 62.25 | 50.56 | 44.49 | 86.03 | 56.61 | 82.05 | 30.19 | 75.24 |
| [bge-small-en](https://huggingface.co/BAAI/bge-small-en) | 384 | 512 | 62.11 | 51.82 | 44.31 | 83.78 | 57.97 | 80.72 | 30.53 | 74.37 |
| [instructor-xl](https://huggingface.co/hkunlp/instructor-xl) | 768 | 512 | 61.79 | 49.26 | 44.74 | 86.62 | 57.29 | 83.06 | 32.32 | 61.79 |
| [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 768 | 512 | 61.5 | 50.29 | 43.80 | 85.73 | 55.91 | 81.05 | 30.28 | 73.84 |
| [gte-small](https://huggingface.co/thenlper/gte-small) | 384 | 512 | 61.36 | 49.46 | 44.89 | 83.54 | 57.7 | 82.07 | 30.42 | 72.31 |
| [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 1536 | 8192 | 60.99 | 49.25 | 45.9 | 84.89 | 56.32 | 80.97 | 30.8 | 70.93 |
| [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 384 | 512 | 59.93 | 49.04 | 39.92 | 84.67 | 54.32 | 80.39 | 31.16 | 72.94 |
| [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 768 | 512 | 59.51 | 42.24 | 43.72 | 85.06 | 56.42 | 82.63 | 30.08 | 73.42 |
| [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 768 | 514 | 57.78 | 43.81 | 43.69 | 83.04 | 59.36 | 80.28 | 27.49 | 65.07 |
| [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 4096 | 2048 | 57.59 | 48.22 | 38.93 | 81.9 | 55.65 | 77.74 | 33.6 | 66.19 |
- **C-MTEB**:
We create the benchmark C-MTEB for Chinese text embedding which consists of 31 datasets from 6 tasks.
Please refer to [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md) for a detailed introduction.
| Model | Embedding dimension | Avg | Retrieval | STS | PairClassification | Classification | Reranking | Clustering |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| [**BAAI/bge-large-zh-v1.5**](https://huggingface.co/BAAI/bge-large-zh-v1.5) | 1024 | **64.53** | 70.46 | 56.25 | 81.6 | 69.13 | 65.84 | 48.99 |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | 768 | 63.13 | 69.49 | 53.72 | 79.75 | 68.07 | 65.39 | 47.53 |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | 512 | 57.82 | 61.77 | 49.11 | 70.41 | 63.96 | 60.92 | 44.18 |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | 1024 | 64.20 | 71.53 | 54.98 | 78.94 | 68.32 | 65.11 | 48.39 |
| [bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | 1024 | 63.53 | 70.55 | 53 | 76.77 | 68.58 | 64.91 | 50.01 |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | 768 | 62.96 | 69.53 | 54.12 | 77.5 | 67.07 | 64.91 | 47.63 |
| [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 1024 | 58.79 | 63.66 | 48.44 | 69.89 | 67.34 | 56.00 | 48.23 |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | 512 | 58.27 | 63.07 | 49.45 | 70.35 | 63.64 | 61.48 | 45.09 |
| [m3e-base](https://huggingface.co/moka-ai/m3e-base) | 768 | 57.10 | 56.91 | 50.47 | 63.99 | 67.52 | 59.34 | 47.68 |
| [m3e-large](https://huggingface.co/moka-ai/m3e-large) | 1024 | 57.05 | 54.75 | 50.42 | 64.3 | 68.2 | 59.66 | 48.88 |
| [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 768 | 55.48 | 61.63 | 46.49 | 67.07 | 65.35 | 54.35 | 40.68 |
| [multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 384 | 55.38 | 59.95 | 45.27 | 66.45 | 65.85 | 53.86 | 45.26 |
| [text-embedding-ada-002(OpenAI)](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) | 1536 | 53.02 | 52.0 | 43.35 | 69.56 | 64.31 | 54.28 | 45.68 |
| [luotuo](https://huggingface.co/silk-road/luotuo-bert-medium) | 1024 | 49.37 | 44.4 | 42.78 | 66.62 | 61 | 49.25 | 44.39 |
| [text2vec-base](https://huggingface.co/shibing624/text2vec-base-chinese) | 768 | 47.63 | 38.79 | 43.41 | 67.41 | 62.19 | 49.45 | 37.66 |
| [text2vec-large](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 1024 | 47.36 | 41.94 | 44.97 | 70.86 | 60.66 | 49.16 | 30.02 |
- **Reranking**:
See [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/) for evaluation script.
| Model | T2Reranking | T2RerankingZh2En\* | T2RerankingEn2Zh\* | MMarcoReranking | CMedQAv1 | CMedQAv2 | Avg |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| text2vec-base-multilingual | 64.66 | 62.94 | 62.51 | 14.37 | 48.46 | 48.6 | 50.26 |
| multilingual-e5-small | 65.62 | 60.94 | 56.41 | 29.91 | 67.26 | 66.54 | 57.78 |
| multilingual-e5-large | 64.55 | 61.61 | 54.28 | 28.6 | 67.42 | 67.92 | 57.4 |
| multilingual-e5-base | 64.21 | 62.13 | 54.68 | 29.5 | 66.23 | 66.98 | 57.29 |
| m3e-base | 66.03 | 62.74 | 56.07 | 17.51 | 77.05 | 76.76 | 59.36 |
| m3e-large | 66.13 | 62.72 | 56.1 | 16.46 | 77.76 | 78.27 | 59.57 |
| bge-base-zh-v1.5 | 66.49 | 63.25 | 57.02 | 29.74 | 80.47 | 84.88 | 63.64 |
| bge-large-zh-v1.5 | 65.74 | 63.39 | 57.03 | 28.74 | 83.45 | 85.44 | 63.97 |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | 67.28 | 63.95 | 60.45 | 35.46 | 81.26 | 84.1 | 65.42 |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | 67.6 | 64.03 | 61.44 | 37.16 | 82.15 | 84.18 | 66.09 |
\* : T2RerankingZh2En and T2RerankingEn2Zh are cross-language retrieval tasks
## Train
### BAAI Embedding
We pre-train the models using [retromae](https://github.com/staoxiao/RetroMAE) and train them on large-scale pair data using contrastive learning.
**You can fine-tune the embedding model on your data following our [examples](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune).**
We also provide a [pre-train example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/pretrain).
Note that the goal of pre-training is to reconstruct the text; the pre-trained model cannot be used for similarity calculation directly and needs to be fine-tuned.
For more training details of bge, see [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md).
### BGE Reranker
A cross-encoder performs full attention over the input pair,
which is more accurate than an embedding model (i.e., a bi-encoder) but more time-consuming.
Therefore, it can be used to re-rank the top-k documents returned by an embedding model.
We train the cross-encoder on multilingual pair data.
The data format is the same as for the embedding model, so you can fine-tune it easily following our [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker).
For more details, please refer to [./FlagEmbedding/reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker)
## Contact
If you have any questions or suggestions related to this project, feel free to open an issue or pull request.
You can also email Shitao Xiao([email protected]) and Zheng Liu([email protected]).
## Citation
If you find this repository useful, please consider giving it a star :star: and a citation
```
@misc{bge_embedding,
title={C-Pack: Packaged Resources To Advance General Chinese Embedding},
author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff},
year={2023},
eprint={2309.07597},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## License
FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
|
NousResearch/CodeLlama-13b-hf | NousResearch | "2023-08-24T17:34:57Z" | 3,248 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"custom_code",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-08-24T17:32:04Z" | Entry not found |
RichardErkhov/PipableAI_-_pip-sql-1.3b-gguf | RichardErkhov | "2024-06-26T00:14:58Z" | 3,248 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-25T23:58:56Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pip-sql-1.3b - GGUF
- Model creator: https://huggingface.co/PipableAI/
- Original model: https://huggingface.co/PipableAI/pip-sql-1.3b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [pip-sql-1.3b.Q2_K.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-sql-1.3b-gguf/blob/main/pip-sql-1.3b.Q2_K.gguf) | Q2_K | 0.52GB |
| [pip-sql-1.3b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-sql-1.3b-gguf/blob/main/pip-sql-1.3b.IQ3_XS.gguf) | IQ3_XS | 0.57GB |
| [pip-sql-1.3b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-sql-1.3b-gguf/blob/main/pip-sql-1.3b.IQ3_S.gguf) | IQ3_S | 0.6GB |
| [pip-sql-1.3b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-sql-1.3b-gguf/blob/main/pip-sql-1.3b.Q3_K_S.gguf) | Q3_K_S | 0.6GB |
| [pip-sql-1.3b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-sql-1.3b-gguf/blob/main/pip-sql-1.3b.IQ3_M.gguf) | IQ3_M | 0.63GB |
| [pip-sql-1.3b.Q3_K.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-sql-1.3b-gguf/blob/main/pip-sql-1.3b.Q3_K.gguf) | Q3_K | 0.66GB |
| [pip-sql-1.3b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-sql-1.3b-gguf/blob/main/pip-sql-1.3b.Q3_K_M.gguf) | Q3_K_M | 0.66GB |
| [pip-sql-1.3b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-sql-1.3b-gguf/blob/main/pip-sql-1.3b.Q3_K_L.gguf) | Q3_K_L | 0.69GB |
| [pip-sql-1.3b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-sql-1.3b-gguf/blob/main/pip-sql-1.3b.IQ4_XS.gguf) | IQ4_XS | 0.7GB |
| [pip-sql-1.3b.Q4_0.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-sql-1.3b-gguf/blob/main/pip-sql-1.3b.Q4_0.gguf) | Q4_0 | 0.72GB |
| [pip-sql-1.3b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-sql-1.3b-gguf/blob/main/pip-sql-1.3b.IQ4_NL.gguf) | IQ4_NL | 0.73GB |
| [pip-sql-1.3b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-sql-1.3b-gguf/blob/main/pip-sql-1.3b.Q4_K_S.gguf) | Q4_K_S | 0.76GB |
| [pip-sql-1.3b.Q4_K.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-sql-1.3b-gguf/blob/main/pip-sql-1.3b.Q4_K.gguf) | Q4_K | 0.81GB |
| [pip-sql-1.3b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-sql-1.3b-gguf/blob/main/pip-sql-1.3b.Q4_K_M.gguf) | Q4_K_M | 0.81GB |
| [pip-sql-1.3b.Q4_1.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-sql-1.3b-gguf/blob/main/pip-sql-1.3b.Q4_1.gguf) | Q4_1 | 0.8GB |
| [pip-sql-1.3b.Q5_0.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-sql-1.3b-gguf/blob/main/pip-sql-1.3b.Q5_0.gguf) | Q5_0 | 0.87GB |
| [pip-sql-1.3b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-sql-1.3b-gguf/blob/main/pip-sql-1.3b.Q5_K_S.gguf) | Q5_K_S | 0.89GB |
| [pip-sql-1.3b.Q5_K.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-sql-1.3b-gguf/blob/main/pip-sql-1.3b.Q5_K.gguf) | Q5_K | 0.93GB |
| [pip-sql-1.3b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-sql-1.3b-gguf/blob/main/pip-sql-1.3b.Q5_K_M.gguf) | Q5_K_M | 0.93GB |
| [pip-sql-1.3b.Q5_1.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-sql-1.3b-gguf/blob/main/pip-sql-1.3b.Q5_1.gguf) | Q5_1 | 0.95GB |
| [pip-sql-1.3b.Q6_K.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-sql-1.3b-gguf/blob/main/pip-sql-1.3b.Q6_K.gguf) | Q6_K | 1.09GB |
| [pip-sql-1.3b.Q8_0.gguf](https://huggingface.co/RichardErkhov/PipableAI_-_pip-sql-1.3b-gguf/blob/main/pip-sql-1.3b.Q8_0.gguf) | Q8_0 | 1.33GB |
Original model description:
---
license: apache-2.0
datasets:
- PipableAI/pip-txt-to-sql-spider-bird-dataset
language:
- en
metrics:
- accuracy
tags:
- sql
- code
- text2sql
- instruction_tuned
- basemodel
- jax
- pytorch
- text-generation-inference
library_name: transformers
pipeline_tag: text-generation
widget:
- text: >-
<schema>CREATE TABLE system(JobID: String,GID: String, UID: String,
Start:Time(yyyy/mm/dd), End: Time,ElapsedRaw: Time, CPUTimeRAW: Time,NCPUS:
Number,NNodes: Number, NodeList: List, State:String, Timelimit:
Time);</schema><question>Get UID and job id for Jobs that started on Jan 20
, 2023 ended on feb 14 2023 and has job id 20</question><sql>
example_title: example
---
# pipSQL-1.3b
[pipableAi](https://www.linkedin.com/company/pipable.ai/about/)
[colab_notebook](https://colab.research.google.com/drive/1insSxvc3jjAXe0zmdIjmbG3ttb5mpRgQ?usp=sharing)
## What have we built?
A 1.3 bn SQL model that outperforms most SQL expert models and ChatGPT on popular benchmarks.
This is a distilled model built on the deepseek base model.
Please refer to https://huggingface.co/PipableAI/pip-library-etl-1.3b for our state of the art model.
## How we built it?
We used softmax cross-entropy and a modified form of policy gradient along with a Q loss, optimized in an EM setup.
Loss behaviour in the setup mentioned above:

## Benchmarking :
For benchmarking purposes we are using Semantic Evaluation for Text-to-SQL with
Distilled Test Suites, an officially accepted evaluation framework for Spider, SParC, and CoSQL that was proposed by a research team from Yale and Berkeley.
The benchmark contains 2,200 test data points.
Here is the link to run the evaluation:
[Test Suite SQL Eval](https://github.com/taoyds/test-suite-sql-eval)
|model|easy|medium|hard|extra|
|-----|----|------|----|-----|
|sqlcoder-7b-2|72.0|58.0|40.6|37.3|
|pipSQL-1.3b|78.5|57.5|42.1|28.3|
|pipSQL-7b|63.0|40.0|30.2|25.0|
|sqlcoder-7b|60.6|48.2|28.3|20.4|
|gpt-3.5|58.8|44.7|31.0|28.4|
We have also benchmarked it on the Defog eval.
It contains 200 test data points handpicked by the Defog team.
Here is the link to it:
[Defog SQL-Eval](https://github.com/defog-ai/sql-eval)
These are the results -

## License
The model is open source under the Apache 2.0 license.
## Usage
### Installation
```bash
pip install transformers
```
### Prompt
```python
prompt = f"""<schema>{schema}</schema>
<question>{question}</question>
<sql>"""
```
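For instance, with a toy schema and question (illustrative values only, not from the original card), the template above expands to the text the model actually sees:
```python
# Illustrative instantiation of the prompt template above.
schema = "CREATE TABLE employees (id number, name text, salary number);"
question = "List the names of employees earning more than 50000."
prompt = f"""<schema>{schema}</schema>
<question>{question}</question>
<sql>"""
print(prompt)
```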
### PyTorch
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda"
model = AutoModelForCausalLM.from_pretrained("PipableAI/pip-sql-1.3b").to(device)
tokenizer = AutoTokenizer.from_pretrained("PipableAI/pip-sql-1.3b")
inputs = tokenizer(prompt, return_tensors="pt").to(device)  # `prompt` built from the template above
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True).split('<sql>')[1].split('</sql>')[0])
```
### Flax
```python
from transformers import FlaxAutoModelForCausalLM, AutoTokenizer
device = "cuda"
model = FlaxAutoModelForCausalLM.from_pretrained("PipableAI/pip-sql-1.3b",from_pt=True)
tokenizer = AutoTokenizer.from_pretrained("PipableAI/pip-sql-1.3b")
inputs = tokenizer(prompt, return_tensors="jax")  # `prompt` built from the template above
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True).split('<sql>')[1].split('</sql>')[0])
```
## Examples
### Schema
```sql
CREATE TABLE Products (
product_id number,
parent_product_id number,
product_name text,
product_price number,
product_color text,
product_size text,
product_description text);
CREATE TABLE Customers (
customer_id number,
gender_code text,
customer_first_name text,
customer_middle_initial text,
customer_last_name text,
email_address text,
login_name text,
login_password text,
phone_number text,
address_line_1 text,
town_city text,
county text,
country text);
CREATE TABLE Customer_Payment_Methods (
customer_id number,
payment_method_code text);
CREATE TABLE Invoices (
invoice_number number,
invoice_status_code text,
invoice_date time);
CREATE TABLE Orders (
order_id number,
customer_id number,
order_status_code text,
date_order_placed time);
CREATE TABLE Order_Items (
order_item_id number,
product_id number,
order_id number,
order_item_status_code text);
CREATE TABLE Shipments (
shipment_id number,
order_id number,
invoice_number number,
shipment_tracking_number text,
shipment_date time);
CREATE TABLE Shipment_Items (
shipment_id number,
order_item_id number);
```
### Questions
What are the email address, town and county of the customers who are of the least common gender?
```sql
SELECT email_address , town_city , county FROM customers GROUP BY gender_code ORDER BY count(*) ASC LIMIT 1
```
What are the product price and the product size of the products whose price is above average?
```sql
SELECT product_price , product_size FROM products WHERE product_price > (SELECT avg(product_price) FROM products)
```
Which customers did not make any orders? List the first name, middle initial and last name.
```sql
SELECT T1.customer_first_name , T1.customer_middle_initial , T1.customer_last_name FROM Customers AS T1 WHERE T1.customer_id NOT IN (SELECT T2.customer_id FROM Orders AS T2)
```
### Team
Avi Kothari, Pratham Gupta, Ritvik Aryan Kalra, Rohan Bhatial, Soham Acharya
|
Yntec/sexyToons | Yntec | "2023-08-04T04:24:16Z" | 3,247 | 16 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"alexds9",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-07-27T09:52:16Z" | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- alexds9
---
# Sexy Toons feat. Pipa
Original pages:
https://civitai.com/models/35549/sexy-toons-feat-pipa |
HuggingFaceTB/cosmo-1b | HuggingFaceTB | "2024-02-23T23:01:10Z" | 3,247 | 117 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:HuggingFaceTB/cosmopedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-19T16:14:36Z" | ---
license: apache-2.0
datasets:
- HuggingFaceTB/cosmopedia
language:
- en
inference:
parameters:
temperature: 0.6
top_p: 0.95
top_k: 50
repetition_penalty: 1.2
widget:
- text: 'Photosynthesis is'
example_title: Textbook
group: Completion
- text: '<s> [INST] How to take care of plants? [/INST] '
example_title: Wikihow
group: Completion
- text: '<s> [INST] Generate a story about a flying cat [/INST] '
example_title: Story
group: Completion
---
# Model Summary
This is a 1.8B model trained on [Cosmopedia](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia) synthetic dataset.
# Training dataset
The training corpus consisted of 30B tokens, 25B of which are synthetic from Cosmopedia. Since we didn't explore the synthetic generation of code, we augmented the dataset with 5B tokens of non-synthetic sources like the `code-python-0.60-to-1.00` and `web-0.50-to-1.00` subsets of [AutoMathText](https://huggingface.co/datasets/math-ai/AutoMathText). We also added 1M files from [The Stack](https://huggingface.co/datasets/bigcode/the-stack)'s Jupyter Notebooks, converted to script. They tend to have educational code interleaved with text.
We also included [ultrachat](https://huggingface.co/datasets/stingning/ultrachat) formatted in the chat format of `LlaMa` models, so we don't have to instruction-tune the model after the pre-training. Additionally, we upsampled the data from these seed sources twice to help with commonsense and reasoning: stories, AutoMathText & KhanAcademy.
We trained for 6 epochs, resulting in a model trained on 180B tokens with a sequence length of 2k, a global batch size of 1.3M tokens and a learning rate of 3e-4 with a cosine schedule for 140k steps.
We used the tokenizer from [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1/).
# How to use
Although the model wasn't instruction-tuned after pre-training, UltraChat was included in the pre-training data, so you can use it in a chat format as follows:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/cosmo-1b")
model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/cosmo-1b").to(device)
prompt = "Generate a story involving a dog, an astronaut and a baker"
prompt= tokenizer.apply_chat_template([{"role": "user", "content": prompt}], tokenize=False)
inputs = tokenizer(prompt, return_tensors="pt").to(device)
output = model.generate(**inputs, max_length=300, do_sample=True, temperature=0.6, top_p=0.95, repetition_penalty=1.2)
print(tokenizer.decode(output[0]))
```
Output:
```
<s><s> [INST] Generate a story involving a dog, an astronaut and a baker [/INST] Once upon a time, there was a sweet little terrier named Max who lived in the bustling city of New York. He loved nothing more than chasing after his owner, Dr. Sarah Johnson, as she worked tirelessly on her latest invention - a spaceship that would take humans to distant galaxies!
One day, Dr. Johnson decided it was time for her to leave Earth's atmosphere behind and embark on this exciting adventure with her loyal companion, Max. She knew he had always been fascinated by space travel, so she hoped he would be just as excited about the journey ahead.
As they boarded their rocket ship and blasted off into outer space, Max felt both nervous and thrilled at the same time. His ears perked up every time they passed clouds or saw stars twinkling far out from earth. But as days turned into weeks, Max started feeling homesick. The vast emptiness around him made him feel lonely and isolated.
Meanwhile back on planet Earth, Mr. Baker was busy baking cookies when suddenly, an idea popped into his head. Why not send some treats along with Dr. Johnson's family? It might make them all feel better knowing that someone else was also having fun exploring the universe.
```
You can also use the model in text completion mode, i.e. without applying the chat template, but it might not follow instructions.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/cosmo-1b")
model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/cosmo-1b").to(device)
prompt = "Photosynthesis is"
inputs = tokenizer(prompt, return_tensors="pt").to(device)
output = model.generate(**inputs, max_length=300, do_sample=True, temperature=0.6, top_p=0.95, repetition_penalty=1.2)
print(tokenizer.decode(output[0]))
```
Output:
```
<s> Photosynthesis is the process by which green plants, algae and some bacteria convert light energy into chemical energy in order to fuel their metabolic processes. The reaction takes place within specialized cells called chloroplasts. This article focuses on the electron transport chain (ETC), a critical part of photosystem II where most of the solar-driven electrons are passed through before being reduced to water.
```
# Evaluation
Below are the evaluation results of Cosmo-1B. The model is better than TinyLlama 1.1B on ARC-easy, ARC-challenge, OpenBookQA and MMLU, and has comparable performance to Qwen-1.5-1B on ARC-challenge and OpenBookQA.
However, we notice some performance gaps compared to Phi-1.5, suggesting higher synthetic generation quality on their side, which can be related to the LLM used for generation, topic coverage or prompts.

# Limitations
This is a small 1.8B model trained on synthetic data, so it might hallucinate, give incomplete or incorrect answers.
# Training
## Model
- **Architecture:** Llama-2
- **Pretraining steps:** 120k
- **Pretraining tokens:** 180B
- **Precision:** bfloat16
## Hardware
- **GPUs:** 160 H100
- **Training time:** 15 hours
The training loss:

|
OpenBuddy/openbuddy-codellama2-34b-v11.1-bf16 | OpenBuddy | "2023-09-20T06:40:58Z" | 3,246 | 11 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-09-08T02:21:55Z" | ---
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
pipeline_tag: text-generation
inference: false
library_name: transformers
---
# OpenBuddy - Open Multilingual Chatbot
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)

# Copyright Notice
This model is built upon Meta's LLaMA series of models and is subject to Meta's licensing agreement.
This model is intended for use only by individuals who have obtained approval from Meta and are eligible to download LLaMA.
If you have not obtained approval from Meta, you must visit the https://ai.meta.com/llama/ page, read and agree to the model's licensing agreement, submit an application, and wait for approval from Meta before downloading the model from this page.
## Disclaimer
All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
mradermacher/L3-SnowStorm-v1.15-4x8B-B-GGUF | mradermacher | "2024-05-28T22:58:03Z" | 3,244 | 2 | transformers | [
"transformers",
"gguf",
"moe",
"en",
"base_model:xxx777xxxASD/L3-SnowStorm-v1.15-4x8B-B",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | "2024-05-28T21:29:30Z" | ---
base_model: xxx777xxxASD/L3-SnowStorm-v1.15-4x8B-B
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- moe
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/xxx777xxxASD/L3-SnowStorm-v1.15-4x8B-B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/L3-SnowStorm-v1.15-4x8B-B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
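As one concrete illustration (not an official recipe for this repo), a single-file quant can be run locally with the `llama-cpp-python` bindings; the file name below matches the Q4_K_M entry in the table that follows, while the context size, prompt, and sampling settings are placeholder assumptions:

```python
from llama_cpp import Llama

# Sketch: load a locally downloaded quant and generate a short completion.
# Adjust model_path to wherever you saved the file; n_ctx is an arbitrary choice here.
llm = Llama(model_path="L3-SnowStorm-v1.15-4x8B-B.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write a two-sentence story about a snowstorm.", max_tokens=128)
print(out["choices"][0]["text"])
```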
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3-SnowStorm-v1.15-4x8B-B-GGUF/resolve/main/L3-SnowStorm-v1.15-4x8B-B.Q2_K.gguf) | Q2_K | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/L3-SnowStorm-v1.15-4x8B-B-GGUF/resolve/main/L3-SnowStorm-v1.15-4x8B-B.IQ3_XS.gguf) | IQ3_XS | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3-SnowStorm-v1.15-4x8B-B-GGUF/resolve/main/L3-SnowStorm-v1.15-4x8B-B.Q3_K_S.gguf) | Q3_K_S | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3-SnowStorm-v1.15-4x8B-B-GGUF/resolve/main/L3-SnowStorm-v1.15-4x8B-B.IQ3_S.gguf) | IQ3_S | 11.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3-SnowStorm-v1.15-4x8B-B-GGUF/resolve/main/L3-SnowStorm-v1.15-4x8B-B.IQ3_M.gguf) | IQ3_M | 11.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3-SnowStorm-v1.15-4x8B-B-GGUF/resolve/main/L3-SnowStorm-v1.15-4x8B-B.Q3_K_M.gguf) | Q3_K_M | 12.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3-SnowStorm-v1.15-4x8B-B-GGUF/resolve/main/L3-SnowStorm-v1.15-4x8B-B.Q3_K_L.gguf) | Q3_K_L | 13.1 | |
| [GGUF](https://huggingface.co/mradermacher/L3-SnowStorm-v1.15-4x8B-B-GGUF/resolve/main/L3-SnowStorm-v1.15-4x8B-B.IQ4_XS.gguf) | IQ4_XS | 13.7 | |
| [GGUF](https://huggingface.co/mradermacher/L3-SnowStorm-v1.15-4x8B-B-GGUF/resolve/main/L3-SnowStorm-v1.15-4x8B-B.Q4_K_S.gguf) | Q4_K_S | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-SnowStorm-v1.15-4x8B-B-GGUF/resolve/main/L3-SnowStorm-v1.15-4x8B-B.Q4_K_M.gguf) | Q4_K_M | 15.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-SnowStorm-v1.15-4x8B-B-GGUF/resolve/main/L3-SnowStorm-v1.15-4x8B-B.Q5_K_S.gguf) | Q5_K_S | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/L3-SnowStorm-v1.15-4x8B-B-GGUF/resolve/main/L3-SnowStorm-v1.15-4x8B-B.Q5_K_M.gguf) | Q5_K_M | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-SnowStorm-v1.15-4x8B-B-GGUF/resolve/main/L3-SnowStorm-v1.15-4x8B-B.Q6_K.gguf) | Q6_K | 20.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/L3-SnowStorm-v1.15-4x8B-B-GGUF/resolve/main/L3-SnowStorm-v1.15-4x8B-B.Q8_0.gguf) | Q8_0 | 26.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
GraydientPlatformAPI/realpony-xl | GraydientPlatformAPI | "2024-03-30T03:12:08Z" | 3,243 | 1 | diffusers | [
"diffusers",
"safetensors",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-03-30T02:49:35Z" | ---
license: openrail
---
|
mradermacher/Mixtral-4x7B-Chat-Chinese-GGUF | mradermacher | "2024-06-07T19:55:25Z" | 3,242 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:XuYipei/Mixtral-4x7B-Chat-Chinese",
"endpoints_compatible",
"region:us"
] | null | "2024-06-07T18:25:46Z" | ---
base_model: XuYipei/Mixtral-4x7B-Chat-Chinese
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/XuYipei/Mixtral-4x7B-Chat-Chinese
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mixtral-4x7B-Chat-Chinese-GGUF/resolve/main/Mixtral-4x7B-Chat-Chinese.Q2_K.gguf) | Q2_K | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-4x7B-Chat-Chinese-GGUF/resolve/main/Mixtral-4x7B-Chat-Chinese.IQ3_XS.gguf) | IQ3_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-4x7B-Chat-Chinese-GGUF/resolve/main/Mixtral-4x7B-Chat-Chinese.Q3_K_S.gguf) | Q3_K_S | 10.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-4x7B-Chat-Chinese-GGUF/resolve/main/Mixtral-4x7B-Chat-Chinese.IQ3_S.gguf) | IQ3_S | 10.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-4x7B-Chat-Chinese-GGUF/resolve/main/Mixtral-4x7B-Chat-Chinese.IQ3_M.gguf) | IQ3_M | 10.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-4x7B-Chat-Chinese-GGUF/resolve/main/Mixtral-4x7B-Chat-Chinese.Q3_K_M.gguf) | Q3_K_M | 11.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-4x7B-Chat-Chinese-GGUF/resolve/main/Mixtral-4x7B-Chat-Chinese.Q3_K_L.gguf) | Q3_K_L | 12.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-4x7B-Chat-Chinese-GGUF/resolve/main/Mixtral-4x7B-Chat-Chinese.IQ4_XS.gguf) | IQ4_XS | 13.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-4x7B-Chat-Chinese-GGUF/resolve/main/Mixtral-4x7B-Chat-Chinese.Q4_K_S.gguf) | Q4_K_S | 14.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-4x7B-Chat-Chinese-GGUF/resolve/main/Mixtral-4x7B-Chat-Chinese.Q4_K_M.gguf) | Q4_K_M | 14.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-4x7B-Chat-Chinese-GGUF/resolve/main/Mixtral-4x7B-Chat-Chinese.Q5_K_S.gguf) | Q5_K_S | 16.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-4x7B-Chat-Chinese-GGUF/resolve/main/Mixtral-4x7B-Chat-Chinese.Q5_K_M.gguf) | Q5_K_M | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-4x7B-Chat-Chinese-GGUF/resolve/main/Mixtral-4x7B-Chat-Chinese.Q6_K.gguf) | Q6_K | 20.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-4x7B-Chat-Chinese-GGUF/resolve/main/Mixtral-4x7B-Chat-Chinese.Q8_0.gguf) | Q8_0 | 26.0 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
NbAiLab/wav2vec2-large-danish-npsc-nst | NbAiLab | "2023-09-13T08:38:18Z" | 3,241 | 1 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:chcaa/xls-r-300m-danish",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-08-19T12:20:01Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: chcaa/xls-r-300m-danish
model-index:
- name: wav2vec2-large-danish-npsc-nst
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-danish-npsc-nst
This model is a fine-tuned version of [chcaa/xls-r-300m-danish](https://huggingface.co/chcaa/xls-r-300m-danish) on a dataset not named in this auto-generated card (presumably the NPSC and NST data referenced in the model name).
It achieves the following results on the evaluation set:
- Loss: 0.0587
- Wer: 0.0669
## Model description
More information needed
## Intended uses & limitations
More information needed
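In the absence of documented usage, a minimal sketch of transcribing a Danish audio file with the 🤗 Transformers ASR pipeline might look like the following (the audio path is a placeholder):

```python
from transformers import pipeline

# Sketch only: load this checkpoint into the automatic-speech-recognition pipeline
# and transcribe a local 16 kHz mono audio file (the path is a placeholder).
asr = pipeline("automatic-speech-recognition", model="NbAiLab/wav2vec2-large-danish-npsc-nst")
print(asr("sample_danish_audio.wav")["text"])
```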
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:------:|:---------------:|:------:|
| 3.0474 | 0.05 | 500 | 2.9879 | 1.0 |
| 1.7255 | 0.11 | 1000 | 1.1271 | 0.9732 |
| 0.8932 | 0.16 | 1500 | 0.5474 | 0.5983 |
| 0.7358 | 0.21 | 2000 | 0.4152 | 0.4584 |
| 0.5939 | 0.26 | 2500 | 0.3455 | 0.3860 |
| 0.5437 | 0.32 | 3000 | 0.3024 | 0.3473 |
| 0.5128 | 0.37 | 3500 | 0.2708 | 0.3043 |
| 0.4682 | 0.42 | 4000 | 0.2462 | 0.2841 |
| 0.4392 | 0.47 | 4500 | 0.2374 | 0.2639 |
| 0.4022 | 0.53 | 5000 | 0.2182 | 0.2457 |
| 0.4106 | 0.58 | 5500 | 0.2066 | 0.2331 |
| 0.3883 | 0.63 | 6000 | 0.1997 | 0.2267 |
| 0.3701 | 0.69 | 6500 | 0.1862 | 0.2164 |
| 0.3628 | 0.74 | 7000 | 0.1817 | 0.2051 |
| 0.3468 | 0.79 | 7500 | 0.1716 | 0.1963 |
| 0.3311 | 0.84 | 8000 | 0.1704 | 0.1924 |
| 0.3329 | 0.9 | 8500 | 0.1620 | 0.1873 |
| 0.3179 | 0.95 | 9000 | 0.1575 | 0.1811 |
| 0.3162 | 1.0 | 9500 | 0.1519 | 0.1729 |
| 0.296 | 1.05 | 10000 | 0.1503 | 0.1701 |
| 0.2912 | 1.11 | 10500 | 0.1473 | 0.1673 |
| 0.296 | 1.16 | 11000 | 0.1422 | 0.1649 |
| 0.3615 | 1.21 | 11500 | 0.1373 | 0.1607 |
| 0.276 | 1.26 | 12000 | 0.1376 | 0.1572 |
| 0.2719 | 1.32 | 12500 | 0.1349 | 0.1540 |
| 0.2717 | 1.37 | 13000 | 0.1306 | 0.1524 |
| 0.2742 | 1.42 | 13500 | 0.1280 | 0.1515 |
| 0.261 | 1.48 | 14000 | 0.1246 | 0.1480 |
| 0.2532 | 1.53 | 14500 | 0.1266 | 0.1460 |
| 0.2501 | 1.58 | 15000 | 0.1227 | 0.1435 |
| 0.2705 | 1.63 | 15500 | 0.1202 | 0.1400 |
| 0.2433 | 1.69 | 16000 | 0.1190 | 0.1396 |
| 0.2526 | 1.74 | 16500 | 0.1182 | 0.1381 |
| 0.2362 | 1.79 | 17000 | 0.1169 | 0.1363 |
| 0.2359 | 1.84 | 17500 | 0.1166 | 0.1362 |
| 0.2341 | 1.9 | 18000 | 0.1133 | 0.1336 |
| 0.2347 | 1.95 | 18500 | 0.1121 | 0.1311 |
| 0.2428 | 2.0 | 19000 | 0.1109 | 0.1293 |
| 0.229 | 2.06 | 19500 | 0.1104 | 0.1275 |
| 0.2307 | 2.11 | 20000 | 0.1084 | 0.1278 |
| 0.2287 | 2.16 | 20500 | 0.1070 | 0.1260 |
| 0.217 | 2.21 | 21000 | 0.1066 | 0.1242 |
| 0.2056 | 2.27 | 21500 | 0.1058 | 0.1235 |
| 0.2039 | 2.32 | 22000 | 0.1013 | 0.1211 |
| 0.192 | 2.37 | 22500 | 0.1028 | 0.1198 |
| 0.2129 | 2.42 | 23000 | 0.1035 | 0.1202 |
| 0.1972 | 2.48 | 23500 | 0.1002 | 0.1197 |
| 0.2136 | 2.53 | 24000 | 0.1014 | 0.1183 |
| 0.2176 | 2.58 | 24500 | 0.0990 | 0.1181 |
| 0.2029 | 2.64 | 25000 | 0.0975 | 0.1170 |
| 0.2015 | 2.69 | 25500 | 0.0981 | 0.1168 |
| 0.2 | 2.74 | 26000 | 0.0960 | 0.1145 |
| 0.2018 | 2.79 | 26500 | 0.0966 | 0.1152 |
| 0.1935 | 2.85 | 27000 | 0.0936 | 0.1144 |
| 0.1944 | 2.9 | 27500 | 0.0931 | 0.1129 |
| 0.1862 | 2.95 | 28000 | 0.0920 | 0.1126 |
| 0.1805 | 3.0 | 28500 | 0.0941 | 0.1103 |
| 0.1764 | 3.06 | 29000 | 0.0940 | 0.1103 |
| 0.1859 | 3.11 | 29500 | 0.0915 | 0.1095 |
| 0.1865 | 3.16 | 30000 | 0.0925 | 0.1099 |
| 0.1713 | 3.21 | 30500 | 0.0907 | 0.1085 |
| 0.1917 | 3.27 | 31000 | 0.0898 | 0.1075 |
| 0.1708 | 3.32 | 31500 | 0.0909 | 0.1067 |
| 0.1754 | 3.37 | 32000 | 0.0892 | 0.1078 |
| 0.1813 | 3.43 | 32500 | 0.0895 | 0.1063 |
| 0.1842 | 3.48 | 33000 | 0.0882 | 0.1059 |
| 0.1834 | 3.53 | 33500 | 0.0883 | 0.1048 |
| 0.1746 | 3.58 | 34000 | 0.0866 | 0.1037 |
| 0.1765 | 3.64 | 34500 | 0.0860 | 0.1047 |
| 0.1747 | 3.69 | 35000 | 0.0873 | 0.1038 |
| 0.1741 | 3.74 | 35500 | 0.0851 | 0.1028 |
| 0.1589 | 3.79 | 36000 | 0.0851 | 0.1026 |
| 0.1659 | 3.85 | 36500 | 0.0844 | 0.1017 |
| 0.1716 | 3.9 | 37000 | 0.0831 | 0.1008 |
| 0.1728 | 3.95 | 37500 | 0.0828 | 0.1015 |
| 0.1877 | 4.01 | 38000 | 0.0847 | 0.1007 |
| 0.166 | 4.06 | 38500 | 0.0834 | 0.1005 |
| 0.151 | 4.11 | 39000 | 0.0836 | 0.0998 |
| 0.1614 | 4.16 | 39500 | 0.0848 | 0.1008 |
| 0.161 | 4.22 | 40000 | 0.0818 | 0.1005 |
| 0.162 | 4.27 | 40500 | 0.0822 | 0.0990 |
| 0.1702 | 4.32 | 41000 | 0.0806 | 0.0976 |
| 0.1585 | 4.37 | 41500 | 0.0807 | 0.0983 |
| 0.1682 | 4.43 | 42000 | 0.0792 | 0.0972 |
| 0.1607 | 4.48 | 42500 | 0.0797 | 0.0966 |
| 0.161 | 4.53 | 43000 | 0.0817 | 0.0978 |
| 0.1562 | 4.59 | 43500 | 0.0781 | 0.0965 |
| 0.1556 | 4.64 | 44000 | 0.0800 | 0.0962 |
| 0.1516 | 4.69 | 44500 | 0.0779 | 0.0948 |
| 0.1518 | 4.74 | 45000 | 0.0786 | 0.0945 |
| 0.1587 | 4.8 | 45500 | 0.0782 | 0.0939 |
| 0.1653 | 4.85 | 46000 | 0.0772 | 0.0935 |
| 0.1592 | 4.9 | 46500 | 0.0768 | 0.0933 |
| 0.1517 | 4.95 | 47000 | 0.0772 | 0.0928 |
| 0.1515 | 5.01 | 47500 | 0.0780 | 0.0917 |
| 0.2077 | 5.06 | 48000 | 0.0780 | 0.0925 |
| 0.1531 | 5.11 | 48500 | 0.0758 | 0.0909 |
| 0.155 | 5.16 | 49000 | 0.0757 | 0.0901 |
| 0.1501 | 5.22 | 49500 | 0.0767 | 0.0895 |
| 0.1435 | 5.27 | 50000 | 0.0759 | 0.0890 |
| 0.1449 | 5.32 | 50500 | 0.0762 | 0.0896 |
| 0.1489 | 5.38 | 51000 | 0.0743 | 0.0880 |
| 0.1456 | 5.43 | 51500 | 0.0757 | 0.0883 |
| 0.1515 | 5.48 | 52000 | 0.0751 | 0.0891 |
| 0.1446 | 5.53 | 52500 | 0.0739 | 0.0870 |
| 0.1503 | 5.59 | 53000 | 0.0731 | 0.0872 |
| 0.1405 | 5.64 | 53500 | 0.0731 | 0.0865 |
| 0.1385 | 5.69 | 54000 | 0.0737 | 0.0859 |
| 0.1439 | 5.74 | 54500 | 0.0732 | 0.0860 |
| 0.1378 | 5.8 | 55000 | 0.0750 | 0.0859 |
| 0.1441 | 5.85 | 55500 | 0.0713 | 0.0855 |
| 0.1288 | 5.9 | 56000 | 0.0733 | 0.0851 |
| 0.1484 | 5.96 | 56500 | 0.0718 | 0.0848 |
| 0.1437 | 6.01 | 57000 | 0.0714 | 0.0843 |
| 0.138 | 6.06 | 57500 | 0.0722 | 0.0848 |
| 0.1356 | 6.11 | 58000 | 0.0717 | 0.0845 |
| 0.1291 | 6.17 | 58500 | 0.0713 | 0.0843 |
| 0.1269 | 6.22 | 59000 | 0.0720 | 0.0840 |
| 0.133 | 6.27 | 59500 | 0.0709 | 0.0839 |
| 0.1402 | 6.32 | 60000 | 0.0706 | 0.0833 |
| 0.1332 | 6.38 | 60500 | 0.0709 | 0.0836 |
| 0.1271 | 6.43 | 61000 | 0.0705 | 0.0825 |
| 0.1279 | 6.48 | 61500 | 0.0710 | 0.0828 |
| 0.1317 | 6.54 | 62000 | 0.0699 | 0.0831 |
| 0.133 | 6.59 | 62500 | 0.0699 | 0.0826 |
| 0.1259 | 6.64 | 63000 | 0.0703 | 0.0818 |
| 0.1377 | 6.69 | 63500 | 0.0701 | 0.0817 |
| 0.136 | 6.75 | 64000 | 0.0701 | 0.0817 |
| 0.1218 | 6.8 | 64500 | 0.0699 | 0.0816 |
| 0.1239 | 6.85 | 65000 | 0.0687 | 0.0817 |
| 0.1331 | 6.9 | 65500 | 0.0696 | 0.0810 |
| 0.1252 | 6.96 | 66000 | 0.0679 | 0.0806 |
| 0.1381 | 7.01 | 66500 | 0.0688 | 0.0804 |
| 0.1232 | 7.06 | 67000 | 0.0688 | 0.0808 |
| 0.1288 | 7.11 | 67500 | 0.0686 | 0.0803 |
| 0.1223 | 7.17 | 68000 | 0.0684 | 0.0795 |
| 0.1344 | 7.22 | 68500 | 0.0679 | 0.0799 |
| 0.1272 | 7.27 | 69000 | 0.0683 | 0.0798 |
| 0.129 | 7.33 | 69500 | 0.0689 | 0.0799 |
| 0.118 | 7.38 | 70000 | 0.0684 | 0.0788 |
| 0.1351 | 7.43 | 70500 | 0.0681 | 0.0792 |
| 0.1213 | 7.48 | 71000 | 0.0671 | 0.0781 |
| 0.1311 | 7.54 | 71500 | 0.0666 | 0.0787 |
| 0.1194 | 7.59 | 72000 | 0.0665 | 0.0789 |
| 0.1216 | 7.64 | 72500 | 0.0664 | 0.0779 |
| 0.1188 | 7.69 | 73000 | 0.0665 | 0.0783 |
| 0.1161 | 7.75 | 73500 | 0.0661 | 0.0777 |
| 0.1279 | 7.8 | 74000 | 0.0654 | 0.0782 |
| 0.1243 | 7.85 | 74500 | 0.0664 | 0.0776 |
| 0.1223 | 7.91 | 75000 | 0.0648 | 0.0778 |
| 0.123 | 7.96 | 75500 | 0.0650 | 0.0772 |
| 0.1182 | 8.01 | 76000 | 0.0663 | 0.0773 |
| 0.1199 | 8.06 | 76500 | 0.0662 | 0.0776 |
| 0.1158 | 8.12 | 77000 | 0.0667 | 0.0772 |
| 0.1142 | 8.17 | 77500 | 0.0672 | 0.0773 |
| 0.1174 | 8.22 | 78000 | 0.0668 | 0.0765 |
| 0.1204 | 8.27 | 78500 | 0.0661 | 0.0769 |
| 0.1121 | 8.33 | 79000 | 0.0666 | 0.0769 |
| 0.1211 | 8.38 | 79500 | 0.0652 | 0.0758 |
| 0.1214 | 8.43 | 80000 | 0.0656 | 0.0764 |
| 0.1159 | 8.49 | 80500 | 0.0653 | 0.0762 |
| 0.2059 | 8.54 | 81000 | 0.0664 | 0.0765 |
| 0.1145 | 8.59 | 81500 | 0.0653 | 0.0759 |
| 0.1162 | 8.64 | 82000 | 0.0650 | 0.0761 |
| 0.1142 | 8.7 | 82500 | 0.0651 | 0.0764 |
| 0.1183 | 8.75 | 83000 | 0.0649 | 0.0753 |
| 0.112 | 8.8 | 83500 | 0.0657 | 0.0756 |
| 0.1175 | 8.85 | 84000 | 0.0639 | 0.0753 |
| 0.1154 | 8.91 | 84500 | 0.0640 | 0.0752 |
| 0.107 | 8.96 | 85000 | 0.0651 | 0.0747 |
| 0.1105 | 9.01 | 85500 | 0.0646 | 0.0745 |
| 0.1129 | 9.07 | 86000 | 0.0651 | 0.0743 |
| 0.1112 | 9.12 | 86500 | 0.0643 | 0.0743 |
| 0.108 | 9.17 | 87000 | 0.0639 | 0.0743 |
| 0.1114 | 9.22 | 87500 | 0.0643 | 0.0738 |
| 0.1136 | 9.28 | 88000 | 0.0635 | 0.0747 |
| 0.1086 | 9.33 | 88500 | 0.0633 | 0.0742 |
| 0.1097 | 9.38 | 89000 | 0.0629 | 0.0745 |
| 0.1094 | 9.43 | 89500 | 0.0632 | 0.0734 |
| 0.1107 | 9.49 | 90000 | 0.0637 | 0.0737 |
| 0.1072 | 9.54 | 90500 | 0.0633 | 0.0734 |
| 0.101 | 9.59 | 91000 | 0.0633 | 0.0733 |
| 0.1076 | 9.64 | 91500 | 0.0631 | 0.0730 |
| 0.1135 | 9.7 | 92000 | 0.0623 | 0.0725 |
| 0.1168 | 9.75 | 92500 | 0.0625 | 0.0727 |
| 0.1047 | 9.8 | 93000 | 0.0625 | 0.0731 |
| 0.0992 | 9.86 | 93500 | 0.0628 | 0.0726 |
| 0.1026 | 9.91 | 94000 | 0.0630 | 0.0724 |
| 0.1129 | 9.96 | 94500 | 0.0615 | 0.0725 |
| 0.1088 | 10.01 | 95000 | 0.0623 | 0.0726 |
| 0.107 | 10.07 | 95500 | 0.0630 | 0.0719 |
| 0.115 | 10.12 | 96000 | 0.0623 | 0.0722 |
| 0.1037 | 10.17 | 96500 | 0.0622 | 0.0715 |
| 0.1028 | 10.22 | 97000 | 0.0612 | 0.0717 |
| 0.1025 | 10.28 | 97500 | 0.0618 | 0.0715 |
| 0.1075 | 10.33 | 98000 | 0.0610 | 0.0719 |
| 0.1035 | 10.38 | 98500 | 0.0627 | 0.0715 |
| 0.1038 | 10.44 | 99000 | 0.0608 | 0.0721 |
| 0.0968 | 10.49 | 99500 | 0.0618 | 0.0712 |
| 0.1095 | 10.54 | 100000 | 0.0621 | 0.0713 |
| 0.1957 | 10.59 | 100500 | 0.0606 | 0.0717 |
| 0.1032 | 10.65 | 101000 | 0.0613 | 0.0708 |
| 0.1104 | 10.7 | 101500 | 0.0622 | 0.0709 |
| 0.1071 | 10.75 | 102000 | 0.0612 | 0.0707 |
| 0.1133 | 10.8 | 102500 | 0.0618 | 0.0703 |
| 0.1017 | 10.86 | 103000 | 0.0616 | 0.0703 |
| 0.0943 | 10.91 | 103500 | 0.0613 | 0.0704 |
| 0.1067 | 10.96 | 104000 | 0.0602 | 0.0704 |
| 0.1078 | 11.02 | 104500 | 0.0602 | 0.0705 |
| 0.1088 | 11.07 | 105000 | 0.0617 | 0.0704 |
| 0.101 | 11.12 | 105500 | 0.0609 | 0.0703 |
| 0.0956 | 11.17 | 106000 | 0.0608 | 0.0701 |
| 0.0995 | 11.23 | 106500 | 0.0614 | 0.0702 |
| 0.0917 | 11.28 | 107000 | 0.0611 | 0.0698 |
| 0.1023 | 11.33 | 107500 | 0.0608 | 0.0697 |
| 0.1107 | 11.38 | 108000 | 0.0607 | 0.0699 |
| 0.0945 | 11.44 | 108500 | 0.0610 | 0.0695 |
| 0.1043 | 11.49 | 109000 | 0.0603 | 0.0697 |
| 0.1007 | 11.54 | 109500 | 0.0606 | 0.0699 |
| 0.0997 | 11.59 | 110000 | 0.0596 | 0.0696 |
| 0.0958 | 11.65 | 110500 | 0.0602 | 0.0695 |
| 0.1011 | 11.7 | 111000 | 0.0602 | 0.0698 |
| 0.0996 | 11.75 | 111500 | 0.0605 | 0.0695 |
| 0.0993 | 11.81 | 112000 | 0.0610 | 0.0691 |
| 0.0958 | 11.86 | 112500 | 0.0603 | 0.0693 |
| 0.0997 | 11.91 | 113000 | 0.0600 | 0.0695 |
| 0.0991 | 11.96 | 113500 | 0.0596 | 0.0691 |
| 0.096 | 12.02 | 114000 | 0.0603 | 0.0688 |
| 0.0971 | 12.07 | 114500 | 0.0605 | 0.0688 |
| 0.0948 | 12.12 | 115000 | 0.0601 | 0.0687 |
| 0.0981 | 12.17 | 115500 | 0.0596 | 0.0690 |
| 0.0928 | 12.23 | 116000 | 0.0599 | 0.0689 |
| 0.0959 | 12.28 | 116500 | 0.0602 | 0.0686 |
| 0.0925 | 12.33 | 117000 | 0.0598 | 0.0686 |
| 0.0998 | 12.39 | 117500 | 0.0593 | 0.0689 |
| 0.0996 | 12.44 | 118000 | 0.0600 | 0.0685 |
| 0.0981 | 12.49 | 118500 | 0.0600 | 0.0685 |
| 0.1045 | 12.54 | 119000 | 0.0593 | 0.0684 |
| 0.0944 | 12.6 | 119500 | 0.0594 | 0.0684 |
| 0.0874 | 12.65 | 120000 | 0.0590 | 0.0686 |
| 0.092 | 12.7 | 120500 | 0.0597 | 0.0681 |
| 0.0931 | 12.75 | 121000 | 0.0599 | 0.0678 |
| 0.092 | 12.81 | 121500 | 0.0604 | 0.0679 |
| 0.1033 | 12.86 | 122000 | 0.0591 | 0.0684 |
| 0.0935 | 12.91 | 122500 | 0.0594 | 0.0680 |
| 0.1062 | 12.97 | 123000 | 0.0594 | 0.0677 |
| 0.0948 | 13.02 | 123500 | 0.0600 | 0.0679 |
| 0.0939 | 13.07 | 124000 | 0.0599 | 0.0677 |
| 0.0962 | 13.12 | 124500 | 0.0602 | 0.0676 |
| 0.1015 | 13.18 | 125000 | 0.0594 | 0.0676 |
| 0.0976 | 13.23 | 125500 | 0.0586 | 0.0676 |
| 0.0876 | 13.28 | 126000 | 0.0595 | 0.0679 |
| 0.0864 | 13.33 | 126500 | 0.0597 | 0.0675 |
| 0.1756 | 13.39 | 127000 | 0.0583 | 0.0678 |
| 0.0963 | 13.44 | 127500 | 0.0584 | 0.0677 |
| 0.0956 | 13.49 | 128000 | 0.0593 | 0.0676 |
| 0.0927 | 13.54 | 128500 | 0.0594 | 0.0674 |
| 0.0969 | 13.6 | 129000 | 0.0588 | 0.0674 |
| 0.1005 | 13.65 | 129500 | 0.0586 | 0.0674 |
| 0.097 | 13.7 | 130000 | 0.0593 | 0.0674 |
| 0.0944 | 13.76 | 130500 | 0.0596 | 0.0673 |
| 0.0908 | 13.81 | 131000 | 0.0593 | 0.0674 |
| 0.0928 | 13.86 | 131500 | 0.0594 | 0.0673 |
| 0.0934 | 13.91 | 132000 | 0.0592 | 0.0674 |
| 0.0967 | 13.97 | 132500 | 0.0589 | 0.0672 |
| 0.0921 | 14.02 | 133000 | 0.0585 | 0.0671 |
| 0.0927 | 14.07 | 133500 | 0.0590 | 0.0673 |
| 0.0908 | 14.12 | 134000 | 0.0588 | 0.0672 |
| 0.0875 | 14.18 | 134500 | 0.0591 | 0.0672 |
| 0.097 | 14.23 | 135000 | 0.0589 | 0.0670 |
| 0.0915 | 14.28 | 135500 | 0.0586 | 0.0670 |
| 0.094 | 14.34 | 136000 | 0.0587 | 0.0672 |
| 0.09 | 14.39 | 136500 | 0.0589 | 0.0672 |
| 0.0969 | 14.44 | 137000 | 0.0586 | 0.0670 |
| 0.0925 | 14.49 | 137500 | 0.0585 | 0.0670 |
| 0.1005 | 14.55 | 138000 | 0.0584 | 0.0668 |
| 0.0907 | 14.6 | 138500 | 0.0584 | 0.0669 |
| 0.0891 | 14.65 | 139000 | 0.0587 | 0.0669 |
| 0.0907 | 14.7 | 139500 | 0.0588 | 0.0669 |
| 0.0948 | 14.76 | 140000 | 0.0586 | 0.0668 |
| 0.0907 | 14.81 | 140500 | 0.0585 | 0.0670 |
| 0.0914 | 14.86 | 141000 | 0.0586 | 0.0669 |
| 0.0867 | 14.92 | 141500 | 0.0587 | 0.0669 |
| 0.0877 | 14.97 | 142000 | 0.0587 | 0.0669 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
mradermacher/MythoMist-7b-GGUF | mradermacher | "2024-06-06T02:00:59Z" | 3,239 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Gryphe/MythoMist-7b",
"license:other",
"endpoints_compatible",
"region:us"
] | null | "2024-06-05T20:27:44Z" | ---
base_model: Gryphe/MythoMist-7b
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Gryphe/MythoMist-7b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/MythoMist-7b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
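For example (a typical workflow, not an official instruction for this repo), a single quant file can be fetched programmatically with `huggingface_hub` and then handed to whichever GGUF runtime you prefer:

```python
from huggingface_hub import hf_hub_download

# Sketch: download one quant from this repo; any filename from the table below works.
path = hf_hub_download(
    repo_id="mradermacher/MythoMist-7b-GGUF",
    filename="MythoMist-7b.Q4_K_M.gguf",
)
print(path)  # pass this path to your GGUF runtime of choice (e.g. llama.cpp)
```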
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-GGUF/resolve/main/MythoMist-7b.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-GGUF/resolve/main/MythoMist-7b.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-GGUF/resolve/main/MythoMist-7b.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-GGUF/resolve/main/MythoMist-7b.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-GGUF/resolve/main/MythoMist-7b.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-GGUF/resolve/main/MythoMist-7b.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-GGUF/resolve/main/MythoMist-7b.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-GGUF/resolve/main/MythoMist-7b.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-GGUF/resolve/main/MythoMist-7b.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-GGUF/resolve/main/MythoMist-7b.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-GGUF/resolve/main/MythoMist-7b.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-GGUF/resolve/main/MythoMist-7b.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-GGUF/resolve/main/MythoMist-7b.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-GGUF/resolve/main/MythoMist-7b.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-GGUF/resolve/main/MythoMist-7b.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Prox-Phi-3-mini-128k-GGUF | mradermacher | "2024-06-16T09:25:33Z" | 3,239 | 0 | transformers | [
"transformers",
"gguf",
"code",
"cybersecurity",
"penetration testing",
"hacking",
"en",
"base_model:openvoid/Prox-Phi-3-mini-128k",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-15T16:38:59Z" | ---
base_model: openvoid/Prox-Phi-3-mini-128k
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- code
- cybersecurity
- penetration testing
- hacking
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/openvoid/Prox-Phi-3-mini-128k
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Prox-Phi-3-mini-128k-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Prox-Phi-3-mini-128k-GGUF/resolve/main/Prox-Phi-3-mini-128k.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Prox-Phi-3-mini-128k-GGUF/resolve/main/Prox-Phi-3-mini-128k.IQ3_XS.gguf) | IQ3_XS | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Prox-Phi-3-mini-128k-GGUF/resolve/main/Prox-Phi-3-mini-128k.IQ3_S.gguf) | IQ3_S | 1.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Prox-Phi-3-mini-128k-GGUF/resolve/main/Prox-Phi-3-mini-128k.Q3_K_S.gguf) | Q3_K_S | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Prox-Phi-3-mini-128k-GGUF/resolve/main/Prox-Phi-3-mini-128k.IQ3_M.gguf) | IQ3_M | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Prox-Phi-3-mini-128k-GGUF/resolve/main/Prox-Phi-3-mini-128k.Q3_K_M.gguf) | Q3_K_M | 2.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Prox-Phi-3-mini-128k-GGUF/resolve/main/Prox-Phi-3-mini-128k.IQ4_XS.gguf) | IQ4_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Prox-Phi-3-mini-128k-GGUF/resolve/main/Prox-Phi-3-mini-128k.Q3_K_L.gguf) | Q3_K_L | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Prox-Phi-3-mini-128k-GGUF/resolve/main/Prox-Phi-3-mini-128k.Q4_K_S.gguf) | Q4_K_S | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Prox-Phi-3-mini-128k-GGUF/resolve/main/Prox-Phi-3-mini-128k.Q4_K_M.gguf) | Q4_K_M | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Prox-Phi-3-mini-128k-GGUF/resolve/main/Prox-Phi-3-mini-128k.Q5_K_S.gguf) | Q5_K_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Prox-Phi-3-mini-128k-GGUF/resolve/main/Prox-Phi-3-mini-128k.Q5_K_M.gguf) | Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Prox-Phi-3-mini-128k-GGUF/resolve/main/Prox-Phi-3-mini-128k.Q6_K.gguf) | Q6_K | 3.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Prox-Phi-3-mini-128k-GGUF/resolve/main/Prox-Phi-3-mini-128k.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Prox-Phi-3-mini-128k-GGUF/resolve/main/Prox-Phi-3-mini-128k.f16.gguf) | f16 | 7.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
philschmid/Llama-2-7b-hf | philschmid | "2023-12-20T10:25:28Z" | 3,238 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-2",
"en",
"arxiv:2307.09288",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-20T10:01:24Z" | ---
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models (70B) use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](https://arxiv.org/abs/2307.09288)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
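For illustration, a single-turn prompt assembled in that format looks roughly like the sketch below (the system and user strings are placeholders, and this applies to the chat variants rather than this pretrained checkpoint); the linked `chat_completion` reference code remains the authoritative source.

```python
# Rough sketch of the chat prompt layout described above; strings are placeholders.
system_prompt = "You are a helpful assistant."
user_message = "What is the capital of France?"

prompt = f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user_message} [/INST]"
print(prompt)
```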
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/meta-llama/Llama-2-7b) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/meta-llama/Llama-2-13b) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf)|
|70B| [Link](https://huggingface.co/meta-llama/Llama-2-70b) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)| |
mradermacher/EvaX-GGUF | mradermacher | "2024-06-26T20:38:28Z" | 3,238 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Lilium/EvaX",
"license:other",
"endpoints_compatible",
"region:us"
] | null | "2024-06-14T12:15:35Z" | ---
base_model: Lilium/EvaX
language:
- en
library_name: transformers
license: other
license_link: LICENSE
license_name: private-commercial
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Lilium/EvaX
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/EvaX-GGUF/resolve/main/EvaX.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/EvaX-GGUF/resolve/main/EvaX.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/EvaX-GGUF/resolve/main/EvaX.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/EvaX-GGUF/resolve/main/EvaX.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/EvaX-GGUF/resolve/main/EvaX.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/EvaX-GGUF/resolve/main/EvaX.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/EvaX-GGUF/resolve/main/EvaX.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/EvaX-GGUF/resolve/main/EvaX.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/EvaX-GGUF/resolve/main/EvaX.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/EvaX-GGUF/resolve/main/EvaX.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/EvaX-GGUF/resolve/main/EvaX.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/EvaX-GGUF/resolve/main/EvaX.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/EvaX-GGUF/resolve/main/EvaX.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/EvaX-GGUF/resolve/main/EvaX.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/EvaX-GGUF/resolve/main/EvaX.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
facebook/detr-resnet-101-dc5 | facebook | "2023-09-06T19:19:43Z" | 3,237 | 17 | transformers | [
"transformers",
"pytorch",
"safetensors",
"detr",
"object-detection",
"dataset:coco",
"arxiv:2005.12872",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | "2022-03-02T23:29:05Z" | ---
license: apache-2.0
tags:
- object-detection
datasets:
- coco
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg
example_title: Savanna
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg
example_title: Football Match
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/airport.jpg
example_title: Airport
---
# DETR (End-to-End Object Detection) model with ResNet-101 backbone (dilated C5 stage)
DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Carion et al. and first released in [this repository](https://github.com/facebookresearch/detr).
Disclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and an MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100.
The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.
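To make the matching step concrete, here is a toy illustration (not DETR's actual loss code) of how the Hungarian algorithm assigns each query to at most one padded ground-truth slot given a cost matrix:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy example: rows are 3 object queries, columns are 3 padded ground-truth slots,
# entries are matching costs (class + box terms in the real model).
cost = np.array([
    [0.2, 0.9, 0.8],
    [0.7, 0.1, 0.9],
    [0.6, 0.8, 0.3],
])
query_idx, target_idx = linear_sum_assignment(cost)
print(list(zip(query_idx.tolist(), target_idx.tolist())))  # optimal one-to-one assignment: [(0, 0), (1, 1), (2, 2)]
```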
## Intended uses & limitations
You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=facebook/detr) to look for all available DETR models.
### How to use
Here is how to use this model:
```python
from transformers import DetrFeatureExtractor, DetrForObjectDetection
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = DetrFeatureExtractor.from_pretrained('facebook/detr-resnet-101-dc5')
model = DetrForObjectDetection.from_pretrained('facebook/detr-resnet-101-dc5')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# model predicts bounding boxes and corresponding COCO classes
logits = outputs.logits
bboxes = outputs.pred_boxes
```
Currently, both the feature extractor and model support PyTorch.
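Continuing from the snippet above, the raw outputs can be turned into thresholded detections; this is a sketch that assumes the feature extractor's `post_process` helper (present in recent Transformers releases), not an official recipe:

```python
import torch

# Sketch: convert raw outputs into (label, score, box) triples, keeping confident ones.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width) of the original image
results = feature_extractor.post_process(outputs, target_sizes)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    if score > 0.9:
        print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```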
## Training data
The DETR model was trained on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py).
Images are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).
### Training
The model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64).
## Evaluation results
This model achieves an AP (average precision) of **44.9** on COCO 2017 validation. For more details regarding evaluation results, we refer to table 1 of the original paper.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2005-12872,
author = {Nicolas Carion and
Francisco Massa and
Gabriel Synnaeve and
Nicolas Usunier and
Alexander Kirillov and
Sergey Zagoruyko},
title = {End-to-End Object Detection with Transformers},
journal = {CoRR},
volume = {abs/2005.12872},
year = {2020},
url = {https://arxiv.org/abs/2005.12872},
archivePrefix = {arXiv},
eprint = {2005.12872},
timestamp = {Thu, 28 May 2020 17:38:09 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2005-12872.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
mradermacher/MadWizardOrpoMistral-7b-v0.3-GGUF | mradermacher | "2024-06-13T09:31:31Z" | 3,237 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Lumpen1/MadWizardOrpoMistral-7b-v0.3",
"endpoints_compatible",
"region:us"
] | null | "2024-06-12T20:23:22Z" | ---
base_model: Lumpen1/MadWizardOrpoMistral-7b-v0.3
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Lumpen1/MadWizardOrpoMistral-7b-v0.3
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/MadWizardOrpoMistral-7b-v0.3-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MadWizardOrpoMistral-7b-v0.3-GGUF/resolve/main/MadWizardOrpoMistral-7b-v0.3.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/MadWizardOrpoMistral-7b-v0.3-GGUF/resolve/main/MadWizardOrpoMistral-7b-v0.3.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/MadWizardOrpoMistral-7b-v0.3-GGUF/resolve/main/MadWizardOrpoMistral-7b-v0.3.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/MadWizardOrpoMistral-7b-v0.3-GGUF/resolve/main/MadWizardOrpoMistral-7b-v0.3.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MadWizardOrpoMistral-7b-v0.3-GGUF/resolve/main/MadWizardOrpoMistral-7b-v0.3.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/MadWizardOrpoMistral-7b-v0.3-GGUF/resolve/main/MadWizardOrpoMistral-7b-v0.3.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MadWizardOrpoMistral-7b-v0.3-GGUF/resolve/main/MadWizardOrpoMistral-7b-v0.3.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/MadWizardOrpoMistral-7b-v0.3-GGUF/resolve/main/MadWizardOrpoMistral-7b-v0.3.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/MadWizardOrpoMistral-7b-v0.3-GGUF/resolve/main/MadWizardOrpoMistral-7b-v0.3.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MadWizardOrpoMistral-7b-v0.3-GGUF/resolve/main/MadWizardOrpoMistral-7b-v0.3.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MadWizardOrpoMistral-7b-v0.3-GGUF/resolve/main/MadWizardOrpoMistral-7b-v0.3.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/MadWizardOrpoMistral-7b-v0.3-GGUF/resolve/main/MadWizardOrpoMistral-7b-v0.3.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/MadWizardOrpoMistral-7b-v0.3-GGUF/resolve/main/MadWizardOrpoMistral-7b-v0.3.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MadWizardOrpoMistral-7b-v0.3-GGUF/resolve/main/MadWizardOrpoMistral-7b-v0.3.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
hf-tiny-model-private/tiny-random-DebertaV2Model | hf-tiny-model-private | "2023-03-29T18:44:12Z" | 3,235 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"deberta-v2",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2023-03-29T18:44:07Z" | Entry not found |
Locutusque/TinyMistral-248M-v2 | Locutusque | "2024-01-08T00:29:51Z" | 3,234 | 15 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"en",
"dataset:Skylion007/openwebtext",
"dataset:Locutusque/TM-DATA",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-19T20:02:42Z" | ---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
datasets:
- Skylion007/openwebtext
- Locutusque/TM-DATA
inference:
parameters:
do_sample: true
temperature: 0.7
top_p: 0.2
top_k: 14
max_new_tokens: 250
repetition_penalty: 1.16
widget:
- text: >-
TITLE: Dirichlet density QUESTION [5 upvotes]: How to solve the following
exercise: Let $q$ be prime. Show that the set of primes p for which $p
\equiv 1\pmod q$ and $2^{(p-1)/q} \equiv 1 \pmod p$ has Dirichlet density
$\dfrac{1}{q(q-1)}$. I want to show that $X^q-2$ (mod $p$) has a solution
and $q$ divides $p-1$ , these two conditions are simultaneonusly satisfied
iff p splits completely in $\Bbb{Q}(\zeta_q,2^{\frac{1}{q}})$. $\zeta_q $ is
primitive $q^{th}$ root of unity. If this is proved the I can conclude the
result by Chebotarev density theorem. REPLY [2 votes]:
- text: >-
An emerging clinical approach to treat substance abuse disorders involves a
form of cognitive-behavioral therapy whereby addicts learn to reduce their
reactivity to drug-paired stimuli through cue-exposure or extinction
training. It is, however,
- text: >-
\begin{document} \begin{frontmatter} \author{Mahouton Norbert Hounkonnou\corref{cor1}${}^1$} \cortext[cor1]{[email protected]} \author{Sama Arjika\corref{cor2}${}^1$} \cortext[cor2]{[email protected]} \author{ Won Sang Chung\corref{cor3}${}^2$ } \cortext[cor3]{[email protected]} \title{\bf New families of $q$ and $(q;p)-$Hermite polynomials } \address{${}^1$International Chair of Mathematical Physics and Applications \\ (ICMPA-UNESCO Chair), University of Abomey-Calavi,\\ 072 B. P.: 50 Cotonou, Republic of Benin,\\ ${}^2$Department of Physics and Research Institute of Natural Science, \\ College of Natural Science, \\ Gyeongsang National University, Jinju 660-701, Korea } \begin{abstract} In this paper, we construct a new family of $q-$Hermite polynomials denoted by $H_n(x,s|q).$ Main properties and relations are established and
---
# Training
This model was trained on two datasets, shown in this model page.
- Skylion007/openwebtext: 1,000,000 examples at a batch size of 32-4096 (1 epoch)
- Locutusque/TM-DATA: All examples at a batch size of 12288 (3 epochs)
Training took approximately 500 GPU hours on a single Titan V.
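For reference, here is a minimal usage sketch (not from the original card) that mirrors the sampling settings listed in this card's inference widget — temperature 0.7, top_p 0.2, top_k 14, repetition penalty 1.16, up to 250 new tokens:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Locutusque/TinyMistral-248M-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("An emerging clinical approach to treat substance abuse disorders", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.7,
    top_p=0.2,
    top_k=14,
    repetition_penalty=1.16,
    max_new_tokens=250,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```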
# Metrics
You can look at the training metrics here:
https://wandb.ai/locutusque/TinyMistral-V2/runs/g0rvw6wc
🔥 This model performed excellently on TruthfulQA, outperforming models more than 720x its size. These models include: mistralai/Mixtral-8x7B-v0.1, tiiuae/falcon-180B, berkeley-nest/Starling-LM-7B-alpha, upstage/SOLAR-10.7B-v1.0, and more. 🔥 |
KnutJaegersberg/deacon-3b | KnutJaegersberg | "2023-12-03T15:10:47Z" | 3,232 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"custom_code",
"dataset:KnutJaegersberg/trilobite",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-09-18T05:22:15Z" | ---
license: cc-by-nc-4.0
datasets:
- KnutJaegersberg/trilobite
---

This model was fine-tuned on AI-filtered subsets of the GPT-4-based subset of the Dolphin dataset and of EvolInstruct V2.
It has not been explicitly aligned to positive, negative or bureaucratically prescribed value systems.
It might kill us all! Time to shit your pants, regulators. I literally put black goo on Dolphin-7B sperm, which then fertilized Evolved Instructions...
What's different is evil... ;)
I intend to train 3 sizes.
Prompt Example:
```
### System:
You are an AI assistant. User will give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.
### Instruction:
How do you fine tune a large language model?
### Response:
``` |
TheBloke/EstopianMaid-13B-GGUF | TheBloke | "2024-01-26T16:18:21Z" | 3,232 | 37 | transformers | [
"transformers",
"gguf",
"llama",
"roleplay",
"text-generation-inference",
"en",
"base_model:KatyTheCutie/EstopianMaid-13B",
"license:apache-2.0",
"region:us"
] | null | "2024-01-26T15:56:26Z" | ---
base_model: KatyTheCutie/EstopianMaid-13B
inference: false
language:
- en
library_name: transformers
license: apache-2.0
model_creator: Katy Vetteriano
model_name: EstopianMaid 13B
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
tags:
- roleplay
- text-generation-inference
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# EstopianMaid 13B - GGUF
- Model creator: [Katy Vetteriano](https://huggingface.co/KatyTheCutie)
- Original model: [EstopianMaid 13B](https://huggingface.co/KatyTheCutie/EstopianMaid-13B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Katy Vetteriano's EstopianMaid 13B](https://huggingface.co/KatyTheCutie/EstopianMaid-13B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/EstopianMaid-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/EstopianMaid-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/EstopianMaid-13B-GGUF)
* [Katy Vetteriano's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/KatyTheCutie/EstopianMaid-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `apache-2.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Katy Vetteriano's EstopianMaid 13B](https://huggingface.co/KatyTheCutie/EstopianMaid-13B).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [estopianmaid-13b.Q2_K.gguf](https://huggingface.co/TheBloke/EstopianMaid-13B-GGUF/blob/main/estopianmaid-13b.Q2_K.gguf) | Q2_K | 2 | 4.85 GB| 7.35 GB | significant quality loss - not recommended for most purposes |
| [estopianmaid-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/EstopianMaid-13B-GGUF/blob/main/estopianmaid-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [estopianmaid-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/EstopianMaid-13B-GGUF/blob/main/estopianmaid-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [estopianmaid-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/EstopianMaid-13B-GGUF/blob/main/estopianmaid-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [estopianmaid-13b.Q4_0.gguf](https://huggingface.co/TheBloke/EstopianMaid-13B-GGUF/blob/main/estopianmaid-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [estopianmaid-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/EstopianMaid-13B-GGUF/blob/main/estopianmaid-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.42 GB| 9.92 GB | small, greater quality loss |
| [estopianmaid-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/EstopianMaid-13B-GGUF/blob/main/estopianmaid-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [estopianmaid-13b.Q5_0.gguf](https://huggingface.co/TheBloke/EstopianMaid-13B-GGUF/blob/main/estopianmaid-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [estopianmaid-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/EstopianMaid-13B-GGUF/blob/main/estopianmaid-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [estopianmaid-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/EstopianMaid-13B-GGUF/blob/main/estopianmaid-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [estopianmaid-13b.Q6_K.gguf](https://huggingface.co/TheBloke/EstopianMaid-13B-GGUF/blob/main/estopianmaid-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [estopianmaid-13b.Q8_0.gguf](https://huggingface.co/TheBloke/EstopianMaid-13B-GGUF/blob/main/estopianmaid-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/EstopianMaid-13B-GGUF and below it, a specific filename to download, such as: estopianmaid-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/EstopianMaid-13B-GGUF estopianmaid-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/EstopianMaid-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/EstopianMaid-13B-GGUF estopianmaid-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m estopianmaid-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./estopianmaid-13b.Q4_K_M.gguf", # Download the model file first
n_ctx=4096, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./estopianmaid-13b.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
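For example, here is a minimal, hedged sketch of the llama-cpp-python route through LangChain (assuming a recent `langchain-community` install and that a quant file has been downloaded as described above):
```python
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./estopianmaid-13b.Q4_K_M.gguf",  # downloaded as shown earlier
    n_ctx=4096,
    n_gpu_layers=35,   # set to 0 if you have no GPU acceleration
    temperature=0.7,
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a story about llamas.\n\n### Response:"
)
print(llm.invoke(prompt))
```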
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Katy Vetteriano's EstopianMaid 13B

Based on feedback, EstopianMaid:
- Is good at sticking to the character card.
- Maintains coherency in a setting with multiple characters.
- Is able to create new scenarios.
- Prompt Template: Alpaca
### Instruction:
{prompt}
### Response:
Recommended settings:
- SillyTavern Default Preset.
- Temperature: 0.7
- Min-P: 0.3
- Amount to Gen: 256
- Top P: 1
- Repetition penalty: 1.10
Models used:
BlueNipples/TimeCrystal-l2-13B
cgato/Thespis-13b-DPO-v0.7
KoboldAI/LLaMA2-13B-Estopia
NeverSleep/Noromaid-13B-0.4-DPO
Doctor-Shotgun/cat-v1.0-13b
Feedback is always appreciated!
Thank you to KoboldAI for the use of their MergeBox, and to Caitlyn G. for their support and feedback.
<!-- original-model-card end -->
|
OEvortex/EMO-2B | OEvortex | "2024-04-28T05:16:11Z" | 3,232 | 1 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"EMO",
"HelpingAI",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-28T04:35:42Z" | ---
library_name: transformers
widget:
- messages:
- role: user
content: My best friend recently lost their parent to cancer after a long battle. They are understandably devastated and struggling with grief.
inference:
parameters:
max_new_tokens: 200
license: apache-2.0
tags:
- EMO
- HelpingAI
---
# EMO-2B: Emotionally Intelligent Conversational AI
## Overview
EMO-2B is a state-of-the-art conversational AI model with 2.5 billion parameters, designed to engage in emotionally resonant dialogue. Building upon the success of EMO-1.5B, this model has been further fine-tuned on an extensive corpus of emotional narratives, enabling it to perceive and respond to the emotional undertones of user inputs with exceptional empathy and emotional intelligence.
## Key Features
- **Advanced Emotional Intelligence**: With its increased capacity, EMO-2B demonstrates an even deeper understanding and generation of emotional language, allowing for more nuanced and contextually appropriate emotional responses.
- **Enhanced Contextual Awareness**: The model considers an even broader context within conversations, accounting for subtle emotional cues and providing emotionally resonant responses tailored to the specific situation.
- **Empathetic and Supportive Dialogue**: EMO-2B excels at active listening, validating emotions, offering compassionate advice, and providing emotional support, making it an ideal companion for users seeking empathy and understanding.
- **Dynamic Persona Adaptation**: The model can dynamically adapt its persona, communication style, and emotional responses to match the user's emotional state, ensuring a highly personalized and tailored conversational experience.
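As a minimal usage sketch (not taken from the original card; it assumes the checkpoint ships a standard chat template usable via `apply_chat_template`):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OEvortex/EMO-2B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [
    {"role": "user", "content": "My best friend recently lost their parent to cancer. How can I support them?"}
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output_ids = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```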
## Use Cases
EMO-2B is well-suited for a variety of applications where emotional intelligence and empathetic communication are crucial, such as:
- Mental health support chatbots
- Emotional support companions
- Personalized coaching and motivation
- Narrative storytelling and interactive fiction
- Customer service and support (for emotionally sensitive contexts)
## Limitations and Ethical Considerations
While EMO-2B is designed to provide emotionally intelligent and empathetic responses, it is important to note that it is an AI system and cannot replicate the depth and nuance of human emotional intelligence. Users should be aware that the model's responses, while emotionally supportive, should not be considered a substitute for professional mental health support or counseling.
Additionally, as with any language model, EMO-2B may reflect biases present in its training data. Users should exercise caution and critical thinking when interacting with the model, and report any concerning or inappropriate responses. |
mradermacher/mistral-7b-v0.3-tofutune-GGUF | mradermacher | "2024-06-13T11:09:41Z" | 3,232 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"sft",
"en",
"base_model:simonbutt/mistral-7b-v0.3-tofutune",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-13T10:43:08Z" | ---
base_model: simonbutt/mistral-7b-v0.3-tofutune
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/simonbutt/mistral-7b-v0.3-tofutune
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-v0.3-tofutune-GGUF/resolve/main/mistral-7b-v0.3-tofutune.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-v0.3-tofutune-GGUF/resolve/main/mistral-7b-v0.3-tofutune.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-v0.3-tofutune-GGUF/resolve/main/mistral-7b-v0.3-tofutune.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-v0.3-tofutune-GGUF/resolve/main/mistral-7b-v0.3-tofutune.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-v0.3-tofutune-GGUF/resolve/main/mistral-7b-v0.3-tofutune.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-v0.3-tofutune-GGUF/resolve/main/mistral-7b-v0.3-tofutune.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-v0.3-tofutune-GGUF/resolve/main/mistral-7b-v0.3-tofutune.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-v0.3-tofutune-GGUF/resolve/main/mistral-7b-v0.3-tofutune.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-v0.3-tofutune-GGUF/resolve/main/mistral-7b-v0.3-tofutune.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-v0.3-tofutune-GGUF/resolve/main/mistral-7b-v0.3-tofutune.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-v0.3-tofutune-GGUF/resolve/main/mistral-7b-v0.3-tofutune.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-v0.3-tofutune-GGUF/resolve/main/mistral-7b-v0.3-tofutune.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-v0.3-tofutune-GGUF/resolve/main/mistral-7b-v0.3-tofutune.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-v0.3-tofutune-GGUF/resolve/main/mistral-7b-v0.3-tofutune.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-v0.3-tofutune-GGUF/resolve/main/mistral-7b-v0.3-tofutune.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
bartowski/L3-70B-Euryale-v2.1-GGUF | bartowski | "2024-06-13T17:50:18Z" | 3,232 | 8 | null | [
"gguf",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"region:us"
] | text-generation | "2024-06-13T16:17:00Z" | ---
license: cc-by-nc-4.0
language:
- en
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of L3-70B-Euryale-v2.1
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3140">b3140</a> for quantization.
Original model: https://huggingface.co/Sao10K/L3-70B-Euryale-v2.1
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
## Prompt format
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
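As a hedged sketch, this format can also be applied automatically through llama-cpp-python's built-in `llama-3` chat handler (assuming a library version that registers that chat format, and that one of the quant files below — e.g. the Q4_K_M file — has been downloaded locally):
```python
from llama_cpp import Llama

llm = Llama(
    model_path="./L3-70B-Euryale-v2.1-Q4_K_M.gguf",  # any of the quants listed below
    n_ctx=8192,
    n_gpu_layers=-1,   # offload as many layers as fit; use 0 for CPU-only
    chat_format="llama-3",
)
out = llm.create_chat_completion(messages=[
    {"role": "system", "content": "You are a creative writing assistant."},
    {"role": "user", "content": "Write a short scene set in a rainy harbour town."},
])
print(out["choices"][0]["message"]["content"])
```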
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [L3-70B-Euryale-v2.1-Q8_0.gguf](https://huggingface.co/bartowski/L3-70B-Euryale-v2.1-GGUF/tree/main/L3-70B-Euryale-v2.1-Q8_0.gguf) | Q8_0 | 74.97GB | Extremely high quality, generally unneeded but max available quant. |
| [L3-70B-Euryale-v2.1-Q5_K_M.gguf](https://huggingface.co/bartowski/L3-70B-Euryale-v2.1-GGUF/blob/main/L3-70B-Euryale-v2.1-Q5_K_M.gguf) | Q5_K_M | 49.94GB | High quality, *recommended*. |
| [L3-70B-Euryale-v2.1-Q4_K_M.gguf](https://huggingface.co/bartowski/L3-70B-Euryale-v2.1-GGUF/blob/main/L3-70B-Euryale-v2.1-Q4_K_M.gguf) | Q4_K_M | 42.52GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [L3-70B-Euryale-v2.1-IQ4_XS.gguf](https://huggingface.co/bartowski/L3-70B-Euryale-v2.1-GGUF/blob/main/L3-70B-Euryale-v2.1-IQ4_XS.gguf) | IQ4_XS | 37.90GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [L3-70B-Euryale-v2.1-Q3_K_M.gguf](https://huggingface.co/bartowski/L3-70B-Euryale-v2.1-GGUF/blob/main/L3-70B-Euryale-v2.1-Q3_K_M.gguf) | Q3_K_M | 34.26GB | Even lower quality. |
| [L3-70B-Euryale-v2.1-IQ3_M.gguf](https://huggingface.co/bartowski/L3-70B-Euryale-v2.1-GGUF/blob/main/L3-70B-Euryale-v2.1-IQ3_M.gguf) | IQ3_M | 31.93GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [L3-70B-Euryale-v2.1-Q3_K_S.gguf](https://huggingface.co/bartowski/L3-70B-Euryale-v2.1-GGUF/blob/main/L3-70B-Euryale-v2.1-Q3_K_S.gguf) | Q3_K_S | 30.91GB | Low quality, not recommended. |
| [L3-70B-Euryale-v2.1-IQ3_XXS.gguf](https://huggingface.co/bartowski/L3-70B-Euryale-v2.1-GGUF/blob/main/L3-70B-Euryale-v2.1-IQ3_XXS.gguf) | IQ3_XXS | 27.46GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [L3-70B-Euryale-v2.1-Q2_K.gguf](https://huggingface.co/bartowski/L3-70B-Euryale-v2.1-GGUF/blob/main/L3-70B-Euryale-v2.1-Q2_K.gguf) | Q2_K | 26.37GB | Very low quality but surprisingly usable. |
| [L3-70B-Euryale-v2.1-IQ2_M.gguf](https://huggingface.co/bartowski/L3-70B-Euryale-v2.1-GGUF/blob/main/L3-70B-Euryale-v2.1-IQ2_M.gguf) | IQ2_M | 24.11GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [L3-70B-Euryale-v2.1-IQ2_XXS.gguf](https://huggingface.co/bartowski/L3-70B-Euryale-v2.1-GGUF/blob/main/L3-70B-Euryale-v2.1-IQ2_XXS.gguf) | IQ2_XXS | 19.09GB | Lower quality, uses SOTA techniques to be usable. |
| [L3-70B-Euryale-v2.1-IQ1_M.gguf](https://huggingface.co/bartowski/L3-70B-Euryale-v2.1-GGUF/blob/main/L3-70B-Euryale-v2.1-IQ1_M.gguf) | IQ1_M | 16.75GB | Extremely low quality, *not* recommended. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/L3-70B-Euryale-v2.1-GGUF --include "L3-70B-Euryale-v2.1-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/L3-70B-Euryale-v2.1-GGUF --include "L3-70B-Euryale-v2.1-Q8_0.gguf/*" --local-dir L3-70B-Euryale-v2.1-Q8_0
```
You can either specify a new local-dir (L3-70B-Euryale-v2.1-Q8_0) or download them all in place (./)
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also targets AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
mradermacher/Mahou-1.3b-mistral-7B-GGUF | mradermacher | "2024-06-03T11:02:15Z" | 3,231 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:nbeerbower/Mahou-1.3b-mistral-7B",
"endpoints_compatible",
"region:us"
] | null | "2024-06-03T10:35:45Z" | ---
base_model: nbeerbower/Mahou-1.3b-mistral-7B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/nbeerbower/Mahou-1.3b-mistral-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Mahou-1.3b-mistral-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3b-mistral-7B-GGUF/resolve/main/Mahou-1.3b-mistral-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3b-mistral-7B-GGUF/resolve/main/Mahou-1.3b-mistral-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3b-mistral-7B-GGUF/resolve/main/Mahou-1.3b-mistral-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3b-mistral-7B-GGUF/resolve/main/Mahou-1.3b-mistral-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3b-mistral-7B-GGUF/resolve/main/Mahou-1.3b-mistral-7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3b-mistral-7B-GGUF/resolve/main/Mahou-1.3b-mistral-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3b-mistral-7B-GGUF/resolve/main/Mahou-1.3b-mistral-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3b-mistral-7B-GGUF/resolve/main/Mahou-1.3b-mistral-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3b-mistral-7B-GGUF/resolve/main/Mahou-1.3b-mistral-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3b-mistral-7B-GGUF/resolve/main/Mahou-1.3b-mistral-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3b-mistral-7B-GGUF/resolve/main/Mahou-1.3b-mistral-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3b-mistral-7B-GGUF/resolve/main/Mahou-1.3b-mistral-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3b-mistral-7B-GGUF/resolve/main/Mahou-1.3b-mistral-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3b-mistral-7B-GGUF/resolve/main/Mahou-1.3b-mistral-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3b-mistral-7B-GGUF/resolve/main/Mahou-1.3b-mistral-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
baichuan-inc/Baichuan2-7B-Base | baichuan-inc | "2024-01-31T04:21:56Z" | 3,230 | 72 | transformers | [
"transformers",
"pytorch",
"baichuan",
"text-generation",
"custom_code",
"en",
"zh",
"license:other",
"autotrain_compatible",
"region:us"
] | text-generation | "2023-08-30T10:11:04Z" | ---
language:
- en
- zh
license: other
tasks:
- text-generation
---
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<div align="center">
<h1>
Baichuan 2
</h1>
</div>
<div align="center">
<a href="https://github.com/baichuan-inc/Baichuan2" target="_blank">🦉GitHub</a> | <a href="https://github.com/baichuan-inc/Baichuan-7B/blob/main/media/wechat.jpeg?raw=true" target="_blank">💬WeChat</a>
</div>
<div align="center">
百川API支持搜索增强和192K长窗口,新增百川搜索增强知识库、限时免费!<br>
🚀 <a href="https://www.baichuan-ai.com/" target="_blank">百川大模型在线对话平台</a> 已正式向公众开放 🎉
</div>
# 目录/Table of Contents
- [📖 模型介绍/Introduction](#Introduction)
- [⚙️ 快速开始/Quick Start](#Start)
- [📊 Benchmark评估/Benchmark Evaluation](#Benchmark)
- [📜 声明与协议/Terms and Conditions](#Terms)
# <span id="Introduction">模型介绍/Introduction</span>
Baichuan 2 是[百川智能]推出的新一代开源大语言模型,采用 **2.6 万亿** Tokens 的高质量语料训练,在权威的中文和英文 benchmark
上均取得同尺寸最好的效果。本次发布包含有 7B、13B 的 Base 和 Chat 版本,并提供了 Chat 版本的 4bits
量化,所有版本不仅对学术研究完全开放,开发者也仅需[邮件申请]并获得官方商用许可后,即可以免费商用。具体发布版本和下载见下表:
Baichuan 2 is the new generation of large-scale open-source language models launched by [Baichuan Intelligence inc.](https://www.baichuan-ai.com/).
It is trained on a high-quality corpus with 2.6 trillion tokens and has achieved the best performance in authoritative Chinese and English benchmarks of the same size.
This release includes 7B and 13B versions for both Base and Chat models, along with a 4bits quantized version for the Chat model.
All versions are fully open to academic research, and developers can also use them for free in commercial applications after obtaining an official commercial license through [email request](mailto:[email protected]).
The specific release versions and download links are listed in the table below:
| | Base Model | Chat Model | 4bits Quantized Chat Model |
|:---:|:--------------------:|:--------------------:|:--------------------------:|
| 7B | [Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base) | [Baichuan2-7B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat) | [Baichuan2-7B-Chat-4bits](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base-4bits) |
| 13B | [Baichuan2-13B-Base](https://huggingface.co/baichuan-inc/Baichuan2-13B-Base) | [Baichuan2-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat) | [Baichuan2-13B-Chat-4bits](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat-4bits) |
# <span id="Start">快速开始/Quick Start</span>
在Baichuan2系列模型中,我们为了加快推理速度使用了Pytorch2.0加入的新功能F.scaled_dot_product_attention,因此模型需要在Pytorch2.0环境下运行。
In the Baichuan 2 series models, we have utilized the new feature `F.scaled_dot_product_attention` introduced in PyTorch 2.0 to accelerate inference speed. Therefore, the model needs to be run in a PyTorch 2.0 environment.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/Baichuan2-7B-Base", use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("baichuan-inc/Baichuan2-7B-Base", device_map="auto", trust_remote_code=True)
inputs = tokenizer('登鹳雀楼->王之涣\n夜雨寄北->', return_tensors='pt')
inputs = inputs.to('cuda:0')
pred = model.generate(**inputs, max_new_tokens=64, repetition_penalty=1.1)
print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
```
# <span id="Benchmark">Benchmark 结果/Benchmark Evaluation</span>
我们在[通用]、[法律]、[医疗]、[数学]、[代码]和[多语言翻译]六个领域的中英文权威数据集上对模型进行了广泛测试,更多详细测评结果可查看[GitHub]。
We have extensively tested the model on authoritative Chinese-English datasets across six domains: [General](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#general-domain), [Legal](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#law-and-medicine), [Medical](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#law-and-medicine), [Mathematics](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#mathematics-and-code), [Code](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#mathematics-and-code), and [Multilingual Translation](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#multilingual-translation). For more detailed evaluation results, please refer to [GitHub](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md).
### 7B Model Results
| | **C-Eval** | **MMLU** | **CMMLU** | **Gaokao** | **AGIEval** | **BBH** |
|:-----------------------:|:----------:|:--------:|:---------:|:----------:|:-----------:|:-------:|
| | 5-shot | 5-shot | 5-shot | 5-shot | 5-shot | 3-shot |
| **GPT-4** | 68.40 | 83.93 | 70.33 | 66.15 | 63.27 | 75.12 |
| **GPT-3.5 Turbo** | 51.10 | 68.54 | 54.06 | 47.07 | 46.13 | 61.59 |
| **LLaMA-7B** | 27.10 | 35.10 | 26.75 | 27.81 | 28.17 | 32.38 |
| **LLaMA2-7B** | 28.90 | 45.73 | 31.38 | 25.97 | 26.53 | 39.16 |
| **MPT-7B** | 27.15 | 27.93 | 26.00 | 26.54 | 24.83 | 35.20 |
| **Falcon-7B** | 24.23 | 26.03 | 25.66 | 24.24 | 24.10 | 28.77 |
| **ChatGLM2-6B** | 50.20 | 45.90 | 49.00 | 49.44 | 45.28 | 31.65 |
| **[Baichuan-7B]** | 42.80 | 42.30 | 44.02 | 36.34 | 34.44 | 32.48 |
| **[Baichuan2-7B-Base]** | 54.00 | 54.16 | 57.07 | 47.47 | 42.73 | 41.56 |
### 13B Model Results
| | **C-Eval** | **MMLU** | **CMMLU** | **Gaokao** | **AGIEval** | **BBH** |
|:---------------------------:|:----------:|:--------:|:---------:|:----------:|:-----------:|:-------:|
| | 5-shot | 5-shot | 5-shot | 5-shot | 5-shot | 3-shot |
| **GPT-4** | 68.40 | 83.93 | 70.33 | 66.15 | 63.27 | 75.12 |
| **GPT-3.5 Turbo** | 51.10 | 68.54 | 54.06 | 47.07 | 46.13 | 61.59 |
| **LLaMA-13B** | 28.50 | 46.30 | 31.15 | 28.23 | 28.22 | 37.89 |
| **LLaMA2-13B** | 35.80 | 55.09 | 37.99 | 30.83 | 32.29 | 46.98 |
| **Vicuna-13B** | 32.80 | 52.00 | 36.28 | 30.11 | 31.55 | 43.04 |
| **Chinese-Alpaca-Plus-13B** | 38.80 | 43.90 | 33.43 | 34.78 | 35.46 | 28.94 |
| **XVERSE-13B** | 53.70 | 55.21 | 58.44 | 44.69 | 42.54 | 38.06 |
| **[Baichuan-13B-Base]** | 52.40 | 51.60 | 55.30 | 49.69 | 43.20 | 43.01 |
| **[Baichuan2-13B-Base]** | 58.10 | 59.17 | 61.97 | 54.33 | 48.17 | 48.78 |
## 训练过程模型/Training Dynamics
除了训练了 2.6 万亿 Tokens 的 [Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base) 模型,我们还提供了在此之前的另外 11 个中间过程的模型(分别对应训练了约 0.2 ~ 2.4 万亿 Tokens)供社区研究使用
([训练过程checkpoint下载](https://huggingface.co/baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints))。下图给出了这些 checkpoints 在 C-Eval、MMLU、CMMLU 三个 benchmark 上的效果变化:
In addition to the [Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base) model trained on 2.6 trillion tokens, we also offer 11 additional intermediate-stage models for community research, corresponding to training on approximately 0.2 to 2.4 trillion tokens each ([Intermediate Checkpoints Download](https://huggingface.co/baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints)). The graph below shows the performance changes of these checkpoints on three benchmarks: C-Eval, MMLU, and CMMLU.

# <span id="Terms">声明与协议/Terms and Conditions</span>
## 声明
我们在此声明,我们的开发团队并未基于 Baichuan 2 模型开发任何应用,无论是在 iOS、Android、网页或任何其他平台。我们强烈呼吁所有使用者,不要利用
Baichuan 2 模型进行任何危害国家社会安全或违法的活动。另外,我们也要求使用者不要将 Baichuan 2
模型用于未经适当安全审查和备案的互联网服务。我们希望所有的使用者都能遵守这个原则,确保科技的发展能在规范和合法的环境下进行。
我们已经尽我们所能,来确保模型训练过程中使用的数据的合规性。然而,尽管我们已经做出了巨大的努力,但由于模型和数据的复杂性,仍有可能存在一些无法预见的问题。因此,如果由于使用
Baichuan 2 开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。
We hereby declare that our team has not developed any applications based on Baichuan 2 models, not on iOS, Android, the web, or any other platform. We strongly call on all users not to use Baichuan 2 models for any activities that harm national / social security or violate the law. Also, we ask users not to use Baichuan 2 models for Internet services that have not undergone appropriate security reviews and filings. We hope that all users can abide by this principle and ensure that the development of technology proceeds in a regulated and legal environment.
We have done our best to ensure the compliance of the data used in the model training process. However, despite our considerable efforts, there may still be some unforeseeable issues due to the complexity of the model and data. Therefore, if any problems arise due to the use of Baichuan 2 open-source models, including but not limited to data security issues, public opinion risks, or any risks and problems brought about by the model being misled, abused, spread or improperly exploited, we will not assume any responsibility.
## 协议
社区使用 Baichuan 2 模型需要遵循 [Apache 2.0](https://github.com/baichuan-inc/Baichuan2/blob/main/LICENSE) 和[《Baichuan 2 模型社区许可协议》](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/resolve/main/Baichuan%202%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf)。Baichuan 2 模型支持商业用途,如果您计划将 Baichuan 2 模型或其衍生品用于商业目的,请您确认您的主体符合以下情况:
1. 您或您的关联方的服务或产品的日均用户活跃量(DAU)低于100万。
2. 您或您的关联方不是软件服务提供商、云服务提供商。
3. 您或您的关联方不存在将授予您的商用许可,未经百川许可二次授权给其他第三方的可能。
在符合以上条件的前提下,您需要通过以下联系邮箱 [email protected] ,提交《Baichuan 2 模型社区许可协议》要求的申请材料。审核通过后,百川将特此授予您一个非排他性、全球性、不可转让、不可再许可、可撤销的商用版权许可。
The community usage of Baichuan 2 model requires adherence to [Apache 2.0](https://github.com/baichuan-inc/Baichuan2/blob/main/LICENSE) and [Community License for Baichuan2 Model](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/resolve/main/Baichuan%202%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf). The Baichuan 2 model supports commercial use. If you plan to use the Baichuan 2 model or its derivatives for commercial purposes, please ensure that your entity meets the following conditions:
1. The Daily Active Users (DAU) of your or your affiliate's service or product is less than 1 million.
2. Neither you nor your affiliates are software service providers or cloud service providers.
3. There is no possibility for you or your affiliates to grant the commercial license given to you, to reauthorize it to other third parties without Baichuan's permission.
Upon meeting the above conditions, you need to submit the application materials required by the Baichuan 2 Model Community License Agreement via the following contact email: [email protected]. Once approved, Baichuan will hereby grant you a non-exclusive, global, non-transferable, non-sublicensable, revocable commercial copyright license.
[GitHub]:https://github.com/baichuan-inc/Baichuan2
[Baichuan2]:https://github.com/baichuan-inc/Baichuan2
[Baichuan-7B]:https://huggingface.co/baichuan-inc/Baichuan-7B
[Baichuan2-7B-Base]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Base
[Baichuan2-7B-Chat]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat
[Baichuan2-7B-Chat-4bits]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat-4bits
[Baichuan-13B-Base]:https://huggingface.co/baichuan-inc/Baichuan-13B-Base
[Baichuan2-13B-Base]:https://huggingface.co/baichuan-inc/Baichuan2-13B-Base
[Baichuan2-13B-Chat]:https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat
[Baichuan2-13B-Chat-4bits]:https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat-4bits
[通用]:https://github.com/baichuan-inc/Baichuan2#%E9%80%9A%E7%94%A8%E9%A2%86%E5%9F%9F
[法律]:https://github.com/baichuan-inc/Baichuan2#%E6%B3%95%E5%BE%8B%E5%8C%BB%E7%96%97
[医疗]:https://github.com/baichuan-inc/Baichuan2#%E6%B3%95%E5%BE%8B%E5%8C%BB%E7%96%97
[数学]:https://github.com/baichuan-inc/Baichuan2#%E6%95%B0%E5%AD%A6%E4%BB%A3%E7%A0%81
[代码]:https://github.com/baichuan-inc/Baichuan2#%E6%95%B0%E5%AD%A6%E4%BB%A3%E7%A0%81
[多语言翻译]:https://github.com/baichuan-inc/Baichuan2#%E5%A4%9A%E8%AF%AD%E8%A8%80%E7%BF%BB%E8%AF%91
[《Baichuan 2 模型社区许可协议》]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Baichuan%202%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf
[邮件申请]: mailto:[email protected]
[Email]: mailto:[email protected]
[[email protected]]: mailto:[email protected]
[训练过程heckpoint下载]: https://huggingface.co/baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints
[百川智能]: https://www.baichuan-ai.com
|
maywell/PiVoT-0.1-early | maywell | "2023-11-25T01:39:30Z" | 3,230 | 7 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"ko",
"dataset:maywell/ko_wikidata_QA",
"dataset:kyujinpy/OpenOrca-KO",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-24T07:22:10Z" | ---
license: cc-by-sa-4.0
datasets:
- maywell/ko_wikidata_QA
- kyujinpy/OpenOrca-KO
language:
- en
- ko
pipeline_tag: text-generation
---
# PiVoT-0.1-early

# **Model Details**
### Description
PiVoT is a fine-tuned model based on Mistral 7B. It is a variation of Synatra v0.3 RP, which has shown decent performance.
The OpenOrca dataset was used when fine-tuning this PiVoT variation. Arcalive AI Chat Chan log 7k, [ko_wikidata_QA](https://huggingface.co/datasets/maywell/ko_wikidata_QA), [kyujinpy/OpenOrca-KO](https://huggingface.co/datasets/kyujinpy/OpenOrca-KO), and other datasets were used on the base model.
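A minimal text-generation sketch with the standard Transformers API is shown below (the prompt and generation settings are illustrative assumptions, not from this card):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "maywell/PiVoT-0.1-early"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Illustrative prompt; adjust to your own use case.
inputs = tokenizer("Introduce yourself briefly.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```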
Follow me on twitter: https://twitter.com/stablefluffy
Consider supporting me in making these models alone: https://www.buymeacoffee.com/mwell or with a Runpod credit gift 💕
Contact me on Telegram: https://t.me/AlzarTakkarsen |
IDK-ab0ut/Yiffymix_v43 | IDK-ab0ut | "2024-05-03T20:30:28Z" | 3,230 | 2 | diffusers | [
"diffusers",
"safetensors",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-05-03T15:53:08Z" | ---
license: apache-2.0
---
This is a Diffusers-compatible version of [Yiffymix v43 by chilon249](https://civitai.com/models/3671?modelVersionId=453692).
See the original page for more information.
This model uses a v-prediction scheduler. If you are using the Stable Diffusion WebUI, put the .yaml file next to the model, otherwise your images will be fried.
[Furception VAE 1.0 by RedRocket](https://huggingface.co/RedRocket/furception_vae) is included on this page. For WebUI users, it's pretty easy to add it to your Stable Diffusion install; look around for a tutorial.
For those who are using Diffusers, you can add it with these lines of code:
```py
from diffusers import StableDiffusionPipeline, AutoencoderKL
model_id = "your desired model"
vae = AutoencoderKL.from_single_file("https://huggingface.co/IDK-ab0ut/Yiffymix_v43/blob/main/furception_vae_1-0.safetensors")
pipeline = StableDiffusionPipeline.from_pretrained(model_id, vae=vae)
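
# The lines below are an added usage sketch, not part of the original card;
# the prompt and step count are illustrative assumptions.
prompt = "your prompt here"
image = pipeline(prompt, num_inference_steps=28).images[0]
image.save("sample.png")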
``` |
mradermacher/Trion-M-7b-GGUF | mradermacher | "2024-06-10T18:31:23Z" | 3,228 | 0 | transformers | [
"transformers",
"gguf",
"Mistral",
"en",
"base_model:BlueNipples/Trion-M-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-10T06:20:27Z" | ---
base_model: BlueNipples/Trion-M-7b
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- Mistral
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/BlueNipples/Trion-M-7b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Trion-M-7b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
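
If you prefer to grab a single quant programmatically rather than cloning the repo, here is a minimal sketch with `huggingface_hub` (the quant choice is illustrative; the filename is taken from the table below):

```python
from huggingface_hub import hf_hub_download

# Download one quant instead of cloning the whole repository.
path = hf_hub_download(
    repo_id="mradermacher/Trion-M-7b-GGUF",
    filename="Trion-M-7b.Q4_K_M.gguf",
)
print(path)
```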
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Trion-M-7b-GGUF/resolve/main/Trion-M-7b.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Trion-M-7b-GGUF/resolve/main/Trion-M-7b.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Trion-M-7b-GGUF/resolve/main/Trion-M-7b.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Trion-M-7b-GGUF/resolve/main/Trion-M-7b.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Trion-M-7b-GGUF/resolve/main/Trion-M-7b.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Trion-M-7b-GGUF/resolve/main/Trion-M-7b.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Trion-M-7b-GGUF/resolve/main/Trion-M-7b.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Trion-M-7b-GGUF/resolve/main/Trion-M-7b.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Trion-M-7b-GGUF/resolve/main/Trion-M-7b.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Trion-M-7b-GGUF/resolve/main/Trion-M-7b.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Trion-M-7b-GGUF/resolve/main/Trion-M-7b.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Trion-M-7b-GGUF/resolve/main/Trion-M-7b.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Trion-M-7b-GGUF/resolve/main/Trion-M-7b.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Trion-M-7b-GGUF/resolve/main/Trion-M-7b.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Trion-M-7b-GGUF/resolve/main/Trion-M-7b.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
failspy/Mistral-22B-v0.2-GGUF | failspy | "2024-04-13T20:31:54Z" | 3,227 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-13T19:54:28Z" | ---
license: apache-2.0
---
Quantized GGUF models from [Vezora/Mistral-22B-v0.2](https://huggingface.co/Vezora/Mistral-22B-v0.2)
### Original Mistral-22b-v.02 Model Card
<img src="https://huggingface.co/Vezora/Mistral-22B-v0.1/resolve/main/unsloth.png" width="100" height="150" />
### Mistral-22b-v.02 Release Announcement 🚀
## This model is not an MoE, it is in fact a 22B parameter dense model!
**Date**: April 13
**Creator** [Nicolas Mejia-Petit](https://twitter.com/mejia_petit)
### Overview
- Just two days after our release of **Mistral-22b-v0.1**, we are excited to introduce our handcrafted experimental model, **Mistral-22b-v.02**. This model is a culmination of equal knowledge distilled from all experts into a single, dense 22B model. It is not a single trained expert; rather, it is a compressed MoE model turned into a dense 22B model. This is the first working MoE-to-dense model conversion.
- v0.2 was trained on 8x more data than v0.1!
### Capabilities
- **Math Proficiency**: The model exhibits strong mathematical abilities, despite not being trained on math.
- **Better at Coding** The model is significantly better at coding than V1; it passed some of my simple coding tests, such as "Create a simple HTML site with a button that changes the background color to a random color", which V1 failed.
- **More Cohesive** This V2 model is significantly more cohesive, and better at understanding prompts and answering with the appropriate answer.
- **Highly Uncensored** Since this model was also re-aligned to be uncensored, it can answer anything you ask. Use it at your own risk; we take no responsibility for your generated responses.
- **Multi Turn** The dataset this model was trained on consists mostly of multi-turn conversations, spanning many different topics, with some emphasis on coding.
- **JSON Mode** I did train this model on answering in JSON and using JSON tools. I have yet to try it in depth, but preliminary tests show it works.
- **Agent abilities** I did train this model on agent datasets that teach it to do real-world tasks, such as picking up an object and even navigating a webpage based on its HTML.
- **Good Chili Recipe** The model gives a good chili recipe :)
- **32k Sequence Length** This model was trained with a 32k sequence length.
### Experimental Nature
Please note that Mistral-22b is still a WIP. v0.3 has now started training with a different method than used before; this is to hopefully make the model more well-rounded in its internal knowledge. Through my testing I found V2 to be a significant improvement over v0.1.
### Upcoming Release: V.3
- v0.3 will feature a different base model for testing purposes; however, this model is pretty darn good for a second test. :)
- I have gathered some preliminary results with my new v0.3 base model, and it appears to achieve a lower loss after the first epoch compared to the base model used for v0.1 and v0.2, so we have started training v0.3 with the new base model and the longer dataset; it will be done and released in the next 48 hours. :)
### Stay Updated
**V.3** is coming soon! It is currently training and will be done in the next ~24 hours. 🌟Paper Coming Soon🌟
- There will be more of these 22B models. There will be 5-6 siblings until I find what gives the best results for MoE compression.
- However, I am very surprised at how good this V.2 model is, based on my small amount of testing.
### Usage:
- This model requires a specific chat template, as the training format was Guanaco; this is what it looks like (see the sketch below):
- "### System: You are a helpful assistant. ### Human###: Give me the best chili recipe you can ###Assistant: Here is the best chili recipe..."
## Thank you!
- Thank you to [Daniel Han](https://twitter.com/danielhanchen) for Unsloth AI, which was used to train this model. This led to a 2-3x speed increase and a 2-3x decrease in memory consumption.
- Thank you to [Charles Goddard](https://twitter.com/chargoddard) for providing me with a script that was necessary to make this model.
- Thank you to Mistral, for releasing Another Wonderful open source model, under Apache 2.0.
- Thank you to [Tim Dettmers](https://twitter.com/Tim_Dettmers), for creating QLora
- Thank you to [Tri Dao](https://twitter.com/tri_dao), for creating Flash Attention
- Thank you to Microsoft, for the Lora paper, and the Slice-GPT paper.
- Thank you to the Hugging Face team, for everything.❤️ We really do appreciate you guys and all your hard work and commitment to the open source community!❤️
- Thank you to [Jon Durbin](https://x.com/jon_durbin?s=21) I used one of his DPO datasets converted to SFT, more info will be explained in paper.
## Future plans: train 4-5 more of these experimental models, gather preliminary testing results, then run evaluations on all the models I see as having the best possibilities of excelling, and use the best one. |
mradermacher/Falcon2-8B-Portuguese-GGUF | mradermacher | "2024-06-05T20:48:38Z" | 3,227 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:ssmits/Falcon2-8B-Portuguese",
"endpoints_compatible",
"region:us"
] | null | "2024-06-05T20:20:04Z" | ---
base_model: ssmits/Falcon2-8B-Portuguese
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ssmits/Falcon2-8B-Portuguese
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
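
As a rough local-inference sketch (quant choice, context size, and prompt are illustrative assumptions), one of the files below can be loaded with `llama-cpp-python`:

```python
from llama_cpp import Llama

# Point model_path at a downloaded quant from the table below.
llm = Llama(model_path="Falcon2-8B-Portuguese.Q4_K_M.gguf", n_ctx=2048)
out = llm("Escreva uma frase sobre o Brasil:", max_tokens=64)
print(out["choices"][0]["text"])
```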
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Portuguese-GGUF/resolve/main/Falcon2-8B-Portuguese.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Portuguese-GGUF/resolve/main/Falcon2-8B-Portuguese.IQ3_XS.gguf) | IQ3_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Portuguese-GGUF/resolve/main/Falcon2-8B-Portuguese.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Portuguese-GGUF/resolve/main/Falcon2-8B-Portuguese.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Portuguese-GGUF/resolve/main/Falcon2-8B-Portuguese.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Portuguese-GGUF/resolve/main/Falcon2-8B-Portuguese.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Portuguese-GGUF/resolve/main/Falcon2-8B-Portuguese.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Portuguese-GGUF/resolve/main/Falcon2-8B-Portuguese.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Portuguese-GGUF/resolve/main/Falcon2-8B-Portuguese.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Portuguese-GGUF/resolve/main/Falcon2-8B-Portuguese.Q4_K_M.gguf) | Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Portuguese-GGUF/resolve/main/Falcon2-8B-Portuguese.Q5_K_S.gguf) | Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Portuguese-GGUF/resolve/main/Falcon2-8B-Portuguese.Q5_K_M.gguf) | Q5_K_M | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Portuguese-GGUF/resolve/main/Falcon2-8B-Portuguese.Q6_K.gguf) | Q6_K | 6.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Portuguese-GGUF/resolve/main/Falcon2-8B-Portuguese.Q8_0.gguf) | Q8_0 | 8.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Portuguese-GGUF/resolve/main/Falcon2-8B-Portuguese.f16.gguf) | f16 | 16.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
patrickvonplaten/wav2vec2-base | patrickvonplaten | "2021-06-08T17:00:26Z" | 3,226 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"speech",
"en",
"dataset:librispeech_asr",
"arxiv:2006.11477",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2022-03-02T23:29:05Z" | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# Wav2Vec2-Base
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
The base model was pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information.
[Paper](https://arxiv.org/abs/2006.11477)
Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli
**Abstract**
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
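Below is a rough feature-extraction sketch (it assumes the checkpoint loads with the standard `Wav2Vec2Model` class and substitutes dummy audio for real 16kHz speech):
```python
import torch
from transformers import Wav2Vec2Model

model = Wav2Vec2Model.from_pretrained("patrickvonplaten/wav2vec2-base")
model.eval()

# One second of dummy audio standing in for real speech sampled at 16kHz.
waveform = torch.randn(1, 16000)
with torch.no_grad():
    hidden_states = model(waveform).last_hidden_state
print(hidden_states.shape)  # (batch, frames, hidden_size)
```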
See [this notebook](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F?usp=sharing) for more information on how to fine-tune the model. |
nickprock/sentence-bert-base-italian-uncased | nickprock | "2023-03-21T09:41:40Z" | 3,226 | 4 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"it",
"dataset:stsb_multi_mt",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | "2023-03-21T09:26:38Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
license: mit
datasets:
- stsb_multi_mt
language:
- it
library_name: sentence-transformers
---
# sentence-bert-base-italian-uncased
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Una ragazza si acconcia i capelli.", "Una ragazza si sta spazzolando i capelli."]
model = SentenceTransformer('nickprock/sentence-bert-base-italian-uncased')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['Una ragazza si acconcia i capelli.', 'Una ragazza si sta spazzolando i capelli.']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('nickprock/sentence-bert-base-italian-uncased')
model = AutoModel.from_pretrained('nickprock/sentence-bert-base-italian-uncased')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=nickprock/sentence-bert-base-italian-uncased)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 360 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 500,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 1500,
"warmup_steps": 360,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
``` |
UBC-NLP/MARBERT | UBC-NLP | "2022-08-16T21:47:42Z" | 3,225 | 21 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"Arabic BERT",
"MSA",
"Twitter",
"Masked Langauge Model",
"ar",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:05Z" | ---
language:
- ar
tags:
- Arabic BERT
- MSA
- Twitter
- Masked Langauge Model
widget:
- text: "اللغة العربية هي لغة [MASK]."
---
<img src="https://raw.githubusercontent.com/UBC-NLP/marbert/main/ARBERT_MARBERT.jpg" alt="drawing" width="200" height="200" align="right"/>
**MARBERT** is one of three models described in our **ACL 2021 paper** **["ARBERT & MARBERT: Deep Bidirectional Transformers for Arabic"](https://aclanthology.org/2021.acl-long.551.pdf)**. MARBERT is a large-scale pre-trained masked language model focused on both Dialectal Arabic (DA) and MSA. Arabic has multiple varieties. To train MARBERT, we randomly sample 1B Arabic tweets from a large in-house dataset of about 6B tweets. We only include tweets with at least 3 Arabic words, based on character string matching, regardless of whether the tweet has non-Arabic strings or not. That is, we do not remove non-Arabic content so long as the tweet meets the 3 Arabic word criterion. The dataset makes up **128GB of text** (**15.6B tokens**). We use the same network architecture as ARBERT (BERT-base), but without the next sentence prediction (NSP) objective since tweets are short. See our [repo](https://github.com/UBC-NLP/LMBERT) for modifying BERT code to remove NSP. For more information about MARBERT, please visit our own GitHub [repo](https://github.com/UBC-NLP/marbert).
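As a quick sketch (not part of the original card), the widget example from this card's metadata can be reproduced with the standard fill-mask pipeline:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="UBC-NLP/MARBERT")

# The example sentence comes from this card's widget metadata.
for prediction in fill_mask("اللغة العربية هي لغة [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```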
# BibTex
If you use our models (ARBERT, MARBERT, or MARBERTv2) for your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows (to be updated):
```bibtex
@inproceedings{abdul-mageed-etal-2021-arbert,
title = "{ARBERT} {\&} {MARBERT}: Deep Bidirectional Transformers for {A}rabic",
author = "Abdul-Mageed, Muhammad and
Elmadany, AbdelRahim and
Nagoudi, El Moatez Billah",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.551",
doi = "10.18653/v1/2021.acl-long.551",
pages = "7088--7105",
abstract = "Pre-trained language models (LMs) are currently integral to many natural language processing systems. Although multilingual LMs were also introduced to serve many languages, these have limitations such as being costly at inference time and the size and diversity of non-English data involved in their pre-training. We remedy these issues for a collection of diverse Arabic varieties by introducing two powerful deep bidirectional transformer-based models, ARBERT and MARBERT. To evaluate our models, we also introduce ARLUE, a new benchmark for multi-dialectal Arabic language understanding evaluation. ARLUE is built using 42 datasets targeting six different task clusters, allowing us to offer a series of standardized experiments under rich conditions. When fine-tuned on ARLUE, our models collectively achieve new state-of-the-art results across the majority of tasks (37 out of 48 classification tasks, on the 42 datasets). Our best model acquires the highest ARLUE score (77.40) across all six task clusters, outperforming all other models including XLM-R Large ( 3.4x larger size). Our models are publicly available at https://github.com/UBC-NLP/marbert and ARLUE will be released through the same repository.",
}
```
## Acknowledgments
We gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada, Canadian Foundation for Innovation, [ComputeCanada](www.computecanada.ca) and [UBC ARC-Sockeye](https://doi.org/10.14288/SOCKEYE). We also thank the [Google TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc) program for providing us with free TPU access. |
timm/repghostnet_080.in1k | timm | "2023-08-19T23:12:16Z" | 3,225 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2211.06088",
"license:mit",
"region:us"
] | image-classification | "2023-08-19T23:12:13Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: mit
datasets:
- imagenet-1k
---
# Model card for repghostnet_080.in1k
A RepGhostNet image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 3.3
- GMACs: 0.1
- Activations (M): 3.2
- Image size: 224 x 224
- **Papers:**
- RepGhost: A Hardware-Efficient Ghost Module via Re-parameterization: https://arxiv.org/abs/2211.06088
- **Original:** https://github.com/ChengpengChen/RepGhost
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('repghostnet_080.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'repghostnet_080.in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 12, 112, 112])
# torch.Size([1, 20, 56, 56])
# torch.Size([1, 32, 28, 28])
# torch.Size([1, 64, 14, 14])
# torch.Size([1, 128, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'repghostnet_080.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 768, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@article{chen2022repghost,
title={RepGhost: A Hardware-Efficient Ghost Module via Re-parameterization},
author={Chen, Chengpeng, and Guo, Zichao, and Zeng, Haien, and Xiong, Pengfei and Dong, Jian},
journal={arXiv preprint arXiv:2211.06088},
year={2022}
}
```
|
TheBloke/Llama-2-7B-32K-Instruct-GGUF | TheBloke | "2023-10-24T14:35:33Z" | 3,223 | 55 | transformers | [
"transformers",
"gguf",
"llama",
"en",
"dataset:togethercomputer/llama-instruct",
"arxiv:2307.03172",
"base_model:togethercomputer/Llama-2-7B-32K-Instruct",
"license:llama2",
"text-generation-inference",
"region:us"
] | null | "2023-09-05T23:33:29Z" | ---
language:
- en
license: llama2
library_name: transformers
datasets:
- togethercomputer/llama-instruct
model_name: Llama2 7B 32K Instruct
base_model: togethercomputer/Llama-2-7B-32K-Instruct
inference: false
model_creator: Together
model_type: llama
prompt_template: '[INST]
{prompt}
[\INST]
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama2 7B 32K Instruct - GGUF
- Model creator: [Together](https://huggingface.co/togethercomputer)
- Original model: [Llama2 7B 32K Instruct](https://huggingface.co/togethercomputer/Llama-2-7B-32K-Instruct)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Together's Llama2 7B 32K Instruct](https://huggingface.co/togethercomputer/Llama-2-7B-32K-Instruct).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGUF)
* [Together's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/togethercomputer/Llama-2-7B-32K-Instruct)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Llama2-Instruct-Only
```
[INST]
{prompt}
[\INST]
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [llama-2-7b-32k-instruct.Q2_K.gguf](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGUF/blob/main/llama-2-7b-32k-instruct.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama-2-7b-32k-instruct.Q3_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGUF/blob/main/llama-2-7b-32k-instruct.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [llama-2-7b-32k-instruct.Q3_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGUF/blob/main/llama-2-7b-32k-instruct.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [llama-2-7b-32k-instruct.Q3_K_L.gguf](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGUF/blob/main/llama-2-7b-32k-instruct.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [llama-2-7b-32k-instruct.Q4_0.gguf](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGUF/blob/main/llama-2-7b-32k-instruct.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama-2-7b-32k-instruct.Q4_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGUF/blob/main/llama-2-7b-32k-instruct.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [llama-2-7b-32k-instruct.Q4_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGUF/blob/main/llama-2-7b-32k-instruct.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [llama-2-7b-32k-instruct.Q5_0.gguf](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGUF/blob/main/llama-2-7b-32k-instruct.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama-2-7b-32k-instruct.Q5_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGUF/blob/main/llama-2-7b-32k-instruct.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [llama-2-7b-32k-instruct.Q5_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGUF/blob/main/llama-2-7b-32k-instruct.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [llama-2-7b-32k-instruct.Q6_K.gguf](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGUF/blob/main/llama-2-7b-32k-instruct.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [llama-2-7b-32k-instruct.Q8_0.gguf](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGUF/blob/main/llama-2-7b-32k-instruct.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Llama-2-7B-32K-Instruct-GGUF and below it, a specific filename to download, such as: llama-2-7b-32k-instruct.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub>=0.17.1
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Llama-2-7B-32K-Instruct-GGUF llama-2-7b-32k-instruct.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Llama-2-7B-32K-Instruct-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Llama-2-7B-32K-Instruct-GGUF llama-2-7b-32k-instruct.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m llama-2-7b-32k-instruct.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST]\n{prompt}\n[\INST]"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install ctransformers>=0.2.24
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]>=0.2.24
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Llama-2-7B-32K-Instruct-GGUF", model_file="llama-2-7b-32k-instruct.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Together's Llama2 7B 32K Instruct
# Llama-2-7B-32K-Instruct
## Model Description
Llama-2-7B-32K-Instruct is an open-source, long-context chat model finetuned from [Llama-2-7B-32K](https://huggingface.co/togethercomputer/Llama-2-7B-32K), over high-quality instruction and chat data.
We built Llama-2-7B-32K-Instruct with less than 200 lines of Python script using [Together API](https://together.ai/blog/api-announcement), and we also make the [recipe fully available](https://github.com/togethercomputer/Llama-2-7B-32K-Instruct).
We hope that this can enable everyone to finetune their own version of [Llama-2-7B-32K](https://huggingface.co/togethercomputer/Llama-2-7B-32K) — play with [Together API](https://together.ai/blog/api-announcement) and give us feedback!
## Data Collection Details
Llama-2-7B-32K-Instruct is fine-tuned over a combination of two parts:
1. **19K single- and multi-round conversations generated by human instructions and [Llama-2-70B-Chat](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) outputs**.
We collected the dataset following the distillation paradigm that is used by Alpaca, Vicuna, WizardLM, Orca — producing instructions by querying a powerful LLM (in this case, [Llama-2-70B-Chat](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)).
The complete dataset is also released [here](https://huggingface.co/datasets/togethercomputer/llama-instruct).
We also share the complete recipe for the data collection process [here](https://github.com/togethercomputer/Llama-2-7B-32K-Instruct).
2. **Long-context Summarization and Long-context QA**.
We follow the recipe of [Llama-2-7B-32K](https://together.ai/blog/Llama-2-7B-32K), and train our model with the [BookSum dataset](https://huggingface.co/datasets/togethercomputer/Long-Data-Collections) and [Multi-document Question Answering](https://arxiv.org/abs/2307.03172).
The final data mixture used for model finetuning is: 19K instruction (50%) + BookSum (25%) + MQA (25%).
## Model Usage
We encourage you to try out this model using the [Together API](https://together.ai/blog/api-announcement). The updated inference stack allows for efficient inference.
To run the model locally, we strongly recommend installing Flash Attention V2, which is necessary to obtain the best performance:
```
# Please update the path of `CUDA_HOME`
export CUDA_HOME=/usr/local/cuda-11.8
pip install transformers==4.31.0
pip install sentencepiece
pip install ninja
pip install flash-attn --no-build-isolation
pip install git+https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary
```
You can load the model directly from the Hugging Face model hub using
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/Llama-2-7B-32K-Instruct")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/Llama-2-7B-32K-Instruct",
trust_remote_code=True, torch_dtype=torch.float16)
input_ids = tokenizer.encode("[INST]\nWrite a poem about cats\n[/INST]\n\n", return_tensors="pt")
output = model.generate(input_ids, max_length=128,
temperature=0.7, repetition_penalty=1.1, top_p=0.7, top_k=50)
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
```
The model is also hosted on [Together Playground](https://api.together.xyz/playground). You can simply play with the model by using a prompt formatted as:
```
[INST]\n<your instruction here>\n[\INST]\n\n
```
For example, if we query the model with
```
[INST]\nWrite a poem about cats\n[\INST]\n\n
```
the model will return
```
[INST]
Write a poem about cats
[/INST]
Cats are mysterious creatures, with their aloof and independent nature. They're also incredibly beautiful, with their sleek fur and piercing eyes. Here's a poem that captures the essence of cats:
Cats, oh cats, how can I describe you?
Your beauty is beyond compare, it seems.
You're graceful and elegant, like a ballerina's dance,
But don't let your charm fool you, for you're not easily tamed.
With your soft purring and playful meows,
You draw us in with your enchanting powers.
We love to watch you play, your tail twirling 'round,
As if you're dancing on air, with no sound.
But don't be fooled by your sweetness, my friend,
For beneath that gentle exterior, lies a fierce defender.
When danger lurks, you'll spring into action,
Protecting those you hold dear, without question.
Solet us admire you, from afar,
For in your own way, you're truly unique, a star.
And though we may never fully understand,
The depths of your soul, we'll always stand, hand in paw, as one.
This poem captures the essence of cats, highlighting their beauty, independence,and protective nature. It also celebrates the special bond between humans and cats, recognizing their unique qualities and the joy they bring to our lives.
```
## Model Evaluation
We evaluate the model from three aspects: 1) [Alpaca Eval](https://tatsu-lab.github.io/alpaca_eval/);
2) [Rouge score over BookSum](https://together.ai/blog/Llama-2-7B-32K); and
3) [Accuracy over Multi-document Question Answering (MQA)](https://together.ai/blog/Llama-2-7B-32K).
We compare with models including
[GPT-3.5-Turbo-16K](https://platform.openai.com/docs/models/gpt-3-5),
[Llama-2-7B-Chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf),
[Longchat-7b-16k](https://huggingface.co/lmsys/longchat-7b-16k)
and [Longchat-7b-v1.5-32k](https://huggingface.co/lmsys/longchat-7b-v1.5-32k).
We summarize the results below:
* Alpaca Eval
| Model | win_rate | standard_error | n_total | avg_length |
| -------- | ------- | ------- | ------- | ------- |
| Llama-2-7B-Chat-hf | 71.37 | 1.59 | 805 | 1479 |
| Llama-2-7B-32K-Instruct | 70.36 | 1.61 | 803 | 1885 |
| oasst-rlhf-llama-33b | 66.52 | 1.66 | 805 | 1079 |
| text_davinci_003 | 50.00 | 0.00 | 805 | 307|
| falcon-40b-instruct | 45.71 | 1.75 | 805 | 662 |
| alpaca-farm-ppo-human | 41.24 | 1.73 | 805 | 803 |
| alpaca-7b | 26.46 | 1.54 | 805 | 396 |
| text_davinci_001 | 15.17 | 1.24 | 804 | 296 |
* Rouge Score over BookSum
| Model | R1 | R2 | RL |
| -------- | ------- | ------- | ------- |
| Llama-2-7B-Chat-hf | 0.055 | 0.008 | 0.046 |
| Longchat-7b-16k | 0.303 | 0.055 | 0.160 |
| Longchat-7b-v1.5-32k | 0.308 | 0.057 | 0.163 |
| GPT-3.5-Turbo-16K | 0.324 | 0.066 | 0.178 |
| Llama-2-7B-32K-Instruct (ours) | 0.336 | 0.076 | 0.184 |
* Accuracy over MQA
| Model | 20 docs (Avg 2.9K tokens) | 30 docs (Avg 4.4K tokens) | 50 docs (Avg 7.4K tokens) |
| -------- | ------- | ------- | ------- |
| Llama-2-7B-Chat-hf | 0.448 | 0.421 | 0.354 |
| Longchat-7b-16k | 0.510 | 0.473 | 0.428 |
| Longchat-7b-v1.5-32k | 0.534 | 0.516 | 0.479 |
| GPT-3.5-Turbo-16K | 0.622 | 0.609 | 0.577 |
| Llama-2-7B-32K-Instruct (ours) | 0.622 | 0.604 | 0.589 |
## Limitations and Bias
As with all language models, Llama-2-7B-32K-Instruct may generate incorrect or biased content. It's important to keep this in mind when using the model.
## Community
Join us on [Together Discord](https://discord.gg/6ZVDU8tTD4)
<!-- original-model-card end -->
|
legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF | legraphista | "2024-05-28T19:39:47Z" | 3,221 | 1 | gguf | [
"gguf",
"quantized",
"GGUF",
"imatrix",
"quantization",
"imat",
"static",
"text-generation",
"base_model:princeton-nlp/Llama-3-Instruct-8B-SimPO",
"region:us"
] | text-generation | "2024-05-28T18:27:08Z" | ---
base_model: princeton-nlp/Llama-3-Instruct-8B-SimPO
inference: false
library_name: gguf
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- quantized
- GGUF
- imatrix
- quantization
- imat
- imatrix
- static
---
# Llama-3-Instruct-8B-SimPO-IMat-GGUF
_Llama.cpp imatrix quantization of princeton-nlp/Llama-3-Instruct-8B-SimPO_
Original Model: [princeton-nlp/Llama-3-Instruct-8B-SimPO](https://huggingface.co/princeton-nlp/Llama-3-Instruct-8B-SimPO)
Original dtype: `BF16` (`bfloat16`)
Quantized by: llama.cpp [b3023](https://github.com/ggerganov/llama.cpp/releases/tag/b3023)
IMatrix dataset: [here](https://gist.githubusercontent.com/legraphista/d6d93f1a254bcfc58e0af3777eaec41e/raw/d380e7002cea4a51c33fffd47db851942754e7cc/imatrix.calibration.medium.raw)
- [Llama-3-Instruct-8B-SimPO-IMat-GGUF](#llama-3-instruct-8b-simpo-imat-gguf)
- [Files](#files)
- [IMatrix](#imatrix)
- [Common Quants](#common-quants)
- [All Quants](#all-quants)
- [Downloading using huggingface-cli](#downloading-using-huggingface-cli)
- [Inference](#inference)
- [Simple chat template](#simple-chat-template)
- [Chat template with system prompt](#chat-template-with-system-prompt)
- [Llama.cpp](#llama-cpp)
- [FAQ](#faq)
- [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere)
- [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf)
---
## Files
### IMatrix
Status: ✅ Available
Link: [here](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/imatrix.dat)
### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [Llama-3-Instruct-8B-SimPO.Q8_0.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q8_0.gguf) | Q8_0 | 8.54GB | ✅ Available | ⚪ Static | 📦 No
| [Llama-3-Instruct-8B-SimPO.Q6_K.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q6_K.gguf) | Q6_K | 6.60GB | ✅ Available | ⚪ Static | 📦 No
| [Llama-3-Instruct-8B-SimPO.Q4_K.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q4_K.gguf) | Q4_K | 4.92GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3-Instruct-8B-SimPO.Q3_K.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q3_K.gguf) | Q3_K | 4.02GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3-Instruct-8B-SimPO.Q2_K.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q2_K.gguf) | Q2_K | 3.18GB | ✅ Available | 🟢 IMatrix | 📦 No
### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [Llama-3-Instruct-8B-SimPO.BF16.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.BF16.gguf) | BF16 | 16.07GB | ✅ Available | ⚪ Static | 📦 No
| [Llama-3-Instruct-8B-SimPO.FP16.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.FP16.gguf) | F16 | 16.07GB | ✅ Available | ⚪ Static | 📦 No
| [Llama-3-Instruct-8B-SimPO.Q8_0.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q8_0.gguf) | Q8_0 | 8.54GB | ✅ Available | ⚪ Static | 📦 No
| [Llama-3-Instruct-8B-SimPO.Q6_K.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q6_K.gguf) | Q6_K | 6.60GB | ✅ Available | ⚪ Static | 📦 No
| [Llama-3-Instruct-8B-SimPO.Q5_K.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q5_K.gguf) | Q5_K | 5.73GB | ✅ Available | ⚪ Static | 📦 No
| [Llama-3-Instruct-8B-SimPO.Q5_K_S.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q5_K_S.gguf) | Q5_K_S | 5.60GB | ✅ Available | ⚪ Static | 📦 No
| [Llama-3-Instruct-8B-SimPO.Q4_K.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q4_K.gguf) | Q4_K | 4.92GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3-Instruct-8B-SimPO.Q4_K_S.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q4_K_S.gguf) | Q4_K_S | 4.69GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3-Instruct-8B-SimPO.IQ4_NL.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.IQ4_NL.gguf) | IQ4_NL | 4.68GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3-Instruct-8B-SimPO.IQ4_XS.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.IQ4_XS.gguf) | IQ4_XS | 4.45GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3-Instruct-8B-SimPO.Q3_K.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q3_K.gguf) | Q3_K | 4.02GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3-Instruct-8B-SimPO.Q3_K_L.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q3_K_L.gguf) | Q3_K_L | 4.32GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3-Instruct-8B-SimPO.Q3_K_S.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q3_K_S.gguf) | Q3_K_S | 3.66GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3-Instruct-8B-SimPO.IQ3_M.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.IQ3_M.gguf) | IQ3_M | 3.78GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3-Instruct-8B-SimPO.IQ3_S.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.IQ3_S.gguf) | IQ3_S | 3.68GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3-Instruct-8B-SimPO.IQ3_XS.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.IQ3_XS.gguf) | IQ3_XS | 3.52GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3-Instruct-8B-SimPO.IQ3_XXS.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3-Instruct-8B-SimPO.Q2_K.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q2_K.gguf) | Q2_K | 3.18GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3-Instruct-8B-SimPO.Q2_K_S.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q2_K_S.gguf) | Q2_K_S | 2.99GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3-Instruct-8B-SimPO.IQ2_M.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.IQ2_M.gguf) | IQ2_M | 2.95GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3-Instruct-8B-SimPO.IQ2_S.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.IQ2_S.gguf) | IQ2_S | 2.76GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3-Instruct-8B-SimPO.IQ2_XS.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.IQ2_XS.gguf) | IQ2_XS | 2.61GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3-Instruct-8B-SimPO.IQ2_XXS.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.IQ2_XXS.gguf) | IQ2_XXS | 2.40GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3-Instruct-8B-SimPO.IQ1_M.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.IQ1_M.gguf) | IQ1_M | 2.16GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Llama-3-Instruct-8B-SimPO.IQ1_S.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.IQ1_S.gguf) | IQ1_S | 2.02GB | ✅ Available | 🟢 IMatrix | 📦 No
## Downloading using huggingface-cli
If you do not have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:
```
huggingface-cli download legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF --include "Llama-3-Instruct-8B-SimPO.BF16.gguf" --local-dir ./
```
If the model file is big, it has been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF --include "Llama-3-Instruct-8B-SimPO.BF16/*" --local-dir ./
# see FAQ for merging GGUF's
```
---
## Inference
### Simple chat template
```
<|im_start|>user
Can you provide ways to eat combinations of bananas and dragonfruits?<|im_end|>
<|im_start|>assistant
Sure! Here are some ways to eat bananas and dragonfruits together:
1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey.
2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey.<|im_end|>
<|im_start|>user
What about solving an 2x + 3 = 7 equation?<|im_end|>
```
### Chat template with system prompt
```
<|im_start|>system
You are a helpful AI.<|im_end|>
<|im_start|>user
Can you provide ways to eat combinations of bananas and dragonfruits?<|im_end|>
<|im_start|>assistant
Sure! Here are some ways to eat bananas and dragonfruits together:
1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey.
2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey.<|im_end|>
<|im_start|>user
What about solving an 2x + 3 = 7 equation?<|im_end|>
```
### Llama.cpp
```
llama.cpp/main -m Llama-3-Instruct-8B-SimPO.BF16.gguf --color -i -p "prompt here (according to the chat template)"
```
---
## FAQ
### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results).
### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
- To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
- Download the appropriate zip for your system from the latest release
- Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `Llama-3-Instruct-8B-SimPO.BF16`)
3. Run `gguf-split --merge Llama-3-Instruct-8B-SimPO.BF16/Llama-3-Instruct-8B-SimPO.BF16-00001-of-XXXXX.gguf Llama-3-Instruct-8B-SimPO.BF16.gguf`
- Make sure to point `gguf-split` to the first chunk of the split.
---
Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)! |
mradermacher/Tiamat-7b-1.1-DPO-GGUF | mradermacher | "2024-06-06T01:33:11Z" | 3,220 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Gryphe/Tiamat-7b-1.1-DPO",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-05T18:47:31Z" | ---
base_model: Gryphe/Tiamat-7b-1.1-DPO
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Gryphe/Tiamat-7b-1.1-DPO
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
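For a quick start from Python, the sketch below is illustrative only (it is not part of the original card): it downloads one of the quants listed in the table further down with `huggingface_hub` and loads it with `llama-cpp-python`; the chosen quant file and context size are assumptions you can adjust.

```python
# Illustrative sketch: fetch a single GGUF quant from this repo and run it locally.
# Requires: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Tiamat-7b-1.1-DPO-GGUF",
    filename="Tiamat-7b-1.1-DPO.Q4_K_M.gguf",  # any quant from the table below works
)
llm = Llama(model_path=gguf_path, n_ctx=4096)  # context size is an example value
out = llm("Tiamat, introduce yourself.", max_tokens=128)
print(out["choices"][0]["text"])
```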
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
nvidia/segformer-b4-finetuned-ade-512-512 | nvidia | "2022-08-06T10:25:42Z" | 3,219 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"segformer",
"vision",
"image-segmentation",
"dataset:scene_parse_150",
"arxiv:2105.15203",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | "2022-03-02T23:29:05Z" | ---
license: other
tags:
- vision
- image-segmentation
datasets:
- scene_parse_150
widget:
- src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg
example_title: House
- src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg
example_title: Castle
---
# SegFormer (b4-sized) model fine-tuned on ADE20k
SegFormer model fine-tuned on ADE20k at resolution 512x512. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to perform semantic segmentation on an image from the COCO 2017 dataset:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b4-finetuned-ade-512-512")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b4-finetuned-ade-512-512")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
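The logits come out at a quarter of the input resolution. As a small follow-up to the snippet above (not part of the original card), you can upsample them and take the per-pixel argmax to obtain the predicted segmentation map:

```python
# Continues the example above: upsample the logits to the input size and take the
# per-pixel argmax to get ADE20k class indices.
import torch

upsampled_logits = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
predicted_map = upsampled_logits.argmax(dim=1)[0]  # shape (height, width)
```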
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
Helsinki-NLP/opus-mt-tc-big-lt-en | Helsinki-NLP | "2023-10-10T10:37:14Z" | 3,219 | 2 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"marian",
"text2text-generation",
"translation",
"opus-mt-tc",
"en",
"lt",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-04-13T16:56:01Z" | ---
language:
- en
- lt
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-lt-en
results:
- task:
name: Translation lit-eng
type: translation
args: lit-eng
dataset:
name: flores101-devtest
type: flores_101
args: lit eng devtest
metrics:
- name: BLEU
type: bleu
value: 34.3
- task:
name: Translation lit-eng
type: translation
args: lit-eng
dataset:
name: newsdev2019
type: newsdev2019
args: lit-eng
metrics:
- name: BLEU
type: bleu
value: 32.9
- task:
name: Translation lit-eng
type: translation
args: lit-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: lit-eng
metrics:
- name: BLEU
type: bleu
value: 61.6
- task:
name: Translation lit-eng
type: translation
args: lit-eng
dataset:
name: newstest2019
type: wmt-2019-news
args: lit-eng
metrics:
- name: BLEU
type: bleu
value: 32.3
---
# opus-mt-tc-big-lt-en
Neural machine translation model for translating from Lithuanian (lt) to English (en).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-02-25
* source language(s): lit
* target language(s): eng
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-02-25.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-eng/opusTCv20210807+bt_transformer-big_2022-02-25.zip)
* more information released models: [OPUS-MT lit-eng README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lit-eng/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"Katė sedėjo ant kėdės.",
"Jukiko mėgsta bulves."
]
model_name = "pytorch-models/opus-mt-tc-big-lt-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# The cat sat on a chair.
# Yukiko likes potatoes.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-lt-en")
print(pipe("Katė sedėjo ant kėdės."))
# expected output: The cat sat on a chair.
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-02-25.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-eng/opusTCv20210807+bt_transformer-big_2022-02-25.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-eng/opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| lit-eng | tatoeba-test-v2021-08-07 | 0.74881 | 61.6 | 2528 | 17855 |
| lit-eng | flores101-devtest | 0.60662 | 34.3 | 1012 | 24721 |
| lit-eng | newsdev2019 | 0.59995 | 32.9 | 2000 | 49312 |
| lit-eng | newstest2019 | 0.61742 | 32.3 | 1000 | 25878 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 3405783
* port time: Wed Apr 13 19:55:51 EEST 2022
* port machine: LM0-400-22516.local
|
mradermacher/Venomia-m7-GGUF | mradermacher | "2024-06-04T18:01:10Z" | 3,219 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Sao10K/Venomia-m7",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-04T12:42:48Z" | ---
base_model: Sao10K/Venomia-m7
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Sao10K/Venomia-m7
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Venomia-m7-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Venomia-m7-GGUF/resolve/main/Venomia-m7.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Venomia-m7-GGUF/resolve/main/Venomia-m7.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Venomia-m7-GGUF/resolve/main/Venomia-m7.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Venomia-m7-GGUF/resolve/main/Venomia-m7.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Venomia-m7-GGUF/resolve/main/Venomia-m7.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Venomia-m7-GGUF/resolve/main/Venomia-m7.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Venomia-m7-GGUF/resolve/main/Venomia-m7.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Venomia-m7-GGUF/resolve/main/Venomia-m7.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Venomia-m7-GGUF/resolve/main/Venomia-m7.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Venomia-m7-GGUF/resolve/main/Venomia-m7.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Venomia-m7-GGUF/resolve/main/Venomia-m7.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Venomia-m7-GGUF/resolve/main/Venomia-m7.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Venomia-m7-GGUF/resolve/main/Venomia-m7.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Venomia-m7-GGUF/resolve/main/Venomia-m7.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Venomia-m7-GGUF/resolve/main/Venomia-m7.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
facebook/maskformer-swin-base-ade | facebook | "2022-11-10T10:22:19Z" | 3,216 | 9 | transformers | [
"transformers",
"pytorch",
"maskformer",
"vision",
"image-segmentation",
"dataset:scene_parse_150",
"arxiv:2107.06278",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | "2022-03-02T23:29:05Z" | ---
license: other
tags:
- vision
- image-segmentation
datasets:
- scene_parse_150
widget:
- src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg
example_title: House
- src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg
example_title: Castle
---
# MaskFormer
MaskFormer model trained on ADE20k semantic segmentation (base-sized version, Swin backbone). It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).
Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.

## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation
from PIL import Image
import requests
url = "https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-base-ade")
inputs = feature_extractor(images=image, return_tensors="pt")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-base-ade")
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to feature_extractor for postprocessing
# we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs)
predicted_semantic_map = feature_extractor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer). |
aegon-h/SDXL-LORA | aegon-h | "2023-10-02T01:27:05Z" | 3,216 | 1 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion",
"text-to-image",
"lora",
"loraxl",
"en",
"base_model:FFusion/FFXL400",
"license:openrail++",
"region:us"
] | text-to-image | "2023-10-02T01:27:05Z" | ---
license: openrail++
base_model: FFusion/FFXL400
instance_prompt: Morphxl_V10
widget:
- text: >-
your prompt
example_title: your creation
- text: >-
A cyberpunk city, cyberpunk style, a girl in the city , walking, ultra high quality, neon ambiance, abstract black oil, gear mecha, detailed acrylic, grunge, intricate complexity, rendered in unreal engine, photorealistic
example_title: Neon city
Negative prompt: photograph, deformed, glitch, noisy, realistic, stock photo, watermark,signature, blurry
tags:
- stable-diffusion-xl
- diffusers
- stable-diffusion
- text-to-image
- lora
- loraxl
language:
- en
library_name: diffusers
---
# SDXL-LORA
- Model creator: [FFusion](https://huggingface.co/FFusion)
- Original model: [400GB-LoraXL](https://huggingface.co/FFusion/400GB-LoraXL)
## Description
This repo contains files for [FFusion's 400GB-LoraXL](https://huggingface.co/FFusion/400GB-LoraXL).
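The card does not include usage code; as an illustrative sketch (not provided by the author), LoRA files from this repo can typically be applied on top of an SDXL base pipeline with `diffusers`. The base checkpoint and the `weight_name` below are placeholders you would replace with your own choices:

```python
# Illustrative sketch: apply one of the LoRA files in this repo to an SDXL pipeline.
from diffusers import StableDiffusionXLPipeline
import torch

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # placeholder base model
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights(
    "aegon-h/SDXL-LORA",
    weight_name="<one of the .safetensors files in this repo>",  # placeholder
)
image = pipe("A cyberpunk city, neon ambiance, ultra high quality").images[0]
image.save("cyberpunk.png")
```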
|
qanastek/XLMRoberta-Alexa-Intents-Classification | qanastek | "2022-05-05T00:52:15Z" | 3,215 | 40 | transformers | [
"transformers",
"pytorch",
"Transformers",
"text-classification",
"intent-classification",
"multi-class-classification",
"natural-language-understanding",
"dataset:qanastek/MASSIVE",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-05-04T22:36:57Z" | ---
tags:
- Transformers
- text-classification
- intent-classification
- multi-class-classification
- natural-language-understanding
languages:
- af-ZA
- am-ET
- ar-SA
- az-AZ
- bn-BD
- cy-GB
- da-DK
- de-DE
- el-GR
- en-US
- es-ES
- fa-IR
- fi-FI
- fr-FR
- he-IL
- hi-IN
- hu-HU
- hy-AM
- id-ID
- is-IS
- it-IT
- ja-JP
- jv-ID
- ka-GE
- km-KH
- kn-IN
- ko-KR
- lv-LV
- ml-IN
- mn-MN
- ms-MY
- my-MM
- nb-NO
- nl-NL
- pl-PL
- pt-PT
- ro-RO
- ru-RU
- sl-SL
- sq-AL
- sv-SE
- sw-KE
- ta-IN
- te-IN
- th-TH
- tl-PH
- tr-TR
- ur-PK
- vi-VN
- zh-CN
- zh-TW
multilinguality:
- af-ZA
- am-ET
- ar-SA
- az-AZ
- bn-BD
- cy-GB
- da-DK
- de-DE
- el-GR
- en-US
- es-ES
- fa-IR
- fi-FI
- fr-FR
- he-IL
- hi-IN
- hu-HU
- hy-AM
- id-ID
- is-IS
- it-IT
- ja-JP
- jv-ID
- ka-GE
- km-KH
- kn-IN
- ko-KR
- lv-LV
- ml-IN
- mn-MN
- ms-MY
- my-MM
- nb-NO
- nl-NL
- pl-PL
- pt-PT
- ro-RO
- ru-RU
- sl-SL
- sq-AL
- sv-SE
- sw-KE
- ta-IN
- te-IN
- th-TH
- tl-PH
- tr-TR
- ur-PK
- vi-VN
- zh-CN
- zh-TW
datasets:
- qanastek/MASSIVE
widget:
- text: "wake me up at five am this week"
- text: "je veux écouter la chanson de jacques brel encore une fois"
- text: "quiero escuchar la canción de arijit singh una vez más"
- text: "olly onde é que á um parque por perto onde eu possa correr"
- text: "פרק הבא בפודקאסט בבקשה"
- text: "亚马逊股价"
- text: "найди билет на поезд в санкт-петербург"
license: cc-by-4.0
---
**People Involved**
* [LABRAK Yanis](https://www.linkedin.com/in/yanis-labrak-8a7412145/) (1)
**Affiliations**
1. [LIA, NLP team](https://lia.univ-avignon.fr/), Avignon University, Avignon, France.
## Demo: How to use in HuggingFace Transformers Pipeline
Requires [transformers](https://pypi.org/project/transformers/): ```pip install transformers```
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, TextClassificationPipeline
model_name = 'qanastek/XLMRoberta-Alexa-Intents-Classification'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
classifier = TextClassificationPipeline(model=model, tokenizer=tokenizer)
res = classifier("réveille-moi à neuf heures du matin le vendredi")
print(res)
```
Outputs:
```python
[{'label': 'alarm_set', 'score': 0.9998375177383423}]
```
## Training data
[MASSIVE](https://huggingface.co/datasets/qanastek/MASSIVE) is a parallel dataset of > 1M utterances across 51 languages with annotations for the Natural Language Understanding tasks of intent prediction and slot annotation. Utterances span 60 intents and include 55 slot types. MASSIVE was created by localizing the SLURP dataset, composed of general Intelligent Voice Assistant single-shot interactions.
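If you want to inspect the training data yourself, a minimal sketch (not part of the original card) using the `datasets` library is shown below; the `"en-US"` configuration name is an assumption based on the locale codes listed above and may need adjusting:

```python
# Illustrative sketch: load one locale of the MASSIVE dataset for inspection.
from datasets import load_dataset

massive = load_dataset("qanastek/MASSIVE", "en-US", split="train")
print(massive[0])  # each example contains an utterance and its intent label
```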
## Intents
* audio_volume_other
* play_music
* iot_hue_lighton
* general_greet
* calendar_set
* audio_volume_down
* social_query
* audio_volume_mute
* iot_wemo_on
* iot_hue_lightup
* audio_volume_up
* iot_coffee
* takeaway_query
* qa_maths
* play_game
* cooking_query
* iot_hue_lightdim
* iot_wemo_off
* music_settings
* weather_query
* news_query
* alarm_remove
* social_post
* recommendation_events
* transport_taxi
* takeaway_order
* music_query
* calendar_query
* lists_query
* qa_currency
* recommendation_movies
* general_joke
* recommendation_locations
* email_querycontact
* lists_remove
* play_audiobook
* email_addcontact
* lists_createoradd
* play_radio
* qa_stock
* alarm_query
* email_sendemail
* general_quirky
* music_likeness
* cooking_recipe
* email_query
* datetime_query
* transport_traffic
* play_podcasts
* iot_hue_lightchange
* calendar_remove
* transport_query
* transport_ticket
* qa_factoid
* iot_cleaning
* alarm_set
* datetime_convert
* iot_hue_lightoff
* qa_definition
* music_dislikeness
## Evaluation results
```plain
precision recall f1-score support
alarm_query 0.9661 0.9037 0.9338 1734
alarm_remove 0.9484 0.9608 0.9545 1071
alarm_set 0.8611 0.9254 0.8921 2091
audio_volume_down 0.8657 0.9537 0.9075 561
audio_volume_mute 0.8608 0.9130 0.8861 1632
audio_volume_other 0.8684 0.5392 0.6653 306
audio_volume_up 0.7198 0.8446 0.7772 663
calendar_query 0.7555 0.8229 0.7878 6426
calendar_remove 0.8688 0.9441 0.9049 3417
calendar_set 0.9092 0.9014 0.9053 10659
cooking_query 0.0000 0.0000 0.0000 0
cooking_recipe 0.9282 0.8592 0.8924 3672
datetime_convert 0.8144 0.7686 0.7909 765
datetime_query 0.9152 0.9305 0.9228 4488
email_addcontact 0.6482 0.8431 0.7330 612
email_query 0.9629 0.9319 0.9472 6069
email_querycontact 0.6853 0.8032 0.7396 1326
email_sendemail 0.9530 0.9381 0.9455 5814
general_greet 0.1026 0.3922 0.1626 51
general_joke 0.9305 0.9123 0.9213 969
general_quirky 0.6984 0.5417 0.6102 8619
iot_cleaning 0.9590 0.9359 0.9473 1326
iot_coffee 0.9304 0.9749 0.9521 1836
iot_hue_lightchange 0.8794 0.9374 0.9075 1836
iot_hue_lightdim 0.8695 0.8711 0.8703 1071
iot_hue_lightoff 0.9440 0.9229 0.9334 2193
iot_hue_lighton 0.4545 0.5882 0.5128 153
iot_hue_lightup 0.9271 0.8315 0.8767 1377
iot_wemo_off 0.9615 0.8715 0.9143 918
iot_wemo_on 0.8455 0.7941 0.8190 510
lists_createoradd 0.8437 0.8356 0.8396 1989
lists_query 0.8918 0.8335 0.8617 2601
lists_remove 0.9536 0.8601 0.9044 2652
music_dislikeness 0.7725 0.7157 0.7430 204
music_likeness 0.8570 0.8159 0.8359 1836
music_query 0.8667 0.8050 0.8347 1785
music_settings 0.4024 0.3301 0.3627 306
news_query 0.8343 0.8657 0.8498 6324
play_audiobook 0.8172 0.8125 0.8149 2091
play_game 0.8666 0.8403 0.8532 1785
play_music 0.8683 0.8845 0.8763 8976
play_podcasts 0.8925 0.9125 0.9024 3213
play_radio 0.8260 0.8935 0.8585 3672
qa_currency 0.9459 0.9578 0.9518 1989
qa_definition 0.8638 0.8552 0.8595 2907
qa_factoid 0.7959 0.8178 0.8067 7191
qa_maths 0.8937 0.9302 0.9116 1275
qa_stock 0.7995 0.9412 0.8646 1326
recommendation_events 0.7646 0.7702 0.7674 2193
recommendation_locations 0.7489 0.8830 0.8104 1581
recommendation_movies 0.6907 0.7706 0.7285 1020
social_post 0.9623 0.9080 0.9344 4131
social_query 0.8104 0.7914 0.8008 1275
takeaway_order 0.7697 0.8458 0.8059 1122
takeaway_query 0.9059 0.8571 0.8808 1785
transport_query 0.8141 0.7559 0.7839 2601
transport_taxi 0.9222 0.9403 0.9312 1173
transport_ticket 0.9259 0.9384 0.9321 1785
transport_traffic 0.6919 0.9660 0.8063 765
weather_query 0.9387 0.9492 0.9439 7956
accuracy 0.8617 151674
macro avg 0.8162 0.8273 0.8178 151674
weighted avg 0.8639 0.8617 0.8613 151674
```
|
Justin-Choo/epiCRealism-Natural_Sin_RC1_VAE | Justin-Choo | "2023-08-29T12:05:00Z" | 3,215 | 21 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"en",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-08-28T06:02:04Z" | ---
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
---
Model on Civitai https://civitai.com/models/25694?modelVersionId=143906
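The card itself ships no usage code; the sketch below is an illustration only, assuming the repo hosts diffusers-format weights (its library tag suggests so), and the prompt is just an example:

```python
# Illustrative sketch: generate an image with this checkpoint via diffusers.
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "Justin-Choo/epiCRealism-Natural_Sin_RC1_VAE", torch_dtype=torch.float16
).to("cuda")
image = pipe(
    "portrait photo of a woman in a forest, natural light, photorealistic",
    negative_prompt="deformed, blurry, watermark",
).images[0]
image.save("sample.png")
```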
The sample image by the author:

The sample I made myself:
 |
google/timesfm-1.0-200m | google | "2024-05-17T17:29:47Z" | 3,215 | 600 | timesfm | [
"timesfm",
"time-series-forecasting",
"arxiv:2310.10688",
"license:apache-2.0",
"region:us"
] | time-series-forecasting | "2024-05-03T05:15:35Z" | ---
license: apache-2.0
library_name: timesfm
pipeline_tag: time-series-forecasting
---
# TimesFM
TimesFM (Time Series Foundation Model) is a pretrained time-series foundation model developed by Google Research for time-series forecasting.
**Resources and Technical Documentation**:
* Paper: [A decoder-only foundation model for time-series forecasting](https://arxiv.org/abs/2310.10688), to appear in ICML 2024.
* [Google Research blog](https://research.google/blog/a-decoder-only-foundation-model-for-time-series-forecasting/)
* [GitHub repo](https://github.com/google-research/timesfm)
**Authors**: Google Research
This is not an officially supported Google product.
## Checkpoint timesfm-1.0-200m
`timesfm-1.0-200m` is the first open model checkpoint:
- It performs univariate time series forecasting for context lengths up to 512 time points and any horizon lengths, with an optional frequency indicator.
- It focuses on point forecasts and does not support probabilistic forecasts. We experimentally offer quantile heads but they have not been calibrated after pretraining.
- It requires the context to be contiguous (i.e. no "holes"), and the context and the horizon to be of the same frequency.
## Benchmarks
Please refer to our result tables on the [extended benchmarks](https://github.com/google-research/timesfm/blob/master/experiments/extended_benchmarks/tfm_results.png) and the [long horizon benchmarks](https://github.com/google-research/timesfm/blob/master/experiments/long_horizon_benchmarks/tfm_long_horizon.png).
Please look into the README files in the respective benchmark directories within `experiments/` for instructions for running TimesFM on the respective benchmarks.
## Installation
This HuggingFace repo hosts TimesFm checkpoints. Please visit our [GitHub repo](https://github.com/google-research/timesfm) and follow the instructions there to install the `timesfm` library for model inference.
In particular, the dependency `lingvo` does not support ARM architectures, so the inference code does not work on machines with Apple silicon. We are aware of this issue and are working on a solution. Stay tuned.
## Usage
### Initialize the model and load a checkpoint.
Then the base class can be loaded as follows:
```python
import timesfm
tfm = timesfm.TimesFm(
context_len=<context>,
horizon_len=<horizon>,
input_patch_len=32,
output_patch_len=128,
num_layers=20,
model_dims=1280,
backend=<backend>,
)
tfm.load_from_checkpoint(repo_id="google/timesfm-1.0-200m")
```
Note that the following four parameters are fixed when loading the 200m model:
```python
input_patch_len=32,
output_patch_len=128,
num_layers=20,
model_dims=1280,
```
1. The context_len here can be set as the max context length **of the model**. You can provide a shorter series to the `tfm.forecast()` function and the model will handle it. Currently, the model handles a max context length of 512, which can be increased in later releases. The input time series can have **any context length**. Padding / truncation will be handled by the inference code if needed.
2. The horizon length can be set to anything. We recommend setting it to the largest horizon length you would need in the forecasting tasks for your application. We generally recommend horizon length <= context length but it is not a requirement in the function call.
### Perform inference
We provide APIs to forecast from either array inputs or `pandas` dataframe. Both forecast methods expect (1) the input time series contexts, (2) along with their frequencies. Please look at the documentation of the functions `tfm.forecast()` and `tfm.forecast_on_df()` for detailed instructions.
In particular, regarding the frequency, TimesFM expects a categorical indicator valued in {0, 1, 2}:
- **0** (default): high frequency, long horizon time series. We recommend using this for time series up to daily granularity.
- **1**: medium frequency time series. We recommend using this for weekly and monthly data.
- **2**: low frequency, short horizon time series. We recommend using this for anything beyond monthly, e.g. quarterly or yearly.
This categorical value should be directly provided with the array inputs. For dataframe inputs, we convert the conventional letter coding of frequencies to our expected categories, that
- **0**: T, MIN, H, D, B, U
- **1**: W, M
- **2**: Q, Y
Notice you do **NOT** have to strictly follow our recommendation here. Although this is our setup during model training and we expect it to offer the best forecast result, you can also view the frequency input as a free parameter and modify it per your specific use case.
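If your array inputs are described by pandas-style frequency strings, a tiny helper like the one below (not part of the `timesfm` package, just a sketch of the mapping above) can convert them to the categorical indicator:

```python
# Sketch of the letter-code-to-category mapping described above (not a timesfm API).
def freq_to_category(freq: str) -> int:
    mapping = {
        "T": 0, "MIN": 0, "H": 0, "D": 0, "B": 0, "U": 0,  # high frequency
        "W": 1, "M": 1,                                     # medium frequency
        "Q": 2, "Y": 2,                                     # low frequency
    }
    try:
        return mapping[freq.upper()]
    except KeyError:
        raise ValueError(f"unrecognized frequency code: {freq}") from None
```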
Examples:
Array inputs, with the frequencies set to high, medium, and low respectively (matching the categorical indicator values 0, 1, and 2).
```python
import numpy as np
forecast_input = [
    np.sin(np.linspace(0, 20, 100)),
    np.sin(np.linspace(0, 20, 200)),
    np.sin(np.linspace(0, 20, 400)),
]
frequency_input = [0, 1, 2]
point_forecast, experimental_quantile_forecast = tfm.forecast(
forecast_input,
freq=frequency_input,
)
```
`pandas` dataframe, with the frequency set to "M" monthly.
```python
import pandas as pd
# e.g. input_df is
# unique_id ds y
# 0 T1 1975-12-31 697458.0
# 1 T1 1976-01-31 1187650.0
# 2 T1 1976-02-29 1069690.0
# 3 T1 1976-03-31 1078430.0
# 4 T1 1976-04-30 1059910.0
# ... ... ... ...
# 8175 T99 1986-01-31 602.0
# 8176 T99 1986-02-28 684.0
# 8177 T99 1986-03-31 818.0
# 8178 T99 1986-04-30 836.0
# 8179 T99 1986-05-31 878.0
forecast_df = tfm.forecast_on_df(
inputs=input_df,
freq="M", # monthly
value_name="y",
num_jobs=-1,
)
``` |
satvikag/chatbot | satvikag | "2021-06-04T20:08:11Z" | 3,214 | 35 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
tags:
- conversational
license: mit
---
# DialoGPT Trained on the Speech of a Game Character
This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on a game character, Joshua from [The World Ends With You](https://en.wikipedia.org/wiki/The_World_Ends_with_You). The data comes from [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script).
Chat with the model:
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained('microsoft/DialoGPT-small')
# 'output-small' is the author's local checkpoint directory; to load the weights
# hosted in this repo instead, use 'satvikag/chatbot'
model = AutoModelWithLMHead.from_pretrained('output-small')

# Let's chat for 100 turns
for step in range(100):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # print(new_user_input_ids)

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 500 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=500,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )

    # pretty print the last output tokens from the bot
    print("AI: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` |
mradermacher/Mixtral_AI_ARCHIVE_II-GGUF | mradermacher | "2024-06-06T11:54:22Z" | 3,213 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:LeroyDyer/Mixtral_AI_ARCHIVE_II",
"endpoints_compatible",
"region:us"
] | null | "2024-06-06T11:27:42Z" | ---
base_model: LeroyDyer/Mixtral_AI_ARCHIVE_II
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/LeroyDyer/Mixtral_AI_ARCHIVE_II
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_ARCHIVE_II-GGUF/resolve/main/Mixtral_AI_ARCHIVE_II.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_ARCHIVE_II-GGUF/resolve/main/Mixtral_AI_ARCHIVE_II.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_ARCHIVE_II-GGUF/resolve/main/Mixtral_AI_ARCHIVE_II.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_ARCHIVE_II-GGUF/resolve/main/Mixtral_AI_ARCHIVE_II.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_ARCHIVE_II-GGUF/resolve/main/Mixtral_AI_ARCHIVE_II.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_ARCHIVE_II-GGUF/resolve/main/Mixtral_AI_ARCHIVE_II.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_ARCHIVE_II-GGUF/resolve/main/Mixtral_AI_ARCHIVE_II.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_ARCHIVE_II-GGUF/resolve/main/Mixtral_AI_ARCHIVE_II.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_ARCHIVE_II-GGUF/resolve/main/Mixtral_AI_ARCHIVE_II.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_ARCHIVE_II-GGUF/resolve/main/Mixtral_AI_ARCHIVE_II.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_ARCHIVE_II-GGUF/resolve/main/Mixtral_AI_ARCHIVE_II.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_ARCHIVE_II-GGUF/resolve/main/Mixtral_AI_ARCHIVE_II.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_ARCHIVE_II-GGUF/resolve/main/Mixtral_AI_ARCHIVE_II.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_ARCHIVE_II-GGUF/resolve/main/Mixtral_AI_ARCHIVE_II.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_ARCHIVE_II-GGUF/resolve/main/Mixtral_AI_ARCHIVE_II.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
openbmb/MiniCPM-2B-sft-bf16 | openbmb | "2024-04-07T02:21:36Z" | 3,212 | 115 | transformers | [
"transformers",
"pytorch",
"text-generation",
"MiniCPM",
"ModelBest",
"THUNLP",
"conversational",
"custom_code",
"en",
"zh",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-01-29T11:50:39Z" | ---
language:
- en
- zh
tags:
- MiniCPM
- ModelBest
- THUNLP
---
<div align="center">
<h1>
MiniCPM
</h1>
</div>
<p align="center">
<a href="https://shengdinghu.notion.site/MiniCPM-c805a17c5c8046398914e47f0542095a?pvs=4" target="_blank">MiniCPM 技术报告</a><a href="https://shengdinghu.notion.site/MiniCPM-Unveiling-the-Potential-of-End-side-Large-Language-Models-d4d3a8c426424654a4e80e42a711cb20?pvs=4" target="_blank"> Technical Report</a> |
<a href="https://github.com/OpenBMB/OmniLMM/" target="_blank">OmniLMM 多模态模型 Multi-modal Model</a> |
<a href="https://luca.cn/" target="_blank">CPM-C 千亿模型试用 ~100B Model Trial </a>
</p>
MiniCPM 是面壁与清华大学自然语言处理实验室共同开源的系列端侧语言大模型,主体语言模型 MiniCPM-2B 仅有 24亿(2.4B)的非词嵌入参数量。
- 经过 SFT 后,MiniCPM 在公开综合性评测集上,MiniCPM 与 Mistral-7B相近(中文、数学、代码能力更优),整体性能超越 Llama2-13B、MPT-30B、Falcon-40B 等模型。
- 经过 DPO 后,MiniCPM 在当前最接近用户体感的评测集 MTBench上,MiniCPM-2B 也超越了 Llama2-70B-Chat、Vicuna-33B、Mistral-7B-Instruct-v0.1、Zephyr-7B-alpha 等众多代表性开源大模型。
- 以 MiniCPM-2B 为基础构建端侧多模态大模型 MiniCPM-V,整体性能在同规模模型中实现最佳,超越基于 Phi-2 构建的现有多模态大模型,在部分评测集上达到与 9.6B Qwen-VL-Chat 相当甚至更好的性能。
- 经过 Int4 量化后,MiniCPM 可在手机上进行部署推理,流式输出速度略高于人类说话速度。MiniCPM-V 也首次跑通了多模态大模型在手机上的部署。
- 一张1080/2080可高效参数微调,一张3090/4090可全参数微调,一台机器可持续训练 MiniCPM,二次开发成本较低。
我们将完全开源MiniCPM-2B的模型参数供学术研究和有限商用,以及训练过程中的所有Checkpoint和大部分非专有数据供模型机理研究。
- 基于MiniCPM-2B的指令微调与人类偏好对**MiniCPM-2B-SFT/DPO。**
- 基于MiniCPM-2B的多模态模型**MiniCPM-V**,能力超越基于Phi-2的同参数级别多模态模型**。**
- MiniCPM-2B-SFT/DPO的Int4量化版**MiniCPM-2B-SFT/DPO-Int4。**
- 基于MLC-LLM、LLMFarm开发的MiniCPM手机端程序,**文本及多模态模型均可在手机端进行推理。**
MiniCPM is an End-Size LLM developed by ModelBest Inc. and TsinghuaNLP, with only 2.4B parameters excluding embeddings.
- MiniCPM has very close performance compared with Mistral-7B on open-sourced general benchmarks, with better ability on Chinese, Mathematics and Coding after SFT. The overall performance exceeds Llama2-13B, MPT-30B, Falcon-40B, etc.
- After DPO, MiniCPM outperforms Llama2-70B-Chat, Vicuna-33B, Mistral-7B-Instruct-v0.1, Zephyr-7B-alpha, etc. on MTBench.
- MiniCPM-V, based on MiniCPM-2B, achieves the best overall performance among multimodal models of the same scale, surpassing existing multimodal large models built on Phi-2 and achieving performance comparable to or even better than 9.6B Qwen-VL-Chat on some tasks.
- MiniCPM can be deployed and run inference on smartphones, with a streaming output speed slightly faster than human speech. MiniCPM-V is the first multi-modal model that can be deployed on smartphones.
- The cost of developing based on MiniCPM is low. Parameter efficient finetuning can be conducted with a single 1080/2080 GPU and full parameter finetuning can be conducted with a 3090/4090 GPU.
We release all model parameters for research and limited commercial use. We also release all the checkpoint during training and most public training data for research on model mechanism.
- SFT and DPO version based on MiniCPM-2B and human preference: **MiniCPM-2B-SFT/DPO**
- The multi-modal model **MiniCPM-V** based on MiniCPM-2B, which outperforms models with similar size, i.e., Phi-2
- The INT4 quantized version **MiniCPM-2B-SFT/DPO-Int4** based on MiniCPM-2B-SFT/DPO
- Mobile phone applications based on MLC-LLM and LLMFarm. Both the language model and the multimodal model can run inference on smartphones.
### 评测结果 Evaluation Results
详细的评测结果位于[github仓库](https://github.com/OpenBMB/MiniCPM?tab=readme-ov-file#%E8%AF%84%E6%B5%8B%E7%BB%93%E6%9E%9C)
Detailed evaluation results are in [github repo](https://github.com/OpenBMB/MiniCPM/blob/main/README-en.md#evaluation-results)
注意:我们发现使用Huggingface生成质量略差于vLLM,因此推荐使用vLLM进行测试。我们正在排查原因。
Notice: We discovered that the quality of Huggingface generation is slightly lower than vLLM, thus benchmarking using vLLM is recommended.
We are investigating the cause now.
### 局限性 Limitations
- 受限于模型规模,模型可能出现幻觉性问题。其中由于DPO模型生成的回复内容更长,更容易出现幻觉。我们也将持续进行MiniCPM模型的迭代改进;
- 为了保证在学术研究用途上模型的通用性,我们未对模型进行任何身份认同训练。同时由于我们用ShareGPT开源语料作为部分训练数据,模型可能会输出类似GPT系列模型的身份认同信息;
- 受限于模型规模,模型的输出受到提示词(prompt)的影响较大,可能多次尝试产生不一致的结果;
- 受限于模型容量,模型的知识记忆较不准确,后续我们将结合RAG方法来增强模型的知识记忆能力。
- Due to limitations in model size, the model may experience hallucinatory issues. As DPO model tend to generate longer response, hallucinations are more likely to occur. We will also continue to iterate and improve the MiniCPM model.
- To ensure the universality of the model for academic research purposes, we did not conduct any identity training on the model. Meanwhile, as we use ShareGPT open-source corpus as part of the training data, the model may output identity information similar to the GPT series models.
- Due to the limitation of model size, the output of the model is greatly influenced by prompt words, which may result in inconsistent results from multiple attempts.
- Due to limited model capacity, the model's knowledge memory is not accurate. In the future, we will combine the RAG method to enhance the model's knowledge memory ability.
## 模型下载 Download
| HuggingFace | ModelScope | WiseModel |
|-------------|------------|-----------|
|[sft-bf16](https://huggingface.co/openbmb/MiniCPM-2B-sft-bf16)|[sft-bf16](https://modelscope.cn/models/OpenBMB/miniCPM-bf16)|[sft-bf16](https://wisemodel.cn/models/OpenBMB/miniCPM-bf16)
|[sft-fp32](https://huggingface.co/openbmb/MiniCPM-2B-sft-fp32)|[sft-fp32](https://modelscope.cn/models/OpenBMB/MiniCPM-2B-sft-fp32)|[sft-fp32](https://wisemodel.cn/models/OpenBMB/miniCPM-dpo-fp32)
|[dpo-bf16](https://huggingface.co/openbmb/MiniCPM-2B-dpo-bf16)|[dpo-bf16](https://modelscope.cn/models/OpenBMB/MiniCPM-2B-dpo-bf16/summary)|[dpo-bf16](https://wisemodel.cn/models/OpenBMB/MiniCPM-2B-dpo-bf16)
|[dpo-fp16](https://huggingface.co/openbmb/MiniCPM-2B-dpo-fp16)|[dpo-fp16](https://modelscope.cn/models/OpenBMB/MiniCPM-2B-dpo-fp16/)|[dpo-fp16](https://wisemodel.cn/models/OpenBMB/MiniCPM-2B-dpo-fp16)
|[dpo-fp32](https://huggingface.co/openbmb/MiniCPM-2B-dpo-fp32)|[dpo-fp32](https://modelscope.cn/models/OpenBMB/MiniCPM-2B-dpo-fp32)|[dpo-fp32](https://wisemodel.cn/models/OpenBMB/miniCPM-dpo-fp32)
## 模型使用 Usage
* 安装`transformers>=4.36.0`以及`accelerate`后,运行以下代码
* 注意:需要在`from_pretrained`中明确指明模型的数据类型,否则会引起较大计算误差
* Run the following code after installing `transformers>=4.36.0` and `accelerate`
* Warning: the model's data type must be specified explicitly in `from_pretrained`, otherwise large numerical errors may occur
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
torch.manual_seed(0)
path = 'openbmb/MiniCPM-2B-sft-bf16'
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16, device_map='cuda', trust_remote_code=True)
responds, history = model.chat(tokenizer, "山东省最高的山是哪座山, 它比黄山高还是矮?差距多少?", temperature=0.8, top_p=0.8)
print(responds)
```
* 期望输出 Expected Output
```shell
山东省最高的山是泰山,海拔1545米。
相对于黄山(海拔1864米),泰山海拔较低,相差约319米。
```
## 开源协议 LICENSE
#### 模型协议 Model LICENSE
* 本仓库中代码依照 [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) 协议开源
* MiniCPM 模型权重的使用则需要遵循 [“通用模型许可协议-来源说明-宣传限制-商业授权”](https://github.com/OpenBMB/General-Model-License/blob/main/%E9%80%9A%E7%94%A8%E6%A8%A1%E5%9E%8B%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE-%E6%9D%A5%E6%BA%90%E8%AF%B4%E6%98%8E-%E5%AE%A3%E4%BC%A0%E9%99%90%E5%88%B6-%E5%95%86%E4%B8%9A%E6%8E%88%E6%9D%83.md)。
* MiniCPM 模型权重对学术研究完全开放。
* 如需将模型用于商业用途,请联系[email protected]来获取书面授权,在登记后亦允许免费商业使用。
* This repository is released under the [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) License.
* The usage of MiniCPM model weights must strictly follow [the General Model License (GML)](https://github.com/OpenBMB/General-Model-License/blob/main/%E9%80%9A%E7%94%A8%E6%A8%A1%E5%9E%8B%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE-%E6%9D%A5%E6%BA%90%E8%AF%B4%E6%98%8E-%E5%AE%A3%E4%BC%A0%E9%99%90%E5%88%B6-%E5%95%86%E4%B8%9A%E6%8E%88%E6%9D%83.md).
* The models and weights of MiniCPM are completely free for academic research.
* If you intend to utilize the model for commercial purposes, please reach out to [email protected] to obtain the certificate of authorization.
#### 声明 Statement
* 作为一个语言模型,MiniCPM 通过学习大量的文本来生成内容,但它无法理解、表达个人观点或价值判断,它所输出的任何内容都不代表模型开发者的观点和立场。
* 因此用户在使用 MiniCPM 生成的内容时,应自行负责对其进行评估和验证。
* 如果由于使用 MiniCPM 开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。
* As a language model, MiniCPM generates content by learning from a vast amount of text.
* However, it does not possess the ability to comprehend or express personal opinions or value judgments.
* Any content generated by MiniCPM does not represent the viewpoints or positions of the model developers.
* Therefore, when using content generated by MiniCPM, users should take full responsibility for evaluating and verifying it on their own.
* We will not be held responsible for any problems arising from the use of the open-source MiniCPM model, including but not limited to data-security issues, public-opinion risks, or any risks and problems caused by the model being misled, misused, disseminated, or otherwise improperly exploited.
<p id="8"></p>
## 工作引用 Citation
* 如果觉得MiniCPM有助于您的工作,请考虑引用下列[技术报告](https://shengdinghu.notion.site/MiniCPM-c805a17c5c8046398914e47f0542095a?pvs=4)
* Please cite our [technical report](https://shengdinghu.notion.site/MiniCPM-Unveiling-the-Potential-of-End-side-Large-Language-Models-d4d3a8c426424654a4e80e42a711cb20?pvs=4) if you find our work valuable.
```
@inproceedings{minicpm2024,
  title={MiniCPM: Unveiling the Potential of End-side Large Language Models},
  booktitle={OpenBMB Blog},
  year={2024}
}
```
|
mradermacher/Rawr_Llama3_8B-GGUF | mradermacher | "2024-06-05T14:49:05Z" | 3,212 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:ResplendentAI/Rawr_Llama3_8B",
"endpoints_compatible",
"region:us"
] | null | "2024-06-05T13:34:20Z" | ---
base_model: ResplendentAI/Rawr_Llama3_8B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ResplendentAI/Rawr_Llama3_8B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Rawr_Llama3_8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
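If you just want a quick programmatic check, here is a minimal sketch using the `llama-cpp-python` bindings (a sketch on my part, not part of the quant workflow; any GGUF-capable runtime such as llama.cpp works just as well, and the file name is simply the Q4_K_M quant from the table below). Multi-part files, where present, are simply concatenated (e.g. with `cat`) into a single `.gguf` before loading.
```python
# Minimal GGUF inference sketch with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(model_path="Rawr_Llama3_8B.Q4_K_M.gguf", n_ctx=4096)  # any quant file from the table below
out = llm("Hello, how are you?", max_tokens=128)
print(out["choices"][0]["text"])
```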
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Rawr_Llama3_8B-GGUF/resolve/main/Rawr_Llama3_8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Rawr_Llama3_8B-GGUF/resolve/main/Rawr_Llama3_8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Rawr_Llama3_8B-GGUF/resolve/main/Rawr_Llama3_8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Rawr_Llama3_8B-GGUF/resolve/main/Rawr_Llama3_8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Rawr_Llama3_8B-GGUF/resolve/main/Rawr_Llama3_8B.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Rawr_Llama3_8B-GGUF/resolve/main/Rawr_Llama3_8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Rawr_Llama3_8B-GGUF/resolve/main/Rawr_Llama3_8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Rawr_Llama3_8B-GGUF/resolve/main/Rawr_Llama3_8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Rawr_Llama3_8B-GGUF/resolve/main/Rawr_Llama3_8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Rawr_Llama3_8B-GGUF/resolve/main/Rawr_Llama3_8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Rawr_Llama3_8B-GGUF/resolve/main/Rawr_Llama3_8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Rawr_Llama3_8B-GGUF/resolve/main/Rawr_Llama3_8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Rawr_Llama3_8B-GGUF/resolve/main/Rawr_Llama3_8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Rawr_Llama3_8B-GGUF/resolve/main/Rawr_Llama3_8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Rawr_Llama3_8B-GGUF/resolve/main/Rawr_Llama3_8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
hamzab/roberta-fake-news-classification | hamzab | "2023-07-04T08:46:28Z" | 3,209 | 4 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"classification",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-29T17:36:03Z" | ---
license: mit
widget:
- text: "Some ninja attacked the White House."
example_title: "Fake example 1"
language:
- en
tags:
- classification
datasets:
- "https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset"
---
## Overview
The model is a `roberta-base` fine-tuned on the [fake-and-real-news-dataset](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset), on which it achieves 100% accuracy.
It takes a news article and predicts whether it is real or fake.
The format of the input should be:
```
<title> TITLE HERE <content> CONTENT HERE <end>
```
## Using this model in your code
To use this model, first download it from the Hugging Face Hub:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("hamzab/roberta-fake-news-classification")
model = AutoModelForSequenceClassification.from_pretrained("hamzab/roberta-fake-news-classification")
```
Then, make a prediction as follows:
```python
import torch
def predict_fake(title,text):
input_str = "<title>" + title + "<content>" + text + "<end>"
input_ids = tokenizer.encode_plus(input_str, max_length=512, padding="max_length", truncation=True, return_tensors="pt")
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model.to(device)
with torch.no_grad():
output = model(input_ids["input_ids"].to(device), attention_mask=input_ids["attention_mask"].to(device))
return dict(zip(["Fake","Real"], [x.item() for x in list(torch.nn.Softmax()(output.logits)[0])] ))
print(predict_fake(<HEADLINE-HERE>,<CONTENT-HERE>))
```
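Continuing the snippet above, a quick sanity check might look like this (the headline is the widget example from this card's metadata; the article text and printed scores are invented purely for illustration):
```python
result = predict_fake(
    "Some ninja attacked the White House.",
    "According to unnamed sources, a ninja scaled the fence last night and was promptly escorted away.",
)
print(result)  # e.g. {'Fake': 0.98, 'Real': 0.02} -- illustrative numbers, not real model output
```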
You can also use Gradio to test the model in real time:
```python
import gradio as gr
iface = gr.Interface(fn=predict_fake, inputs=[gr.inputs.Textbox(lines=1,label="headline"),gr.inputs.Textbox(lines=6,label="content")], outputs="label").launch(share=True)
``` |
peiyi9979/mistral-7b-sft | peiyi9979 | "2024-01-15T02:57:34Z" | 3,209 | 3 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2312.08935",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-03T03:18:31Z" | Mistral 7B fine-tuned on MetaMATH [1] used in [Math-Shepherd](https://arxiv.org/pdf/2312.08935.pdf).
Pass@1:
- GSM8K: 77.9
- MATH: 28.6
`Input`: only the math problem, without any system prompt, e.g.,
```
Janet's ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers' market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers' market?
```
`Output`: Step-by-step solutions with a special step tag `ки`, e.g.,
```
Step 1: Janet's ducks lay 16 eggs per day. ки\nStep 2: She eats three for breakfast every morning, so she has 16 - 3 = 13 eggs left. ки\nStep 3: She bakes muffins for her friends every day with four eggs, so she has 13 - 4 = 9 eggs left. ки\nStep 4: She sells the remainder at the farmers' market daily for $2 per fresh duck egg, so she makes 9 * $2 = $18 every day at the farmers' market. The answer is: 18 ки
```
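A minimal generation sketch with `transformers` is shown below (the decoding settings are illustrative assumptions and not necessarily those used to produce the Pass@1 numbers above):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "peiyi9979/mistral-7b-sft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# The input is just the raw math problem -- no system prompt or chat template.
question = "Janet's ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers' market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers' market?"
inputs = tokenizer(question, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```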
[1] MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models. |
TIGER-Lab/Mantis-8B-clip-llama3 | TIGER-Lab | "2024-05-23T04:08:42Z" | 3,206 | 1 | transformers | [
"transformers",
"safetensors",
"llava",
"pretraining",
"multimodal",
"llama3",
"clip",
"lmm",
"vlm",
"mantis",
"en",
"dataset:TIGER-Lab/Mantis-Instruct",
"arxiv:2405.01483",
"base_model:TIGER-Lab/Mantis-8B-clip-llama3-pretraind",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | "2024-05-03T02:53:21Z" | ---
base_model: TIGER-Lab/Mantis-8B-clip-llama3-pretraind
tags:
- multimodal
- llava
- llama3
- clip
- lmm
- vlm
- mantis
model-index:
- name: llava_clip_llama3_8b_finetune_8192
results: []
license: llama3
datasets:
- TIGER-Lab/Mantis-Instruct
language:
- en
metrics:
- accuracy
---
# 🔥 Mantis
[Paper](https://arxiv.org/abs/2405.01483) | [Website](https://tiger-ai-lab.github.io/Mantis/) | [Github](https://github.com/TIGER-AI-Lab/Mantis) | [Models](https://huggingface.co/collections/TIGER-Lab/mantis-6619b0834594c878cdb1d6e4) | [Demo](https://huggingface.co/spaces/TIGER-Lab/Mantis)

## Summary
- Mantis is an LLaMA-3-based LMM that takes **interleaved text and images as inputs**, trained on Mantis-Instruct under academic-level resources (i.e., 36 hours on 16×A100-40G).
- Mantis is trained for multi-image skills, including co-reference, reasoning, comparison, and temporal understanding.
- Mantis reaches state-of-the-art performance on five multi-image benchmarks (NLVR2, Q-Bench, BLINK, MVBench, Mantis-Eval), and also maintains strong single-image performance on par with CogVLM and Emu2.
## Multi-Image Performance
| Models | Size | Format | NLVR2 | Q-Bench | Mantis-Eval | BLINK | MVBench | Avg |
|--------------------|:----:|:--------:|:-----:|:-------:|:-----------:|:-----:|:-------:|:----:|
| GPT-4V | - | sequence | 88.80 | 76.52 | 62.67 | 51.14 | 43.50 | 64.5 |
| Open Source Models | | | | | | | | |
| Random | - | - | 48.93 | 40.20 | 23.04 | 38.09 | 27.30 | 35.5 |
| Kosmos2 | 1.6B | merge | 49.00 | 35.10 | 30.41 | 37.50 | 21.62 | 34.7 |
| LLaVA-v1.5 | 7B | merge | 53.88 | 49.32 | 31.34 | 37.13 | 36.00 | 41.5 |
| LLava-V1.6 | 7B | merge | 58.88 | 54.80 | 45.62 | 39.55 | 40.90 | 48.0 |
| Qwen-VL-Chat | 7B | merge | 58.72 | 45.90 | 39.17 | 31.17 | 42.15 | 43.4 |
| Fuyu | 8B | merge | 51.10 | 49.15 | 27.19 | 36.59 | 30.20 | 38.8 |
| BLIP-2 | 13B | merge | 59.42 | 51.20 | 49.77 | 39.45 | 31.40 | 46.2 |
| InstructBLIP | 13B | merge | 60.26 | 44.30 | 45.62 | 42.24 | 32.50 | 45.0 |
| CogVLM | 17B | merge | 58.58 | 53.20 | 45.16 | 41.54 | 37.30 | 47.2 |
| OpenFlamingo | 9B | sequence | 36.41 | 19.60 | 12.44 | 39.18 | 7.90 | 23.1 |
| Otter-Image | 9B | sequence | 49.15 | 17.50 | 14.29 | 36.26 | 15.30 | 26.5 |
| Idefics1 | 9B | sequence | 54.63 | 30.60 | 28.11 | 24.69 | 26.42 | 32.9 |
| VideoLLaVA | 7B | sequence | 56.48 | 45.70 | 35.94 | 38.92 | 44.30 | 44.3 |
| Emu2-Chat | 37B | sequence | 58.16 | 50.05 | 37.79 | 36.20 | 39.72 | 44.4 |
| Vila | 8B | sequence | 76.45 | 45.70 | 51.15 | 39.30 | 49.40 | 52.4 |
| Idefics2 | 8B | sequence | 86.87 | 57.00 | 48.85 | 45.18 | 29.68 | 53.5 |
| Mantis-CLIP | 8B | sequence | 84.66 | 66.00 | 55.76 | 47.06 | 48.30 | 60.4 |
| Mantis-SIGLIP | 8B | sequence | 87.43 | 69.90 | **59.45** | 46.35 | 50.15 | 62.7 |
| Mantis-Flamingo | 9B | sequence | 52.96 | 46.80 | 32.72 | 38.00 | 40.83 | 42.3 |
| Mantis-Idefics2 | 8B | sequence | **89.71** | **75.20** | 57.14 | **49.05** | **51.38** | **64.5** |
| $\Delta$ over SOTA | - | - | +2.84 | +18.20 | +8.30 | +3.87 | +1.98 | +11.0 |
## Single-Image Performance
| Model | Size | TextVQA | VQA | MMB | MMMU | OKVQA | SQA | MathVista | Avg |
|-----------------|:----:|:-------:|:----:|:----:|:----:|:-----:|:----:|:---------:|:----:|
| OpenFlamingo | 9B | 46.3 | 58.0 | 32.4 | 28.7 | 51.4 | 45.7 | 18.6 | 40.2 |
| Idefics1 | 9B | 39.3 | 68.8 | 45.3 | 32.5 | 50.4 | 51.6 | 21.1 | 44.1 |
| InstructBLIP | 7B | 33.6 | 75.2 | 38.3 | 30.6 | 45.2 | 70.6 | 24.4 | 45.4 |
| Yi-VL | 6B | 44.8 | 72.5 | 68.4 | 39.1 | 51.3 | 71.7 | 29.7 | 53.9 |
| Qwen-VL-Chat | 7B | 63.8 | 78.2 | 61.8 | 35.9 | 56.6 | 68.2 | 15.5 | 54.3 |
| LLaVA-1.5 | 7B | 58.2 | 76.6 | 64.8 | 35.3 | 53.4 | 70.4 | 25.6 | 54.9 |
| Emu2-Chat | 37B | <u>66.6</u> | **84.9** | 63.6 | 36.3 | **64.8** | 65.3 | 30.7 | 58.9 |
| CogVLM | 17B | **70.4** | <u>82.3</u> | 65.8 | 32.1 | <u>64.8</u> | 65.6 | 35.0 | 59.4 |
| Idefics2 | 8B | 70.4 | 79.1 | <u>75.7</u> | **43.0** | 53.5 | **86.5** | **51.4** | **65.7** |
| Mantis-CLIP | 8B | 56.4 | 73.0 | 66.0 | 38.1 | 53.0 | 73.8 | 31.7 | 56.0 |
| Mantis-SigLIP | 8B | 59.2 | 74.9 | 68.7 | 40.1 | 55.4 | 74.9 | 34.4 | 58.2 |
| Mantis-Idefics2 | 8B | 63.5 | 77.6 | 75.7 | <u>41.1</u> | 52.6 | <u>81.3</u> | <u>40.4</u> | <u>61.7</u> |
## How to use
### Installation
```bash
# This only installs minimum packages (torch, transformers, accelerate) for inference, no redundant packages are installed.
pip install git+https://github.com/TIGER-AI-Lab/Mantis.git
```
### Run example inference:
```python
from mantis.models.mllava import chat_mllava
from PIL import Image
import torch
image1 = "image1.jpg"
image2 = "image2.jpg"
images = [Image.open(image1), Image.open(image2)]
# load processor and model
from mantis.models.mllava import MLlavaProcessor, LlavaForConditionalGeneration
processor = MLlavaProcessor.from_pretrained("TIGER-Lab/Mantis-8B-clip-llama3")
attn_implementation = None # or "flash_attention_2"
model = LlavaForConditionalGeneration.from_pretrained("TIGER-Lab/Mantis-8B-clip-llama3", device_map="cuda", torch_dtype=torch.bfloat16, attn_implementation=attn_implementation)
generation_kwargs = {
"max_new_tokens": 1024,
"num_beams": 1,
"do_sample": False
}
# chat
text = "Describe the difference of <image> and <image> as much as you can."
response, history = chat_mllava(text, images, model, processor, **generation_kwargs)
print("USER: ", text)
print("ASSISTANT: ", response)
text = "How many wallets are there in image 1 and image 2 respectively?"
response, history = chat_mllava(text, images, model, processor, history=history, **generation_kwargs)
print("USER: ", text)
print("ASSISTANT: ", response)
"""
USER: Describe the difference of <image> and <image> as much as you can.
ASSISTANT: The second image has more variety in terms of colors and designs. While the first image only shows two brown leather pouches, the second image features four different pouches in various colors and designs, including a purple one with a gold coin, a red one with a gold coin, a black one with a gold coin, and a brown one with a gold coin. This variety makes the second image more visually interesting and dynamic.
USER: How many wallets are there in image 1 and image 2 respectively?
ASSISTANT: There are two wallets in image 1, and four wallets in image 2.
"""
```
### Training
See [mantis/train](https://github.com/TIGER-AI-Lab/Mantis/tree/main/mantis/train) for details
### Evaluation
See [mantis/benchmark](https://github.com/TIGER-AI-Lab/Mantis/tree/main/mantis/benchmark) for details
## Citation
```
@inproceedings{Jiang2024MANTISIM,
  title={MANTIS: Interleaved Multi-Image Instruction Tuning},
  author={Dongfu Jiang and Xuan He and Huaye Zeng and Cong Wei and Max W.F. Ku and Qian Liu and Wenhu Chen},
  publisher={arXiv:2405.01483},
  year={2024}
}
``` |
farleyknight-org-username/vit-base-mnist | farleyknight-org-username | "2022-08-31T14:55:56Z" | 3,205 | 7 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"vision",
"generated_from_trainer",
"dataset:mnist",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2022-08-21T16:48:27Z" | ---
license: apache-2.0
tags:
- image-classification
- vision
- generated_from_trainer
datasets:
- mnist
metrics:
- accuracy
model-index:
- name: vit-base-mnist
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: mnist
type: mnist
config: mnist
split: train
args: mnist
metrics:
- name: Accuracy
type: accuracy
value: 0.9948888888888889
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-mnist
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the mnist dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0236
- Accuracy: 0.9949
## Model description
More information needed
## Intended uses & limitations
More information needed
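As a starting point, here is a minimal inference sketch (an assumption on the intended use, since the card itself ships no usage code; `digit.png` stands in for any MNIST-style digit image):
```python
from transformers import pipeline
from PIL import Image

classifier = pipeline("image-classification", model="farleyknight-org-username/vit-base-mnist")
image = Image.open("digit.png").convert("RGB")  # hypothetical input file; the processor expects 3 channels
print(classifier(image))  # list of {'label': ..., 'score': ...} dicts
```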
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
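These settings map roughly onto a Hugging Face `TrainingArguments` configuration like the following (a sketch reconstructed from the list above; the actual training script is not included in this card):
```python
from transformers import TrainingArguments

# Reconstructed from the hyperparameters listed above -- a sketch, not the original script.
training_args = TrainingArguments(
    output_dir="vit-base-mnist",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=1337,
    lr_scheduler_type="linear",
    num_train_epochs=5.0,
)
```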
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3717 | 1.0 | 6375 | 0.0522 | 0.9893 |
| 0.3453 | 2.0 | 12750 | 0.0370 | 0.9906 |
| 0.3736 | 3.0 | 19125 | 0.0308 | 0.9916 |
| 0.3224 | 4.0 | 25500 | 0.0269 | 0.9939 |
| 0.2846 | 5.0 | 31875 | 0.0236 | 0.9949 |
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.11.0a0+17540c5
- Datasets 2.4.0
- Tokenizers 0.12.1
|
diffusers/controlnet-zoe-depth-sdxl-1.0 | diffusers | "2023-09-04T14:08:20Z" | 3,205 | 28 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"controlnet",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | "2023-08-22T09:43:51Z" |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- controlnet
inference: false
---
# SDXL-controlnet: Zoe-Depth
These are ControlNet weights trained on stabilityai/stable-diffusion-xl-base-1.0 with Zoe-Depth conditioning. [Zoe-Depth](https://github.com/isl-org/ZoeDepth) is an open-source, state-of-the-art depth-estimation model that produces high-quality depth maps, which are better suited for conditioning.
You can find some example images below.



## Usage
Make sure first to install the libraries:
```bash
pip install accelerate transformers safetensors diffusers
```
Then set up the Zoe-Depth model:
```python
import torch
import matplotlib
import matplotlib.cm
import numpy as np
from PIL import Image  # needed by colorize() below
torch.hub.help("intel-isl/MiDaS", "DPT_BEiT_L_384", force_reload=True) # Triggers fresh download of MiDaS repo
model_zoe_n = torch.hub.load("isl-org/ZoeDepth", "ZoeD_NK", pretrained=True).eval()
model_zoe_n = model_zoe_n.to("cuda")
def colorize(value, vmin=None, vmax=None, cmap='gray_r', invalid_val=-99, invalid_mask=None, background_color=(128, 128, 128, 255), gamma_corrected=False, value_transform=None):
if isinstance(value, torch.Tensor):
value = value.detach().cpu().numpy()
value = value.squeeze()
if invalid_mask is None:
invalid_mask = value == invalid_val
mask = np.logical_not(invalid_mask)
# normalize
vmin = np.percentile(value[mask],2) if vmin is None else vmin
vmax = np.percentile(value[mask],85) if vmax is None else vmax
if vmin != vmax:
value = (value - vmin) / (vmax - vmin) # vmin..vmax
else:
# Avoid 0-division
value = value * 0.
# squeeze last dim if it exists
# grey out the invalid values
value[invalid_mask] = np.nan
cmapper = matplotlib.cm.get_cmap(cmap)
if value_transform:
value = value_transform(value)
# value = value / value.max()
value = cmapper(value, bytes=True) # (nxmx4)
# img = value[:, :, :]
img = value[...]
img[invalid_mask] = background_color
# gamma correction
img = img / 255
img = np.power(img, 2.2)
img = img * 255
img = img.astype(np.uint8)
img = Image.fromarray(img)
return img
def get_zoe_depth_map(image):
with torch.autocast("cuda", enabled=True):
depth = model_zoe_n.infer_pil(image)
depth = colorize(depth, cmap="gray_r")
return depth
```
Now we're ready to go:
```python
import torch
import numpy as np
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline, AutoencoderKL
from diffusers.utils import load_image
controlnet = ControlNetModel.from_pretrained(
"diffusers/controlnet-zoe-depth-sdxl-1.0",
use_safetensors=True,
torch_dtype=torch.float16,
).to("cuda")
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16).to("cuda")
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
controlnet=controlnet,
vae=vae,
variant="fp16",
use_safetensors=True,
torch_dtype=torch.float16,
).to("cuda")
pipe.enable_model_cpu_offload()
prompt = "pixel-art margot robbie as barbie, in a coupé . low-res, blocky, pixel art style, 8-bit graphics"
negative_prompt = "sloppy, messy, blurry, noisy, highly detailed, ultra textured, photo, realistic"
image = load_image("https://media.vogue.fr/photos/62bf04b69a57673c725432f3/3:2/w_1793,h_1195,c_limit/rev-1-Barbie-InstaVert_High_Res_JPEG.jpeg")
controlnet_conditioning_scale = 0.55
depth_image = get_zoe_depth_map(image).resize((1088, 896))
generator = torch.Generator("cuda").manual_seed(978364352)
images = pipe(
prompt, image=depth_image, num_inference_steps=50, controlnet_conditioning_scale=controlnet_conditioning_scale, generator=generator
).images
images[0]
images[0].save(f"pixel-barbie.png")
```

For more details, check out the official documentation of [`StableDiffusionXLControlNetPipeline`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/controlnet_sdxl).
### Training
Our training script was built on top of the official training script that we provide [here](https://github.com/huggingface/diffusers/blob/main/examples/controlnet/README_sdxl.md).
#### Training data and Compute
The model was trained on 3M image-text pairs from LAION-Aesthetics V2, for 700 GPU-hours on 80GB A100 GPUs.
#### Batch size
Data parallel with a single-GPU batch size of 8, for a total batch size of 256.
#### Hyper Parameters
Constant learning rate of 1e-5.
#### Mixed precision
fp16 |
skfrost19/BioMistralMerged | skfrost19 | "2024-05-10T03:56:42Z" | 3,205 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:mohsenfayyaz/Mistral-7B-Instruct-v0.2_medical_bios_5000_5ep",
"base_model:BioMistral/BioMistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-21T14:30:10Z" | ---
base_model:
- mohsenfayyaz/Mistral-7B-Instruct-v0.2_medical_bios_5000_5ep
- BioMistral/BioMistral-7B
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [mohsenfayyaz/Mistral-7B-Instruct-v0.2_medical_bios_5000_5ep](https://huggingface.co/mohsenfayyaz/Mistral-7B-Instruct-v0.2_medical_bios_5000_5ep)
* [BioMistral/BioMistral-7B](https://huggingface.co/BioMistral/BioMistral-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: BioMistral/BioMistral-7B
layer_range: [0, 32]
- model: mohsenfayyaz/Mistral-7B-Instruct-v0.2_medical_bios_5000_5ep
layer_range: [0, 32]
merge_method: slerp
base_model: BioMistral/BioMistral-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
mradermacher/piano-medley-7b-GGUF | mradermacher | "2024-06-04T05:48:58Z" | 3,205 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"en",
"dataset:pankajmathur/orca_mini_v1_dataset",
"dataset:openai/summarize_from_feedback",
"dataset:PygmalionAI/PIPPA",
"dataset:chargoddard/rpguild",
"dataset:lemonilia/LimaRP",
"dataset:PKU-Alignment/PKU-SafeRLHF",
"dataset:Intel/orca_dpo_pairs",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"base_model:chargoddard/piano-medley-7b",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-04T03:46:01Z" | ---
base_model: chargoddard/piano-medley-7b
datasets:
- pankajmathur/orca_mini_v1_dataset
- openai/summarize_from_feedback
- PygmalionAI/PIPPA
- chargoddard/rpguild
- lemonilia/LimaRP
- PKU-Alignment/PKU-SafeRLHF
- Intel/orca_dpo_pairs
- allenai/ultrafeedback_binarized_cleaned
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- merge
- mergekit
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/chargoddard/piano-medley-7b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/piano-medley-7b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/piano-medley-7b-GGUF/resolve/main/piano-medley-7b.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/piano-medley-7b-GGUF/resolve/main/piano-medley-7b.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/piano-medley-7b-GGUF/resolve/main/piano-medley-7b.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/piano-medley-7b-GGUF/resolve/main/piano-medley-7b.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/piano-medley-7b-GGUF/resolve/main/piano-medley-7b.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/piano-medley-7b-GGUF/resolve/main/piano-medley-7b.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/piano-medley-7b-GGUF/resolve/main/piano-medley-7b.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/piano-medley-7b-GGUF/resolve/main/piano-medley-7b.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/piano-medley-7b-GGUF/resolve/main/piano-medley-7b.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/piano-medley-7b-GGUF/resolve/main/piano-medley-7b.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/piano-medley-7b-GGUF/resolve/main/piano-medley-7b.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/piano-medley-7b-GGUF/resolve/main/piano-medley-7b.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/piano-medley-7b-GGUF/resolve/main/piano-medley-7b.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/piano-medley-7b-GGUF/resolve/main/piano-medley-7b.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/piano-medley-7b-GGUF/resolve/main/piano-medley-7b.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Configurable-Janus-7B-GGUF | mradermacher | "2024-06-13T18:30:23Z" | 3,205 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:vicgalle/Configurable-Janus-7B",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | "2024-06-13T16:45:39Z" | ---
base_model: vicgalle/Configurable-Janus-7B
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/vicgalle/Configurable-Janus-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Configurable-Janus-7B-GGUF/resolve/main/Configurable-Janus-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Janus-7B-GGUF/resolve/main/Configurable-Janus-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Janus-7B-GGUF/resolve/main/Configurable-Janus-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Janus-7B-GGUF/resolve/main/Configurable-Janus-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Janus-7B-GGUF/resolve/main/Configurable-Janus-7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Janus-7B-GGUF/resolve/main/Configurable-Janus-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Janus-7B-GGUF/resolve/main/Configurable-Janus-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Janus-7B-GGUF/resolve/main/Configurable-Janus-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Janus-7B-GGUF/resolve/main/Configurable-Janus-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Janus-7B-GGUF/resolve/main/Configurable-Janus-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Janus-7B-GGUF/resolve/main/Configurable-Janus-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Janus-7B-GGUF/resolve/main/Configurable-Janus-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Janus-7B-GGUF/resolve/main/Configurable-Janus-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Janus-7B-GGUF/resolve/main/Configurable-Janus-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Configurable-Janus-7B-GGUF/resolve/main/Configurable-Janus-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/cymist-2-v02-DPO-GGUF | mradermacher | "2024-06-14T00:38:07Z" | 3,204 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:cypienai/cymist-2-v02-DPO",
"endpoints_compatible",
"region:us"
] | null | "2024-06-13T23:04:55Z" | ---
base_model: cypienai/cymist-2-v02-DPO
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/cypienai/cymist-2-v02-DPO
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/cymist-2-v02-DPO-GGUF/resolve/main/cymist-2-v02-DPO.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/cymist-2-v02-DPO-GGUF/resolve/main/cymist-2-v02-DPO.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/cymist-2-v02-DPO-GGUF/resolve/main/cymist-2-v02-DPO.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/cymist-2-v02-DPO-GGUF/resolve/main/cymist-2-v02-DPO.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/cymist-2-v02-DPO-GGUF/resolve/main/cymist-2-v02-DPO.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/cymist-2-v02-DPO-GGUF/resolve/main/cymist-2-v02-DPO.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/cymist-2-v02-DPO-GGUF/resolve/main/cymist-2-v02-DPO.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/cymist-2-v02-DPO-GGUF/resolve/main/cymist-2-v02-DPO.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/cymist-2-v02-DPO-GGUF/resolve/main/cymist-2-v02-DPO.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/cymist-2-v02-DPO-GGUF/resolve/main/cymist-2-v02-DPO.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/cymist-2-v02-DPO-GGUF/resolve/main/cymist-2-v02-DPO.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/cymist-2-v02-DPO-GGUF/resolve/main/cymist-2-v02-DPO.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/cymist-2-v02-DPO-GGUF/resolve/main/cymist-2-v02-DPO.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/cymist-2-v02-DPO-GGUF/resolve/main/cymist-2-v02-DPO.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/cymist-2-v02-DPO-GGUF/resolve/main/cymist-2-v02-DPO.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
snunlp/KR-BERT-char16424 | snunlp | "2021-11-22T06:19:20Z" | 3,203 | 5 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"ko",
"arxiv:2008.03979",
"endpoints_compatible",
"region:us"
] | null | "2022-03-02T23:29:05Z" | ---
language:
- ko
---
## KoRean based Bert pre-trained (KR-BERT)
This is a release of Korean-specific, small-scale BERT models with comparable or better performance, developed by the Computational Linguistics Lab at Seoul National University and described in [KR-BERT: A Small-Scale Korean-Specific Language Model](https://arxiv.org/abs/2008.03979).
<br>
### Vocab, Parameters and Data
| | Mulitlingual BERT<br>(Google) | KorBERT<br>(ETRI) | KoBERT<br>(SKT) | KR-BERT character | KR-BERT sub-character |
| -------------: | ---------------------------------------------: | ---------------------: | ----------------------------------: | -------------------------------------: | -------------------------------------: |
| vocab size | 119,547 | 30,797 | 8,002 | 16,424 | 12,367 |
| parameter size | 167,356,416 | 109,973,391 | 92,186,880 | 99,265,066 | 96,145,233 |
| data size | -<br>(The Wikipedia data<br>for 104 languages) | 23GB<br>4.7B morphemes | -<br>(25M sentences,<br>233M words) | 2.47GB<br>20M sentences,<br>233M words | 2.47GB<br>20M sentences,<br>233M words |
| Model | Masked LM Accuracy |
| ------------------------------------------- | ------------------ |
| KoBERT | 0.750 |
| KR-BERT character BidirectionalWordPiece | **0.779** |
| KR-BERT sub-character BidirectionalWordPiece | 0.769 |
<br>
### Sub-character
Korean text is basically represented with Hangul syllable characters, which can be decomposed into sub-characters, or graphemes. To accommodate such characteristics, we trained a new vocabulary and BERT model on two different representations of a corpus: syllable characters and sub-characters.
If you use our sub-character model, you should preprocess your data with the code below.
```python
import torch
from transformers import BertConfig, BertModel, BertForPreTraining, BertTokenizer
from unicodedata import normalize
tokenizer_krbert = BertTokenizer.from_pretrained('/path/to/vocab_file.txt', do_lower_case=False)
# convert a string into sub-char
def to_subchar(string):
return normalize('NFKD', string)
sentence = '토크나이저 예시입니다.'
print(tokenizer_krbert.tokenize(to_subchar(sentence)))
```
### Tokenization
#### BidirectionalWordPiece Tokenizer
We use the BidirectionalWordPiece model to reduce search costs while maintaining the possibility of choice. This model applies BPE in both forward and backward directions to obtain two candidates and chooses the one that has a higher frequency.
| | Mulitlingual BERT | KorBERT<br>character | KoBERT | KR-BERT<br>character<br>WordPiece | KR-BERT<br>character<br>BidirectionalWordPiece | KR-BERT<br>sub-character<br>WordPiece | KR-BERT<br>sub-character<br>BidirectionalWordPiece |
| :-------------------------------------: | :-----------------------: | :-----------------------: | :-----------------------: | :------------------------------: | :-------------------------------------------: | :----------------------------------: | :-----------------------------------------------: |
| 냉장고<br>nayngcangko<br>"refrigerator" | 냉#장#고<br>nayng#cang#ko | 냉#장#고<br>nayng#cang#ko | 냉#장#고<br>nayng#cang#ko | 냉장고<br>nayngcangko | 냉장고<br>nayngcangko | 냉장고<br>nayngcangko | 냉장고<br>nayngcangko |
| 춥다<br>chwupta<br>"cold" | [UNK] | 춥#다<br>chwup#ta | 춥#다<br>chwup#ta | 춥#다<br>chwup#ta | 춥#다<br>chwup#ta | 추#ㅂ다<br>chwu#pta | 추#ㅂ다<br>chwu#pta |
| 뱃사람<br>paytsalam<br>"seaman" | [UNK] | 뱃#사람<br>payt#salam | 뱃#사람<br>payt#salam | 뱃#사람<br>payt#salam | 뱃#사람<br>payt#salam | 배#ㅅ#사람<br>pay#t#salam | 배#ㅅ#사람<br>pay#t#salam |
| 마이크<br>maikhu<br>"microphone" | 마#이#크<br>ma#i#khu | 마이#크<br>mai#khu | 마#이#크<br>ma#i#khu | 마이크<br>maikhu | 마이크<br>maikhu | 마이크<br>maikhu | 마이크<br>maikhu |
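A toy sketch of the idea is given below (this is not the actual implementation: treating a candidate's "frequency" as the sum of its pieces' corpus frequencies, and ignoring the `#`/`##` continuation markers, are simplifying assumptions):
```python
def greedy_wordpiece(word, vocab, reverse=False):
    """Greedy longest-match segmentation; reverse=True matches from the end of the word."""
    s = word[::-1] if reverse else word
    pieces = []
    while s:
        for end in range(len(s), 0, -1):
            piece = s[:end][::-1] if reverse else s[:end]
            if piece in vocab:
                pieces.append(piece)
                s = s[end:]
                break
        else:
            return None  # segmentation failed (out-of-vocabulary character)
    return pieces[::-1] if reverse else pieces

def bidirectional_wordpiece(word, vocab):
    """Keep whichever of the forward/backward segmentations has the higher total piece frequency."""
    candidates = [c for c in (greedy_wordpiece(word, vocab),
                              greedy_wordpiece(word, vocab, reverse=True)) if c]
    return max(candidates, key=lambda ps: sum(vocab[p] for p in ps)) if candidates else None
```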
<br>
### Models
| | TensorFlow | | PyTorch | |
|:---:|:-------------------------------:|:----------------------------:|:----------------------------:|:----------------------------:|
| | character | sub-character | character | sub-character |
| WordPiece <br> tokenizer | [WP char](https://drive.google.com/open?id=1SG5m-3R395VjEEnt0wxWM7SE1j6ndVsX) | [WP subchar](https://drive.google.com/open?id=13oguhQvYD9wsyLwKgU-uLCacQVWA4oHg) | [WP char](https://drive.google.com/file/d/18lsZzx_wonnOezzB5QxqSliA2KL5BF0x/view?usp=sharing) | [WP subchar](https://drive.google.com/open?id=1c1en4AMlCv2k7QapIzqjefnYzNOoh5KZ)
| Bidirectional <br> WordPiece <br> tokenizer | [BiWP char](https://drive.google.com/open?id=1YhFobehwzdbIxsHHvyFU5okp-HRowRKS) | [BiWP subchar](https://drive.google.com/open?id=12izU0NZXNz9I6IsnknUbencgr7gWHDeM) | [BiWP char](https://drive.google.com/open?id=1C87CCHD9lOQhdgWPkMw_6ZD5M2km7f1p) | [BiWP subchar](https://drive.google.com/file/d/1JvNYFQyb20SWgOiDxZn6h1-n_fjTU25S/view?usp=sharing)
<!--
#### tensorflow
* BERT tokenizer, character model ([download](https://drive.google.com/open?id=1SG5m-3R395VjEEnt0wxWM7SE1j6ndVsX))
* BidirectionalWordPiece tokenizer, character model ([download](https://drive.google.com/open?id=1YhFobehwzdbIxsHHvyFU5okp-HRowRKS))
* BERT tokenizer, sub-character model ([download](https://drive.google.com/open?id=13oguhQvYD9wsyLwKgU-uLCacQVWA4oHg))
* BidirectionalWordPiece tokenizer, sub-character model ([download](https://drive.google.com/open?id=12izU0NZXNz9I6IsnknUbencgr7gWHDeM))
#### pytorch
* BERT tokenizer, character model ([download](https://drive.google.com/file/d/18lsZzx_wonnOezzB5QxqSliA2KL5BF0x/view?usp=sharing))
* BidirectionalWordPiece tokenizer, character model ([download](https://drive.google.com/open?id=1C87CCHD9lOQhdgWPkMw_6ZD5M2km7f1p))
* BERT tokenizer, sub-character model ([download](https://drive.google.com/open?id=1c1en4AMlCv2k7QapIzqjefnYzNOoh5KZ))
* BidirectionalWordPiece tokenizer, sub-character model ([download](https://drive.google.com/file/d/1JvNYFQyb20SWgOiDxZn6h1-n_fjTU25S/view?usp=sharing))
-->
<br>
### Requirements
- transformers == 2.1.1
- tensorflow < 2.0
<br>
## Downstream tasks
### Naver Sentiment Movie Corpus (NSMC)
* If you want to use the sub-character version of our models, let the `subchar` argument be `True`.
* You can use the original BERT WordPiece tokenizer by passing `bert` for the `tokenizer` argument, or our BidirectionalWordPiece tokenizer by passing `ranked`.
* tensorflow: After downloading our pretrained models, put them in a `models` directory in the `krbert_tensorflow` directory.
* pytorch: After downloading our pretrained models, put them in a `pretrained` directory in the `krbert_pytorch` directory.
```sh
# pytorch
python3 train.py --subchar {True, False} --tokenizer {bert, ranked}
# tensorflow
python3 run_classifier.py \
--task_name=NSMC \
--subchar={True, False} \
--tokenizer={bert, ranked} \
--do_train=true \
--do_eval=true \
--do_predict=true \
--do_lower_case=False\
--max_seq_length=128 \
--train_batch_size=128 \
--learning_rate=5e-05 \
--num_train_epochs=5.0 \
--output_dir={output_dir}
```
The PyTorch code structure follows that of https://github.com/aisolab/nlp_implementation.
<br>
### NSMC Acc.
| | multilingual BERT | KorBERT | KoBERT | KR-BERT character WordPiece | KR-BERT<br>character Bidirectional WordPiece | KR-BERT sub-character WordPiece | KR-BERT<br>sub-character Bidirectional WordPiece |
|:-----:|-------------------:|----------------:|--------:|----------------------------:|-----------------------------------------:|--------------------------------:|---------------------------------------------:|
| pytorch | - | **89.84** | 89.01 | 89.34 | **89.38** | 89.20 | 89.34 |
| tensorflow | 87.08 | 85.94 | n/a | 89.86 | **90.10** | 89.76 | 89.86 |
<br>
## Citation
If you use these models, please cite the following paper:
```
@article{lee2020krbert,
title={KR-BERT: A Small-Scale Korean-Specific Language Model},
author={Sangah Lee and Hansol Jang and Yunmee Baik and Suzi Park and Hyopil Shin},
year={2020},
journal={ArXiv},
volume={abs/2008.03979}
}
```
<br>
## Contacts
[email protected]
|
mradermacher/Poppy_Porpoise-1.4-L3-8B-GGUF | mradermacher | "2024-06-03T05:12:07Z" | 3,202 | 3 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Nitral-AI/Poppy_Porpoise-1.4-L3-8B",
"license:other",
"endpoints_compatible",
"region:us"
] | null | "2024-06-02T03:52:04Z" | ---
base_model: Nitral-AI/Poppy_Porpoise-1.4-L3-8B
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Nitral-AI/Poppy_Porpoise-1.4-L3-8B
***The model creator strongly suggests using the [0.72](https://huggingface.co/mradermacher/Poppy_Porpoise-0.72-L3-8B-GGUF) model at this time, as it is of better quality.***
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Poppy_Porpoise-1.4-L3-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.4-L3-8B-GGUF/resolve/main/Poppy_Porpoise-1.4-L3-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.4-L3-8B-GGUF/resolve/main/Poppy_Porpoise-1.4-L3-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.4-L3-8B-GGUF/resolve/main/Poppy_Porpoise-1.4-L3-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.4-L3-8B-GGUF/resolve/main/Poppy_Porpoise-1.4-L3-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.4-L3-8B-GGUF/resolve/main/Poppy_Porpoise-1.4-L3-8B.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.4-L3-8B-GGUF/resolve/main/Poppy_Porpoise-1.4-L3-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.4-L3-8B-GGUF/resolve/main/Poppy_Porpoise-1.4-L3-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.4-L3-8B-GGUF/resolve/main/Poppy_Porpoise-1.4-L3-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.4-L3-8B-GGUF/resolve/main/Poppy_Porpoise-1.4-L3-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.4-L3-8B-GGUF/resolve/main/Poppy_Porpoise-1.4-L3-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.4-L3-8B-GGUF/resolve/main/Poppy_Porpoise-1.4-L3-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.4-L3-8B-GGUF/resolve/main/Poppy_Porpoise-1.4-L3-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.4-L3-8B-GGUF/resolve/main/Poppy_Porpoise-1.4-L3-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.4-L3-8B-GGUF/resolve/main/Poppy_Porpoise-1.4-L3-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.4-L3-8B-GGUF/resolve/main/Poppy_Porpoise-1.4-L3-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/MadMax-OrpoMistral-7B-v0.3-GGUF | mradermacher | "2024-06-05T19:09:15Z" | 3,200 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Lumpen1/MadMax-OrpoMistral-7B-v0.3",
"endpoints_compatible",
"region:us"
] | null | "2024-06-05T18:01:52Z" | ---
base_model: Lumpen1/MadMax-OrpoMistral-7B-v0.3
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Lumpen1/MadMax-OrpoMistral-7B-v0.3
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MadMax-OrpoMistral-7B-v0.3-GGUF/resolve/main/MadMax-OrpoMistral-7B-v0.3.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/MadMax-OrpoMistral-7B-v0.3-GGUF/resolve/main/MadMax-OrpoMistral-7B-v0.3.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/MadMax-OrpoMistral-7B-v0.3-GGUF/resolve/main/MadMax-OrpoMistral-7B-v0.3.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/MadMax-OrpoMistral-7B-v0.3-GGUF/resolve/main/MadMax-OrpoMistral-7B-v0.3.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MadMax-OrpoMistral-7B-v0.3-GGUF/resolve/main/MadMax-OrpoMistral-7B-v0.3.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/MadMax-OrpoMistral-7B-v0.3-GGUF/resolve/main/MadMax-OrpoMistral-7B-v0.3.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MadMax-OrpoMistral-7B-v0.3-GGUF/resolve/main/MadMax-OrpoMistral-7B-v0.3.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/MadMax-OrpoMistral-7B-v0.3-GGUF/resolve/main/MadMax-OrpoMistral-7B-v0.3.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/MadMax-OrpoMistral-7B-v0.3-GGUF/resolve/main/MadMax-OrpoMistral-7B-v0.3.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MadMax-OrpoMistral-7B-v0.3-GGUF/resolve/main/MadMax-OrpoMistral-7B-v0.3.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MadMax-OrpoMistral-7B-v0.3-GGUF/resolve/main/MadMax-OrpoMistral-7B-v0.3.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/MadMax-OrpoMistral-7B-v0.3-GGUF/resolve/main/MadMax-OrpoMistral-7B-v0.3.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/MadMax-OrpoMistral-7B-v0.3-GGUF/resolve/main/MadMax-OrpoMistral-7B-v0.3.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MadMax-OrpoMistral-7B-v0.3-GGUF/resolve/main/MadMax-OrpoMistral-7B-v0.3.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MadMax-OrpoMistral-7B-v0.3-GGUF/resolve/main/MadMax-OrpoMistral-7B-v0.3.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/SyntheticMoist-11B-v2-i1-GGUF | mradermacher | "2024-06-13T08:43:11Z" | 3,200 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"solar",
"llama",
"not-for-all-audiences",
"en",
"base_model:v000000/SyntheticMoist-11B-v2",
"endpoints_compatible",
"region:us"
] | null | "2024-06-13T05:29:39Z" | ---
base_model: v000000/SyntheticMoist-11B-v2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
- solar
- llama
- not-for-all-audiences
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/v000000/SyntheticMoist-11B-v2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/SyntheticMoist-11B-v2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SyntheticMoist-11B-v2-i1-GGUF/resolve/main/SyntheticMoist-11B-v2.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/SyntheticMoist-11B-v2-i1-GGUF/resolve/main/SyntheticMoist-11B-v2.i1-IQ1_M.gguf) | i1-IQ1_M | 2.7 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/SyntheticMoist-11B-v2-i1-GGUF/resolve/main/SyntheticMoist-11B-v2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/SyntheticMoist-11B-v2-i1-GGUF/resolve/main/SyntheticMoist-11B-v2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/SyntheticMoist-11B-v2-i1-GGUF/resolve/main/SyntheticMoist-11B-v2.i1-IQ2_S.gguf) | i1-IQ2_S | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/SyntheticMoist-11B-v2-i1-GGUF/resolve/main/SyntheticMoist-11B-v2.i1-IQ2_M.gguf) | i1-IQ2_M | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/SyntheticMoist-11B-v2-i1-GGUF/resolve/main/SyntheticMoist-11B-v2.i1-Q2_K.gguf) | i1-Q2_K | 4.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/SyntheticMoist-11B-v2-i1-GGUF/resolve/main/SyntheticMoist-11B-v2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 4.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SyntheticMoist-11B-v2-i1-GGUF/resolve/main/SyntheticMoist-11B-v2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/SyntheticMoist-11B-v2-i1-GGUF/resolve/main/SyntheticMoist-11B-v2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/SyntheticMoist-11B-v2-i1-GGUF/resolve/main/SyntheticMoist-11B-v2.i1-IQ3_S.gguf) | i1-IQ3_S | 4.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SyntheticMoist-11B-v2-i1-GGUF/resolve/main/SyntheticMoist-11B-v2.i1-IQ3_M.gguf) | i1-IQ3_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/SyntheticMoist-11B-v2-i1-GGUF/resolve/main/SyntheticMoist-11B-v2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 5.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/SyntheticMoist-11B-v2-i1-GGUF/resolve/main/SyntheticMoist-11B-v2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/SyntheticMoist-11B-v2-i1-GGUF/resolve/main/SyntheticMoist-11B-v2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/SyntheticMoist-11B-v2-i1-GGUF/resolve/main/SyntheticMoist-11B-v2.i1-Q4_0.gguf) | i1-Q4_0 | 6.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/SyntheticMoist-11B-v2-i1-GGUF/resolve/main/SyntheticMoist-11B-v2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 6.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/SyntheticMoist-11B-v2-i1-GGUF/resolve/main/SyntheticMoist-11B-v2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 6.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SyntheticMoist-11B-v2-i1-GGUF/resolve/main/SyntheticMoist-11B-v2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/SyntheticMoist-11B-v2-i1-GGUF/resolve/main/SyntheticMoist-11B-v2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/SyntheticMoist-11B-v2-i1-GGUF/resolve/main/SyntheticMoist-11B-v2.i1-Q6_K.gguf) | i1-Q6_K | 8.9 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
mradermacher/Phatos-V.3-GGUF | mradermacher | "2024-06-13T17:17:19Z" | 3,197 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:ClaudioItaly/Phatos-V.3",
"endpoints_compatible",
"region:us"
] | null | "2024-06-13T16:51:17Z" | ---
base_model: ClaudioItaly/Phatos-V.3
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ClaudioItaly/Phatos-V.3
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Phatos-V.3-GGUF/resolve/main/Phatos-V.3.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Phatos-V.3-GGUF/resolve/main/Phatos-V.3.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Phatos-V.3-GGUF/resolve/main/Phatos-V.3.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Phatos-V.3-GGUF/resolve/main/Phatos-V.3.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Phatos-V.3-GGUF/resolve/main/Phatos-V.3.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Phatos-V.3-GGUF/resolve/main/Phatos-V.3.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Phatos-V.3-GGUF/resolve/main/Phatos-V.3.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Phatos-V.3-GGUF/resolve/main/Phatos-V.3.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Phatos-V.3-GGUF/resolve/main/Phatos-V.3.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phatos-V.3-GGUF/resolve/main/Phatos-V.3.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phatos-V.3-GGUF/resolve/main/Phatos-V.3.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Phatos-V.3-GGUF/resolve/main/Phatos-V.3.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Phatos-V.3-GGUF/resolve/main/Phatos-V.3.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Phatos-V.3-GGUF/resolve/main/Phatos-V.3.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Phatos-V.3-GGUF/resolve/main/Phatos-V.3.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RichardErkhov/CodeGPTPlus_-_deepseek-coder-1.3b-typescript-gguf | RichardErkhov | "2024-06-29T14:10:40Z" | 3,197 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-29T13:36:32Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
deepseek-coder-1.3b-typescript - GGUF
- Model creator: https://huggingface.co/CodeGPTPlus/
- Original model: https://huggingface.co/CodeGPTPlus/deepseek-coder-1.3b-typescript/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [deepseek-coder-1.3b-typescript.Q2_K.gguf](https://huggingface.co/RichardErkhov/CodeGPTPlus_-_deepseek-coder-1.3b-typescript-gguf/blob/main/deepseek-coder-1.3b-typescript.Q2_K.gguf) | Q2_K | 0.52GB |
| [deepseek-coder-1.3b-typescript.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/CodeGPTPlus_-_deepseek-coder-1.3b-typescript-gguf/blob/main/deepseek-coder-1.3b-typescript.IQ3_XS.gguf) | IQ3_XS | 0.57GB |
| [deepseek-coder-1.3b-typescript.IQ3_S.gguf](https://huggingface.co/RichardErkhov/CodeGPTPlus_-_deepseek-coder-1.3b-typescript-gguf/blob/main/deepseek-coder-1.3b-typescript.IQ3_S.gguf) | IQ3_S | 0.6GB |
| [deepseek-coder-1.3b-typescript.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/CodeGPTPlus_-_deepseek-coder-1.3b-typescript-gguf/blob/main/deepseek-coder-1.3b-typescript.Q3_K_S.gguf) | Q3_K_S | 0.6GB |
| [deepseek-coder-1.3b-typescript.IQ3_M.gguf](https://huggingface.co/RichardErkhov/CodeGPTPlus_-_deepseek-coder-1.3b-typescript-gguf/blob/main/deepseek-coder-1.3b-typescript.IQ3_M.gguf) | IQ3_M | 0.63GB |
| [deepseek-coder-1.3b-typescript.Q3_K.gguf](https://huggingface.co/RichardErkhov/CodeGPTPlus_-_deepseek-coder-1.3b-typescript-gguf/blob/main/deepseek-coder-1.3b-typescript.Q3_K.gguf) | Q3_K | 0.66GB |
| [deepseek-coder-1.3b-typescript.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/CodeGPTPlus_-_deepseek-coder-1.3b-typescript-gguf/blob/main/deepseek-coder-1.3b-typescript.Q3_K_M.gguf) | Q3_K_M | 0.66GB |
| [deepseek-coder-1.3b-typescript.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/CodeGPTPlus_-_deepseek-coder-1.3b-typescript-gguf/blob/main/deepseek-coder-1.3b-typescript.Q3_K_L.gguf) | Q3_K_L | 0.69GB |
| [deepseek-coder-1.3b-typescript.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/CodeGPTPlus_-_deepseek-coder-1.3b-typescript-gguf/blob/main/deepseek-coder-1.3b-typescript.IQ4_XS.gguf) | IQ4_XS | 0.7GB |
| [deepseek-coder-1.3b-typescript.Q4_0.gguf](https://huggingface.co/RichardErkhov/CodeGPTPlus_-_deepseek-coder-1.3b-typescript-gguf/blob/main/deepseek-coder-1.3b-typescript.Q4_0.gguf) | Q4_0 | 0.72GB |
| [deepseek-coder-1.3b-typescript.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/CodeGPTPlus_-_deepseek-coder-1.3b-typescript-gguf/blob/main/deepseek-coder-1.3b-typescript.IQ4_NL.gguf) | IQ4_NL | 0.73GB |
| [deepseek-coder-1.3b-typescript.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/CodeGPTPlus_-_deepseek-coder-1.3b-typescript-gguf/blob/main/deepseek-coder-1.3b-typescript.Q4_K_S.gguf) | Q4_K_S | 0.76GB |
| [deepseek-coder-1.3b-typescript.Q4_K.gguf](https://huggingface.co/RichardErkhov/CodeGPTPlus_-_deepseek-coder-1.3b-typescript-gguf/blob/main/deepseek-coder-1.3b-typescript.Q4_K.gguf) | Q4_K | 0.81GB |
| [deepseek-coder-1.3b-typescript.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/CodeGPTPlus_-_deepseek-coder-1.3b-typescript-gguf/blob/main/deepseek-coder-1.3b-typescript.Q4_K_M.gguf) | Q4_K_M | 0.81GB |
| [deepseek-coder-1.3b-typescript.Q4_1.gguf](https://huggingface.co/RichardErkhov/CodeGPTPlus_-_deepseek-coder-1.3b-typescript-gguf/blob/main/deepseek-coder-1.3b-typescript.Q4_1.gguf) | Q4_1 | 0.8GB |
| [deepseek-coder-1.3b-typescript.Q5_0.gguf](https://huggingface.co/RichardErkhov/CodeGPTPlus_-_deepseek-coder-1.3b-typescript-gguf/blob/main/deepseek-coder-1.3b-typescript.Q5_0.gguf) | Q5_0 | 0.87GB |
| [deepseek-coder-1.3b-typescript.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/CodeGPTPlus_-_deepseek-coder-1.3b-typescript-gguf/blob/main/deepseek-coder-1.3b-typescript.Q5_K_S.gguf) | Q5_K_S | 0.89GB |
| [deepseek-coder-1.3b-typescript.Q5_K.gguf](https://huggingface.co/RichardErkhov/CodeGPTPlus_-_deepseek-coder-1.3b-typescript-gguf/blob/main/deepseek-coder-1.3b-typescript.Q5_K.gguf) | Q5_K | 0.93GB |
| [deepseek-coder-1.3b-typescript.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/CodeGPTPlus_-_deepseek-coder-1.3b-typescript-gguf/blob/main/deepseek-coder-1.3b-typescript.Q5_K_M.gguf) | Q5_K_M | 0.93GB |
| [deepseek-coder-1.3b-typescript.Q5_1.gguf](https://huggingface.co/RichardErkhov/CodeGPTPlus_-_deepseek-coder-1.3b-typescript-gguf/blob/main/deepseek-coder-1.3b-typescript.Q5_1.gguf) | Q5_1 | 0.95GB |
| [deepseek-coder-1.3b-typescript.Q6_K.gguf](https://huggingface.co/RichardErkhov/CodeGPTPlus_-_deepseek-coder-1.3b-typescript-gguf/blob/main/deepseek-coder-1.3b-typescript.Q6_K.gguf) | Q6_K | 1.09GB |
| [deepseek-coder-1.3b-typescript.Q8_0.gguf](https://huggingface.co/RichardErkhov/CodeGPTPlus_-_deepseek-coder-1.3b-typescript-gguf/blob/main/deepseek-coder-1.3b-typescript.Q8_0.gguf) | Q8_0 | 1.33GB |
Original model description:
---
license: other
base_model: deepseek-ai/deepseek-coder-1.3b-base
tags:
- axolotl
- generated_from_trainer
model-index:
- name: deepseek-coder-1.3b-typescript
results: []
datasets:
- bigcode/the-stack-dedup
widget:
- text: "class Person {\n constructor(public name:"
example_title: "class"
- text: "function quickSort"
example_title: "function"
---
<p align="center">
<img width="1000px" alt="CodeGPT: DeepSeek Coder - Typescript" src="codegpt-deepseek-typescript.png?raw=true">
</p>
<p align="center"><a href="https://codegpt.co/">[CodeGPT.co]</a> | <a href="https://ollama.ai/codegpt/deepseek-coder-1.3b-typescript">[🦙 Ollama]</a> | <a href="https://discord.gg/fKyyJX5pne">[Discord]</a> | <a href="https://marketplace.visualstudio.com/items?itemName=DanielSanMedium.dscodegpt">[VSCode Extension]</a> </p>
<hr>
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.3.0`
```yaml
base_model: deepseek-ai/deepseek-coder-1.3b-base
model_type: AutoModelForCausalLM
trust_remote_code: true
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: CodeGPTPlus/typescript-0-500000-seq1024
type: completion
field: text
val_set_size: 0.001
output_dir: ./fft-out
sequence_len: 1024
adapter:
lora_model_dir:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:
lora_fan_in_fan_out:
lora_modules_to_save:
wandb_project: deepseek_1.3_fft
wandb_entity:
wandb_watch:
wandb_name: aws_a10g
wandb_log_model: end
gradient_accumulation_steps: 2
micro_batch_size: 20
num_epochs: 1
optimizer: adamw_bnb_8bit
adam_beta1: 0.9
adam_beta2: 0.999
adam_epsilon: 0.000001
max_grad_norm: 1.0
weight_decay: 0.1
lr_scheduler: cosine
learning_rate: 0.00002
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3
hub_model_id: CodeGPTPlus/deepseek_coder_1.3b_typescript
hub_strategy: every_save
warmup_ratio: 0.01
evals_per_epoch: 20
saves_per_epoch: 3
debug:
deepspeed:
fsdp:
fsdp_config:
special_tokens:
bos_token: "<|begin▁of▁sentence|>"
eos_token: "<|end▁of▁sentence|>"
pad_token: "<|end▁of▁sentence|>"
```
</details><br>
# deepseek-coder-1.3b-typescript
CodeGPTPlus/deepseek-coder-1.3b-typescript is a fine-tuned iteration of [deepseek-ai/deepseek-coder-1.3b-base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base), crafted by the CodeGPT team to excel at generating expert TypeScript code. Fine-tuned specifically for TypeScript on a dataset of 0.5B tokens, this model produces precise and efficient solutions in this programming language.
The 16K window size and an additional fill-in-the-middle task are employed to deliver project-level code completion.
This new model stands as the ideal choice for those seeking a specialized code generator for TypeScript, backed by the expertise of the CodeGPT team.
It achieves the following results on the evaluation set:
- Loss: 0.7681
**Model Developers** CodeGPT Team
**Variations** 1.3B
**Input** Models input text only.
**Output** Models generate text only.
## How to Use
This model is for completion purposes only. Here are some examples of how to use the model.
#### Running the model on a GPU
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("CodeGPTPlus/deepseek-coder-1.3b-typescript",
trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("CodeGPTPlus/deepseek-coder-1.3b-typescript",
trust_remote_code=True).cuda()
input_text = """<|fim▁begin|>function quickSort(arr: number[]): number[] {
if (arr.length <= 1) {
return arr;
}
const pivot = arr[0];
const left = [];
const right = [];
<|fim▁hole|>
return [...quickSort(left), pivot, ...quickSort(right)];
}<|fim▁end|>"""
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### Running with Ollama
**Model:** https://ollama.ai/codegpt/deepseek-coder-1.3b-typescript
```ollama run codegpt/deepseek-coder-1.3b-typescript```
### Running with Ollama and CodeGPT Autocomplete in VSCode
**Documentation:** https://docs.codegpt.co/docs/tutorial-features/code_autocompletion
Select "Ollama - codegpt/deepseek-coder-1.3b-typescript" in the autocomplete model selector.
Then, write any code or comment in the vscode text editor, and the model will provide you with code suggestions through the CodeGPT code autocomplete.
<img width="1000px" alt="CodeGPT: DeepSeek Coder - Typescript" src="ollama_autocomplete_codegpt.gif">
### Fill In the Middle (FIM)
```
<|fim▁begin|>function quickSort(arr: number[]): number[] {
if (arr.length <= 1) {
return arr;
}
const pivot = arr[0];
const left = [];
const right = [];
<|fim▁hole|>
return [...quickSort(left), pivot, ...quickSort(right)];
}<|fim▁end|>
```
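If you want to build that FIM prompt programmatically, a small helper along these lines (not part of the original card) can wrap a prefix and suffix before tokenizing, exactly as in the GPU example above:

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    # deepseek-coder FIM layout: the model fills in the code at <|fim▁hole|>
    return f"<|fim▁begin|>{prefix}<|fim▁hole|>{suffix}<|fim▁end|>"

prompt = build_fim_prompt(
    "function add(a: number, b: number): number {\n",
    "\n}",
)
```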
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 261
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0745 | 0.0 | 1 | 0.8681 |
| 1.2267 | 0.05 | 1308 | 0.8130 |
| 1.1594 | 0.1 | 2616 | 0.8018 |
| 0.7674 | 0.15 | 3924 | 0.7942 |
| 0.6443 | 0.2 | 5232 | 0.7889 |
| 0.9155 | 0.25 | 6540 | 0.7847 |
| 0.7501 | 0.3 | 7848 | 0.7819 |
| 0.8835 | 0.35 | 9156 | 0.7792 |
| 0.7261 | 0.4 | 10464 | 0.7769 |
| 0.9746 | 0.45 | 11772 | 0.7748 |
| 0.6884 | 0.5 | 13080 | 0.7734 |
| 0.6104 | 0.55 | 14388 | 0.7722 |
| 0.8876 | 0.6 | 15696 | 0.7710 |
| 0.9567 | 0.65 | 17004 | 0.7703 |
| 0.6915 | 0.7 | 18312 | 0.7696 |
| 0.8874 | 0.75 | 19620 | 0.7691 |
| 0.6124 | 0.8 | 20928 | 0.7686 |
| 0.8147 | 0.85 | 22236 | 0.7684 |
| 0.8021 | 0.9 | 23544 | 0.7683 |
| 0.8665 | 0.95 | 24852 | 0.7681 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
KBNIT/llama-2-koen-13b-QLoRA-NEFTune-kolon-v0.1 | KBNIT | "2024-03-27T04:48:15Z" | 3,195 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"ko",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-08T06:28:48Z" | ---
license: cc-by-nc-4.0
language:
- ko
- en
---
## Base Model:
We built an LLM based on beomi/llama-2-koen-13b.
## Model Description
We fine-tuned the LLM with QLoRA (r=64, alpha=16) and NEFTune, using a learning rate of 2e-4.
## Train Detail
We trained for 3 epochs on our Korean Wiki QA data.
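For illustration only, a comparable QLoRA + NEFTune setup with `peft` and `transformers` might be configured roughly as below. The rank 64, alpha 16, learning rate 2e-4, 3 epochs and base model come from this card; the 4-bit settings, target modules, dropout and NEFTune noise alpha are assumptions, and this is a sketch rather than the actual training script:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "beomi/llama-2-koen-13b", quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=64, lora_alpha=16, lora_dropout=0.05,  # QLoRA(64, 16) as above; dropout assumed
    target_modules=["q_proj", "v_proj"],     # assumed target modules
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

training_args = TrainingArguments(
    output_dir="qlora-out",
    learning_rate=2e-4,
    num_train_epochs=3,
    neftune_noise_alpha=5.0,                 # NEFTune; alpha value assumed
)
```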
## Others
We are building an LLM for Kolon! |
facebook/mask2former-swin-tiny-ade-semantic | facebook | "2023-01-25T11:42:18Z" | 3,194 | 1 | transformers | [
"transformers",
"pytorch",
"mask2former",
"vision",
"image-segmentation",
"dataset:coco",
"arxiv:2112.01527",
"arxiv:2107.06278",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | "2023-01-05T12:26:21Z" | ---
license: other
tags:
- vision
- image-segmentation
datasets:
- coco
widget:
- src: http://images.cocodataset.org/val2017/000000039769.jpg
example_title: Cats
- src: http://images.cocodataset.org/val2017/000000039770.jpg
example_title: Castle
---
# Mask2Former
Mask2Former model trained on ADE20k semantic segmentation (tiny-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation
](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).
Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA,
[MaskFormer](https://arxiv.org/abs/2107.06278), both in terms of performance and efficiency, by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance without introducing additional computation and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
# load Mask2Former fine-tuned on ADE20k semantic segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-tiny-ade-semantic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-tiny-ade-semantic")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
predicted_semantic_map = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
```
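Continuing from the snippet above, a minimal visualization sketch (assuming matplotlib is installed) is:

```python
import matplotlib.pyplot as plt

# show the predicted per-pixel ADE20k class ids computed in the snippet above
plt.imshow(predicted_semantic_map.cpu().numpy())
plt.axis("off")
plt.show()
```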
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former). |
RichardErkhov/Weyaxi_-_Platypus-Nebula-v2-7B-gguf | RichardErkhov | "2024-06-02T03:43:06Z" | 3,194 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-02T02:38:28Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Platypus-Nebula-v2-7B - GGUF
- Model creator: https://huggingface.co/Weyaxi/
- Original model: https://huggingface.co/Weyaxi/Platypus-Nebula-v2-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Platypus-Nebula-v2-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Platypus-Nebula-v2-7B-gguf/blob/main/Platypus-Nebula-v2-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [Platypus-Nebula-v2-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Platypus-Nebula-v2-7B-gguf/blob/main/Platypus-Nebula-v2-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [Platypus-Nebula-v2-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Platypus-Nebula-v2-7B-gguf/blob/main/Platypus-Nebula-v2-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [Platypus-Nebula-v2-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Platypus-Nebula-v2-7B-gguf/blob/main/Platypus-Nebula-v2-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Platypus-Nebula-v2-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Platypus-Nebula-v2-7B-gguf/blob/main/Platypus-Nebula-v2-7B.IQ3_M.gguf) | IQ3_M | 2.33GB |
| [Platypus-Nebula-v2-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Platypus-Nebula-v2-7B-gguf/blob/main/Platypus-Nebula-v2-7B.Q3_K.gguf) | Q3_K | 2.11GB |
| [Platypus-Nebula-v2-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Platypus-Nebula-v2-7B-gguf/blob/main/Platypus-Nebula-v2-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Platypus-Nebula-v2-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Platypus-Nebula-v2-7B-gguf/blob/main/Platypus-Nebula-v2-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [Platypus-Nebula-v2-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Platypus-Nebula-v2-7B-gguf/blob/main/Platypus-Nebula-v2-7B.IQ4_XS.gguf) | IQ4_XS | 2.91GB |
| [Platypus-Nebula-v2-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Platypus-Nebula-v2-7B-gguf/blob/main/Platypus-Nebula-v2-7B.Q4_0.gguf) | Q4_0 | 2.24GB |
| [Platypus-Nebula-v2-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Platypus-Nebula-v2-7B-gguf/blob/main/Platypus-Nebula-v2-7B.IQ4_NL.gguf) | IQ4_NL | 0.5GB |
| [Platypus-Nebula-v2-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Platypus-Nebula-v2-7B-gguf/blob/main/Platypus-Nebula-v2-7B.Q4_K_S.gguf) | Q4_K_S | 0.2GB |
| [Platypus-Nebula-v2-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Platypus-Nebula-v2-7B-gguf/blob/main/Platypus-Nebula-v2-7B.Q4_K.gguf) | Q4_K | 0.1GB |
| [Platypus-Nebula-v2-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Platypus-Nebula-v2-7B-gguf/blob/main/Platypus-Nebula-v2-7B.Q4_K_M.gguf) | Q4_K_M | 0.01GB |
| [Platypus-Nebula-v2-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Platypus-Nebula-v2-7B-gguf/blob/main/Platypus-Nebula-v2-7B.Q4_1.gguf) | Q4_1 | 0.0GB |
| [Platypus-Nebula-v2-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Platypus-Nebula-v2-7B-gguf/blob/main/Platypus-Nebula-v2-7B.Q5_0.gguf) | Q5_0 | 0.0GB |
| [Platypus-Nebula-v2-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Platypus-Nebula-v2-7B-gguf/blob/main/Platypus-Nebula-v2-7B.Q5_K_S.gguf) | Q5_K_S | 0.0GB |
| [Platypus-Nebula-v2-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Platypus-Nebula-v2-7B-gguf/blob/main/Platypus-Nebula-v2-7B.Q5_K.gguf) | Q5_K | 0.0GB |
| [Platypus-Nebula-v2-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Platypus-Nebula-v2-7B-gguf/blob/main/Platypus-Nebula-v2-7B.Q5_K_M.gguf) | Q5_K_M | 0.0GB |
| [Platypus-Nebula-v2-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Platypus-Nebula-v2-7B-gguf/blob/main/Platypus-Nebula-v2-7B.Q5_1.gguf) | Q5_1 | 0.0GB |
| [Platypus-Nebula-v2-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Platypus-Nebula-v2-7B-gguf/blob/main/Platypus-Nebula-v2-7B.Q6_K.gguf) | Q6_K | 0.0GB |
| [Platypus-Nebula-v2-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Platypus-Nebula-v2-7B-gguf/blob/main/Platypus-Nebula-v2-7B.Q8_0.gguf) | Q8_0 | 5.58GB |
Original model description:
---
license: cc-by-nc-4.0
datasets:
- garage-bAInd/Open-Platypus
language:
- en
---

<a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
# Platypus-Nebula-v2-7B
Platypus-Nebula-v2-7B is a merge of [bhenrym14/mistral-7b-platypus-fp16](https://huggingface.co/bhenrym14/mistral-7b-platypus-fp16) and [PulsarAI/Nebula-v2-7B-Lora](https://huggingface.co/PulsarAI/Nebula-v2-7B-Lora)
# Evaluation Results ([Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard))
| Metric | Value |
|-----------------------|-----------|
| Avg. | |
| ARC (25-shot) | |
| HellaSwag (10-shot) | |
| MMLU (5-shot) | |
| TruthfulQA (0-shot) | |
| Winogrande (5-shot) | |
| GSM8K (5-shot) | |
| DROP (3-shot) | |
|
mradermacher/Apocrypha-7b-GGUF | mradermacher | "2024-06-10T17:50:21Z" | 3,194 | 0 | transformers | [
"transformers",
"gguf",
"conversation",
"merge",
"en",
"base_model:BlueNipples/Apocrypha-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-10T04:21:24Z" | ---
base_model: BlueNipples/Apocrypha-7b
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- conversation
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/BlueNipples/Apocrypha-7b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Apocrypha-7b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Apocrypha-7b-GGUF/resolve/main/Apocrypha-7b.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Apocrypha-7b-GGUF/resolve/main/Apocrypha-7b.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Apocrypha-7b-GGUF/resolve/main/Apocrypha-7b.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Apocrypha-7b-GGUF/resolve/main/Apocrypha-7b.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Apocrypha-7b-GGUF/resolve/main/Apocrypha-7b.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Apocrypha-7b-GGUF/resolve/main/Apocrypha-7b.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Apocrypha-7b-GGUF/resolve/main/Apocrypha-7b.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Apocrypha-7b-GGUF/resolve/main/Apocrypha-7b.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Apocrypha-7b-GGUF/resolve/main/Apocrypha-7b.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Apocrypha-7b-GGUF/resolve/main/Apocrypha-7b.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Apocrypha-7b-GGUF/resolve/main/Apocrypha-7b.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Apocrypha-7b-GGUF/resolve/main/Apocrypha-7b.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Apocrypha-7b-GGUF/resolve/main/Apocrypha-7b.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Apocrypha-7b-GGUF/resolve/main/Apocrypha-7b.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Apocrypha-7b-GGUF/resolve/main/Apocrypha-7b.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Weyaxi/Bagel-Hermes-34B-Slerp | Weyaxi | "2024-06-25T08:03:09Z" | 3,192 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-12T17:27:49Z" | ---
tags:
- mergekit
- merge
model-index:
- name: Bagel-Hermes-34B-Slerp
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 70.73
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Bagel-Hermes-34B-Slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.68
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Bagel-Hermes-34B-Slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.29
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Bagel-Hermes-34B-Slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 67.09
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Bagel-Hermes-34B-Slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.37
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Bagel-Hermes-34B-Slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.26
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Bagel-Hermes-34B-Slerp
name: Open LLM Leaderboard
license: apache-2.0
---
# Bagel-Hermes-34B-Slerp
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* Nous-Hermes-2-Yi-34B
* bagel-dpo-34b-v0.2
* nontoxic-bagel-34b-v0.2
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: bagel-dpo-34b-v0.2
layer_range: [0, 60]
- model: Nous-Hermes-2-Yi-34B
layer_range: [0, 60]
merge_method: slerp
base_model: nontoxic-bagel-34b-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
tokenizer_source: union
dtype: bfloat16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Bagel-Hermes-34B-Slerp)
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.24|
|AI2 Reasoning Challenge (25-Shot)|70.73|
|HellaSwag (10-Shot) |85.68|
|MMLU (5-Shot) |77.29|
|TruthfulQA (0-shot) |67.09|
|Winogrande (5-shot) |84.37|
|GSM8k (5-shot) |66.26|
|
gelukuMLG/Llama-3-Cat-Instruct-15B-GGUF | gelukuMLG | "2024-06-28T19:49:29Z" | 3,192 | 1 | null | [
"gguf",
"license:llama3",
"region:us"
] | null | "2024-05-18T10:32:27Z" | ---
license: llama3
---
### Compute for this merge was provided by KoboldAI.
### Important: Because this model is based on Cat-8B-Instruct-V1, it inherits that model's stop-sequence issue. Make sure to add `</s>` as a stop sequence in whatever backend or UI you are using (see the sketch below). ###
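For example, with `llama-cpp-python` the stop sequence can be passed per call; the file name below is a placeholder for whichever GGUF from this repo you downloaded, and the prompt is only an example:

```python
from llama_cpp import Llama

llm = Llama(model_path="Llama-3-Cat-Instruct-15B.Q4_K_M.gguf")  # placeholder file name
out = llm("Write a one-line greeting.", max_tokens=64, stop=["</s>"])  # stop on </s> as noted above
print(out["choices"][0]["text"])
```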
The following models were used in this recipe:
- https://huggingface.co/elinas/Llama-3-15B-Instruct-zeroed-ft
- https://huggingface.co/elinas/Llama-3-15B-Instruct-zeroed
- https://huggingface.co/TheSkullery/llama-3-cat-8b-instruct-v1
Recipe used:
```
merge_method: passthrough
dtype: bfloat16
vocab_type: bpe
slices:
- sources:
- layer_range: [0, 24]
model: TheSkullery/llama-3-cat-8b-instruct-v1
- sources:
- layer_range: [8, 24]
model: TheSkullery/llama-3-cat-8b-instruct-v1
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [8, 24]
model: TheSkullery/llama-3-cat-8b-instruct-v1
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- layer_range: [24, 32]
model: TheSkullery/llama-3-cat-8b-instruct-v1
name: LLaMa-3-Cat-Instruct-Unhealed-15B
---
merge_method: task_arithmetic
dtype: bfloat16
vocab_type: bpe
base_model: elinas/Llama-3-15B-Instruct-zeroed
models:
- model: elinas/Llama-3-15B-Instruct-zeroed-ft
parameters:
weight: 1.0
- model: LLaMa-3-Cat-Instruct-Unhealed-15B
parameters:
weight: 1.0
``` |
mradermacher/Qwen2-7B-Instruct-GGUF | mradermacher | "2024-06-06T21:25:49Z" | 3,192 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"en",
"base_model:Qwen/Qwen2-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-06T20:57:09Z" | ---
base_model: Qwen/Qwen2-7B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- chat
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Qwen/Qwen2-7B-Instruct
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2-7B-Instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-Instruct-GGUF/resolve/main/Qwen2-7B-Instruct.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-Instruct-GGUF/resolve/main/Qwen2-7B-Instruct.IQ3_XS.gguf) | IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-Instruct-GGUF/resolve/main/Qwen2-7B-Instruct.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-Instruct-GGUF/resolve/main/Qwen2-7B-Instruct.IQ3_S.gguf) | IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-Instruct-GGUF/resolve/main/Qwen2-7B-Instruct.IQ3_M.gguf) | IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-Instruct-GGUF/resolve/main/Qwen2-7B-Instruct.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-Instruct-GGUF/resolve/main/Qwen2-7B-Instruct.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-Instruct-GGUF/resolve/main/Qwen2-7B-Instruct.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-Instruct-GGUF/resolve/main/Qwen2-7B-Instruct.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-Instruct-GGUF/resolve/main/Qwen2-7B-Instruct.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-Instruct-GGUF/resolve/main/Qwen2-7B-Instruct.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-Instruct-GGUF/resolve/main/Qwen2-7B-Instruct.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-Instruct-GGUF/resolve/main/Qwen2-7B-Instruct.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-Instruct-GGUF/resolve/main/Qwen2-7B-Instruct.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-Instruct-GGUF/resolve/main/Qwen2-7B-Instruct.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
timm/beitv2_large_patch16_224.in1k_ft_in22k | timm | "2023-05-08T23:43:21Z" | 3,191 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-22k",
"arxiv:2208.06366",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
] | image-classification | "2022-12-23T02:34:24Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-22k
---
# Model card for beitv2_large_patch16_224.in1k_ft_in22k
A BEiT-v2 image classification model. Trained on ImageNet-1k with self-supervised masked image modelling (MIM) using a VQ-KD encoder as a visual tokenizer (via OpenAI CLIP B/16 teacher). Fine-tuned on ImageNet-22k.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 325.8
- GMACs: 61.6
- Activations (M): 63.5
- Image size: 224 x 224
- **Papers:**
- BEiT v2: Masked Image Modeling with Vector-Quantized Visual Tokenizers: https://arxiv.org/abs/2208.06366
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-22k
- **Original:** https://github.com/microsoft/unilm/tree/master/beit2
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('beitv2_large_patch16_224.in1k_ft_in22k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'beitv2_large_patch16_224.in1k_ft_in22k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 197, 1024) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{peng2022beit,
title={Beit v2: Masked image modeling with vector-quantized visual tokenizers},
author={Peng, Zhiliang and Dong, Li and Bao, Hangbo and Ye, Qixiang and Wei, Furu},
journal={arXiv preprint arXiv:2208.06366},
year={2022}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
OpenAssistant/oasst-sft-1-pythia-12b | OpenAssistant | "2023-03-11T14:25:14Z" | 3,191 | 279 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"sft",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-03-09T16:47:26Z" | ---
license: apache-2.0
language:
- en
tags:
- sft
pipeline_tag: text-generation
widget:
- text: <|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>
- text: <|prompter|>What's the Earth total population<|endoftext|><|assistant|>
- text: <|prompter|>Write a story about future of AI development<|endoftext|><|assistant|>
---
# Open-Assistant SFT-1 12B Model
This is the first iteration English supervised-fine-tuning (SFT) model of
the [Open-Assistant](https://github.com/LAION-AI/Open-Assistant) project.
It is based on a Pythia 12B that was fine-tuned on ~22k human demonstrations
of assistant conversations collected through the
[https://open-assistant.io/](https://open-assistant.io/) human feedback web
app before March 7, 2023.
## Model Details
- **Developed by:** [Open-Assistant Contributors](https://open-assistant.io/)
- **Model type:** Transformer-based Language Model
- **Language:** English
- **Finetuned from:** [EleutherAI / pythia-12b-deduped](https://huggingface.co/EleutherAI/pythia-12b-deduped)
- **Code:** [Open-Assistant/model/model_training](https://github.com/LAION-AI/Open-Assistant/tree/main/model/model_training)
- **Demo:** [Continuations for 250 random prompts](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-sft%2F2023-03-09_andreaskoepf_oasst-1_12b_7000_sampling_noprefix_lottery.json) ([sampling code](https://github.com/Open-Assistant/oasst-model-eval/blob/3d71f3be100c05cd8ddb568365e036a29fbff8c7/model_eval/manual/sampling_report.py)).
- **License:** Apache 2.0
- **Contact:** [Open-Assistant Discord](https://ykilcher.com/open-assistant-discord)
## Prompting
Two special tokens are used to mark the beginning of user and assistant turns:
`<|prompter|>` and `<|assistant|>`. Each turn ends with a `<|endoftext|>` token.
Input prompt example:
```
<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>
```
The input ends with the `<|assistant|>` token to signal that the model should
start generating the assistant reply.
Example of generated continuation (typical sampling 0.2):
```
A visual joke often accompanied by text that spreads rapidly from individual
to individual. The exact origin of the term "meme" is unclear, but it was
first used in the 1990s by Richard Dawkins in his book "The Selfish Gene."
He defined a meme as an idea, behavior, or style that is transmitted from
one person to another within a culture.
Dawkins' use of the term caught on among scholars and journalists, who began
to apply the concept to other types of cultural items such as advertisements,
fashion trends, and jokes. Over time, the term "meme" has come to be used
more generally to describe any social behavior that can be repeated and
altered by individuals. Today, the term is widely recognized and applied in
fields ranging from anthropology to psychology to marketing.<|endoftext|>
```
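A minimal generation sketch with `transformers` (not part of the original card) could look like this; it assumes enough GPU memory for a 12B model plus `accelerate` installed, and the sampling settings simply mirror the "typical sampling 0.2" mentioned above:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "OpenAssistant/oasst-sft-1-pythia-12b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, typical_p=0.2)

# print only the newly generated assistant reply
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```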
## Limitations
See limitations of Pythia 12B base model [here](https://huggingface.co/EleutherAI/pythia-12b-deduped#limitations-and-biases).
The model is known to fail horribly at answering math and coding questions.
Beware of hallucinations: Outputs are often factually wrong or misleading.
Replies might look convincing (at first glance) while containing completely
made up false statements.
This model is usable only for English conversations. |
mradermacher/H4rmoniousAnthea_V.2.1-GGUF | mradermacher | "2024-06-06T23:21:37Z" | 3,191 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:neovalle/H4rmoniousAnthea_V.2.1",
"endpoints_compatible",
"region:us"
] | null | "2024-06-06T21:49:21Z" | ---
base_model: neovalle/H4rmoniousAnthea_V.2.1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/neovalle/H4rmoniousAnthea_V.2.1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/H4rmoniousAnthea_V.2.1-GGUF/resolve/main/H4rmoniousAnthea_V.2.1.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/H4rmoniousAnthea_V.2.1-GGUF/resolve/main/H4rmoniousAnthea_V.2.1.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/H4rmoniousAnthea_V.2.1-GGUF/resolve/main/H4rmoniousAnthea_V.2.1.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/H4rmoniousAnthea_V.2.1-GGUF/resolve/main/H4rmoniousAnthea_V.2.1.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/H4rmoniousAnthea_V.2.1-GGUF/resolve/main/H4rmoniousAnthea_V.2.1.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/H4rmoniousAnthea_V.2.1-GGUF/resolve/main/H4rmoniousAnthea_V.2.1.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/H4rmoniousAnthea_V.2.1-GGUF/resolve/main/H4rmoniousAnthea_V.2.1.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/H4rmoniousAnthea_V.2.1-GGUF/resolve/main/H4rmoniousAnthea_V.2.1.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/H4rmoniousAnthea_V.2.1-GGUF/resolve/main/H4rmoniousAnthea_V.2.1.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/H4rmoniousAnthea_V.2.1-GGUF/resolve/main/H4rmoniousAnthea_V.2.1.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/H4rmoniousAnthea_V.2.1-GGUF/resolve/main/H4rmoniousAnthea_V.2.1.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/H4rmoniousAnthea_V.2.1-GGUF/resolve/main/H4rmoniousAnthea_V.2.1.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/H4rmoniousAnthea_V.2.1-GGUF/resolve/main/H4rmoniousAnthea_V.2.1.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/H4rmoniousAnthea_V.2.1-GGUF/resolve/main/H4rmoniousAnthea_V.2.1.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/H4rmoniousAnthea_V.2.1-GGUF/resolve/main/H4rmoniousAnthea_V.2.1.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
pritamdeka/S-PubMedBert-MS-MARCO-SCIFACT | pritamdeka | "2023-07-02T11:43:37Z" | 3,190 | 5 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | "2022-03-02T23:29:05Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# S-PubMedBert-MS-MARCO-SCIFACT
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('pritamdeka/S-PubMedBert-MS-MARCO-SCIFACT')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('pritamdeka/S-PubMedBert-MS-MARCO-SCIFACT')
model = AutoModel.from_pretrained('pritamdeka/S-PubMedBert-MS-MARCO-SCIFACT')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=pritamdeka/S-PubMedBert-MS-MARCO-SCIFACT)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 560 with parameters:
```
{'batch_size': 16}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 1,
"evaluation_steps": 10000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"correct_bias": false,
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 56,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
If you use this model, cite the following paper
```
@article{deka2022improved,
title={Improved Methods To Aid Unsupervised Evidence-Based Fact Checking For Online Health News},
author={Deka, Pritam and Jurek-Loughrey, Anna and Deepak, P},
journal={Journal of Data Intelligence},
volume={3},
number={4},
pages={474--504},
year={2022}
}
``` |
dhruv0808/autotrain-ad_detection_ver_1-1395053127 | dhruv0808 | "2022-09-09T12:35:54Z" | 3,190 | 1 | transformers | [
"transformers",
"pytorch",
"autotrain",
"vision",
"image-classification",
"dataset:dhruv0808/autotrain-data-ad_detection_ver_1",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | image-classification | "2022-09-09T12:33:49Z" | ---
tags:
- autotrain
- vision
- image-classification
datasets:
- dhruv0808/autotrain-data-ad_detection_ver_1
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 0.009652698067986935
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1395053127
- CO2 Emissions (in grams): 0.0097
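For quick inference, a minimal sketch with the `transformers` image-classification pipeline (the image URL is one of the widget examples from this card; the class labels come from the training data config):

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="dhruv0808/autotrain-ad_detection_ver_1-1395053127",
)
# one of the widget example images from this card
print(classifier("https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg"))
```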
## Validation Metrics
- Loss: 0.178
- Accuracy: 0.941
- Precision: 0.947
- Recall: 0.947
- AUC: 0.974
- F1: 0.947 |
digiplay/nk15_diffusers | digiplay | "2024-04-16T08:01:45Z" | 3,190 | 3 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-06-06T17:15:35Z" | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/84604/nk15

If you use this model with diffusers and see some AutoencoderKL errors,
don't worry: just use the code below,
and you can still generate images :)
```
modelid="digiplay/nk15_diffusers"
from diffusers.models import AutoencoderKL
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
pipe = DiffusionPipeline.from_pretrained(modelid, vae=vae)
```
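Following on from the snippet above, a minimal generation sketch (the prompt, device, and output file name are placeholders, not from the original card):
```python
pipe = pipe.to("cuda")  # assumes a CUDA GPU is available
image = pipe("1girl, portrait, detailed, best quality", num_inference_steps=25).images[0]
image.save("nk15_sample.png")
```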
The demo image was generated with diffusers on Google Colab.
|
OpenBuddy/openbuddy-zephyr-7b-v14.1 | OpenBuddy | "2023-11-07T10:03:09Z" | 3,190 | 50 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-06T00:37:45Z" | ---
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
pipeline_tag: text-generation
inference: false
library_name: transformers
license: apache-2.0
---
# OpenBuddy - Open Multilingual Chatbot
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)
Evaluation result of this model: [Evaluation.txt](Evaluation.txt)

# Copyright Notice
Base model: https://huggingface.co/HuggingFaceH4/zephyr-7b-beta
License: Apache 2.0
## Disclaimer
All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
## Disclaimer (Chinese)
All OpenBuddy models have inherent limitations and may produce erroneous, harmful, offensive, or otherwise undesirable outputs. Users should exercise caution and should not use these models in critical or high-risk scenarios, to avoid personal injury, property damage, or major losses. Examples of such scenarios include, but are not limited to, the medical field, the control of software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as is" without any warranty of any kind, express or implied, including but not limited to the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liability, whether in an action of contract, tort, or otherwise, arising from the software or from the use of or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
mradermacher/Samantha-Qwen2-7B-GGUF | mradermacher | "2024-06-17T15:42:44Z" | 3,190 | 0 | transformers | [
"transformers",
"gguf",
"en",
"zh",
"dataset:macadeliccc/opus_samantha",
"dataset:HuggingfaceH4/ultrachat_200k",
"dataset:teknium/OpenHermes-2.5",
"dataset:Sao10K/Claude-3-Opus-Instruct-15K",
"base_model:macadeliccc/Samantha-Qwen2-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-17T14:41:59Z" | ---
base_model: macadeliccc/Samantha-Qwen2-7B
datasets:
- macadeliccc/opus_samantha
- HuggingfaceH4/ultrachat_200k
- teknium/OpenHermes-2.5
- Sao10K/Claude-3-Opus-Instruct-15K
language:
- en
- zh
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/macadeliccc/Samantha-Qwen2-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Samantha-Qwen2-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
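As one concrete option, a minimal Python sketch that downloads one of the quants listed below and runs it locally (this assumes the `huggingface_hub` and `llama-cpp-python` packages are installed; the prompt is a placeholder):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# grab the "fast, recommended" Q4_K_M quant from this repository
gguf_path = hf_hub_download(
    repo_id="mradermacher/Samantha-Qwen2-7B-GGUF",
    filename="Samantha-Qwen2-7B.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Write one sentence about the sea.", max_tokens=64)
print(out["choices"][0]["text"])
```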
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Samantha-Qwen2-7B-GGUF/resolve/main/Samantha-Qwen2-7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Samantha-Qwen2-7B-GGUF/resolve/main/Samantha-Qwen2-7B.IQ3_XS.gguf) | IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Samantha-Qwen2-7B-GGUF/resolve/main/Samantha-Qwen2-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Samantha-Qwen2-7B-GGUF/resolve/main/Samantha-Qwen2-7B.IQ3_S.gguf) | IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Samantha-Qwen2-7B-GGUF/resolve/main/Samantha-Qwen2-7B.IQ3_M.gguf) | IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Samantha-Qwen2-7B-GGUF/resolve/main/Samantha-Qwen2-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Samantha-Qwen2-7B-GGUF/resolve/main/Samantha-Qwen2-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Samantha-Qwen2-7B-GGUF/resolve/main/Samantha-Qwen2-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Samantha-Qwen2-7B-GGUF/resolve/main/Samantha-Qwen2-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Samantha-Qwen2-7B-GGUF/resolve/main/Samantha-Qwen2-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Samantha-Qwen2-7B-GGUF/resolve/main/Samantha-Qwen2-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Samantha-Qwen2-7B-GGUF/resolve/main/Samantha-Qwen2-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Samantha-Qwen2-7B-GGUF/resolve/main/Samantha-Qwen2-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Samantha-Qwen2-7B-GGUF/resolve/main/Samantha-Qwen2-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Samantha-Qwen2-7B-GGUF/resolve/main/Samantha-Qwen2-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
jay6944/EEVE-Korean-Instruct-10.8B-geoheim20-8bit-gguf | jay6944 | "2024-07-02T05:45:41Z" | 3,190 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:yanolja/EEVE-Korean-Instruct-10.8B-v1.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T03:08:21Z" | ---
base_model: yanolja/EEVE-Korean-Instruct-10.8B-v1.0
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** jay6944
- **License:** apache-2.0
- **Finetuned from model :** yanolja/EEVE-Korean-Instruct-10.8B-v1.0
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
albert/albert-xlarge-v2 | albert | "2024-04-10T09:57:46Z" | 3,188 | 6 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:04Z" | ---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# ALBERT XLarge v2
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1909.11942) and first released in
[this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs.
ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers, as it has to iterate through the same number of (repeating) layers.
This is the second version of the xlarge model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks.
This model has the following configuration (a short sketch for checking these values follows the list):
- 24 repeating layers
- 128 embedding dimension
- 2048 hidden dimension
- 16 attention heads
- 58M parameters
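A small sketch (not from the original card) that loads the checkpoint and checks these figures; the low parameter count despite 24 hidden layers reflects the cross-layer weight sharing described above:
```python
from transformers import AlbertModel

model = AlbertModel.from_pretrained("albert-xlarge-v2")
cfg = model.config
print(cfg.num_hidden_layers, cfg.embedding_size, cfg.hidden_size, cfg.num_attention_heads)
# roughly 58M parameters, because the repeating layers share one set of weights
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.0f}M parameters")
```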
## Intended uses & limitations
You can use the raw model for either masked language modeling or sentence order prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-xlarge-v2')
>>> unmasker("Hello I'm a [MASK] model.")
[
{
"sequence":"[CLS] hello i'm a modeling model.[SEP]",
"score":0.05816134437918663,
"token":12807,
"token_str":"â–modeling"
},
{
"sequence":"[CLS] hello i'm a modelling model.[SEP]",
"score":0.03748830780386925,
"token":23089,
"token_str":"â–modelling"
},
{
"sequence":"[CLS] hello i'm a model model.[SEP]",
"score":0.033725276589393616,
"token":1061,
"token_str":"â–model"
},
{
"sequence":"[CLS] hello i'm a runway model.[SEP]",
"score":0.017313428223133087,
"token":8014,
"token_str":"â–runway"
},
{
"sequence":"[CLS] hello i'm a lingerie model.[SEP]",
"score":0.014405295252799988,
"token":29104,
"token_str":"â–lingerie"
}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AlbertTokenizer, AlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-xlarge-v2')
model = AlbertModel.from_pretrained("albert-xlarge-v2")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AlbertTokenizer, TFAlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-xlarge-v2')
model = TFAlbertModel.from_pretrained("albert-xlarge-v2")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-xlarge-v2')
>>> unmasker("The man worked as a [MASK].")
[
{
"sequence":"[CLS] the man worked as a chauffeur.[SEP]",
"score":0.029577180743217468,
"token":28744,
"token_str":"â–chauffeur"
},
{
"sequence":"[CLS] the man worked as a janitor.[SEP]",
"score":0.028865724802017212,
"token":29477,
"token_str":"â–janitor"
},
{
"sequence":"[CLS] the man worked as a shoemaker.[SEP]",
"score":0.02581118606030941,
"token":29024,
"token_str":"â–shoemaker"
},
{
"sequence":"[CLS] the man worked as a blacksmith.[SEP]",
"score":0.01849772222340107,
"token":21238,
"token_str":"â–blacksmith"
},
{
"sequence":"[CLS] the man worked as a lawyer.[SEP]",
"score":0.01820771023631096,
"token":3672,
"token_str":"â–lawyer"
}
]
>>> unmasker("The woman worked as a [MASK].")
[
{
"sequence":"[CLS] the woman worked as a receptionist.[SEP]",
"score":0.04604868218302727,
"token":25331,
"token_str":"â–receptionist"
},
{
"sequence":"[CLS] the woman worked as a janitor.[SEP]",
"score":0.028220869600772858,
"token":29477,
"token_str":"â–janitor"
},
{
"sequence":"[CLS] the woman worked as a paramedic.[SEP]",
"score":0.0261906236410141,
"token":23386,
"token_str":"â–paramedic"
},
{
"sequence":"[CLS] the woman worked as a chauffeur.[SEP]",
"score":0.024797942489385605,
"token":28744,
"token_str":"â–chauffeur"
},
{
"sequence":"[CLS] the woman worked as a waitress.[SEP]",
"score":0.024124596267938614,
"token":13678,
"token_str":"â–waitress"
}
]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
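A quick sketch (not from the original card) showing the tokenizer producing this format for a sentence pair:
```python
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("albert-xlarge-v2")
encoded = tokenizer("Sentence A", "Sentence B")
print(tokenizer.decode(encoded["input_ids"]))
# roughly: [CLS] sentence a[SEP] sentence b[SEP]  (lowercased, since the model is uncased)
```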
### Training
The ALBERT procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following (a minimal sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
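The original pretraining code is not part of this card, but the same 15% / 80-10-10 scheme can be reproduced for illustration with the standard `transformers` data collator:
```python
from transformers import AlbertTokenizer, DataCollatorForLanguageModeling

tokenizer = AlbertTokenizer.from_pretrained("albert-xlarge-v2")
# DataCollatorForLanguageModeling applies 15% masking with the 80/10/10 replacement rule
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

batch = collator([tokenizer("Replace me by any text you'd like.")])
print(tokenizer.decode(batch["input_ids"][0]))  # some tokens now appear as [MASK]
print(batch["labels"][0])                       # original ids at masked positions, -100 elsewhere
```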
## Evaluation results
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
| | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE |
|----------------|----------|----------|----------|----------|----------|----------|
|V2 |
|ALBERT-base |82.3 |90.2/83.2 |82.1/79.3 |84.6 |92.9 |66.8 |
|ALBERT-large |85.7 |91.8/85.2 |84.9/81.8 |86.5 |94.9 |75.2 |
|ALBERT-xlarge |87.9 |92.9/86.4 |87.9/84.1 |87.9 |95.4 |80.7 |
|ALBERT-xxlarge |90.9 |94.6/89.1 |89.8/86.9 |90.6 |96.8 |86.8 |
|V1 |
|ALBERT-base |80.1 |89.3/82.3 | 80.0/77.1|81.6 |90.3 | 64.0 |
|ALBERT-large |82.4 |90.6/83.9 | 82.3/79.4|83.5 |91.7 | 68.5 |
|ALBERT-xlarge |85.5 |92.5/86.1 | 86.1/83.1|86.4 |92.4 | 74.8 |
|ALBERT-xxlarge |91.0 |94.8/89.3 | 90.2/87.4|90.8 |96.9 | 86.5 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1909-11942,
author = {Zhenzhong Lan and
Mingda Chen and
Sebastian Goodman and
Kevin Gimpel and
Piyush Sharma and
Radu Soricut},
title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language
Representations},
journal = {CoRR},
volume = {abs/1909.11942},
year = {2019},
url = {http://arxiv.org/abs/1909.11942},
archivePrefix = {arXiv},
eprint = {1909.11942},
timestamp = {Fri, 27 Sep 2019 13:04:21 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
RichardErkhov/KingNish_-_CodeMaster-v1-9b-gguf | RichardErkhov | "2024-06-15T04:58:19Z" | 3,188 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-15T03:52:30Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
CodeMaster-v1-9b - GGUF
- Model creator: https://huggingface.co/KingNish/
- Original model: https://huggingface.co/KingNish/CodeMaster-v1-9b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [CodeMaster-v1-9b.Q2_K.gguf](https://huggingface.co/RichardErkhov/KingNish_-_CodeMaster-v1-9b-gguf/blob/main/CodeMaster-v1-9b.Q2_K.gguf) | Q2_K | 3.19GB |
| [CodeMaster-v1-9b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/KingNish_-_CodeMaster-v1-9b-gguf/blob/main/CodeMaster-v1-9b.IQ3_XS.gguf) | IQ3_XS | 3.52GB |
| [CodeMaster-v1-9b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/KingNish_-_CodeMaster-v1-9b-gguf/blob/main/CodeMaster-v1-9b.IQ3_S.gguf) | IQ3_S | 3.72GB |
| [CodeMaster-v1-9b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/KingNish_-_CodeMaster-v1-9b-gguf/blob/main/CodeMaster-v1-9b.Q3_K_S.gguf) | Q3_K_S | 3.72GB |
| [CodeMaster-v1-9b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/KingNish_-_CodeMaster-v1-9b-gguf/blob/main/CodeMaster-v1-9b.IQ3_M.gguf) | IQ3_M | 3.93GB |
| [CodeMaster-v1-9b.Q3_K.gguf](https://huggingface.co/RichardErkhov/KingNish_-_CodeMaster-v1-9b-gguf/blob/main/CodeMaster-v1-9b.Q3_K.gguf) | Q3_K | 4.16GB |
| [CodeMaster-v1-9b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/KingNish_-_CodeMaster-v1-9b-gguf/blob/main/CodeMaster-v1-9b.Q3_K_M.gguf) | Q3_K_M | 4.16GB |
| [CodeMaster-v1-9b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/KingNish_-_CodeMaster-v1-9b-gguf/blob/main/CodeMaster-v1-9b.Q3_K_L.gguf) | Q3_K_L | 4.55GB |
| [CodeMaster-v1-9b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/KingNish_-_CodeMaster-v1-9b-gguf/blob/main/CodeMaster-v1-9b.IQ4_XS.gguf) | IQ4_XS | 4.61GB |
| [CodeMaster-v1-9b.Q4_0.gguf](https://huggingface.co/RichardErkhov/KingNish_-_CodeMaster-v1-9b-gguf/blob/main/CodeMaster-v1-9b.Q4_0.gguf) | Q4_0 | 4.84GB |
| [CodeMaster-v1-9b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/KingNish_-_CodeMaster-v1-9b-gguf/blob/main/CodeMaster-v1-9b.IQ4_NL.gguf) | IQ4_NL | 4.86GB |
| [CodeMaster-v1-9b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/KingNish_-_CodeMaster-v1-9b-gguf/blob/main/CodeMaster-v1-9b.Q4_K_S.gguf) | Q4_K_S | 4.87GB |
| [CodeMaster-v1-9b.Q4_K.gguf](https://huggingface.co/RichardErkhov/KingNish_-_CodeMaster-v1-9b-gguf/blob/main/CodeMaster-v1-9b.Q4_K.gguf) | Q4_K | 5.16GB |
| [CodeMaster-v1-9b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/KingNish_-_CodeMaster-v1-9b-gguf/blob/main/CodeMaster-v1-9b.Q4_K_M.gguf) | Q4_K_M | 5.16GB |
| [CodeMaster-v1-9b.Q4_1.gguf](https://huggingface.co/RichardErkhov/KingNish_-_CodeMaster-v1-9b-gguf/blob/main/CodeMaster-v1-9b.Q4_1.gguf) | Q4_1 | 5.36GB |
| [CodeMaster-v1-9b.Q5_0.gguf](https://huggingface.co/RichardErkhov/KingNish_-_CodeMaster-v1-9b-gguf/blob/main/CodeMaster-v1-9b.Q5_0.gguf) | Q5_0 | 5.89GB |
| [CodeMaster-v1-9b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/KingNish_-_CodeMaster-v1-9b-gguf/blob/main/CodeMaster-v1-9b.Q5_K_S.gguf) | Q5_K_S | 5.89GB |
| [CodeMaster-v1-9b.Q5_K.gguf](https://huggingface.co/RichardErkhov/KingNish_-_CodeMaster-v1-9b-gguf/blob/main/CodeMaster-v1-9b.Q5_K.gguf) | Q5_K | 6.06GB |
| [CodeMaster-v1-9b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/KingNish_-_CodeMaster-v1-9b-gguf/blob/main/CodeMaster-v1-9b.Q5_K_M.gguf) | Q5_K_M | 6.06GB |
| [CodeMaster-v1-9b.Q5_1.gguf](https://huggingface.co/RichardErkhov/KingNish_-_CodeMaster-v1-9b-gguf/blob/main/CodeMaster-v1-9b.Q5_1.gguf) | Q5_1 | 6.41GB |
| [CodeMaster-v1-9b.Q6_K.gguf](https://huggingface.co/RichardErkhov/KingNish_-_CodeMaster-v1-9b-gguf/blob/main/CodeMaster-v1-9b.Q6_K.gguf) | Q6_K | 7.01GB |
| [CodeMaster-v1-9b.Q8_0.gguf](https://huggingface.co/RichardErkhov/KingNish_-_CodeMaster-v1-9b-gguf/blob/main/CodeMaster-v1-9b.Q8_0.gguf) | Q8_0 | 9.07GB |
Original model description:
---
tags:
- merge
- mergekit
- lazymergekit
- KingNish/CodeMaster-v1-7b
base_model:
- KingNish/CodeMaster-v1-7b
- KingNish/CodeMaster-v1-7b
license: mit
pipeline_tag: text-generation
---
# CodeMaster-v1-9b
CodeMaster-v1-9b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [KingNish/CodeMaster-v1-7b](https://huggingface.co/KingNish/CodeMaster-v1-7b)
* [KingNish/CodeMaster-v1-7b](https://huggingface.co/KingNish/CodeMaster-v1-7b)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: KingNish/CodeMaster-v1-7b
layer_range: [0, 22]
- sources:
- model: KingNish/CodeMaster-v1-7b
layer_range: [10, 32]
merge_method: passthrough
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "KingNish/CodeMaster-v1-9b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=8192, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
mcgrady164/ShieldLM-13B-baichuan2-Q4_K_M-GGUF | mcgrady164 | "2024-07-01T06:17:43Z" | 3,188 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"zh",
"base_model:thu-coai/ShieldLM-13B-baichuan2",
"license:mit",
"region:us"
] | null | "2024-07-01T06:17:02Z" | ---
base_model: thu-coai/ShieldLM-13B-baichuan2
language:
- en
- zh
license: mit
tags:
- llama-cpp
- gguf-my-repo
---
# mcgrady164/ShieldLM-13B-baichuan2-Q4_K_M-GGUF
This model was converted to GGUF format from [`thu-coai/ShieldLM-13B-baichuan2`](https://huggingface.co/thu-coai/ShieldLM-13B-baichuan2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/thu-coai/ShieldLM-13B-baichuan2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo mcgrady164/ShieldLM-13B-baichuan2-Q4_K_M-GGUF --hf-file shieldlm-13b-baichuan2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo mcgrady164/ShieldLM-13B-baichuan2-Q4_K_M-GGUF --hf-file shieldlm-13b-baichuan2-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo mcgrady164/ShieldLM-13B-baichuan2-Q4_K_M-GGUF --hf-file shieldlm-13b-baichuan2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo mcgrady164/ShieldLM-13B-baichuan2-Q4_K_M-GGUF --hf-file shieldlm-13b-baichuan2-q4_k_m.gguf -c 2048
```
|
facebook/convnextv2-tiny-22k-224 | facebook | "2023-09-26T17:19:29Z" | 3,186 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"convnextv2",
"image-classification",
"vision",
"dataset:imagenet-22k",
"arxiv:2301.00808",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-02-19T07:33:39Z" | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-22k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# ConvNeXt V2 (tiny-sized model)
ConvNeXt V2 model pretrained using the FCMAE framework and fine-tuned on the ImageNet-22K dataset at resolution 224x224. It was introduced in the paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Woo et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt-V2).
Disclaimer: The team releasing ConvNeXT V2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
ConvNeXt V2 is a pure convolutional model (ConvNet) that introduces a fully convolutional masked autoencoder framework (FCMAE) and a new Global Response Normalization (GRN) layer to ConvNeXt. ConvNeXt V2 significantly improves the performance of pure ConvNets on various recognition benchmarks.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnextv2) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image from the `huggingface/cats-image` dataset into one of the ImageNet-22k classes:
```python
from transformers import AutoImageProcessor, ConvNextV2ForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
preprocessor = AutoImageProcessor.from_pretrained("facebook/convnextv2-tiny-22k-224")
model = ConvNextV2ForImageClassification.from_pretrained("facebook/convnextv2-tiny-22k-224")
inputs = preprocessor(image, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
# the model predicts one of the ImageNet-22k classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnextv2).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2301-00808,
author = {Sanghyun Woo and
Shoubhik Debnath and
Ronghang Hu and
Xinlei Chen and
Zhuang Liu and
In So Kweon and
Saining Xie},
title = {ConvNeXt {V2:} Co-designing and Scaling ConvNets with Masked Autoencoders},
journal = {CoRR},
volume = {abs/2301.00808},
year = {2023},
url = {https://doi.org/10.48550/arXiv.2301.00808},
doi = {10.48550/arXiv.2301.00808},
eprinttype = {arXiv},
eprint = {2301.00808},
timestamp = {Tue, 10 Jan 2023 15:10:12 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2301-00808.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
mradermacher/Mistral-7B-Holodeck-1-GGUF | mradermacher | "2024-06-11T08:42:35Z" | 3,186 | 0 | transformers | [
"transformers",
"gguf",
"pytorch",
"mistral",
"finetuned",
"en",
"base_model:KoboldAI/Mistral-7B-Holodeck-1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-10T18:31:37Z" | ---
base_model: KoboldAI/Mistral-7B-Holodeck-1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- pytorch
- mistral
- finetuned
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/KoboldAI/Mistral-7B-Holodeck-1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Mistral-7B-Holodeck-1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Holodeck-1-GGUF/resolve/main/Mistral-7B-Holodeck-1.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Holodeck-1-GGUF/resolve/main/Mistral-7B-Holodeck-1.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Holodeck-1-GGUF/resolve/main/Mistral-7B-Holodeck-1.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Holodeck-1-GGUF/resolve/main/Mistral-7B-Holodeck-1.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Holodeck-1-GGUF/resolve/main/Mistral-7B-Holodeck-1.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Holodeck-1-GGUF/resolve/main/Mistral-7B-Holodeck-1.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Holodeck-1-GGUF/resolve/main/Mistral-7B-Holodeck-1.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Holodeck-1-GGUF/resolve/main/Mistral-7B-Holodeck-1.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Holodeck-1-GGUF/resolve/main/Mistral-7B-Holodeck-1.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Holodeck-1-GGUF/resolve/main/Mistral-7B-Holodeck-1.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Holodeck-1-GGUF/resolve/main/Mistral-7B-Holodeck-1.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Holodeck-1-GGUF/resolve/main/Mistral-7B-Holodeck-1.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Holodeck-1-GGUF/resolve/main/Mistral-7B-Holodeck-1.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Holodeck-1-GGUF/resolve/main/Mistral-7B-Holodeck-1.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Holodeck-1-GGUF/resolve/main/Mistral-7B-Holodeck-1.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/FusionTruthful-Gk-MoE-7b-slerp-GGUF | mradermacher | "2024-06-05T07:22:24Z" | 3,184 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"powermove72/FusionTruthful-Gk-MoE-13b-slerp",
"en",
"base_model:powermove72/FusionTruthful-Gk-MoE-7b-slerp",
"endpoints_compatible",
"region:us"
] | null | "2024-06-05T06:58:11Z" | ---
base_model: powermove72/FusionTruthful-Gk-MoE-7b-slerp
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- powermove72/FusionTruthful-Gk-MoE-13b-slerp
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/powermove72/FusionTruthful-Gk-MoE-7b-slerp
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/FusionTruthful-Gk-MoE-7b-slerp-GGUF/resolve/main/FusionTruthful-Gk-MoE-7b-slerp.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/FusionTruthful-Gk-MoE-7b-slerp-GGUF/resolve/main/FusionTruthful-Gk-MoE-7b-slerp.IQ3_XS.gguf) | IQ3_XS | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/FusionTruthful-Gk-MoE-7b-slerp-GGUF/resolve/main/FusionTruthful-Gk-MoE-7b-slerp.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/FusionTruthful-Gk-MoE-7b-slerp-GGUF/resolve/main/FusionTruthful-Gk-MoE-7b-slerp.IQ3_S.gguf) | IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/FusionTruthful-Gk-MoE-7b-slerp-GGUF/resolve/main/FusionTruthful-Gk-MoE-7b-slerp.IQ3_M.gguf) | IQ3_M | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/FusionTruthful-Gk-MoE-7b-slerp-GGUF/resolve/main/FusionTruthful-Gk-MoE-7b-slerp.Q3_K_M.gguf) | Q3_K_M | 3.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/FusionTruthful-Gk-MoE-7b-slerp-GGUF/resolve/main/FusionTruthful-Gk-MoE-7b-slerp.Q3_K_L.gguf) | Q3_K_L | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/FusionTruthful-Gk-MoE-7b-slerp-GGUF/resolve/main/FusionTruthful-Gk-MoE-7b-slerp.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/FusionTruthful-Gk-MoE-7b-slerp-GGUF/resolve/main/FusionTruthful-Gk-MoE-7b-slerp.Q4_K_S.gguf) | Q4_K_S | 3.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/FusionTruthful-Gk-MoE-7b-slerp-GGUF/resolve/main/FusionTruthful-Gk-MoE-7b-slerp.Q4_K_M.gguf) | Q4_K_M | 4.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/FusionTruthful-Gk-MoE-7b-slerp-GGUF/resolve/main/FusionTruthful-Gk-MoE-7b-slerp.Q5_K_S.gguf) | Q5_K_S | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/FusionTruthful-Gk-MoE-7b-slerp-GGUF/resolve/main/FusionTruthful-Gk-MoE-7b-slerp.Q5_K_M.gguf) | Q5_K_M | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/FusionTruthful-Gk-MoE-7b-slerp-GGUF/resolve/main/FusionTruthful-Gk-MoE-7b-slerp.Q6_K.gguf) | Q6_K | 5.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/FusionTruthful-Gk-MoE-7b-slerp-GGUF/resolve/main/FusionTruthful-Gk-MoE-7b-slerp.Q8_0.gguf) | Q8_0 | 7.1 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/FusionTruthful-Gk-MoE-7b-slerp-GGUF/resolve/main/FusionTruthful-Gk-MoE-7b-slerp.f16.gguf) | f16 | 13.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
huspacy/hu_core_news_lg | huspacy | "2023-10-27T17:59:14Z" | 3,182 | 4 | spacy | [
"spacy",
"token-classification",
"hu",
"license:cc-by-sa-4.0",
"model-index",
"region:us"
] | token-classification | "2022-03-02T23:29:05Z" | ---
tags:
- spacy
- token-classification
language:
- hu
license: cc-by-sa-4.0
model-index:
- name: hu_core_news_lg
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.8714565876
- name: NER Recall
type: recall
value: 0.8593530239
- name: NER F Score
type: f_score
value: 0.8653624856
- task:
name: TAG
type: token-classification
metrics:
- name: TAG (XPOS) Accuracy
type: accuracy
value: 0.9688501842
- task:
name: POS
type: token-classification
metrics:
- name: POS (UPOS) Accuracy
type: accuracy
value: 0.9670319154
- task:
name: MORPH
type: token-classification
metrics:
- name: Morph (UFeats) Accuracy
type: accuracy
value: 0.9362618432
- task:
name: LEMMA
type: token-classification
metrics:
- name: Lemma Accuracy
type: accuracy
value: 0.9759831595
- task:
name: UNLABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Unlabeled Attachment Score (UAS)
type: f_score
value: 0.8332400672
- task:
name: LABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Labeled Attachment Score (LAS)
type: f_score
value: 0.76922216
- task:
name: SENTS
type: token-classification
metrics:
- name: Sentences F-Score
type: f_score
value: 0.9821428571
---
Core Hungarian model for HuSpaCy. Components: tok2vec, senter, tagger, morphologizer, lemmatizer, parser, ner
| Feature | Description |
| --- | --- |
| **Name** | `hu_core_news_lg` |
| **Version** | `3.7.0` |
| **spaCy** | `>=3.7.0,<3.8.0` |
| **Default Pipeline** | `tok2vec`, `senter`, `tagger`, `morphologizer`, `lookup_lemmatizer`, `trainable_lemmatizer`, `parser`, `ner` |
| **Components** | `tok2vec`, `senter`, `tagger`, `morphologizer`, `lookup_lemmatizer`, `trainable_lemmatizer`, `parser`, `ner` |
| **Vectors** | -1 keys, 200000 unique vectors (300 dimensions) |
| **Sources** | [UD Hungarian Szeged](https://universaldependencies.org/treebanks/hu_szeged/index.html) (Richárd Farkas, Katalin Simkó, Zsolt Szántó, Viktor Varga, Veronika Vincze (MTA-SZTE Research Group on Artificial Intelligence))<br>[NYTK-NerKor Corpus](https://github.com/nytud/NYTK-NerKor) (Eszter Simon, Noémi Vadász (Department of Language Technology and Applied Linguistics))<br>[Szeged NER Corpus](https://rgai.inf.u-szeged.hu/node/130) (György Szarvas, Richárd Farkas, László Felföldi, András Kocsor, János Csirik (MTA-SZTE Research Group on Artificial Intelligence))<br>[Hungarian lg Floret vectors](https://huggingface.co/huspacy/hu_vectors_web_lg) (Szeged AI) |
| **License** | `cc-by-sa-4.0` |
| **Author** | [SzegedAI, MILAB](https://github.com/huspacy/huspacy) |
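A minimal usage sketch (assuming the `hu_core_news_lg` package has already been installed in the environment, for example following HuSpaCy's install instructions; the example sentence is a placeholder):
```python
import spacy

nlp = spacy.load("hu_core_news_lg")
doc = nlp("Az alma nem esik messze a fájától.")

for token in doc:
    print(token.text, token.lemma_, token.pos_, token.dep_)
for ent in doc.ents:
    print(ent.text, ent.label_)
```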
### Label Scheme
<details>
<summary>View label scheme (1209 labels for 4 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `ADJ`, `ADP`, `ADV`, `AUX`, `CCONJ`, `DET`, `INTJ`, `NOUN`, `NUM`, `PART`, `PRON`, `PROPN`, `PUNCT`, `SCONJ`, `SYM`, `VERB`, `X` |
| **`morphologizer`** | `Definite=Def\|POS=DET\|PronType=Art`, `Case=Ine\|Number=Sing\|POS=NOUN`, `POS=ADV`, `Case=Nom\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Sing\|POS=ADJ\|VerbForm=PartPres`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=NOUN`, `Definite=Ind\|POS=DET\|PronType=Tot`, `Case=Ade\|Number=Sing\|POS=NOUN`, `Case=Nom\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `POS=PUNCT`, `Case=Nom\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Definite=Ind\|POS=DET\|PronType=Ind`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `POS=ADP`, `POS=CCONJ`, `Case=Del\|Number=Sing\|POS=NOUN`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Sbl\|Number=Sing\|POS=NOUN`, `Case=Nom\|Number=Sing\|POS=ADJ\|VerbForm=PartPast`, `Case=Del\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Sing\|POS=PROPN`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Number=Sing\|POS=NOUN`, `Case=Sup\|Number=Sing\|POS=PROPN`, `Case=Ess\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ine\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Sup\|Number=Plur\|POS=NOUN`, `Degree=Pos\|POS=ADV`, `Case=Sup\|Number=Sing\|POS=NOUN`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Cau\|Number=Plur\|POS=NOUN`, `Case=Cau\|Number=Sing\|POS=NOUN`, `Case=Gen\|Number=Sing\|POS=NOUN`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Tra\|Number=Sing\|POS=ADJ\|VerbForm=PartPres`, `Case=Nom\|Number=Plur\|POS=NOUN`, `Case=Cau\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Definite=Def\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ins\|Number=Sing\|POS=NOUN`, `POS=ADV\|PronType=Neg`, `Case=Ine\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `POS=SCONJ`, `Case=Acc\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Sbl\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Abl\|Number=Sing\|POS=NOUN`, `Case=Dat\|Number=Sing\|POS=NOUN`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|Voice=Act`, `POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Nom\|Degree=Sup\|Number=Sing\|POS=ADJ`, `POS=ADV\|PronType=Dem`, `Case=Ins\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ade\|Degree=Pos\|Number=Sing\|POS=ADJ`, `POS=ADV\|PronType=Int`, `Case=Tra\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Definite=Ind\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, 
`Case=Sbl\|Number=Sing\|POS=PROPN`, `Case=Sbl\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Definite=Ind\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `POS=PART`, `Case=Sup\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `POS=ADV\|PronType=Tot`, `Case=Ill\|Definite=Ind\|POS=DET\|PronType=Ind`, `Number=Sing\|POS=VERB\|Person=3\|VerbForm=Inf\|Voice=Act`, `Case=Ill\|Number=Sing\|POS=NOUN`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Nom\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|Voice=Act`, `Definite=Ind\|Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Sup\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Ade\|Number=Sing\|POS=ADJ\|VerbForm=PartPast`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ess\|Number=Sing\|POS=ADJ\|VerbForm=PartPres`, `Case=Acc\|Number=Sing\|POS=PROPN`, `Case=Nom\|Number=Sing\|POS=ADJ\|VerbForm=PartFut`, `Case=Ine\|NumType=Card\|Number=Sing\|POS=NUM`, `Definite=Ind\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Number=Plur\|POS=NOUN`, `Case=Del\|Number=Plur\|POS=NOUN`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Tra\|Number=Sing\|POS=NOUN`, `Case=Sup\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Definite=Ind\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Def\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Definite=Ind\|POS=DET\|PronType=Art`, `Case=Dat\|Number=Plur\|POS=NOUN`, `Case=Ins\|Number=Plur\|POS=NOUN`, `Case=Sbl\|Number=Plur\|POS=NOUN`, `Case=Ela\|Number=Sing\|POS=NOUN`, `Definite=Ind\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=All\|Number=Sing\|POS=NOUN`, `Case=Ine\|Number=Plur\|POS=NOUN`, `Case=Dat\|Number=Plur\|POS=ADJ\|VerbForm=PartPres`, `Case=Ela\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Abl\|Number=Sing\|POS=PROPN`, `Case=Cau\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Ins\|Number=Sing\|POS=PROPN`, `Case=Ess\|Number=Sing\|POS=ADJ\|VerbForm=PartPast`, `Number=Plur\|POS=VERB\|Person=3\|VerbForm=Inf\|Voice=Act`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=All\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Abl\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, 
`Case=Dat\|Degree=Pos\|Number=Plur\|POS=ADJ`, `POS=ADV\|PronType=Rel`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Cau`, `Case=Del\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Gen\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Ill\|Number=Plur\|POS=NOUN`, `Case=Ela\|Number=Plur\|POS=NOUN`, `Case=Ill\|Number=Sing\|POS=PROPN`, `Case=Ela\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Definite=Def\|POS=DET\|PronType=Ind`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ter\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `POS=ADV\|VerbForm=Conv`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Sup\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Aspect=Iter\|Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Aspect=Iter\|Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Ind\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Definite=Def\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Dis\|Number=Sing\|POS=NOUN`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Ade\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=All\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Number=Plur\|POS=ADJ\|VerbForm=PartPast`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Plur\|POS=PROPN`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Cau\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Dat\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ine\|Number=Sing\|POS=PROPN`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Cau`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Cau`, `Case=Abs\|Number=Sing\|POS=NOUN`, `Case=Ade\|Number=Sing\|POS=PROPN`, `Case=Ins\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Sup\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Number=Sing\|POS=PROPN`, `Case=Del\|Number=Sing\|POS=PROPN`, `Case=Sbl\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Loc\|Number=Sing\|POS=NOUN`, `Case=Acc\|Definite=Ind\|POS=DET\|PronType=Ind`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Definite=Ind\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, 
`Case=Del\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Definite=Ind\|Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|Voice=Act`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Ter\|Number=Sing\|POS=NOUN`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `POS=X`, `Definite=Def\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Definite=Ind\|Mood=Imp\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Del\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Neg`, `Case=Tra\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Degree=Pos\|POS=ADV\|PronType=Dem`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|Reflex=Yes`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Del\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Definite=Ind\|Mood=Cnd,Pot\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ade\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|PronType=Neg`, `Definite=Def\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Aspect=Iter\|Definite=Def\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Ine\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Definite=Def\|Mood=Cnd,Pot\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Sbl\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=All\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ess\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Definite=Ind\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Dat\|Number=Sing\|POS=PROPN`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Number=Plur\|POS=ADJ\|VerbForm=PartPres`, `Case=Sbl\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Ess\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Acc\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3\|VerbForm=PartPast`, `Definite=Def\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Cau`, `Definite=Ind\|POS=DET\|PronType=Neg`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ter\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Sup\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Definite=Def\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Del\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Definite=Def\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, 
`Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Nom\|Number=Plur\|POS=ADJ\|VerbForm=PartPast`, `Case=Cau\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Definite=Ind\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Cau`, `Case=Acc\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Number=Plur\|POS=ADJ\|VerbForm=PartPast`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Cau\|Number=Sing\|POS=PROPN`, `Case=Abs\|Number=Sing\|POS=ADJ\|VerbForm=PartPres`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Ine\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Cau`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ess\|Number=Sing\|POS=NOUN`, `Case=Ter\|Number=Plur\|POS=NOUN`, `Case=Tem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=INTJ`, `Case=Ine\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Number=Plur\|POS=VERB\|Person=1\|VerbForm=Inf\|Voice=Act`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Definite=Ind\|Mood=Pot\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=All\|Number=Sing\|POS=PROPN`, `Case=Ter\|Number=Sing\|POS=PROPN`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Sbl\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|Voice=Act`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Sup\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Neg`, `Case=Sbl\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Definite=Ind\|Mood=Imp\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Cau`, `Definite=Ind\|Mood=Cnd\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Definite=Ind\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Sbl\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Definite=Def\|POS=DET\|PronType=Prs`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ill\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Del\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Acc\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, 
`Definite=Ind\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Cau`, `Definite=Ind\|Mood=Imp,Pot\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Definite=Def\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Definite=Def\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Definite=Ind\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Def\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Cau\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ins\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Ins\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Definite=2\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Case=Sbl\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Definite=Ind\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=All\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=All\|Number=Plur\|POS=NOUN`, `Case=Ela\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Case=Abs\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ine\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Ine\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Reflex=Yes`, `Case=Ins\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `POS=AUX\|VerbForm=Inf\|Voice=Act`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Definite=Def\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Def\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ela\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=All\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Aspect=Iter\|Definite=Ind\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Ter\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Gen\|Number=Plur\|POS=NOUN`, `Case=Tem\|Number=Sing\|POS=NOUN`, 
`Case=Del\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ade\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ins\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `POS=ADV\|PronType=Ind`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Definite=Def\|POS=DET\|PronType=Int`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Definite=Ind\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Abs\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|POS=PROPN`, `Case=Del\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Acc\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc\|Number=Plur\|POS=PROPN`, `Case=Abl\|NumType=Card\|Number=Sing\|POS=NUM`, `Definite=Def\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Abs\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Definite=Def\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Abl\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Ela\|Number=Sing\|POS=PROPN`, `Case=Ade\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ela\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Sbl\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Definite=Def\|Mood=Imp,Pot\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Def\|POS=DET\|PronType=Tot`, `Definite=Def\|POS=DET\|PronType=Neg`, `Case=Ins\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Ine\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Sup\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Sbl\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Dat\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|Voice=Act`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Cau`, `Case=Sbl\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Tra\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Del\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Ess\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ess\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Sup\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Case=Acc\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=ADJ\|VerbForm=PartPres`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=ADJ\|VerbForm=PartPast`, 
`Case=Ess\|Degree=Pos\|Number=Sing\|POS=ADJ\|VerbForm=PartPres`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Degree=Cmp\|POS=ADV`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=All\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ela\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Cau\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ins\|NumType=Card\|Number=Sing\|POS=NUM`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=ADJ\|VerbForm=PartFut`, `Case=Dat\|Degree=Pos\|Number=Plur\|POS=ADJ\|VerbForm=PartPast`, `Degree=Sup\|POS=ADV`, `Case=Acc\|NumType=Card\|Number=Sing\|POS=NUM`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ade\|Number=Plur\|POS=NOUN`, `Case=Acc\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Tra\|Degree=Pos\|Number=Sing\|POS=ADJ\|VerbForm=PartPres`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=ADJ\|VerbForm=PartPres`, `Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Inf\|Voice=Act`, `Case=All\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Cau\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Nom\|Degree=Pos\|Number=Sing\|Number[psed]=Sing\|POS=ADJ`, `Case=Nom\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ela\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Cau`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Ine\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Mood=Pot\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Ela\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ade\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Definite=Def\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Cau`, `Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Inf\|Voice=Act`, `Case=Ela\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Sbl\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Definite=Ind\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Cau`, `Case=Ade\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Dat\|Degree=Pos\|Number=Plur\|POS=ADJ\|VerbForm=PartPres`, `Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Inf\|Voice=Act`, `Case=Ine\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Number=Plur\|POS=ADV\|Person=1\|PronType=PrsPron`, `POS=ADV\|PronType=v`, `Definite=Ind\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Abl\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Number=Sing\|POS=ADV\|Person=3\|PronType=PrsPron`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|NumType[sem]=Time\|Number=Sing\|POS=NUM`, `Case=Tem\|NumType[sem]=Time\|Number=Sing\|POS=NUM`, 
`Case=Abl\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Number=Sing\|POS=ADV\|Person=1\|PronType=PrsPron`, `Case=Ter\|NumType[sem]=Time\|Number=Sing\|POS=NUM`, `Case=Ill\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Number=Sing\|POS=VERB\|Person=1\|VerbForm=Inf\|Voice=Act`, `Case=Ine\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=Ine\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Number=Plur\|POS=ADV\|Person=3\|PronType=PrsPron`, `Case=Ins\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ela\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ter\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Sbl\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Cas=6\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ela\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Sup\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Ade\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Definite=Ind\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Sup\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Sbl\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Del\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Nom\|NumType=Dist\|Number=Sing\|POS=NUM`, `Case=Sup\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Definite=Ind\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Sbl\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|Voice=Act`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|Voice=Act`, `Case=Acc\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=1`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Number=Plur\|POS=ADV\|Person=2\|PronType=PrsPron`, `Case=Ine\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|Voice=Act`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=All\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Dat\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Definite=Ind\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Sup\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Gen\|Degree=Cmp\|Number=Plur\|POS=ADJ`, 
`Case=Nom\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Del\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Del\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Nom\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Case=All\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=All\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Abl\|Number=Plur\|POS=NOUN`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ade\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Cas=6\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=2\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ess\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Sbl\|NumType[sem]=Time\|Number=Sing\|POS=NUM`, `Case=All\|Number=Plur\|POS=PROPN`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|PronType=Ind`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Acc\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ine\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Ade\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Ins\|Number=Plur\|POS=PROPN`, `Case=Nom\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Sup\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Definite=Def\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=All\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Nom\|NumType[sem]=Dot\|Number=Sing\|POS=NUM`, `Case=Sup\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Degree=Pos\|POS=ADV\|PronType=Ind`, `Case=Ela\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|Voice=Act`, `Case=Ade\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Sup\|NumType[sem]=Time\|Number=Sing\|POS=NUM`, `Case=Gen\|Number=Plur\|POS=PROPN`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Ins\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Ill\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, 
`Case=Del\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Number=Sing\|POS=ADV\|Person=2\|PronType=PrsPron`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Gen\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Ill\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Degree=Cmp\|POS=ADV\|PronType=Dem`, `Case=Ins\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Sup\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Ins\|NumType=Card\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ill\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Dat\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Del\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Tot`, `Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Acc\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=1`, `Case=Sbl\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Tem\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Tem\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|NumType[sem]=Result\|Number=Sing\|POS=NUM`, `Case=Del\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Acc\|Number=Plur\|Number[psed]=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Cau\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Ade\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Dat\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|PronType=Tot`, `Case=Sbl\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ela\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Sup\|Number=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Del\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Del\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Dat\|Degree=Cmp\|Number=Sing\|POS=ADJ`, 
`Definite=2\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Com\|Number=Sing\|POS=NOUN`, `Case=Tra\|Number=Plur\|POS=NOUN`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Tot`, `Case=Ade\|Number=Plur\|POS=PROPN`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Ess\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Dat\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Definite=Def\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Sbl\|NumType[sem]=Quotient\|Number=Sing\|POS=NUM`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int`, `Case=Del\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Del\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Sup\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Ade\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ins\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Ess\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Del\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Cau\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Tem\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ill\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Ill\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Nom\|Degree=Pos\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person[psor]=1`, `Case=Del\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Del\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Gen\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=1`, `Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Dat\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Gen\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Ins\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Dat\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=1`, `Case=Sbl\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Acc\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Cau\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Definite=Def\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Sbl\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Sup\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Tem\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ins\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Tra\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Abs\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=All\|Number=Plur\|POS=PRON\|Person=1\|PronType=Tot`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|PronType=Ind`, `Case=Ine\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Sup\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=All\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Cas=1\|Number=Sing\|POS=PROPN`, 
`Definite=Ind\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|Voice=Act`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=2\|Reflexive=Yes`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|Reflexive=Yes`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=1`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Sbl\|NumType[sem]=Result\|Number=Sing\|POS=NUM`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Sbl\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|Voice=Act`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Ela\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Dat\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Tot`, `Definite=Ind\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Cau\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Acc\|Number=Sing\|Number[psed]=Sing\|POS=PROPN`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1\|PronType=Tot`, `Case=Abl\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Tra\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Cau\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Sup\|Number=Plur\|POS=PROPN`, `Case=Ess\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Definite=Def\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dis\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ill\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Tot`, `Case=Nom\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=1`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Cas=6\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int`, `Case=Sup\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Sbl\|Number=Sing\|Number[psed]=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Sup\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Abl\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Gen\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Abs\|Number=Plur\|POS=NOUN`, `Case=Sup\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|PronType=Tot`, `Case=Ine\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Tra\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Sbl\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, 
`Case=Ins\|Degree=Cmp\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Ade\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Sbl\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=All\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Abl\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PROPN\|Person[psor]=1`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Del\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Del\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=All\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Ade\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ter\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=2\|Reflexive=Yes`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Sup\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ine\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Sbl\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Definite=Ind\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Acc\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Cau\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Gen\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ade\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=All\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ill\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Definite=2\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Cas=6\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Cau\|Number=Plur\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Abl\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Abs\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Acc\|Number=Plur\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|NumType=Card\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Cau\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=All\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Sup\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Ade\|Number=Plur\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Ill\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Del\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Plur\|Number[psed]=Sing\|POS=NOUN`, `Case=Cau\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Cau\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Number=Sing\|POS=VERB\|Person=2\|VerbForm=Inf\|Voice=Act`, `Case=Ine\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Acc\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Cau\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Ine\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=Ela\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Sup\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, 
`Case=Cau\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Sbl\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Ter\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Tra\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Cau\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Definite=Ind\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Tot`, `Case=Acc\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Dat\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Dat\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Abl\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Tem\|Number=Plur\|POS=NOUN`, `Case=Abs\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Sbl\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ins\|Number=Sing\|Number[psed]=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=All\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Acc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=All\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Plur\|Number[psed]=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Ade\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ade\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=2\|PronType=Tot`, `Case=Abl\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Cau\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Del\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Cau\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ill\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Ade\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Ade\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|Reflexive=Yes`, `Case=Ins\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=All\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ess\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Cau\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Cas=6\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Tra\|NumType=Card\|Number=Sing\|POS=NUM`, `Number=Plur\|POS=VERB\|Person=2\|VerbForm=Inf\|Voice=Act`, `Case=Nom\|Degree=Pos\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Cas=6\|Number=Sing\|POS=NOUN`, `Case=Ins\|Number=Sing\|Number[psed]=Sing\|POS=PROPN`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Ela\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Sup\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=Abl\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|PronType=Int`, 
`Case=Ill\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Case=Del\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Tra\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Sbl\|NumType=Card\|Number=Plur\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Acc\|NumType=Card\|Number=Plur\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Int`, `Case=Nom\|Number=Plur\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=2\|Reflexive=Yes`, `Case=Abl\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Gen\|NumType=Card\|Number=Plur\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=All\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Acc\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=Tra\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Sbl\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Ins\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1\|PronType=Dem`, `Case=Nom\|Degree=Cmp\|Number=Plur\|Number[psed]=Sing\|POS=ADJ`, `Case=Acc\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Cau\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=2\|PronType=Ind`, `Case=All\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Tem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Cau\|Number=Plur\|POS=PROPN`, `Case=Ade\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ade\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=All\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Del\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|Voice=Act`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|Voice=Act`, `Case=Sup\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Case=Tra\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Sup\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Nom\|NumType=Card\|Number=Plur\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Sbl\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Nom\|Number=Plur\|Number[psed]=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=All\|Number=Plur\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ill\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ela\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Ela\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Del\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, 
`Case=Sup\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Tra\|Number=Plur\|Number[psed]=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ter\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Tot`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=All\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=Ter\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=All\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|Voice=Act`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Cau\|Number=Plur\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Ins\|Number=Plur\|Number[psed]=Sing\|POS=NOUN`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|Poss=Yes\|PronType=Prs`, `Case=Del\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Tem\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Del\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Sup\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Ter\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Sup\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ine\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Abs\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=All\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=Sup\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Cau\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Sup\|NumType=Ord\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Sup\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Abl\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Int`, `Case=Ela\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Dat\|NumType=Ord\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Ill\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=All\|Number=Sing\|Number[psed]=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=1`, `Case=Ine\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=1`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=1`, `Case=Ela\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Ade\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ill\|Number=Sing\|Number[psor]=Plur\|POS=PROPN\|Person[psor]=1`, `Case=Ade\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=All\|Number=Sing\|Number[psed]=Sing\|POS=PROPN`, `Case=Acc\|Degree=Pos\|Number=Plur\|Number[psor]=Plur\|POS=ADJ\|Person[psor]=3`, `Case=Dat\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Nom\|Number=Sing\|POS=SYM\|Type=w`, `Case=Gen\|Number=Sing\|POS=SYM\|Type=w`, `Case=Abl\|Number=Sing\|POS=SYM\|Type=w`, `Case=Acc\|Number=Sing\|POS=SYM\|Type=w`, `Case=Ade\|Degree=Pos\|Number=Plur\|POS=ADJ`, 
`Case=All\|Number=Sing\|POS=SYM\|Type=w`, `Case=Tra\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Ins\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Abl\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Plur\|Number[psed]=Sing\|POS=ADJ`, `Case=Sup\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Sup\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Nom\|NumType[sem]=Quotient\|Number=Sing\|POS=NUM`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=PROPN\|Person[psor]=1`, `Case=Ins\|Number=Sing\|Number[psed]=Plur\|POS=NOUN`, `Case=Gen\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Ine\|Degree=Pos\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person[psor]=3`, `Case=Abs\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ela\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Dat\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Number=Plur\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|Number=Sing\|POS=SYM\|Type=o`, `Case=Gen\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Sup\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Nom\|NumType[sem]=Signed\|Number=Sing\|POS=NUM`, `Case=Com\|Number=Sing\|POS=PROPN`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Tot`, `Case=Ins\|Number=Sing\|Number[psed]=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ill\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Nom\|Number=Plur\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ins\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Gen\|NumType=Dist\|Number=Sing\|POS=NUM`, `Case=Nom\|NumType[sem]=Formula\|Number=Sing\|POS=NUM`, `Case=Del\|Number=Sing\|POS=SYM\|Type=w`, `Case=Ade\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Rel`, `Case=Ine\|Number=Plur\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person[psor]=3`, `Case=Ade\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc\|Number=Sing\|POS=SYM\|Type=o`, `Case=Ins\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ela\|Number=Sing\|POS=SYM\|Type=o`, `Case=Dat\|Degree=Pos\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=All\|Number=Plur\|Number[psed]=Sing\|POS=SYM\|Type=w`, `Case=Ade\|Number=Sing\|POS=SYM\|Type=w`, `Case=Sbl\|Number=Sing\|POS=SYM\|Type=w`, `Case=Ade\|NumType=Card\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ill\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Ine\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Acc\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Ill\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Acc\|Degree=Sup\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Sup\|NumType=Card\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Dat\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ine\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Tot`, `Case=Ill\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Sup\|Number=Sing\|POS=SYM\|Type=w`, `Case=Ine\|Degree=Pos\|Number=Plur\|POS=ADJ`, 
`Case=Gen\|Degree=Pos\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Ins\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Ela\|Number=Sing\|POS=SYM\|Type=w`, `Case=Sbl\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Nom\|Number=Sing\|POS=SYM\|Type=p`, `Case=Abl\|Number=Plur\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|NumType[sem]=Measure\|Number=Sing\|POS=NUM`, `Case=Abs\|Number=Sing\|POS=PROPN`, `Case=Ins\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Nom\|Number=Sing\|Number[psed]=Plur\|POS=NOUN`, `Case=Nom\|Number=Sing\|POS=SYM\|Type=m`, `Case=Acc\|Number=Sing\|POS=SYM\|Type=m`, `Case=Sup\|Number=Sing\|Number[psed]=Sing\|POS=PROPN`, `Case=Ine\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|POS=SYM\|Type=o`, `Case=Ins\|Number=Sing\|POS=SYM\|Type=o`, `Case=Ins\|Number=Sing\|POS=SYM\|Type=w`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int`, `Case=Acc\|Number=Sing\|Number[psed]=Plur\|POS=NOUN`, `Case=Gen\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Sbl\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int`, `Case=Abl\|Number=Sing\|Number[psed]=Sing\|POS=PROPN`, `Case=Acc\|Degree=Pos\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Abs\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Ill\|Number=Sing\|POS=SYM\|Type=w`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Abl\|NumType[sem]=Time\|Number=Sing\|POS=NUM`, `Case=Gen\|Degree=Sup\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Abs\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|Poss=Yes\|PronType=Prs`, `Case=Sup\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Tot`, `Case=Ins\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Tot`, `Case=Sup\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Abs\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Acc\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Acc\|NumType[sem]=Percent\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ter\|NumType[sem]=Percent\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Dat\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|NumType[sem]=Percent\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Acc\|NumType[sem]=Percent\|Number=Sing\|POS=NUM`, `Case=Ter\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ade\|NumType[sem]=Percent\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ins\|NumType[sem]=Percent\|Number=Sing\|POS=NUM`, `Case=Ins\|NumType[sem]=Percent\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Gen\|NumType[sem]=Percent\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Dat\|NumType[sem]=Percent\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Sbl\|NumType[sem]=Percent\|Number=Sing\|POS=NUM`, `Case=Ine\|NumType[sem]=Percent\|Number=Sing\|POS=NUM`, `Case=All\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ade\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Nom\|NumType[sem]=Percent\|Number=Sing\|POS=NUM`, 
`Case=All\|NumType[sem]=Percent\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Abl\|NumType=Card\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ter\|NumType=Card\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Acc\|NumType=Card\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ter\|NumType[sem]=Formula\|Number=Sing\|POS=NUM`, `Case=Sbl\|NumType[sem]=Percent\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=All\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Del\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Cau\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Gen\|Number=Plur\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Ins\|NumType=Ord\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Ade\|NumType=Frac\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=Ine\|NumType=Frac\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Sup\|NumType=Card\|Number=Plur\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Tra\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ine\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Gen\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Tra\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Gen\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Gen\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Tem\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Dat\|NumType[sem]=Dot\|Number=Sing\|POS=NUM`, `Case=Sbl\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=All\|Number=Sing\|Number[psed]=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ine\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=All\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Sbl\|Number=Plur\|POS=PROPN`, `Case=Tra\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Sup\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Dat\|Number=Plur\|Number[psed]=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Dat\|Number=Sing\|POS=SYM\|Type=w`, `Case=Ill\|Number=Plur\|POS=PROPN`, `Case=Loc\|Number=Sing\|POS=PROPN`, `Case=Ess\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Acc\|Degree=Pos\|Number=Plur\|Number[psed]=Sing\|POS=ADJ`, `Case=Abl\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=All\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Ade\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Ine\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Ine\|Number=Sing\|POS=SYM\|Type=w`, `Case=Cau\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ela\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Abs\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Sbl\|NumType[sem]=Dot\|Number=Sing\|POS=NUM`, `Case=Tem\|Number=Sing\|POS=PROPN`, `Case=Del\|NumType[sem]=Dot\|Number=Sing\|POS=NUM`, `Case=Ade\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Ine\|Number=Sing\|Number[psor]=Plur\|POS=PROPN\|Person[psor]=1`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Acc\|Degree=Sup\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Ade\|Number=Plur\|Number[psed]=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ela\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, 
`Case=Acc\|Number=Plur\|Number[psed]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Del\|Number=Plur\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|Degree=Sup\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Dat\|Number=Plur\|POS=PROPN`, `Case=Ill\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Sbl\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Ter\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ess\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Sup\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=2\|PronType=Tot`, `Case=Gen\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=Ine\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Int`, `Case=All\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Definite=Ind\|Mood=Imp\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|Voice=Act`, `Case=Tra\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ins\|NumType=Card\|Number=Plur\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Del\|Number=Sing\|POS=PRON\|Person=2\|Reflexive=Yes`, `Case=Sbl\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=1`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=2\|PronType=Ind`, `Case=All\|Number=Sing\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Sbl\|Number=Sing\|Number[psed]=Sing\|POS=PROPN`, `Case=Ill\|Number=Sing\|Number[psed]=Sing\|POS=PROPN`, `Case=Ine\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=Del\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Acc\|Number=Sing\|Number[psed]=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=2\|PronType=Tot`, `Case=Abl\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Int`, `Case=Ine\|Number=Sing\|Number[psed]=Sing\|POS=PROPN`, `Case=Cau\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Del\|Number=Plur\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Cau\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Nom\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=2`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Ine\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Definite=2\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Tot`, `Case=Ela\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Acc\|Number=Sing\|POS=SYM\|Type=p`, `Case=Abl\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Acc\|Number=Plur\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ine\|Number=Plur\|POS=PROPN`, `Case=Sbl\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Tot`, `Case=Ins\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|Poss=Yes\|PronType=Prs`, `Case=Ter\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Number=Plur\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, 
`Case=All\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Tot` |
| **`parser`** | `ROOT`, `acl`, `advcl`, `advmod`, `advmod:locy`, `advmod:mode`, `advmod:que`, `advmod:tfrom`, `advmod:tlocy`, `advmod:to`, `advmod:tto`, `amod:att`, `appos`, `aux`, `case`, `cc`, `ccomp`, `ccomp:obj`, `ccomp:obl`, `ccomp:pred`, `compound`, `compound:preverb`, `conj`, `cop`, `csubj`, `dep`, `det`, `flat:name`, `iobj`, `list`, `mark`, `nmod`, `nmod:att`, `nmod:obl`, `nsubj`, `nummod`, `obj`, `obj:lvc`, `obl`, `orphan`, `parataxis`, `punct`, `xcomp` |
| **`ner`** | `LOC`, `MISC`, `ORG`, `PER` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TOKEN_ACC` | 99.99 |
| `TOKEN_P` | 99.86 |
| `TOKEN_R` | 99.93 |
| `TOKEN_F` | 99.89 |
| `SENTS_P` | 98.43 |
| `SENTS_R` | 98.00 |
| `SENTS_F` | 98.21 |
| `TAG_ACC` | 96.89 |
| `POS_ACC` | 96.70 |
| `MORPH_ACC` | 93.63 |
| `MORPH_MICRO_P` | 97.03 |
| `MORPH_MICRO_R` | 95.98 |
| `MORPH_MICRO_F` | 96.50 |
| `LEMMA_ACC` | 97.60 |
| `DEP_UAS` | 83.32 |
| `DEP_LAS` | 76.92 |
| `ENTS_P` | 87.15 |
| `ENTS_R` | 85.94 |
| `ENTS_F` | 86.54 | |
cerebras/Cerebras-GPT-13B | cerebras | "2023-11-22T21:49:12Z" | 3,181 | 638 | transformers | [
"transformers",
"pytorch",
"gpt2",
"feature-extraction",
"causal-lm",
"text-generation",
"en",
"dataset:the_pile",
"arxiv:2304.03208",
"arxiv:2203.15556",
"arxiv:2101.00027",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] | text-generation | "2023-03-20T20:45:54Z" | ---
language:
- en
inference: false
tags:
- pytorch
- causal-lm
license: apache-2.0
datasets:
- the_pile
pipeline_tag: text-generation
---
# Cerebras-GPT 13B
Check out our [Blog Post](https://www.cerebras.net/cerebras-gpt) and [arXiv paper](https://arxiv.org/abs/2304.03208)!
## Model Description
The Cerebras-GPT family is released to facilitate research into LLM scaling laws using open architectures and data sets, and to demonstrate the simplicity and scalability of training LLMs on the Cerebras software and hardware stack. All Cerebras-GPT models are available on Hugging Face.
The family includes 111M, 256M, 590M, 1.3B, 2.7B, 6.7B, and 13B models.
All models in the Cerebras-GPT family have been trained in accordance with [Chinchilla scaling laws](https://arxiv.org/abs/2203.15556) (20 tokens per model parameter) which is compute-optimal.
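As a quick illustration of that rule of thumb, the following sketch (not Cerebras code) computes the approximate token budget implied by 20 tokens per parameter for each nominal model size; the exact token counts in the training table further below differ slightly because the nominal sizes are rounded.
```python
# Illustrative sketch: Chinchilla-style token budget at 20 tokens per parameter,
# using the nominal (rounded) model sizes of the Cerebras-GPT family.
nominal_params = {"111M": 111e6, "256M": 256e6, "590M": 590e6,
                  "1.3B": 1.3e9, "2.7B": 2.7e9, "6.7B": 6.7e9, "13B": 13e9}

for name, n_params in nominal_params.items():
    tokens = 20 * n_params  # Chinchilla rule of thumb
    print(f"{name}: ~{tokens:.2e} training tokens")  # e.g. 13B -> ~2.60e+11
```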
These models were trained on the [Andromeda](https://www.cerebras.net/andromeda/) AI supercomputer comprised of 16 CS-2 wafer scale systems. Cerebras' [weight streaming technology](https://www.cerebras.net/blog/linear-scaling-made-possible-with-weight-streaming) simplifies the training of LLMs by disaggregating compute from model storage. This allowed for efficient scaling of training across nodes using simple data parallelism.
Cerebras systems for pre-training and fine tuning are available in the cloud via the [Cerebras Model Studio](https://www.cerebras.net/product-cloud/). Cerebras CS-2 compatible checkpoints are available in [Cerebras Model Zoo](https://github.com/Cerebras/modelzoo).
## Model Details
* Developed by: [Cerebras Systems](https://www.cerebras.net/)
* License: Apache 2.0
* Model type: Transformer-based Language Model
* Architecture: GPT-3 style architecture
* Data set: The Pile
* Tokenizer: Byte Pair Encoding
* Vocabulary Size: 50257
* Sequence Length: 2048
* Optimizer: AdamW, (β1, β2) = (0.9, 0.95), adam_eps = 1e−8 (1e−9 for larger models)
* Positional Encoding: Learned
* Language: English
* Learn more: Dense Scaling Laws Paper for training procedure, config files, and details on how to use.
**Contact**: To ask questions about Cerebras-GPT models, join the [Cerebras Discord](https://discord.gg/q6bZcMWJVu).
This is the standard parameterization version of Cerebras-GPT with **13B** parameters.
Related models: [Cerebras-GPT Models](https://huggingface.co/models?sort=downloads&search=cerebras-gpt)
<br><br>
| Model | Parameters | Layers | d_model | Heads | d_head | d_ffn | LR | BS (seq) | BS (tokens) |
|---------------|------------|--------|---------|-------|--------|--------|----------|----------|----------------|
| Cerebras-GPT | 111M | 10 | 768 | 12 | 64 | 3072 | 6.0E-04 | 120 | 246K |
| Cerebras-GPT | 256M | 14 | 1088 | 17 | 64 | 4352 | 6.0E-04 | 264 | 541K |
| Cerebras-GPT | 590M | 18 | 1536 | 12 | 128 | 6144 | 2.0E-04 | 264 | 541K |
| Cerebras-GPT | 1.3B | 24 | 2048 | 16 | 128 | 8192 | 2.0E-04 | 528 | 1.08M |
| Cerebras-GPT | 2.7B | 32 | 2560 | 32 | 80 | 10240 | 2.0E-04 | 528 | 1.08M |
| Cerebras-GPT | 6.7B | 32 | 4096 | 32 | 128 | 16384 | 1.2E-04 | 1040 | 2.13M |
| Cerebras-GPT | 13B | 40 | 5120 | 40 | 128 | 20480 | 1.2E-04 | 720 → 1080 | 1.47M → 2.21M |
<br><br>
## Quickstart
This model can be easily loaded using the AutoModelForCausalLM functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("cerebras/Cerebras-GPT-13B")
model = AutoModelForCausalLM.from_pretrained("cerebras/Cerebras-GPT-13B")
text = "Generative AI is "
```
It can also be used with Hugging Face Pipelines:
```python
from transformers import pipeline
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
generated_text = pipe(text, max_length=50, do_sample=False, no_repeat_ngram_size=2)[0]
print(generated_text['generated_text'])
```
or with `model.generate()`
```python
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5,
max_new_tokens=50, early_stopping=True,
no_repeat_ngram_size=2)
text_output = tokenizer.batch_decode(outputs, skip_special_tokens=True)
print(text_output[0])
```
<br><br>
## Training data
Cerebras-GPT is trained using [the Pile](https://pile.eleuther.ai) dataset from [EleutherAI](https://www.eleuther.ai). See the [Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed breakdown of data sources and methodology. The Pile was cleaned using the ftfy library to normalize the text, then filtered using scripts provided by Eleuther.
We tokenized the data using byte-pair encoding using the GPT-2 vocabulary. Our tokenized version of the Pile has 371B tokens. We include more details about the training dataset preprocessing in Appendix A.1 of our paper.
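As a minimal sketch of the two preprocessing steps just described - ftfy normalization followed by GPT-2 BPE tokenization - the snippet below uses the public `ftfy` and `transformers` libraries; it is an illustration, not the actual Cerebras preprocessing pipeline.
```python
# Illustration only: ftfy text normalization followed by GPT-2 BPE tokenization,
# mirroring the preprocessing steps described above.
import ftfy
from transformers import AutoTokenizer

raw = "The cafÃ© is open"                     # typical mojibake ftfy can repair
clean = ftfy.fix_text(raw)                    # -> "The café is open"

tok = AutoTokenizer.from_pretrained("gpt2")   # GPT-2 BPE vocabulary (50257 entries)
ids = tok(clean)["input_ids"]
print(clean)
print(len(ids), "BPE tokens")
```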
Recent works find significant duplicate data present in the Pile. Eleuther’s Pythia applies a deduplication process to reduce replicated data, decreasing the Pile dataset size. Pythia was trained on both the standard dataset and deduplicated dataset to characterize the impact. Our models are trained on the standard Pile without deduplication, which may present an opportunity for further improvement with the deduplicated data set.
<br><br>
## Training procedure
We use the GPT-3 style model architecture. All of our layers use full attention, as opposed to the GPT-3 style sparse banded attention. The model shapes were selected to either follow an aspect ratio of 80 or match the shapes of the GPT-3 models. The learning rate was warmed up for 375M tokens (1,500 steps for the 111M and 256M models) and then cosine-decayed by 10x. No dropout was used and weight decay was set to 0.1. All models were trained with a maximum sequence length (MSL) of 2048.
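A minimal sketch of that schedule - linear warmup over 375M tokens, then a cosine decay to one tenth of the peak learning rate (one reading of "cosine-decayed by 10x") - is shown below, assuming the 13B settings from the tables (peak LR 1.2e-4, roughly 2.57e11 total tokens). This is an illustration, not the actual training code.
```python
import math

# Illustrative schedule: linear warmup then cosine decay to peak_lr / 10.
# Values assume the 13B row of the tables in this card; not the training code.
peak_lr = 1.2e-4
warmup_tokens = 375e6
total_tokens = 2.57e11
min_lr = peak_lr / 10

def lr_at(tokens_seen: float) -> float:
    if tokens_seen < warmup_tokens:                      # linear warmup
        return peak_lr * tokens_seen / warmup_tokens
    progress = (tokens_seen - warmup_tokens) / (total_tokens - warmup_tokens)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * progress))

print(lr_at(1e8), lr_at(1e11), lr_at(total_tokens))      # ramps up, then decays to min_lr
```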
All models were trained to the Chinchilla point: 20 tokens per model parameter. The number of steps was chosen based on the optimal batch size (which varied by model) and the fixed sequence length (2048). See the Training Table below for details.
<br>
Model Params | Sequence Length | Batch Size | Number of Steps | Tokens | Tokens per Parameter | Flops
------------ | -------------- | ---------- | --------------- | ------ | -------------------- | -----
111M | 2048 | 120 | 9037 | 2.22E+09 | 20 | 2.6E+18
256M | 2048 | 264 | 9468 | 5.12E+09 | 20 | 1.3E+19
590M | 2048 | 264 | 21836 | 1.18E+10 | 20 | 6.1E+19
1.3B | 2048 | 528 | 24334 | 2.63E+10 | 20 | 2.8E+20
2.7B | 2048 | 528 | 49041 | 5.30E+10 | 20 | 1.1E+21
6.7B | 2048 | 1040 | 62522 | 1.33E+11 | 20 | 6.3E+21
13B | 2048 | 720 | 174335 | 2.57E+11 | 20 | 2.3E+22
<br><br>
## Evaluations
We trained models from smallest to largest and fit a power law as we went along. The power law was helpful for extrapolating the validation loss of the next largest model we trained and provided confidence about whether the training run was going well.
We performed upstream (pre-training) evaluations of text prediction cross-entropy using the Pile validation and test splits. We performed downstream evaluations of text generation accuracy on standardized tasks using the [Eleuther lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). Results are compared against many publicly available large language models in Section 3 of the paper.
#### 0-shot Evaluation
| Model | Params | Training FLOPs | PILE test xent | Hella-Swag | PIQA | Wino-Grande | Lambada | ARC-e | ARC-c | OpenBookQA | Downstream Average |
| ------- | ----- | -------------- | -------------- | ---------- | ----- | ----------- | ------- | ----- | ----- | ---------- | ------------------ |
| Cerebras-GPT | 111M | 2.6E+18 | 2.566 | 0.268 | 0.594 | 0.488 | 0.194 | 0.380 | 0.166 | 0.118 | 0.315 |
| Cerebras-GPT | 256M | 1.3E+19 | 2.299 | 0.274 | 0.613 | 0.511 | 0.293 | 0.410 | 0.170 | 0.158 | 0.347 |
| Cerebras-GPT | 590M | 6.1E+19 | 2.184 | 0.291 | 0.627 | 0.498 | 0.366 | 0.464 | 0.190 | 0.158 | 0.370 |
| Cerebras-GPT | 1.3B | 2.8E+20 | 1.996 | 0.325 | 0.664 | 0.521 | 0.462 | 0.508 | 0.224 | 0.166 | 0.410 |
| Cerebras-GPT | 2.7B | 1.1E+21 | 1.834 | 0.386 | 0.701 | 0.559 | 0.567 | 0.571 | 0.246 | 0.206 | 0.462 |
| Cerebras-GPT | 6.7B | 6.3E+21 | 1.704 | 0.447 | 0.739 | 0.602 | 0.636 | 0.643 | 0.282 | 0.238 | 0.512 |
| Cerebras-GPT | 13B | 2.3E+22 | 1.575 | 0.513 | 0.766 | 0.646 | 0.696 | 0.714 | 0.367 | 0.286 | 0.570 |
#### 5-shot Evaluation
| Model | Params | Hella-Swag | PIQA | Wino-Grande | Lambada | ARC-e | ARC-c | OpenBookQA |
| -------- | ----- | ----------| ----- | ----------- | -------| ----- | ----- | ---------- |
| Cerebras-GPT | 111M | 0.267 | 0.588 | 0.475 | 0.158 | 0.356 | 0.166 | 0.136 |
| Cerebras-GPT | 256M | 0.278 | 0.606 | 0.522 | 0.225 | 0.422 | 0.183 | 0.164 |
| Cerebras-GPT | 590M | 0.291 | 0.634 | 0.479 | 0.281 | 0.475 | 0.206 | 0.152 |
| Cerebras-GPT | 1.3B | 0.326 | 0.668 | 0.536 | 0.395 | 0.529 | 0.241 | 0.174 |
| Cerebras-GPT | 2.7B | 0.382 | 0.697 | 0.543 | 0.487 | 0.590 | 0.267 | 0.224 |
| Cerebras-GPT | 6.7B | 0.444 | 0.736 | 0.590 | 0.591 | 0.667 | 0.314 | 0.270 |
| Cerebras-GPT | 13B | 0.514 | 0.768 | 0.674 | 0.655 | 0.743 | 0.398 | 0.318 |
<br><br>
## Uses and Limitations
### Intended Use
The primary intended use is to further research into large language models. These models can be used as foundation models for NLP applications, ethics, and alignment research. Our primary intended users are researchers who are working to improve LLMs and practitioners seeking reference implementations, training setups, hyperparameters, or pre-trained models. We release these models with a fully permissive Apache license for the community to use freely.
You may fine-tune and adapt Cerebras-GPT models for deployment via either Cerebras [Model Studio](https://www.cerebras.net/product-cloud/) or third-party libraries. Further safety-related testing and mitigations should be applied before using the Cerebras-GPT model family in production downstream applications.
Due to financial and compute budgets, Cerebras-GPT models were only trained and evaluated following the approaches described in the paper.
### Out of Scope Use
Cerebras-GPT models are trained on the Pile, which contains English-language text only, and are not suitable for machine translation tasks.
Cerebras-GPT models have not been tuned for human-facing dialog applications like chatbots and will not respond to prompts in a similar way to models that have received instruction tuning or reinforcement learning from human feedback (RLHF) like Flan-T5 or ChatGPT. Cerebras-GPT models can be tuned using those methods.
### Risk, Bias, Ethical Considerations
* **Data**: The Pile dataset has been thoroughly analyzed from various ethical standpoints such as toxicity analysis, gender bias, pejorative content, racially sensitive content, etc. Please refer to the Pile dataset references.
* **Human life**: The outputs from this model may or may not align with human values. The risk needs to be thoroughly investigated before deploying this model in a production environment where it can directly impact human life.
* **Risks and harms**: There can be distributional bias in the Pile dataset that can manifest in various forms in the downstream model deployment. There are other risks associated with large language models such as amplifying stereotypes, memorizing training data, or revealing private or secure information.
* **Mitigations**: Only mitigations in standard Pile dataset pre-processing were employed when pre-training Cerebras-GPT.
<br><br>
## Acknowledgements
We are thankful to all Cerebras engineers, past and present, that made this work possible. |
TheBloke/CausalLM-7B-GGUF | TheBloke | "2023-10-23T14:10:42Z" | 3,181 | 41 | transformers | [
"transformers",
"gguf",
"llama",
"llama2",
"qwen",
"text-generation",
"en",
"zh",
"dataset:JosephusCheung/GuanacoDataset",
"dataset:Open-Orca/OpenOrca",
"dataset:stingning/ultrachat",
"dataset:meta-math/MetaMathQA",
"dataset:liuhaotian/LLaVA-Instruct-150K",
"dataset:jondurbin/airoboros-3.1",
"dataset:WizardLM/WizardLM_evol_instruct_V2_196k",
"dataset:RyokoAI/ShareGPT52K",
"dataset:RyokoAI/Fandom23K",
"dataset:milashkaarshif/MoeGirlPedia_wikitext_raw_archive",
"dataset:wikipedia",
"dataset:wiki_lingua",
"dataset:fnlp/moss-003-sft-data",
"dataset:garage-bAInd/Open-Platypus",
"dataset:LDJnr/Puffin",
"dataset:openbmb/llava_zh",
"dataset:BAAI/COIG",
"dataset:TigerResearch/tigerbot-zhihu-zh-10k",
"dataset:liwu/MNBVC",
"dataset:teknium/openhermes",
"base_model:CausalLM/7B",
"license:wtfpl",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-22T16:34:43Z" | ---
base_model: CausalLM/7B
datasets:
- JosephusCheung/GuanacoDataset
- Open-Orca/OpenOrca
- stingning/ultrachat
- meta-math/MetaMathQA
- liuhaotian/LLaVA-Instruct-150K
- jondurbin/airoboros-3.1
- WizardLM/WizardLM_evol_instruct_V2_196k
- RyokoAI/ShareGPT52K
- RyokoAI/Fandom23K
- milashkaarshif/MoeGirlPedia_wikitext_raw_archive
- wikipedia
- wiki_lingua
- fnlp/moss-003-sft-data
- garage-bAInd/Open-Platypus
- LDJnr/Puffin
- openbmb/llava_zh
- BAAI/COIG
- TigerResearch/tigerbot-zhihu-zh-10k
- liwu/MNBVC
- teknium/openhermes
inference: false
language:
- en
- zh
license: wtfpl
model_creator: CausalLM
model_name: CausalLM 7B
model_type: llama
pipeline_tag: text-generation
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
tags:
- llama
- llama2
- qwen
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# CausalLM 7B - GGUF
- Model creator: [CausalLM](https://huggingface.co/CausalLM)
- Original model: [CausalLM 7B](https://huggingface.co/CausalLM/7B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [CausalLM's CausalLM 7B](https://huggingface.co/CausalLM/7B).
**NOTE**: The GGUFs originally uploaded here did not work due to a vocab issue. This was fixed on 23rd October, 15:00 UTC. The files uploaded now are confirmed to work.
Please re-download the GGUFs if you had downloaded the originally uploaded GGUF file(s).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/CausalLM-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/CausalLM-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/CausalLM-7B-GGUF)
* [CausalLM's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/CausalLM/7B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
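For programmatic use, the template can be assembled with a small helper like the one below (a minimal sketch; the example system message and prompt are placeholders).
```python
# Minimal sketch: build a ChatML prompt matching the template above.
def build_chatml(system_message: str, prompt: str) -> str:
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(build_chatml("You are a helpful assistant.", "Summarise GGUF in one sentence."))
```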
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `wtfpl`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [CausalLM's CausalLM 7B](https://huggingface.co/CausalLM/7B).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
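As a rough sanity check of the bpw figures quoted above, the sketch below recomputes them from the block parameters, assuming 256 weights per super-block plus an fp16 super-block scale (and, for "type-1" quants, an fp16 min). This is a simplification of the actual llama.cpp layouts, but it reproduces the quoted numbers for Q3_K through Q6_K.
```python
# Back-of-envelope check of the quoted bits-per-weight (bpw) figures.
# Assumes 256 weights per super-block and fp16 super-block scale/min overhead.
def bpw(bits, blocks, block_size, scale_bits, type1):
    weights = blocks * block_size                               # weights per super-block
    block_overhead = blocks * scale_bits * (2 if type1 else 1)  # per-block scales (+ mins)
    super_overhead = 16 * (2 if type1 else 1)                   # fp16 scale (+ min)
    return (bits * weights + block_overhead + super_overhead) / weights

print("Q3_K:", bpw(3, 16, 16, 6, False))   # 3.4375
print("Q4_K:", bpw(4, 8, 32, 6, True))     # 4.5
print("Q5_K:", bpw(5, 8, 32, 6, True))     # 5.5
print("Q6_K:", bpw(6, 16, 16, 8, False))   # 6.5625
```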
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [causallm_7b.Q2_K.gguf](https://huggingface.co/TheBloke/CausalLM-7B-GGUF/blob/main/causallm_7b.Q2_K.gguf) | Q2_K | 2 | 3.40 GB| 5.90 GB | smallest, significant quality loss - not recommended for most purposes |
| [causallm_7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/CausalLM-7B-GGUF/blob/main/causallm_7b.Q3_K_S.gguf) | Q3_K_S | 3 | 3.57 GB| 6.07 GB | very small, high quality loss |
| [causallm_7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/CausalLM-7B-GGUF/blob/main/causallm_7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.92 GB| 6.42 GB | very small, high quality loss |
| [causallm_7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/CausalLM-7B-GGUF/blob/main/causallm_7b.Q3_K_L.gguf) | Q3_K_L | 3 | 4.22 GB| 6.72 GB | small, substantial quality loss |
| [causallm_7b.Q4_0.gguf](https://huggingface.co/TheBloke/CausalLM-7B-GGUF/blob/main/causallm_7b.Q4_0.gguf) | Q4_0 | 4 | 4.51 GB| 7.01 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [causallm_7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/CausalLM-7B-GGUF/blob/main/causallm_7b.Q4_K_S.gguf) | Q4_K_S | 4 | 4.54 GB| 7.04 GB | small, greater quality loss |
| [causallm_7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/CausalLM-7B-GGUF/blob/main/causallm_7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.77 GB| 7.27 GB | medium, balanced quality - recommended |
| [causallm_7b.Q5_0.gguf](https://huggingface.co/TheBloke/CausalLM-7B-GGUF/blob/main/causallm_7b.Q5_0.gguf) | Q5_0 | 5 | 5.40 GB| 7.90 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [causallm_7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/CausalLM-7B-GGUF/blob/main/causallm_7b.Q5_K_S.gguf) | Q5_K_S | 5 | 5.40 GB| 7.90 GB | large, low quality loss - recommended |
| [causallm_7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/CausalLM-7B-GGUF/blob/main/causallm_7b.Q5_K_M.gguf) | Q5_K_M | 5 | 5.53 GB| 8.03 GB | large, very low quality loss - recommended |
| [causallm_7b.Q6_K.gguf](https://huggingface.co/TheBloke/CausalLM-7B-GGUF/blob/main/causallm_7b.Q6_K.gguf) | Q6_K | 6 | 6.34 GB| 8.84 GB | very large, extremely low quality loss |
| [causallm_7b.Q8_0.gguf](https://huggingface.co/TheBloke/CausalLM-7B-GGUF/blob/main/causallm_7b.Q8_0.gguf) | Q8_0 | 8 | 8.21 GB| 10.71 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/CausalLM-7B-GGUF and below it, a specific filename to download, such as: causallm_7b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/CausalLM-7B-GGUF causallm_7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/CausalLM-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/CausalLM-7B-GGUF causallm_7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m causallm_7b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/CausalLM-7B-GGUF", model_file="causallm_7b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
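The README also mentions llama-cpp-python as an option; a minimal sketch along the same lines is shown below. The path, context size and GPU layer count are placeholders - check the llama-cpp-python documentation for the options available in your version.
```python
# Minimal llama-cpp-python sketch (illustrative; adjust paths and settings).
from llama_cpp import Llama

llm = Llama(
    model_path="./causallm_7b.Q4_K_M.gguf",  # file downloaded as described above
    n_ctx=4096,                              # context length
    n_gpu_layers=32,                         # set to 0 without GPU acceleration
)

prompt = ("<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
          "<|im_start|>user\nWhat can you tell me about GGUF?<|im_end|>\n"
          "<|im_start|>assistant\n")
out = llm(prompt, max_tokens=128, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```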
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: CausalLM's CausalLM 7B

*Image drawn by GPT-4 DALL·E 3* TL;DR: Perhaps this 7B model is better than all existing models <= 33B in most quantitative evaluations...
# Please Stop Using WRONG unofficial quant models unless you know what you're doing
GPTQ quants require a good dataset for calibration, and the default C4 dataset is not suitable.
**llama.cpp GGUF models**
GPT2Tokenizer fixed by [Kerfuffle](https://github.com/KerfuffleV2) on [https://github.com/ggerganov/llama.cpp/pull/3743](https://github.com/ggerganov/llama.cpp/pull/3743), new models to be reuploaded.
## Read Me:
Also see [14B Version](https://huggingface.co/CausalLM/14B)
This model was trained based on the model weights of Qwen (and LLaMA2 was used, yes, for calculating some initial weights); depending on the situation, you may also need to comply with the commercial use restrictions of these two models. The training process utilized a model structure identical to LLaMA2, using the same attention calculation method as the original MHA LLaMA2 models, and no additional scaling was applied to the rotary position embeddings (RoPE).
We manually curated an SFT dataset of 1.3B tokens for training, utilizing open source datasets from Hugging Face. For most of these sentences, we performed manual or synthetic rewrites and generated alternate language versions using larger language models. Additionally, we conducted augmented text training using carefully selected entries from Wikipedia, as well as featured entries from Fandom and filtered entries from Moegirlpedia. In order to strike a balance between efficiency and quality, 100% of the data used for training was synthetic; no text from the internet or original texts from publicly available datasets was used directly for fine-tuning.
The 7B version of the model is a distilled version of the 14B model, specifically designed for speculative sampling. Therefore, it is important to exercise caution when directly using the model, as it may produce hallucinations or unreliable outputs.
Please note that the model was trained on unfiltered internet data. Since we do not have the capacity to vet all of it, there may be a substantial amount of objectionable content, pornography, violence, and offensive language present that we are unable to remove. Therefore, you will still need to complete your own checks on the model's safety and filter keywords in the output. Due to computational resource constraints, we are presently unable to implement RLHF for the model's ethics and safety, nor training on SFT samples that refuse to answer certain questions for restrictive fine-tuning.
Bonus: The model underwent some fine-tuning on the prompt format introduced in LLaVA1.5 that is unrelated to image attention calculation. Therefore, aligning the ViT Projection module with frozen LM under visual instructions would enable rapid implementation of effective multimodal capabilities.
## PROMPT FORMAT:
[chatml](https://github.com/openai/openai-python/blob/main/chatml.md)
**System Prompt must not be empty!**
## MMLU:
STEM ACC: 56.83
Humanities ACC: 58.79
Other ACC: 70.04
Social ACC: 72.41
**AVERAGE ACC: 63.82** (Outperforms or equals the best Mistral-7B chat-style fine-tunes, and ALL other models under 33B.)
## CEval (Val):
STEM acc: 61.67
Social Science acc: 81.94
Humanities acc: 77.19
Other acc: 68.35
Hard acc: 48.03
**AVERAGE acc: 70.27** (Outperforms ALL current 7B models.)
## GSM8K
**Zero-shot ACC 0.5921152388172858** (Outperforms WizardMath-7B and Qwen-7B)
**llama.cpp GGUF models**
GPT2Tokenizer support was fixed by [Kerfuffle](https://github.com/KerfuffleV2) in [https://github.com/ggerganov/llama.cpp/pull/3743](https://github.com/ggerganov/llama.cpp/pull/3743); new models will be uploaded later.
## Read Me:
Also see the [14B version](https://huggingface.co/CausalLM/14B)
This model was trained based on the weights of Qwen (and LLaMA2 weights were used, yes, to initialize some weights); depending on your situation, you may also need to comply with the commercial use restrictions of these two models. The training used a model structure identical to LLaMA2, with the same attention calculation method as the original MHA LLaMA2 models, and no additional scaling was applied to the rotary position embeddings (RoPE).
We manually curated an SFT dataset of 1.3 billion tokens for training, drawing on open-source datasets from Hugging Face. For most sentences, we performed manual or synthetic rewrites and generated versions in other languages using larger language models. In addition, we carried out augmented text training with carefully selected entries from Wikipedia, featured entries from Fandom, and filtered entries from Moegirlpedia. To balance efficiency and quality, 100% of the training data was synthetic; no raw text from the internet or from publicly available datasets was used directly for fine-tuning.
The 7B version of the model is a distilled version of the 14B model, designed specifically for speculative sampling. Therefore, exercise caution when using the model directly, as it may produce hallucinations or unreliable outputs.
Please note that the model was trained on unfiltered internet data. Since we cannot vet all of it, a substantial amount of objectionable content, pornography, violence and offensive language may be present that we are unable to remove. You will therefore still need to check the model's safety yourself and filter keywords in the output. Due to computational resource constraints, we are currently unable to implement RLHF for the model's ethics and safety, nor to train on SFT samples that refuse to answer certain questions for restrictive fine-tuning.
Bonus: the model received some fine-tuning on the prompt format introduced in LLaVA1.5, which is unrelated to image attention calculation. Therefore, aligning the ViT projection module with the frozen LM under visual instructions enables rapid implementation of effective multimodal capabilities.
## Prompt format:
[chatml](https://github.com/openai/openai-python/blob/main/chatml.md)
**The system prompt must not be empty!**
## MMLU:
STEM accuracy: 56.83
Humanities accuracy: 58.79
Other accuracy: 70.04
Social science accuracy: 72.41
**Average accuracy: 63.82** (Outperforms or equals the best Mistral-7B chat-style fine-tunes, and all other models at 33B and below.)
## CEval (validation set):
STEM accuracy: 61.67
Social science accuracy: 81.94
Humanities accuracy: 77.19
Other accuracy: 68.35
Hard accuracy: 48.03
**Average accuracy: 70.27** (Outperforms all current 7B models.)
## GSM8K
**Zero-shot accuracy 0.5921152388172858** (Outperforms WizardMath-7B and Qwen-7B)
<!-- original-model-card end -->
|
MCZK/Ninja-V2-7B-GGUF | MCZK | "2024-06-15T21:25:59Z" | 3,181 | 0 | transformers | [
"transformers",
"gguf",
"Mistral",
"text-generation",
"en",
"ja",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-15T16:23:41Z" | ---
license: apache-2.0
language:
- en
- ja
tags:
- Mistral
library_name: transformers
pipeline_tag: text-generation
---
This is Local-Novel-LLM-project's [Ninja-V2-7B](https://huggingface.co/Local-Novel-LLM-project/Ninja-V2-7B) converted to GGUF format.
An importance matrix (iMatrix) has also been applied to the K-quant models.
The iMatrix text used is TFMC's [c4_en_ja_imatrix.txt](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm).
|
mradermacher/MasherAI-1.5B-v1-GGUF | mradermacher | "2024-06-20T18:22:06Z" | 3,181 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"sft",
"en",
"base_model:mahiatlinux/MasherAI-1.5B-v1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-20T18:16:26Z" | ---
base_model: mahiatlinux/MasherAI-1.5B-v1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mahiatlinux/MasherAI-1.5B-v1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
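If you just want to fetch a single quant from Python, one option is the `huggingface_hub` library, as in the hedged sketch below (the filename is taken from the table that follows; pick whichever quant suits your hardware).
```python
# Minimal sketch: download one quant file from this repo with huggingface_hub,
# then point your GGUF runtime (llama.cpp, llama-cpp-python, etc.) at the path.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/MasherAI-1.5B-v1-GGUF",
    filename="MasherAI-1.5B-v1.Q4_K_M.gguf",   # any filename from the table below
)
print("Downloaded to", path)
```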
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MasherAI-1.5B-v1-GGUF/resolve/main/MasherAI-1.5B-v1.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/MasherAI-1.5B-v1-GGUF/resolve/main/MasherAI-1.5B-v1.IQ3_XS.gguf) | IQ3_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/MasherAI-1.5B-v1-GGUF/resolve/main/MasherAI-1.5B-v1.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/MasherAI-1.5B-v1-GGUF/resolve/main/MasherAI-1.5B-v1.IQ3_S.gguf) | IQ3_S | 0.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MasherAI-1.5B-v1-GGUF/resolve/main/MasherAI-1.5B-v1.IQ3_M.gguf) | IQ3_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/MasherAI-1.5B-v1-GGUF/resolve/main/MasherAI-1.5B-v1.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MasherAI-1.5B-v1-GGUF/resolve/main/MasherAI-1.5B-v1.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/MasherAI-1.5B-v1-GGUF/resolve/main/MasherAI-1.5B-v1.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/MasherAI-1.5B-v1-GGUF/resolve/main/MasherAI-1.5B-v1.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MasherAI-1.5B-v1-GGUF/resolve/main/MasherAI-1.5B-v1.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MasherAI-1.5B-v1-GGUF/resolve/main/MasherAI-1.5B-v1.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/MasherAI-1.5B-v1-GGUF/resolve/main/MasherAI-1.5B-v1.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/MasherAI-1.5B-v1-GGUF/resolve/main/MasherAI-1.5B-v1.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MasherAI-1.5B-v1-GGUF/resolve/main/MasherAI-1.5B-v1.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MasherAI-1.5B-v1-GGUF/resolve/main/MasherAI-1.5B-v1.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Panacea-7B-Chat-GGUF | mradermacher | "2024-06-17T10:25:23Z" | 3,180 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:linjc16/Panacea-7B-Chat",
"endpoints_compatible",
"region:us"
] | null | "2024-06-17T08:19:31Z" | ---
base_model: linjc16/Panacea-7B-Chat
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/linjc16/Panacea-7B-Chat
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
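All quants in this repo are single files, but for repos where a large quant is split into parts, the linked README describes concatenating the raw parts back together in order. A hedged Python sketch is below; the part-file names are hypothetical examples, not files from this repo.
```python
# Hedged sketch: reassemble a byte-split GGUF by concatenating its parts in order.
# The filenames are hypothetical; this repo's quants are all single files.
import shutil

parts = ["model.Q8_0.gguf.part1of2", "model.Q8_0.gguf.part2of2"]
with open("model.Q8_0.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)   # stream each part into the combined file
```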
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Panacea-7B-Chat-GGUF/resolve/main/Panacea-7B-Chat.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Panacea-7B-Chat-GGUF/resolve/main/Panacea-7B-Chat.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Panacea-7B-Chat-GGUF/resolve/main/Panacea-7B-Chat.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Panacea-7B-Chat-GGUF/resolve/main/Panacea-7B-Chat.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Panacea-7B-Chat-GGUF/resolve/main/Panacea-7B-Chat.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Panacea-7B-Chat-GGUF/resolve/main/Panacea-7B-Chat.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Panacea-7B-Chat-GGUF/resolve/main/Panacea-7B-Chat.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Panacea-7B-Chat-GGUF/resolve/main/Panacea-7B-Chat.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Panacea-7B-Chat-GGUF/resolve/main/Panacea-7B-Chat.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Panacea-7B-Chat-GGUF/resolve/main/Panacea-7B-Chat.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Panacea-7B-Chat-GGUF/resolve/main/Panacea-7B-Chat.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Panacea-7B-Chat-GGUF/resolve/main/Panacea-7B-Chat.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Panacea-7B-Chat-GGUF/resolve/main/Panacea-7B-Chat.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Panacea-7B-Chat-GGUF/resolve/main/Panacea-7B-Chat.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Panacea-7B-Chat-GGUF/resolve/main/Panacea-7B-Chat.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Mahou-1.3a-mistral-7B-GGUF | mradermacher | "2024-06-02T16:39:38Z" | 3,179 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:flammenai/MahouMix-v1",
"base_model:flammenai/Mahou-1.3a-mistral-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-02T06:51:04Z" | ---
base_model: flammenai/Mahou-1.3a-mistral-7B
datasets:
- flammenai/MahouMix-v1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/flammenai/Mahou-1.3a-mistral-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Mahou-1.3a-mistral-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3a-mistral-7B-GGUF/resolve/main/Mahou-1.3a-mistral-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3a-mistral-7B-GGUF/resolve/main/Mahou-1.3a-mistral-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3a-mistral-7B-GGUF/resolve/main/Mahou-1.3a-mistral-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3a-mistral-7B-GGUF/resolve/main/Mahou-1.3a-mistral-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3a-mistral-7B-GGUF/resolve/main/Mahou-1.3a-mistral-7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3a-mistral-7B-GGUF/resolve/main/Mahou-1.3a-mistral-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3a-mistral-7B-GGUF/resolve/main/Mahou-1.3a-mistral-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3a-mistral-7B-GGUF/resolve/main/Mahou-1.3a-mistral-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3a-mistral-7B-GGUF/resolve/main/Mahou-1.3a-mistral-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3a-mistral-7B-GGUF/resolve/main/Mahou-1.3a-mistral-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3a-mistral-7B-GGUF/resolve/main/Mahou-1.3a-mistral-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3a-mistral-7B-GGUF/resolve/main/Mahou-1.3a-mistral-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3a-mistral-7B-GGUF/resolve/main/Mahou-1.3a-mistral-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3a-mistral-7B-GGUF/resolve/main/Mahou-1.3a-mistral-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3a-mistral-7B-GGUF/resolve/main/Mahou-1.3a-mistral-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/phi3-finedragon1-merged-GGUF | mradermacher | "2024-06-08T14:31:36Z" | 3,179 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:zachaman/phi3-finedragon1-merged",
"endpoints_compatible",
"region:us"
] | null | "2024-06-08T13:44:55Z" | ---
base_model: zachaman/phi3-finedragon1-merged
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/zachaman/phi3-finedragon1-merged
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/phi3-finedragon1-merged-GGUF/resolve/main/phi3-finedragon1-merged.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/phi3-finedragon1-merged-GGUF/resolve/main/phi3-finedragon1-merged.IQ3_XS.gguf) | IQ3_XS | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/phi3-finedragon1-merged-GGUF/resolve/main/phi3-finedragon1-merged.IQ3_S.gguf) | IQ3_S | 1.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/phi3-finedragon1-merged-GGUF/resolve/main/phi3-finedragon1-merged.Q3_K_S.gguf) | Q3_K_S | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/phi3-finedragon1-merged-GGUF/resolve/main/phi3-finedragon1-merged.IQ3_M.gguf) | IQ3_M | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/phi3-finedragon1-merged-GGUF/resolve/main/phi3-finedragon1-merged.Q3_K_M.gguf) | Q3_K_M | 2.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/phi3-finedragon1-merged-GGUF/resolve/main/phi3-finedragon1-merged.IQ4_XS.gguf) | IQ4_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/phi3-finedragon1-merged-GGUF/resolve/main/phi3-finedragon1-merged.Q3_K_L.gguf) | Q3_K_L | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/phi3-finedragon1-merged-GGUF/resolve/main/phi3-finedragon1-merged.Q4_K_S.gguf) | Q4_K_S | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/phi3-finedragon1-merged-GGUF/resolve/main/phi3-finedragon1-merged.Q4_K_M.gguf) | Q4_K_M | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/phi3-finedragon1-merged-GGUF/resolve/main/phi3-finedragon1-merged.Q5_K_S.gguf) | Q5_K_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/phi3-finedragon1-merged-GGUF/resolve/main/phi3-finedragon1-merged.Q5_K_M.gguf) | Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/phi3-finedragon1-merged-GGUF/resolve/main/phi3-finedragon1-merged.Q6_K.gguf) | Q6_K | 3.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/phi3-finedragon1-merged-GGUF/resolve/main/phi3-finedragon1-merged.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/phi3-finedragon1-merged-GGUF/resolve/main/phi3-finedragon1-merged.f16.gguf) | f16 | 7.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|