| modelId (string, 5-122 chars) | author (string, 2-42 chars) | last_modified | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (245 classes) | tags (sequence, 1-4.05k items) | pipeline_tag (48 classes) | createdAt | card (string, 1-901k chars) |
|---|---|---|---|---|---|---|---|---|---|
mradermacher/EtherealRainbow-v0.2-8B-i1-GGUF | mradermacher | "2024-06-14T01:41:27Z" | 5,266 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"not-for-all-audiences",
"en",
"base_model:invisietch/EtherealRainbow-v0.2-8B",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | "2024-06-13T22:47:43Z" | ---
base_model: invisietch/EtherealRainbow-v0.2-8B
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- mergekit
- merge
- not-for-all-audiences
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
weighted/imatrix quants of https://huggingface.co/invisietch/EtherealRainbow-v0.2-8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/EtherealRainbow-v0.2-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
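As a minimal, hedged sketch of local inference (assuming the `llama-cpp-python` bindings and the Q4_K_M file from the table below; adjust the path to whichever quant you download):

```python
# Minimal sketch using the llama-cpp-python bindings (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="EtherealRainbow-v0.2-8B.i1-Q4_K_M.gguf",  # assumed local download
    n_ctx=8192,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU when available
)
out = llm("Once upon a time", max_tokens=64)
print(out["choices"][0]["text"])
```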
## Provided Quants
(sorted by size, not necessarily quality; IQ quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.2-8B-i1-GGUF/resolve/main/EtherealRainbow-v0.2-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.2-8B-i1-GGUF/resolve/main/EtherealRainbow-v0.2-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.2-8B-i1-GGUF/resolve/main/EtherealRainbow-v0.2-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.2-8B-i1-GGUF/resolve/main/EtherealRainbow-v0.2-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.2-8B-i1-GGUF/resolve/main/EtherealRainbow-v0.2-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.2-8B-i1-GGUF/resolve/main/EtherealRainbow-v0.2-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.2-8B-i1-GGUF/resolve/main/EtherealRainbow-v0.2-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.2-8B-i1-GGUF/resolve/main/EtherealRainbow-v0.2-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.2-8B-i1-GGUF/resolve/main/EtherealRainbow-v0.2-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.2-8B-i1-GGUF/resolve/main/EtherealRainbow-v0.2-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.2-8B-i1-GGUF/resolve/main/EtherealRainbow-v0.2-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.2-8B-i1-GGUF/resolve/main/EtherealRainbow-v0.2-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.2-8B-i1-GGUF/resolve/main/EtherealRainbow-v0.2-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.2-8B-i1-GGUF/resolve/main/EtherealRainbow-v0.2-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.2-8B-i1-GGUF/resolve/main/EtherealRainbow-v0.2-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.2-8B-i1-GGUF/resolve/main/EtherealRainbow-v0.2-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.2-8B-i1-GGUF/resolve/main/EtherealRainbow-v0.2-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.2-8B-i1-GGUF/resolve/main/EtherealRainbow-v0.2-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.2-8B-i1-GGUF/resolve/main/EtherealRainbow-v0.2-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.2-8B-i1-GGUF/resolve/main/EtherealRainbow-v0.2-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.2-8B-i1-GGUF/resolve/main/EtherealRainbow-v0.2-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Autolycus-Mistral_7B-i1-GGUF | mradermacher | "2024-06-11T14:32:48Z" | 5,265 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"instruct",
"finetune",
"chatml",
"gpt4",
"en",
"base_model:FPHam/Autolycus-Mistral_7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-11T13:22:00Z" | ---
base_model: FPHam/Autolycus-Mistral_7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mistral
- instruct
- finetune
- chatml
- gpt4
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/FPHam/Autolycus-Mistral_7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Autolycus-Mistral_7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Autolycus-Mistral_7B-i1-GGUF/resolve/main/Autolycus-Mistral_7B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Autolycus-Mistral_7B-i1-GGUF/resolve/main/Autolycus-Mistral_7B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Autolycus-Mistral_7B-i1-GGUF/resolve/main/Autolycus-Mistral_7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Autolycus-Mistral_7B-i1-GGUF/resolve/main/Autolycus-Mistral_7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Autolycus-Mistral_7B-i1-GGUF/resolve/main/Autolycus-Mistral_7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Autolycus-Mistral_7B-i1-GGUF/resolve/main/Autolycus-Mistral_7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Autolycus-Mistral_7B-i1-GGUF/resolve/main/Autolycus-Mistral_7B.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Autolycus-Mistral_7B-i1-GGUF/resolve/main/Autolycus-Mistral_7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Autolycus-Mistral_7B-i1-GGUF/resolve/main/Autolycus-Mistral_7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Autolycus-Mistral_7B-i1-GGUF/resolve/main/Autolycus-Mistral_7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Autolycus-Mistral_7B-i1-GGUF/resolve/main/Autolycus-Mistral_7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Autolycus-Mistral_7B-i1-GGUF/resolve/main/Autolycus-Mistral_7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Autolycus-Mistral_7B-i1-GGUF/resolve/main/Autolycus-Mistral_7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Autolycus-Mistral_7B-i1-GGUF/resolve/main/Autolycus-Mistral_7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Autolycus-Mistral_7B-i1-GGUF/resolve/main/Autolycus-Mistral_7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Autolycus-Mistral_7B-i1-GGUF/resolve/main/Autolycus-Mistral_7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Autolycus-Mistral_7B-i1-GGUF/resolve/main/Autolycus-Mistral_7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Autolycus-Mistral_7B-i1-GGUF/resolve/main/Autolycus-Mistral_7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Autolycus-Mistral_7B-i1-GGUF/resolve/main/Autolycus-Mistral_7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Autolycus-Mistral_7B-i1-GGUF/resolve/main/Autolycus-Mistral_7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Autolycus-Mistral_7B-i1-GGUF/resolve/main/Autolycus-Mistral_7B.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
Yntec/RetroLife | Yntec | "2024-03-09T09:15:42Z" | 5,262 | 4 | diffusers | [
"diffusers",
"safetensors",
"Photorealistic",
"Retro",
"Base model",
"Abstract",
"Elldreths",
"Fusch",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-03-05T13:51:24Z" | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Photorealistic
- Retro
- Base model
- Abstract
- Elldreths
- Fusch
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---
# Retro Life
A mix of Elldreth's Retro Mix and Real Life 2.0. The old version hosted here has been renamed RetroLifeAlpha; the new one improves the anatomy. Original pages:
https://huggingface.co/Yntec/RealLife
https://huggingface.co/Yntec/ElldrethsRetroMix
Samples and prompts:

(Click for larger)
Top left: Stock washed out worn Retro colors TV movie TRAILER. Closeup Santa Claus and daughters enjoying enchiladas with tacos. sitting with a pretty cute little girl, Art Christmas Theme by Gil_Elvgren and Haddon_Sundblom. Posing
Top right: Retropunk painting of a rainbow fantasy phoenix by Bnhr, fire eyes, nature, grass, tree, outdoors, forest, animal focus, blue eyes
Bottom left: vintage colors photo of view from diagonally above, Heidi Bloom, central adjustment, skinny young northern european female, long reddish ponytail hair, real hair movement, elongated head, beautiful face, grey eyes, thin bowed eyebrows, snub nose, gentle lower jaw line, narrow chin, da vinci lips, slightly smiling with parted lips, curious friendly facial expression, small, slim narrow tapered hips
Bottom right: 1977 kodachrome camera transparency, dramatic lighting film grain, PARTY HARD BACKGROUND, pretty cute little girl in Zone 51, Extraterrestrial, Alien Space Ship Delivering Christmas Presents, Alien Space Ship Decorated With Garlands and Christmas Balls, Snowstorm
# Recipes:
- SuperMerger Weight sum MBW 0,1,1,1,1,1,1,0,0,0,0,0,0,1,1,1,1,1,1,1,0,0,0,0,0,0
  - Model A: Real Life 2.0
  - Model B: ElldrethsRetroMix
  - Output: RetroLifeAlpha
- SuperMerger Weight sum MBW 0,1,1,1,1,1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,0,0,0,0,0,1
  - Model A: Real Life 2.0
  - Model B: ElldrethsRetroMix
  - Output: RetroLife
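In general terms (a hedged sketch, not SuperMerger's actual code), an MBW weight sum blends the two models block by block: each of the 26 weights selects how much of Model B replaces Model A in one UNet block group, assuming the common BASE, IN00-IN11, M00, OUT00-OUT11 slot layout. The `block_index` mapping below is illustrative only:

```python
# Hedged sketch of a block-weighted (MBW) weight-sum merge. Assumes the common
# 26-slot layout BASE, IN00-IN11, M00, OUT00-OUT11; SuperMerger's real key
# mapping may differ.
import re

def block_index(key: str) -> int:
    """Map a state-dict key to one of the 26 MBW slots (illustrative only)."""
    m = re.search(r"input_blocks\.(\d+)\.", key)
    if m:
        return 1 + int(m.group(1))           # IN00-IN11 -> slots 1-12
    if "middle_block." in key:
        return 13                             # M00
    m = re.search(r"output_blocks\.(\d+)\.", key)
    if m:
        return 14 + int(m.group(1))           # OUT00-OUT11 -> slots 14-25
    return 0                                  # BASE: everything else

def mbw_weight_sum(state_a, state_b, weights):
    """Per-block weighted sum: w=0 keeps Model A, w=1 takes Model B."""
    assert len(weights) == 26
    return {
        key: (1 - weights[block_index(key)]) * a + weights[block_index(key)] * state_b[key]
        for key, a in state_a.items()
    }
```
|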
Remek/Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT-GGUF | Remek | "2024-05-12T13:10:22Z" | 5,257 | 3 | null | [
"gguf",
"text-generation",
"pl",
"en",
"region:us"
] | text-generation | "2024-04-22T20:25:02Z" | ---
language:
- pl
- en
pipeline_tag: text-generation
---
# Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT-GGUF
This repository contains conversions of the Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT model to GGUF format (Q8_0 and Q4_K_M). It has been tested in two runtime environments:
#### LM Studio
Version 0.2.20 or later. Be sure to select the Llama 3 prompt format (!) (the Preset option).
#### Ollama
Version 0.1.32. Ollama is configured via the Modelfile below. Note: do not change SYSTEM even if you want to converse in Polish. Leave the content of the system field in English as it is.
```
FROM ./Llama-3-Omnibus-PL-v01-GGUF.Q4_K_M.gguf
TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>"""
SYSTEM """You are a helpful, smart, kind, and efficient AI assistant. You always fulfill the user's requests to the best of your ability."""
PARAMETER num_ctx 8192
PARAMETER num_gpu 99
```
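Once the model is created from this Modelfile (for example with `ollama create omnibus-pl -f Modelfile`; the name is arbitrary), a minimal sketch with the official `ollama` Python client might look like this:

```python
# Minimal sketch using the `ollama` Python client (pip install ollama).
# "omnibus-pl" is a hypothetical name chosen at `ollama create` time.
import ollama

response = ollama.chat(
    model="omnibus-pl",
    messages=[{"role": "user", "content": "Opowiedz mi krótko o Krakowie."}],
)
print(response["message"]["content"])
```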
This repository contains the Polish-language Meta Llama-3-8B-Omnibus-1-PL-v01 model. The model was created by fine-tuning the Llama-3-8B base model on the Omnibus-1-PL instruction dataset (which I created for my own experiments in fine-tuning models for Polish). Training parameter details are in the Training section. The goal of this experiment was to check whether Llama-3-8B can be coaxed into conversing fluently in Polish (the original 8B instruct model struggles with this: it strongly prefers to converse in English).
<img src="Llama-3-8B-PL-small.jpg" width="420" />
Note!
* The model is NOT censored. This is a version for experimentation; it has not been tamed.
* The model will be developed further, since (a) I am experimenting with new versions of the dataset, and (b) it is a great base for testing various fine-tuning techniques (LoRA, QLoRA, DPO, ORPO, etc.).
* I released it spontaneously so users can try it out and evaluate the quality of Llama 3 in the context of the Polish language.
* After learning that the base was trained on 15T tokens (only 5% of them non-English), I concluded it is a great model for fine-tuning. Light additional training via continued pretraining might yield even more.
### Model name encoding scheme
* Base model name: Llama-3-8B
* Dataset name: Omnibus-1
* Language version: PL (Polish)
* Model version: v01
### Dataset
Omnibus-1 is a collection of Polish instructions (100% Polish context: facts, people, and places set in Poland) that was generated entirely synthetically. It contains instructions in categories such as mathematics, writing skills, dialogues, medical topics, logic puzzles, translation, etc. It was created as part of my work on evaluating model quality in the context of the Polish language. It makes it possible to fine-tune a model and check how readily the model speaks our native language. The dataset currently contains 75,000 instructions. It will be refined continuously and may be released in the future (once I consider it sufficiently complete and covering a broad spectrum of topics and skills). The dataset is generated entirely with other LLMs (GPT-3.5, GPT-4, Mixtral, etc.).
### Conversation template
The conversation template is the original Llama 3 format:
```
<|start_header_id|>system<|end_header_id|>
{System}
<|eot_id|>
<|start_header_id|>user<|end_header_id|>
{User}
<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{Assistant}
```
### Training
Training hyperparameter details (a sketch of how they map onto a training script follows the list):
* learning_rate: 2e-05
* train_batch_size: 8
* eval_batch_size: 8
* seed: 42
* distributed_type: single-GPU (Nvidia A6000 Ada)
* num_devices: 1
* gradient_accumulation_steps: 4
* optimizer: adamw_8bit
* lr_scheduler_type: linear
* lr_scheduler_warmup_steps: 5
* num_epochs: 1
* QLoRa - 4bit: rank 64, alpha 128
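A minimal sketch of how these hyperparameters map onto the standard Unsloth + TRL SFT workflow (an illustration only; the dataset placeholder and target module list are assumptions, not the actual training script):

```python
# Illustrative only: mirrors the hyperparameters above in the usual
# Unsloth + TRL SFT setup; not the actual training script for this model.
from datasets import Dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # 4-bit base for QLoRA
    max_seq_length=8192,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=64,            # QLoRA rank
    lora_alpha=128,  # QLoRA alpha
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # assumed module list
)
# Placeholder for the (unreleased) Omnibus-1-PL instruction dataset.
dataset = Dataset.from_dict({"text": ["### Instruction:\n...\n### Response:\n..."]})

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=8192,
    args=TrainingArguments(
        learning_rate=2e-5,
        per_device_train_batch_size=8,
        per_device_eval_batch_size=8,
        gradient_accumulation_steps=4,
        optim="adamw_8bit",
        lr_scheduler_type="linear",
        warmup_steps=5,
        num_train_epochs=1,
        seed=42,
        output_dir="outputs",
    ),
)
trainer.train()
```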
#### Unsloth
<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/made with unsloth.png" width="200px" align="center" />
[Unsloth](https://unsloth.ai), the tool with which this model was created.
### License
Licensed for non-commercial use (because the dataset was generated synthetically with the GPT-4 and GPT-3.5 models), in addition to the Llama 3 license (please review the license details).
|
lcw99/zephykor-ko-beta-7b-chang | lcw99 | "2023-12-25T01:17:13Z" | 5,252 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"ko",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-27T00:46:19Z" | ---
language:
- ko
- en
---
* Under construction, be careful. |
RLHFlow/pair-preference-model-LLaMA3-8B | RLHFlow | "2024-05-24T07:05:10Z" | 5,250 | 26 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:2405.07863",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-23T04:51:16Z" | ---
license: llama3
---
This preference model is trained from [LLaMA3-8B-it](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) with the training script at [Reward Modeling](https://github.com/RLHFlow/RLHF-Reward-Modeling/tree/pm_dev/pair-pm).
The dataset is RLHFlow/pair_preference_model_dataset. It achieves Chat 98.6, Chat-Hard 65.8, Safety 89.6, and Reasoning 94.9 on RewardBench.
See our paper [RLHF Workflow: From Reward Modeling to Online RLHF](https://arxiv.org/abs/2405.07863) for more details on this model.
## Serving the RM
Here is an example of using the preference model to rank a pair of responses. For n > 2 responses, it is recommended to use a tournament-style ranking strategy to pick the best response, so that the number of model calls is linear in n.
```python
import numpy as np
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "RLHFlow/pair-preference-model-LLaMA3-8B"
# flash_attention_2 requires the flash-attn package; drop the argument if unavailable.
model = AutoModelForCausalLM.from_pretrained(model_name,
                torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2").cuda()
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)

# A second tokenizer whose chat template renders the raw conversation context.
tokenizer_plain = AutoTokenizer.from_pretrained(model_name, use_fast=True)
tokenizer_plain.chat_template = "\n{% for message in messages %}{% if loop.index0 % 2 == 0 %}\n\n<turn> user\n {{ message['content'] }}{% else %}\n\n<turn> assistant\n {{ message['content'] }}{% endif %}{% endfor %}\n\n\n"

prompt_template = "[CONTEXT] {context} [RESPONSE A] {response_A} [RESPONSE B] {response_B} \n"
# The model answers with a single token, "A" or "B".
token_id_A = tokenizer.encode("A", add_special_tokens=False)
token_id_B = tokenizer.encode("B", add_special_tokens=False)
assert len(token_id_A) == 1 and len(token_id_B) == 1
token_id_A = token_id_A[0]
token_id_B = token_id_B[0]
temperature = 1.0

model.eval()
response_chosen = "BBBB"
response_rejected = "CCCC"
## We can also handle multi-turn conversation; placeholder turns below.
instruction = [{"role": "user", "content": "What is 2 + 2?"},
               {"role": "assistant", "content": "4."},
               {"role": "user", "content": "And doubled?"},
]
context = tokenizer_plain.apply_chat_template(instruction, tokenize=False)
responses = [response_chosen, response_rejected]
probs_chosen = []

for chosen_position in [0, 1]:
    # We swap the order to mitigate position bias.
    response_A = responses[chosen_position]
    response_B = responses[1 - chosen_position]
    prompt = prompt_template.format(context=context, response_A=response_A, response_B=response_B)
    message = [
        {"role": "user", "content": prompt},
    ]
    input_ids = tokenizer.encode(tokenizer.apply_chat_template(message, tokenize=False).replace(tokenizer.bos_token, ""), return_tensors='pt', add_special_tokens=False).cuda()

    with torch.no_grad():
        output = model(input_ids)
    logit_A = output.logits[0, -1, token_id_A].item()
    logit_B = output.logits[0, -1, token_id_B].item()
    # Take the softmax over the two logits to get the probability of the chosen response.
    Z = np.exp(logit_A / temperature) + np.exp(logit_B / temperature)
    logit_chosen = [logit_A, logit_B][chosen_position]
    prob_chosen = np.exp(logit_chosen / temperature) / Z
    probs_chosen.append(prob_chosen)

avg_prob_chosen = np.mean(probs_chosen)
correct = 0.5 if avg_prob_chosen == 0.5 else float(avg_prob_chosen > 0.5)
print(correct)
```
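As a hedged sketch of the tournament-style ranking mentioned above (assuming the model, tokenizers, `prompt_template`, and token ids set up in the block above; `prefer` and `tournament_best` are illustrative helpers, not part of the RLHFlow code):

```python
def prefer(context: str, response_a: str, response_b: str) -> bool:
    """Return True if response_a is preferred, averaging over both slot orders."""
    probs = []
    for a_position in [0, 1]:
        pair = [response_a, response_b]
        prompt = prompt_template.format(context=context,
                                        response_A=pair[a_position],
                                        response_B=pair[1 - a_position])
        message = [{"role": "user", "content": prompt}]
        input_ids = tokenizer.encode(
            tokenizer.apply_chat_template(message, tokenize=False).replace(tokenizer.bos_token, ""),
            return_tensors='pt', add_special_tokens=False).cuda()
        with torch.no_grad():
            logits = model(input_ids).logits[0, -1]
        logit_a, logit_b = logits[token_id_A].item(), logits[token_id_B].item()
        Z = np.exp(logit_a) + np.exp(logit_b)
        # Probability assigned to response_a, whichever slot it occupied.
        probs.append([np.exp(logit_a), np.exp(logit_b)][a_position] / Z)
    return float(np.mean(probs)) > 0.5

def tournament_best(context: str, responses: list) -> str:
    # Single-elimination pass: n - 1 pairwise comparisons, linear in n.
    best = responses[0]
    for challenger in responses[1:]:
        if prefer(context, challenger, best):
            best = challenger
    return best
```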
## Citation
If you use this model in your research, please consider citing our paper
```
@misc{rlhflow,
title={RLHF Workflow: From Reward Modeling to Online RLHF},
author={Hanze Dong and Wei Xiong and Bo Pang and Haoxiang Wang and Han Zhao and Yingbo Zhou and Nan Jiang and Doyen Sahoo and Caiming Xiong and Tong Zhang},
year={2024},
eprint={2405.07863},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
and Google's SLiC paper (which originally proposed this pairwise preference model)
```
@article{zhao2023slic,
title={Slic-hf: Sequence likelihood calibration with human feedback},
author={Zhao, Yao and Joshi, Rishabh and Liu, Tianqi and Khalman, Misha and Saleh, Mohammad and Liu, Peter J},
journal={arXiv preprint arXiv:2305.10425},
year={2023}
}
``` |
01-ai/Yi-1.5-34B-32K | 01-ai | "2024-06-26T10:42:31Z" | 5,246 | 33 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2403.04652",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-15T10:42:51Z" | ---
license: apache-2.0
---
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">🐙 GitHub</a> •
<a href="https://discord.gg/hYUwWddeAu">👾 Discord</a> •
<a href="https://twitter.com/01ai_yi">🐤 Twitter</a> •
<a href="https://github.com/01-ai/Yi-1.5/issues/2">💬 WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">📝 Paper</a> •
<a href="https://01-ai.github.io/">💪 Tech Blog</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">🙌 FAQ</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">📗 Learning Hub</a>
</p>
# Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
| Model | Context Length | Pre-trained Tokens |
| :------------: | :------------: | :------------: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T |
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🟣 wisemodel](https://wisemodel.cn/organization/01.AI)|
| Yi-1.5-34B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🟣 wisemodel](https://wisemodel.cn/organization/01.AI) |
| Yi-1.5-9B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🟣 wisemodel](https://wisemodel.cn/organization/01.AI) |
| Yi-1.5-9B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🟣 wisemodel](https://wisemodel.cn/organization/01.AI) |
| Yi-1.5-6B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🟣 wisemodel](https://wisemodel.cn/organization/01.AI) |
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🟣 wisemodel](https://wisemodel.cn/organization/01.AI) |
| Yi-1.5-34B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🟣 wisemodel](https://wisemodel.cn/organization/01.AI) |
| Yi-1.5-9B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🟣 wisemodel](https://wisemodel.cn/organization/01.AI) |
| Yi-1.5-9B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🟣 wisemodel](https://wisemodel.cn/organization/01.AI) |
| Yi-1.5-6B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🟣 wisemodel](https://wisemodel.cn/organization/01.AI) |
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat is on par with or excels beyond larger models in most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B is on par with or excels beyond larger models in some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
For getting up and running with Yi-1.5 models quickly, see [README](https://github.com/01-ai/Yi-1.5).
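As a hedged illustration while you consult the upstream README, a minimal 🤗 Transformers sketch for this base (completion-style) model:

```python
# Minimal sketch using Hugging Face Transformers (the upstream README is
# the authoritative quick start).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "01-ai/Yi-1.5-34B-32K"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("There's a place where time stands still.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```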
|
mradermacher/Winterreise-m7-i1-GGUF | mradermacher | "2024-06-05T08:43:46Z" | 5,246 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:LDJnr/Capybara",
"dataset:chargoddard/rpguild",
"dataset:PocketDoc/Guanaco-Unchained-Refined",
"dataset:lemonilia/LimaRP",
"base_model:Sao10K/Winterreise-m7",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-04T13:07:32Z" | ---
base_model: Sao10K/Winterreise-m7
datasets:
- LDJnr/Capybara
- chargoddard/rpguild
- PocketDoc/Guanaco-Unchained-Refined
- lemonilia/LimaRP
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Sao10K/Winterreise-m7
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Winterreise-m7-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Winterreise-m7-i1-GGUF/resolve/main/Winterreise-m7.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Winterreise-m7-i1-GGUF/resolve/main/Winterreise-m7.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Winterreise-m7-i1-GGUF/resolve/main/Winterreise-m7.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Winterreise-m7-i1-GGUF/resolve/main/Winterreise-m7.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Winterreise-m7-i1-GGUF/resolve/main/Winterreise-m7.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Winterreise-m7-i1-GGUF/resolve/main/Winterreise-m7.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Winterreise-m7-i1-GGUF/resolve/main/Winterreise-m7.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Winterreise-m7-i1-GGUF/resolve/main/Winterreise-m7.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Winterreise-m7-i1-GGUF/resolve/main/Winterreise-m7.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Winterreise-m7-i1-GGUF/resolve/main/Winterreise-m7.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Winterreise-m7-i1-GGUF/resolve/main/Winterreise-m7.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Winterreise-m7-i1-GGUF/resolve/main/Winterreise-m7.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Winterreise-m7-i1-GGUF/resolve/main/Winterreise-m7.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Winterreise-m7-i1-GGUF/resolve/main/Winterreise-m7.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Winterreise-m7-i1-GGUF/resolve/main/Winterreise-m7.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Winterreise-m7-i1-GGUF/resolve/main/Winterreise-m7.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Winterreise-m7-i1-GGUF/resolve/main/Winterreise-m7.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Winterreise-m7-i1-GGUF/resolve/main/Winterreise-m7.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Winterreise-m7-i1-GGUF/resolve/main/Winterreise-m7.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Winterreise-m7-i1-GGUF/resolve/main/Winterreise-m7.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Winterreise-m7-i1-GGUF/resolve/main/Winterreise-m7.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
Muennighoff/SGPT-125M-weightedmean-msmarco-specb-bitfit | Muennighoff | "2023-03-27T22:19:34Z" | 5,241 | 2 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"gpt_neo",
"feature-extraction",
"sentence-similarity",
"mteb",
"arxiv:2202.08904",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2022-03-02T23:29:04Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
model-index:
- name: SGPT-125M-weightedmean-msmarco-specb-bitfit
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996
metrics:
- type: accuracy
value: 61.23880597014926
- type: ap
value: 25.854431650388644
- type: f1
value: 55.751862762818604
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (de)
config: de
split: test
revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996
metrics:
- type: accuracy
value: 56.88436830835117
- type: ap
value: 72.67279104379772
- type: f1
value: 54.449840243786404
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en-ext)
config: en-ext
split: test
revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996
metrics:
- type: accuracy
value: 58.27586206896551
- type: ap
value: 14.067357642500387
- type: f1
value: 48.172318518691334
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (ja)
config: ja
split: test
revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996
metrics:
- type: accuracy
value: 54.64668094218415
- type: ap
value: 11.776694555054965
- type: f1
value: 44.526622834078765
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: 80714f8dcf8cefc218ef4f8c5a966dd83f75a0e1
metrics:
- type: accuracy
value: 65.401225
- type: ap
value: 60.22809958678552
- type: f1
value: 65.0251824898292
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 31.165999999999993
- type: f1
value: 30.908870050167437
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (de)
config: de
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 24.79
- type: f1
value: 24.5833598854121
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (es)
config: es
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 26.643999999999995
- type: f1
value: 26.39012792213563
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (fr)
config: fr
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 26.386000000000003
- type: f1
value: 26.276867791454873
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (ja)
config: ja
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 22.078000000000003
- type: f1
value: 21.797960290226843
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (zh)
config: zh
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 24.274
- type: f1
value: 23.887054434822627
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: 5b3e3697907184a9b77a3c99ee9ea1a9cbb1e4e3
metrics:
- type: map_at_1
value: 22.404
- type: map_at_10
value: 36.845
- type: map_at_100
value: 37.945
- type: map_at_1000
value: 37.966
- type: map_at_3
value: 31.78
- type: map_at_5
value: 34.608
- type: mrr_at_1
value: 22.902
- type: mrr_at_10
value: 37.034
- type: mrr_at_100
value: 38.134
- type: mrr_at_1000
value: 38.155
- type: mrr_at_3
value: 31.935000000000002
- type: mrr_at_5
value: 34.812
- type: ndcg_at_1
value: 22.404
- type: ndcg_at_10
value: 45.425
- type: ndcg_at_100
value: 50.354
- type: ndcg_at_1000
value: 50.873999999999995
- type: ndcg_at_3
value: 34.97
- type: ndcg_at_5
value: 40.081
- type: precision_at_1
value: 22.404
- type: precision_at_10
value: 7.303999999999999
- type: precision_at_100
value: 0.951
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 14.746
- type: precision_at_5
value: 11.337
- type: recall_at_1
value: 22.404
- type: recall_at_10
value: 73.044
- type: recall_at_100
value: 95.092
- type: recall_at_1000
value: 99.075
- type: recall_at_3
value: 44.239
- type: recall_at_5
value: 56.686
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: 0bbdb47bcbe3a90093699aefeed338a0f28a7ee8
metrics:
- type: v_measure
value: 39.70858340673288
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: b73bd54100e5abfa6e3a23dcafb46fe4d2438dc3
metrics:
- type: v_measure
value: 28.242847713721048
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 4d853f94cd57d85ec13805aeeac3ae3e5eb4c49c
metrics:
- type: map
value: 55.83700395192393
- type: mrr
value: 70.3891307215407
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: 9ee918f184421b6bd48b78f6c714d86546106103
metrics:
- type: cos_sim_pearson
value: 79.25366801756223
- type: cos_sim_spearman
value: 75.20954502580506
- type: euclidean_pearson
value: 78.79900722991617
- type: euclidean_spearman
value: 77.79996549607588
- type: manhattan_pearson
value: 78.18408109480399
- type: manhattan_spearman
value: 76.85958262303106
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 44fa15921b4c889113cc5df03dd4901b49161ab7
metrics:
- type: accuracy
value: 77.70454545454545
- type: f1
value: 77.6929000113803
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 11d0121201d1f1f280e8cc8f3d98fb9c4d9f9c55
metrics:
- type: v_measure
value: 33.63260395543984
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: c0fab014e1bcb8d3a5e31b2088972a1e01547dc1
metrics:
- type: v_measure
value: 27.038042665369925
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 22.139
- type: map_at_10
value: 28.839
- type: map_at_100
value: 30.023
- type: map_at_1000
value: 30.153000000000002
- type: map_at_3
value: 26.521
- type: map_at_5
value: 27.775
- type: mrr_at_1
value: 26.466
- type: mrr_at_10
value: 33.495000000000005
- type: mrr_at_100
value: 34.416999999999994
- type: mrr_at_1000
value: 34.485
- type: mrr_at_3
value: 31.402
- type: mrr_at_5
value: 32.496
- type: ndcg_at_1
value: 26.466
- type: ndcg_at_10
value: 33.372
- type: ndcg_at_100
value: 38.7
- type: ndcg_at_1000
value: 41.696
- type: ndcg_at_3
value: 29.443
- type: ndcg_at_5
value: 31.121
- type: precision_at_1
value: 26.466
- type: precision_at_10
value: 6.037
- type: precision_at_100
value: 1.0670000000000002
- type: precision_at_1000
value: 0.16199999999999998
- type: precision_at_3
value: 13.782
- type: precision_at_5
value: 9.757
- type: recall_at_1
value: 22.139
- type: recall_at_10
value: 42.39
- type: recall_at_100
value: 65.427
- type: recall_at_1000
value: 86.04899999999999
- type: recall_at_3
value: 31.127
- type: recall_at_5
value: 35.717999999999996
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 20.652
- type: map_at_10
value: 27.558
- type: map_at_100
value: 28.473
- type: map_at_1000
value: 28.577
- type: map_at_3
value: 25.402
- type: map_at_5
value: 26.68
- type: mrr_at_1
value: 25.223000000000003
- type: mrr_at_10
value: 31.966
- type: mrr_at_100
value: 32.664
- type: mrr_at_1000
value: 32.724
- type: mrr_at_3
value: 30.074
- type: mrr_at_5
value: 31.249
- type: ndcg_at_1
value: 25.223000000000003
- type: ndcg_at_10
value: 31.694
- type: ndcg_at_100
value: 35.662
- type: ndcg_at_1000
value: 38.092
- type: ndcg_at_3
value: 28.294000000000004
- type: ndcg_at_5
value: 30.049
- type: precision_at_1
value: 25.223000000000003
- type: precision_at_10
value: 5.777
- type: precision_at_100
value: 0.9730000000000001
- type: precision_at_1000
value: 0.13999999999999999
- type: precision_at_3
value: 13.397
- type: precision_at_5
value: 9.605
- type: recall_at_1
value: 20.652
- type: recall_at_10
value: 39.367999999999995
- type: recall_at_100
value: 56.485
- type: recall_at_1000
value: 73.292
- type: recall_at_3
value: 29.830000000000002
- type: recall_at_5
value: 34.43
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 25.180000000000003
- type: map_at_10
value: 34.579
- type: map_at_100
value: 35.589999999999996
- type: map_at_1000
value: 35.68
- type: map_at_3
value: 31.735999999999997
- type: map_at_5
value: 33.479
- type: mrr_at_1
value: 29.467
- type: mrr_at_10
value: 37.967
- type: mrr_at_100
value: 38.800000000000004
- type: mrr_at_1000
value: 38.858
- type: mrr_at_3
value: 35.465
- type: mrr_at_5
value: 37.057
- type: ndcg_at_1
value: 29.467
- type: ndcg_at_10
value: 39.796
- type: ndcg_at_100
value: 44.531
- type: ndcg_at_1000
value: 46.666000000000004
- type: ndcg_at_3
value: 34.676
- type: ndcg_at_5
value: 37.468
- type: precision_at_1
value: 29.467
- type: precision_at_10
value: 6.601999999999999
- type: precision_at_100
value: 0.9900000000000001
- type: precision_at_1000
value: 0.124
- type: precision_at_3
value: 15.568999999999999
- type: precision_at_5
value: 11.172
- type: recall_at_1
value: 25.180000000000003
- type: recall_at_10
value: 52.269
- type: recall_at_100
value: 73.574
- type: recall_at_1000
value: 89.141
- type: recall_at_3
value: 38.522
- type: recall_at_5
value: 45.323
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 16.303
- type: map_at_10
value: 21.629
- type: map_at_100
value: 22.387999999999998
- type: map_at_1000
value: 22.489
- type: map_at_3
value: 19.608
- type: map_at_5
value: 20.774
- type: mrr_at_1
value: 17.740000000000002
- type: mrr_at_10
value: 23.214000000000002
- type: mrr_at_100
value: 23.97
- type: mrr_at_1000
value: 24.054000000000002
- type: mrr_at_3
value: 21.243000000000002
- type: mrr_at_5
value: 22.322
- type: ndcg_at_1
value: 17.740000000000002
- type: ndcg_at_10
value: 25.113000000000003
- type: ndcg_at_100
value: 29.287999999999997
- type: ndcg_at_1000
value: 32.204
- type: ndcg_at_3
value: 21.111
- type: ndcg_at_5
value: 23.061999999999998
- type: precision_at_1
value: 17.740000000000002
- type: precision_at_10
value: 3.955
- type: precision_at_100
value: 0.644
- type: precision_at_1000
value: 0.093
- type: precision_at_3
value: 8.851
- type: precision_at_5
value: 6.418
- type: recall_at_1
value: 16.303
- type: recall_at_10
value: 34.487
- type: recall_at_100
value: 54.413999999999994
- type: recall_at_1000
value: 77.158
- type: recall_at_3
value: 23.733
- type: recall_at_5
value: 28.381
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 10.133000000000001
- type: map_at_10
value: 15.665999999999999
- type: map_at_100
value: 16.592000000000002
- type: map_at_1000
value: 16.733999999999998
- type: map_at_3
value: 13.625000000000002
- type: map_at_5
value: 14.721
- type: mrr_at_1
value: 12.562000000000001
- type: mrr_at_10
value: 18.487000000000002
- type: mrr_at_100
value: 19.391
- type: mrr_at_1000
value: 19.487
- type: mrr_at_3
value: 16.418
- type: mrr_at_5
value: 17.599999999999998
- type: ndcg_at_1
value: 12.562000000000001
- type: ndcg_at_10
value: 19.43
- type: ndcg_at_100
value: 24.546
- type: ndcg_at_1000
value: 28.193
- type: ndcg_at_3
value: 15.509999999999998
- type: ndcg_at_5
value: 17.322000000000003
- type: precision_at_1
value: 12.562000000000001
- type: precision_at_10
value: 3.794
- type: precision_at_100
value: 0.74
- type: precision_at_1000
value: 0.122
- type: precision_at_3
value: 7.546
- type: precision_at_5
value: 5.721
- type: recall_at_1
value: 10.133000000000001
- type: recall_at_10
value: 28.261999999999997
- type: recall_at_100
value: 51.742999999999995
- type: recall_at_1000
value: 78.075
- type: recall_at_3
value: 17.634
- type: recall_at_5
value: 22.128999999999998
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 19.991999999999997
- type: map_at_10
value: 27.346999999999998
- type: map_at_100
value: 28.582
- type: map_at_1000
value: 28.716
- type: map_at_3
value: 24.907
- type: map_at_5
value: 26.1
- type: mrr_at_1
value: 23.773
- type: mrr_at_10
value: 31.647
- type: mrr_at_100
value: 32.639
- type: mrr_at_1000
value: 32.706
- type: mrr_at_3
value: 29.195
- type: mrr_at_5
value: 30.484
- type: ndcg_at_1
value: 23.773
- type: ndcg_at_10
value: 32.322
- type: ndcg_at_100
value: 37.996
- type: ndcg_at_1000
value: 40.819
- type: ndcg_at_3
value: 27.876
- type: ndcg_at_5
value: 29.664
- type: precision_at_1
value: 23.773
- type: precision_at_10
value: 5.976999999999999
- type: precision_at_100
value: 1.055
- type: precision_at_1000
value: 0.15
- type: precision_at_3
value: 13.122
- type: precision_at_5
value: 9.451
- type: recall_at_1
value: 19.991999999999997
- type: recall_at_10
value: 43.106
- type: recall_at_100
value: 67.264
- type: recall_at_1000
value: 86.386
- type: recall_at_3
value: 30.392000000000003
- type: recall_at_5
value: 34.910999999999994
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 17.896
- type: map_at_10
value: 24.644
- type: map_at_100
value: 25.790000000000003
- type: map_at_1000
value: 25.913999999999998
- type: map_at_3
value: 22.694
- type: map_at_5
value: 23.69
- type: mrr_at_1
value: 21.346999999999998
- type: mrr_at_10
value: 28.594
- type: mrr_at_100
value: 29.543999999999997
- type: mrr_at_1000
value: 29.621
- type: mrr_at_3
value: 26.807
- type: mrr_at_5
value: 27.669
- type: ndcg_at_1
value: 21.346999999999998
- type: ndcg_at_10
value: 28.833
- type: ndcg_at_100
value: 34.272000000000006
- type: ndcg_at_1000
value: 37.355
- type: ndcg_at_3
value: 25.373
- type: ndcg_at_5
value: 26.756
- type: precision_at_1
value: 21.346999999999998
- type: precision_at_10
value: 5.2170000000000005
- type: precision_at_100
value: 0.954
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 11.948
- type: precision_at_5
value: 8.425
- type: recall_at_1
value: 17.896
- type: recall_at_10
value: 37.291000000000004
- type: recall_at_100
value: 61.138000000000005
- type: recall_at_1000
value: 83.212
- type: recall_at_3
value: 27.705999999999996
- type: recall_at_5
value: 31.234
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 17.195166666666665
- type: map_at_10
value: 23.329083333333333
- type: map_at_100
value: 24.30308333333333
- type: map_at_1000
value: 24.422416666666667
- type: map_at_3
value: 21.327416666666664
- type: map_at_5
value: 22.419999999999998
- type: mrr_at_1
value: 19.999916666666667
- type: mrr_at_10
value: 26.390166666666666
- type: mrr_at_100
value: 27.230999999999998
- type: mrr_at_1000
value: 27.308333333333334
- type: mrr_at_3
value: 24.4675
- type: mrr_at_5
value: 25.541083333333336
- type: ndcg_at_1
value: 19.999916666666667
- type: ndcg_at_10
value: 27.248666666666665
- type: ndcg_at_100
value: 32.00258333333334
- type: ndcg_at_1000
value: 34.9465
- type: ndcg_at_3
value: 23.58566666666667
- type: ndcg_at_5
value: 25.26341666666666
- type: precision_at_1
value: 19.999916666666667
- type: precision_at_10
value: 4.772166666666666
- type: precision_at_100
value: 0.847
- type: precision_at_1000
value: 0.12741666666666668
- type: precision_at_3
value: 10.756166666666669
- type: precision_at_5
value: 7.725416666666667
- type: recall_at_1
value: 17.195166666666665
- type: recall_at_10
value: 35.99083333333334
- type: recall_at_100
value: 57.467999999999996
- type: recall_at_1000
value: 78.82366666666667
- type: recall_at_3
value: 25.898499999999995
- type: recall_at_5
value: 30.084333333333333
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 16.779
- type: map_at_10
value: 21.557000000000002
- type: map_at_100
value: 22.338
- type: map_at_1000
value: 22.421
- type: map_at_3
value: 19.939
- type: map_at_5
value: 20.903
- type: mrr_at_1
value: 18.404999999999998
- type: mrr_at_10
value: 23.435
- type: mrr_at_100
value: 24.179000000000002
- type: mrr_at_1000
value: 24.25
- type: mrr_at_3
value: 21.907
- type: mrr_at_5
value: 22.781000000000002
- type: ndcg_at_1
value: 18.404999999999998
- type: ndcg_at_10
value: 24.515
- type: ndcg_at_100
value: 28.721000000000004
- type: ndcg_at_1000
value: 31.259999999999998
- type: ndcg_at_3
value: 21.508
- type: ndcg_at_5
value: 23.01
- type: precision_at_1
value: 18.404999999999998
- type: precision_at_10
value: 3.834
- type: precision_at_100
value: 0.641
- type: precision_at_1000
value: 0.093
- type: precision_at_3
value: 9.151
- type: precision_at_5
value: 6.503
- type: recall_at_1
value: 16.779
- type: recall_at_10
value: 31.730000000000004
- type: recall_at_100
value: 51.673
- type: recall_at_1000
value: 71.17599999999999
- type: recall_at_3
value: 23.518
- type: recall_at_5
value: 27.230999999999998
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 9.279
- type: map_at_10
value: 13.822000000000001
- type: map_at_100
value: 14.533
- type: map_at_1000
value: 14.649999999999999
- type: map_at_3
value: 12.396
- type: map_at_5
value: 13.214
- type: mrr_at_1
value: 11.149000000000001
- type: mrr_at_10
value: 16.139
- type: mrr_at_100
value: 16.872
- type: mrr_at_1000
value: 16.964000000000002
- type: mrr_at_3
value: 14.613000000000001
- type: mrr_at_5
value: 15.486
- type: ndcg_at_1
value: 11.149000000000001
- type: ndcg_at_10
value: 16.82
- type: ndcg_at_100
value: 20.73
- type: ndcg_at_1000
value: 23.894000000000002
- type: ndcg_at_3
value: 14.11
- type: ndcg_at_5
value: 15.404000000000002
- type: precision_at_1
value: 11.149000000000001
- type: precision_at_10
value: 3.063
- type: precision_at_100
value: 0.587
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 6.699
- type: precision_at_5
value: 4.928
- type: recall_at_1
value: 9.279
- type: recall_at_10
value: 23.745
- type: recall_at_100
value: 41.873
- type: recall_at_1000
value: 64.982
- type: recall_at_3
value: 16.152
- type: recall_at_5
value: 19.409000000000002
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 16.36
- type: map_at_10
value: 21.927
- type: map_at_100
value: 22.889
- type: map_at_1000
value: 22.994
- type: map_at_3
value: 20.433
- type: map_at_5
value: 21.337
- type: mrr_at_1
value: 18.75
- type: mrr_at_10
value: 24.859
- type: mrr_at_100
value: 25.746999999999996
- type: mrr_at_1000
value: 25.829
- type: mrr_at_3
value: 23.383000000000003
- type: mrr_at_5
value: 24.297
- type: ndcg_at_1
value: 18.75
- type: ndcg_at_10
value: 25.372
- type: ndcg_at_100
value: 30.342999999999996
- type: ndcg_at_1000
value: 33.286
- type: ndcg_at_3
value: 22.627
- type: ndcg_at_5
value: 24.04
- type: precision_at_1
value: 18.75
- type: precision_at_10
value: 4.1419999999999995
- type: precision_at_100
value: 0.738
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 10.261000000000001
- type: precision_at_5
value: 7.164
- type: recall_at_1
value: 16.36
- type: recall_at_10
value: 32.949
- type: recall_at_100
value: 55.552
- type: recall_at_1000
value: 77.09899999999999
- type: recall_at_3
value: 25.538
- type: recall_at_5
value: 29.008
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 17.39
- type: map_at_10
value: 23.058
- type: map_at_100
value: 24.445
- type: map_at_1000
value: 24.637999999999998
- type: map_at_3
value: 21.037
- type: map_at_5
value: 21.966
- type: mrr_at_1
value: 19.96
- type: mrr_at_10
value: 26.301000000000002
- type: mrr_at_100
value: 27.297
- type: mrr_at_1000
value: 27.375
- type: mrr_at_3
value: 24.340999999999998
- type: mrr_at_5
value: 25.339
- type: ndcg_at_1
value: 19.96
- type: ndcg_at_10
value: 27.249000000000002
- type: ndcg_at_100
value: 32.997
- type: ndcg_at_1000
value: 36.359
- type: ndcg_at_3
value: 23.519000000000002
- type: ndcg_at_5
value: 24.915000000000003
- type: precision_at_1
value: 19.96
- type: precision_at_10
value: 5.356000000000001
- type: precision_at_100
value: 1.198
- type: precision_at_1000
value: 0.20400000000000001
- type: precision_at_3
value: 10.738
- type: precision_at_5
value: 7.904999999999999
- type: recall_at_1
value: 17.39
- type: recall_at_10
value: 35.254999999999995
- type: recall_at_100
value: 61.351
- type: recall_at_1000
value: 84.395
- type: recall_at_3
value: 25.194
- type: recall_at_5
value: 28.546
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 14.238999999999999
- type: map_at_10
value: 19.323
- type: map_at_100
value: 19.994
- type: map_at_1000
value: 20.102999999999998
- type: map_at_3
value: 17.631
- type: map_at_5
value: 18.401
- type: mrr_at_1
value: 15.157000000000002
- type: mrr_at_10
value: 20.578
- type: mrr_at_100
value: 21.252
- type: mrr_at_1000
value: 21.346999999999998
- type: mrr_at_3
value: 18.762
- type: mrr_at_5
value: 19.713
- type: ndcg_at_1
value: 15.157000000000002
- type: ndcg_at_10
value: 22.468
- type: ndcg_at_100
value: 26.245
- type: ndcg_at_1000
value: 29.534
- type: ndcg_at_3
value: 18.981
- type: ndcg_at_5
value: 20.349999999999998
- type: precision_at_1
value: 15.157000000000002
- type: precision_at_10
value: 3.512
- type: precision_at_100
value: 0.577
- type: precision_at_1000
value: 0.091
- type: precision_at_3
value: 8.01
- type: precision_at_5
value: 5.656
- type: recall_at_1
value: 14.238999999999999
- type: recall_at_10
value: 31.038
- type: recall_at_100
value: 49.122
- type: recall_at_1000
value: 74.919
- type: recall_at_3
value: 21.436
- type: recall_at_5
value: 24.692
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: 392b78eb68c07badcd7c2cd8f39af108375dfcce
metrics:
- type: map_at_1
value: 8.828
- type: map_at_10
value: 14.982000000000001
- type: map_at_100
value: 16.495
- type: map_at_1000
value: 16.658
- type: map_at_3
value: 12.366000000000001
- type: map_at_5
value: 13.655000000000001
- type: mrr_at_1
value: 19.088
- type: mrr_at_10
value: 29.29
- type: mrr_at_100
value: 30.291
- type: mrr_at_1000
value: 30.342000000000002
- type: mrr_at_3
value: 25.907000000000004
- type: mrr_at_5
value: 27.840999999999998
- type: ndcg_at_1
value: 19.088
- type: ndcg_at_10
value: 21.858
- type: ndcg_at_100
value: 28.323999999999998
- type: ndcg_at_1000
value: 31.561
- type: ndcg_at_3
value: 17.175
- type: ndcg_at_5
value: 18.869
- type: precision_at_1
value: 19.088
- type: precision_at_10
value: 6.9190000000000005
- type: precision_at_100
value: 1.376
- type: precision_at_1000
value: 0.197
- type: precision_at_3
value: 12.703999999999999
- type: precision_at_5
value: 9.993
- type: recall_at_1
value: 8.828
- type: recall_at_10
value: 27.381
- type: recall_at_100
value: 50.0
- type: recall_at_1000
value: 68.355
- type: recall_at_3
value: 16.118
- type: recall_at_5
value: 20.587
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: f097057d03ed98220bc7309ddb10b71a54d667d6
metrics:
- type: map_at_1
value: 5.586
- type: map_at_10
value: 10.040000000000001
- type: map_at_100
value: 12.55
- type: map_at_1000
value: 13.123999999999999
- type: map_at_3
value: 7.75
- type: map_at_5
value: 8.835999999999999
- type: mrr_at_1
value: 42.25
- type: mrr_at_10
value: 51.205999999999996
- type: mrr_at_100
value: 51.818
- type: mrr_at_1000
value: 51.855
- type: mrr_at_3
value: 48.875
- type: mrr_at_5
value: 50.488
- type: ndcg_at_1
value: 32.25
- type: ndcg_at_10
value: 22.718
- type: ndcg_at_100
value: 24.359
- type: ndcg_at_1000
value: 29.232000000000003
- type: ndcg_at_3
value: 25.974000000000004
- type: ndcg_at_5
value: 24.291999999999998
- type: precision_at_1
value: 42.25
- type: precision_at_10
value: 17.75
- type: precision_at_100
value: 5.032
- type: precision_at_1000
value: 1.117
- type: precision_at_3
value: 28.833
- type: precision_at_5
value: 24.25
- type: recall_at_1
value: 5.586
- type: recall_at_10
value: 14.16
- type: recall_at_100
value: 28.051
- type: recall_at_1000
value: 45.157000000000004
- type: recall_at_3
value: 8.758000000000001
- type: recall_at_5
value: 10.975999999999999
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 829147f8f75a25f005913200eb5ed41fae320aa1
metrics:
- type: accuracy
value: 39.075
- type: f1
value: 35.01420354708222
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: 1429cf27e393599b8b359b9b72c666f96b2525f9
metrics:
- type: map_at_1
value: 43.519999999999996
- type: map_at_10
value: 54.368
- type: map_at_100
value: 54.918
- type: map_at_1000
value: 54.942
- type: map_at_3
value: 51.712
- type: map_at_5
value: 53.33599999999999
- type: mrr_at_1
value: 46.955000000000005
- type: mrr_at_10
value: 58.219
- type: mrr_at_100
value: 58.73500000000001
- type: mrr_at_1000
value: 58.753
- type: mrr_at_3
value: 55.518
- type: mrr_at_5
value: 57.191
- type: ndcg_at_1
value: 46.955000000000005
- type: ndcg_at_10
value: 60.45
- type: ndcg_at_100
value: 63.047
- type: ndcg_at_1000
value: 63.712999999999994
- type: ndcg_at_3
value: 55.233
- type: ndcg_at_5
value: 58.072
- type: precision_at_1
value: 46.955000000000005
- type: precision_at_10
value: 8.267
- type: precision_at_100
value: 0.962
- type: precision_at_1000
value: 0.10300000000000001
- type: precision_at_3
value: 22.326999999999998
- type: precision_at_5
value: 14.940999999999999
- type: recall_at_1
value: 43.519999999999996
- type: recall_at_10
value: 75.632
- type: recall_at_100
value: 87.41600000000001
- type: recall_at_1000
value: 92.557
- type: recall_at_3
value: 61.597
- type: recall_at_5
value: 68.518
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: 41b686a7f28c59bcaaa5791efd47c67c8ebe28be
metrics:
- type: map_at_1
value: 9.549000000000001
- type: map_at_10
value: 15.762
- type: map_at_100
value: 17.142
- type: map_at_1000
value: 17.329
- type: map_at_3
value: 13.575000000000001
- type: map_at_5
value: 14.754000000000001
- type: mrr_at_1
value: 19.753
- type: mrr_at_10
value: 26.568
- type: mrr_at_100
value: 27.606
- type: mrr_at_1000
value: 27.68
- type: mrr_at_3
value: 24.203
- type: mrr_at_5
value: 25.668999999999997
- type: ndcg_at_1
value: 19.753
- type: ndcg_at_10
value: 21.118000000000002
- type: ndcg_at_100
value: 27.308
- type: ndcg_at_1000
value: 31.304
- type: ndcg_at_3
value: 18.319
- type: ndcg_at_5
value: 19.414
- type: precision_at_1
value: 19.753
- type: precision_at_10
value: 6.08
- type: precision_at_100
value: 1.204
- type: precision_at_1000
value: 0.192
- type: precision_at_3
value: 12.191
- type: precision_at_5
value: 9.383
- type: recall_at_1
value: 9.549000000000001
- type: recall_at_10
value: 26.131
- type: recall_at_100
value: 50.544999999999995
- type: recall_at_1000
value: 74.968
- type: recall_at_3
value: 16.951
- type: recall_at_5
value: 20.95
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: 766870b35a1b9ca65e67a0d1913899973551fc6c
metrics:
- type: map_at_1
value: 25.544
- type: map_at_10
value: 32.62
- type: map_at_100
value: 33.275
- type: map_at_1000
value: 33.344
- type: map_at_3
value: 30.851
- type: map_at_5
value: 31.868999999999996
- type: mrr_at_1
value: 51.087
- type: mrr_at_10
value: 57.704
- type: mrr_at_100
value: 58.175
- type: mrr_at_1000
value: 58.207
- type: mrr_at_3
value: 56.106
- type: mrr_at_5
value: 57.074000000000005
- type: ndcg_at_1
value: 51.087
- type: ndcg_at_10
value: 40.876000000000005
- type: ndcg_at_100
value: 43.762
- type: ndcg_at_1000
value: 45.423
- type: ndcg_at_3
value: 37.65
- type: ndcg_at_5
value: 39.305
- type: precision_at_1
value: 51.087
- type: precision_at_10
value: 8.304
- type: precision_at_100
value: 1.059
- type: precision_at_1000
value: 0.128
- type: precision_at_3
value: 22.875999999999998
- type: precision_at_5
value: 15.033
- type: recall_at_1
value: 25.544
- type: recall_at_10
value: 41.519
- type: recall_at_100
value: 52.957
- type: recall_at_1000
value: 64.132
- type: recall_at_3
value: 34.315
- type: recall_at_5
value: 37.583
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 8d743909f834c38949e8323a8a6ce8721ea6c7f4
metrics:
- type: accuracy
value: 58.6696
- type: ap
value: 55.3644880984279
- type: f1
value: 58.07942097405652
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: validation
revision: e6838a846e2408f22cf5cc337ebc83e0bcf77849
metrics:
- type: map_at_1
value: 14.442
- type: map_at_10
value: 22.932
- type: map_at_100
value: 24.132
- type: map_at_1000
value: 24.213
- type: map_at_3
value: 20.002
- type: map_at_5
value: 21.636
- type: mrr_at_1
value: 14.841999999999999
- type: mrr_at_10
value: 23.416
- type: mrr_at_100
value: 24.593999999999998
- type: mrr_at_1000
value: 24.669
- type: mrr_at_3
value: 20.494
- type: mrr_at_5
value: 22.14
- type: ndcg_at_1
value: 14.841999999999999
- type: ndcg_at_10
value: 27.975
- type: ndcg_at_100
value: 34.143
- type: ndcg_at_1000
value: 36.370000000000005
- type: ndcg_at_3
value: 21.944
- type: ndcg_at_5
value: 24.881
- type: precision_at_1
value: 14.841999999999999
- type: precision_at_10
value: 4.537
- type: precision_at_100
value: 0.767
- type: precision_at_1000
value: 0.096
- type: precision_at_3
value: 9.322
- type: precision_at_5
value: 7.074
- type: recall_at_1
value: 14.442
- type: recall_at_10
value: 43.557
- type: recall_at_100
value: 72.904
- type: recall_at_1000
value: 90.40700000000001
- type: recall_at_3
value: 27.088
- type: recall_at_5
value: 34.144000000000005
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 86.95622435020519
- type: f1
value: 86.58363130708494
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (de)
config: de
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 62.73034657650043
- type: f1
value: 60.78623915840713
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (es)
config: es
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 67.54503002001334
- type: f1
value: 65.34879794116112
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (fr)
config: fr
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 65.35233322893829
- type: f1
value: 62.994001882446646
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (hi)
config: hi
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 45.37110075295806
- type: f1
value: 44.26285860740745
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (th)
config: th
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 55.276672694394215
- type: f1
value: 53.28388179869587
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 62.25262197902417
- type: f1
value: 43.44084037148853
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (de)
config: de
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 49.56043956043956
- type: f1
value: 32.86333673498598
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (es)
config: es
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 49.93995997331555
- type: f1
value: 34.726671876888126
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (fr)
config: fr
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 46.32947071719386
- type: f1
value: 32.325273615982795
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (hi)
config: hi
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 32.208676945141626
- type: f1
value: 21.32185122815139
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (th)
config: th
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 43.627486437613015
- type: f1
value: 27.04872922347508
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (af)
config: af
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 40.548083389374575
- type: f1
value: 39.490307545239716
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (am)
config: am
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 24.18291862811029
- type: f1
value: 23.437620034727473
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ar)
config: ar
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 30.134498991257562
- type: f1
value: 28.787175191531283
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (az)
config: az
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 35.88433086751849
- type: f1
value: 36.264500398782126
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (bn)
config: bn
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 29.17283120376597
- type: f1
value: 27.8101616531901
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (cy)
config: cy
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 41.788836583725626
- type: f1
value: 39.71413181054801
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (da)
config: da
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 44.176193678547406
- type: f1
value: 42.192499826552286
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (de)
config: de
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 42.07464694014795
- type: f1
value: 39.44188259183162
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (el)
config: el
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 36.254203093476804
- type: f1
value: 34.46592715936761
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 61.40887693342301
- type: f1
value: 59.79854802683996
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (es)
config: es
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 42.679892400807
- type: f1
value: 42.04801248338172
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (fa)
config: fa
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 35.59179556153329
- type: f1
value: 34.045862930486166
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (fi)
config: fi
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 40.036987222595826
- type: f1
value: 38.117703439362785
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (fr)
config: fr
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 43.43981170141224
- type: f1
value: 42.7084388987865
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (he)
config: he
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 31.593813046402154
- type: f1
value: 29.98550522450782
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (hi)
config: hi
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 27.044384667114997
- type: f1
value: 27.313059184832667
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (hu)
config: hu
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 38.453261600538
- type: f1
value: 37.309189326110435
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (hy)
config: hy
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 27.979152656355076
- type: f1
value: 27.430939684346445
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (id)
config: id
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 43.97108271687963
- type: f1
value: 43.40585705688761
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (is)
config: is
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 40.302622730329524
- type: f1
value: 39.108052180520744
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (it)
config: it
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 45.474108944182916
- type: f1
value: 45.85950328241134
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ja)
config: ja
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 45.60860793544048
- type: f1
value: 43.94920708216737
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (jv)
config: jv
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 38.668459986550104
- type: f1
value: 37.6990034018859
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ka)
config: ka
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 25.6523201075992
- type: f1
value: 25.279084273189582
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (km)
config: km
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 28.295225285810353
- type: f1
value: 26.645825638771548
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (kn)
config: kn
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 23.480161398789505
- type: f1
value: 22.275241866506732
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ko)
config: ko
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 36.55682582380632
- type: f1
value: 36.004753171063605
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (lv)
config: lv
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 41.84936112979153
- type: f1
value: 41.38932672359119
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ml)
config: ml
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 24.90921318090114
- type: f1
value: 23.968687483768807
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (mn)
config: mn
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 29.86213853396099
- type: f1
value: 29.977152075255407
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ms)
config: ms
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 42.42098184263618
- type: f1
value: 41.50877432664628
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (my)
config: my
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 25.131136516476126
- type: f1
value: 23.938932214086776
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (nb)
config: nb
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 39.81506388702084
- type: f1
value: 38.809586587791664
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (nl)
config: nl
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 43.62138533960995
- type: f1
value: 42.01386842914633
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (pl)
config: pl
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 42.19569603227976
- type: f1
value: 40.00556559825827
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (pt)
config: pt
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 45.20847343644923
- type: f1
value: 44.24115005029051
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ro)
config: ro
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 41.80901143241426
- type: f1
value: 40.474074848670085
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ru)
config: ru
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 35.96839273705447
- type: f1
value: 35.095456843621
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (sl)
config: sl
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 40.60524546065905
- type: f1
value: 39.302383051500136
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (sq)
config: sq
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 42.75722932078009
- type: f1
value: 41.53763931497389
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (sv)
config: sv
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 42.347007397444514
- type: f1
value: 41.04366017948627
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (sw)
config: sw
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 41.12306657700067
- type: f1
value: 39.712940473289024
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ta)
config: ta
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 24.603227975790183
- type: f1
value: 23.969236788828606
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (te)
config: te
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 25.03698722259583
- type: f1
value: 24.37196123281459
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (th)
config: th
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 35.40013449899126
- type: f1
value: 35.063600413688036
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (tl)
config: tl
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 41.19031607262945
- type: f1
value: 40.240432304273014
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (tr)
config: tr
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 36.405514458641555
- type: f1
value: 36.03844992856558
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ur)
config: ur
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 25.934767989240076
- type: f1
value: 25.2074457023531
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (vi)
config: vi
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 38.79959650302622
- type: f1
value: 37.160233794673125
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (zh-CN)
config: zh-CN
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 46.244115669132476
- type: f1
value: 44.367480561291906
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (zh-TW)
config: zh-TW
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 42.30665770006724
- type: f1
value: 41.9642223283514
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (af)
config: af
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 43.2481506388702
- type: f1
value: 40.924230769590785
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (am)
config: am
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 25.30262273032952
- type: f1
value: 24.937105830264066
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ar)
config: ar
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 32.07128446536651
- type: f1
value: 31.80245816594883
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (az)
config: az
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 36.681237390719566
- type: f1
value: 36.37219042508338
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (bn)
config: bn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 29.56624075319435
- type: f1
value: 28.386042056362758
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (cy)
config: cy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 42.1049092131809
- type: f1
value: 38.926150886991294
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (da)
config: da
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 45.44384667114997
- type: f1
value: 42.578252395460005
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (de)
config: de
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 43.211163416274374
- type: f1
value: 41.04465858304789
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (el)
config: el
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 36.503026227303295
- type: f1
value: 34.49785095312759
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.73772696704773
- type: f1
value: 69.21759502909043
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (es)
config: es
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 44.078681909885674
- type: f1
value: 43.05914426901129
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (fa)
config: fa
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 32.61264290517821
- type: f1
value: 32.02463177462754
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (fi)
config: fi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 40.35642232683255
- type: f1
value: 38.13642481807678
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (fr)
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 45.06724949562878
- type: f1
value: 43.19827608343738
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (he)
config: he
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 32.178883658372555
- type: f1
value: 29.979761884698775
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (hi)
config: hi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 26.903160726294555
- type: f1
value: 25.833010434083363
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (hu)
config: hu
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 40.379959650302624
- type: f1
value: 37.93134355292882
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (hy)
config: hy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 28.375924680564896
- type: f1
value: 26.96255693013172
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (id)
config: id
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 44.361129791526565
- type: f1
value: 43.54445012295126
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (is)
config: is
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 39.290517821116346
- type: f1
value: 37.26982052174147
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (it)
config: it
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 46.4694014794889
- type: f1
value: 44.060986162841566
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ja)
config: ja
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 46.25756556825824
- type: f1
value: 45.625139456758816
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (jv)
config: jv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 41.12642905178212
- type: f1
value: 39.54392378396527
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ka)
config: ka
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 24.72763954270343
- type: f1
value: 23.337743140804484
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (km)
config: km
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 29.741089441829182
- type: f1
value: 27.570876190083748
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (kn)
config: kn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 23.850033624747816
- type: f1
value: 22.86733484540032
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ko)
config: ko
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 36.56691324815064
- type: f1
value: 35.504081677134565
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (lv)
config: lv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 40.928043039677206
- type: f1
value: 39.108589131211254
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ml)
config: ml
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 25.527908540685946
- type: f1
value: 25.333391622280477
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (mn)
config: mn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 29.105581708137183
- type: f1
value: 28.478235012692814
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ms)
config: ms
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 43.78614660390047
- type: f1
value: 41.9640143926267
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (my)
config: my
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 27.269670477471415
- type: f1
value: 26.228386764141852
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (nb)
config: nb
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 39.018157363819775
- type: f1
value: 37.641949339321854
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (nl)
config: nl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 45.35978480161399
- type: f1
value: 42.6851176096831
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (pl)
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 41.89307330195023
- type: f1
value: 40.888710642615024
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (pt)
config: pt
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 45.901143241425686
- type: f1
value: 44.496942353920545
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ro)
config: ro
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 44.11566913248151
- type: f1
value: 41.953945105870616
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ru)
config: ru
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 32.76395427034297
- type: f1
value: 31.436372571600934
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (sl)
config: sl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 40.504371217215876
- type: f1
value: 39.322752749628165
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (sq)
config: sq
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 42.51849361129792
- type: f1
value: 41.4139297118463
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (sv)
config: sv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 42.293207800941495
- type: f1
value: 40.50409536806683
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (sw)
config: sw
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 42.9993275050437
- type: f1
value: 41.045416224973266
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ta)
config: ta
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 28.32548755884331
- type: f1
value: 27.276841995561867
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (te)
config: te
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 26.593813046402154
- type: f1
value: 25.483878616197586
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (th)
config: th
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 36.788836583725626
- type: f1
value: 34.603932909177686
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (tl)
config: tl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 42.5689307330195
- type: f1
value: 40.924469309079825
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (tr)
config: tr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 37.09482178883658
- type: f1
value: 37.949628822857164
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ur)
config: ur
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 28.836583725622063
- type: f1
value: 27.806558655512344
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (vi)
config: vi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 37.357094821788834
- type: f1
value: 37.507918961038165
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (zh-CN)
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 49.37794216543375
- type: f1
value: 47.20421153697707
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (zh-TW)
config: zh-TW
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 44.42165433759248
- type: f1
value: 44.34741861198931
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: dcefc037ef84348e49b0d29109e891c01067226b
metrics:
- type: v_measure
value: 31.374938993074252
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 3cd0e71dfbe09d4de0f9e5ecba43e7ce280959dc
metrics:
- type: v_measure
value: 26.871455379644093
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.402396942935333
- type: mrr
value: 31.42600938803256
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: 7eb63cc0c1eb59324d709ebed25fcab851fa7610
metrics:
- type: map_at_1
value: 3.7740000000000005
- type: map_at_10
value: 7.614999999999999
- type: map_at_100
value: 9.574
- type: map_at_1000
value: 10.711
- type: map_at_3
value: 5.7540000000000004
- type: map_at_5
value: 6.6659999999999995
- type: mrr_at_1
value: 33.127
- type: mrr_at_10
value: 40.351
- type: mrr_at_100
value: 41.144
- type: mrr_at_1000
value: 41.202
- type: mrr_at_3
value: 38.029
- type: mrr_at_5
value: 39.190000000000005
- type: ndcg_at_1
value: 31.579
- type: ndcg_at_10
value: 22.792
- type: ndcg_at_100
value: 21.698999999999998
- type: ndcg_at_1000
value: 30.892999999999997
- type: ndcg_at_3
value: 26.828999999999997
- type: ndcg_at_5
value: 25.119000000000003
- type: precision_at_1
value: 33.127
- type: precision_at_10
value: 16.718
- type: precision_at_100
value: 5.7090000000000005
- type: precision_at_1000
value: 1.836
- type: precision_at_3
value: 24.768
- type: precision_at_5
value: 21.3
- type: recall_at_1
value: 3.7740000000000005
- type: recall_at_10
value: 10.302999999999999
- type: recall_at_100
value: 23.013
- type: recall_at_1000
value: 54.864999999999995
- type: recall_at_3
value: 6.554
- type: recall_at_5
value: 8.087
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: 6062aefc120bfe8ece5897809fb2e53bfe0d128c
metrics:
- type: map_at_1
value: 15.620999999999999
- type: map_at_10
value: 24.519
- type: map_at_100
value: 25.586
- type: map_at_1000
value: 25.662000000000003
- type: map_at_3
value: 21.619
- type: map_at_5
value: 23.232
- type: mrr_at_1
value: 17.497
- type: mrr_at_10
value: 26.301000000000002
- type: mrr_at_100
value: 27.235
- type: mrr_at_1000
value: 27.297
- type: mrr_at_3
value: 23.561
- type: mrr_at_5
value: 25.111
- type: ndcg_at_1
value: 17.497
- type: ndcg_at_10
value: 29.725
- type: ndcg_at_100
value: 34.824
- type: ndcg_at_1000
value: 36.907000000000004
- type: ndcg_at_3
value: 23.946
- type: ndcg_at_5
value: 26.739
- type: precision_at_1
value: 17.497
- type: precision_at_10
value: 5.2170000000000005
- type: precision_at_100
value: 0.8099999999999999
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 11.114
- type: precision_at_5
value: 8.285
- type: recall_at_1
value: 15.620999999999999
- type: recall_at_10
value: 43.999
- type: recall_at_100
value: 67.183
- type: recall_at_1000
value: 83.174
- type: recall_at_3
value: 28.720000000000002
- type: recall_at_5
value: 35.154
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: 6205996560df11e3a3da9ab4f926788fc30a7db4
metrics:
- type: map_at_1
value: 54.717000000000006
- type: map_at_10
value: 67.514
- type: map_at_100
value: 68.484
- type: map_at_1000
value: 68.523
- type: map_at_3
value: 64.169
- type: map_at_5
value: 66.054
- type: mrr_at_1
value: 62.46000000000001
- type: mrr_at_10
value: 71.503
- type: mrr_at_100
value: 71.91499999999999
- type: mrr_at_1000
value: 71.923
- type: mrr_at_3
value: 69.46799999999999
- type: mrr_at_5
value: 70.677
- type: ndcg_at_1
value: 62.480000000000004
- type: ndcg_at_10
value: 72.98
- type: ndcg_at_100
value: 76.023
- type: ndcg_at_1000
value: 76.512
- type: ndcg_at_3
value: 68.138
- type: ndcg_at_5
value: 70.458
- type: precision_at_1
value: 62.480000000000004
- type: precision_at_10
value: 11.373
- type: precision_at_100
value: 1.437
- type: precision_at_1000
value: 0.154
- type: precision_at_3
value: 29.622999999999998
- type: precision_at_5
value: 19.918
- type: recall_at_1
value: 54.717000000000006
- type: recall_at_10
value: 84.745
- type: recall_at_100
value: 96.528
- type: recall_at_1000
value: 99.39
- type: recall_at_3
value: 71.60600000000001
- type: recall_at_5
value: 77.511
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: b2805658ae38990172679479369a78b86de8c390
metrics:
- type: v_measure
value: 40.23390747226228
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
metrics:
- type: v_measure
value: 49.090518272935626
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: 5c59ef3e437a0a9651c8fe6fde943e7dce59fba5
metrics:
- type: map_at_1
value: 3.028
- type: map_at_10
value: 6.968000000000001
- type: map_at_100
value: 8.200000000000001
- type: map_at_1000
value: 8.432
- type: map_at_3
value: 5.3069999999999995
- type: map_at_5
value: 6.099
- type: mrr_at_1
value: 14.799999999999999
- type: mrr_at_10
value: 22.425
- type: mrr_at_100
value: 23.577
- type: mrr_at_1000
value: 23.669999999999998
- type: mrr_at_3
value: 20.233
- type: mrr_at_5
value: 21.318
- type: ndcg_at_1
value: 14.799999999999999
- type: ndcg_at_10
value: 12.206
- type: ndcg_at_100
value: 17.799
- type: ndcg_at_1000
value: 22.891000000000002
- type: ndcg_at_3
value: 12.128
- type: ndcg_at_5
value: 10.212
- type: precision_at_1
value: 14.799999999999999
- type: precision_at_10
value: 6.17
- type: precision_at_100
value: 1.428
- type: precision_at_1000
value: 0.266
- type: precision_at_3
value: 11.333
- type: precision_at_5
value: 8.74
- type: recall_at_1
value: 3.028
- type: recall_at_10
value: 12.522
- type: recall_at_100
value: 28.975
- type: recall_at_1000
value: 54.038
- type: recall_at_3
value: 6.912999999999999
- type: recall_at_5
value: 8.883000000000001
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cos_sim_pearson
value: 76.62983928119752
- type: cos_sim_spearman
value: 65.92910683118656
- type: euclidean_pearson
value: 71.10290039690963
- type: euclidean_spearman
value: 64.80076622426652
- type: manhattan_pearson
value: 70.8944726230188
- type: manhattan_spearman
value: 64.75082576033986
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: fdf84275bb8ce4b49c971d02e84dd1abc677a50f
metrics:
- type: cos_sim_pearson
value: 74.42679147085553
- type: cos_sim_spearman
value: 66.52980061546658
- type: euclidean_pearson
value: 74.87039477408763
- type: euclidean_spearman
value: 70.63397666902786
- type: manhattan_pearson
value: 74.97015137513088
- type: manhattan_spearman
value: 70.75951355434326
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 1591bfcbe8c69d4bf7fe2a16e2451017832cafb9
metrics:
- type: cos_sim_pearson
value: 75.62472426599543
- type: cos_sim_spearman
value: 76.1662886374236
- type: euclidean_pearson
value: 76.3297128081315
- type: euclidean_spearman
value: 77.19385151966563
- type: manhattan_pearson
value: 76.50363291423257
- type: manhattan_spearman
value: 77.37081896355399
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: e2125984e7df8b7871f6ae9949cf6b6795e7c54b
metrics:
- type: cos_sim_pearson
value: 74.48227705407035
- type: cos_sim_spearman
value: 69.04572664009687
- type: euclidean_pearson
value: 71.76138185714849
- type: euclidean_spearman
value: 68.93415452043307
- type: manhattan_pearson
value: 71.68010915543306
- type: manhattan_spearman
value: 68.99176321262806
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: 1cd7298cac12a96a373b6a2f18738bb3e739a9b6
metrics:
- type: cos_sim_pearson
value: 78.1566527175902
- type: cos_sim_spearman
value: 79.23677712825851
- type: euclidean_pearson
value: 76.29138438696417
- type: euclidean_spearman
value: 77.20108266215374
- type: manhattan_pearson
value: 76.27464935799118
- type: manhattan_spearman
value: 77.15286174478099
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 360a0b2dff98700d09e634a01e1cc1624d3e42cd
metrics:
- type: cos_sim_pearson
value: 75.068454465977
- type: cos_sim_spearman
value: 76.06792422441929
- type: euclidean_pearson
value: 70.64605440627699
- type: euclidean_spearman
value: 70.21776051117844
- type: manhattan_pearson
value: 70.32479295054918
- type: manhattan_spearman
value: 69.89782458638528
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (ko-ko)
config: ko-ko
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 39.43327289939437
- type: cos_sim_spearman
value: 52.386010275505654
- type: euclidean_pearson
value: 46.40999904885745
- type: euclidean_spearman
value: 51.00333465175934
- type: manhattan_pearson
value: 46.55753533133655
- type: manhattan_spearman
value: 51.07550440519388
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (ar-ar)
config: ar-ar
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 55.54431928210687
- type: cos_sim_spearman
value: 55.61674586076298
- type: euclidean_pearson
value: 58.07442713714088
- type: euclidean_spearman
value: 55.74066216931719
- type: manhattan_pearson
value: 57.84021675638542
- type: manhattan_spearman
value: 55.20365812536853
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-ar)
config: en-ar
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 11.378463868809098
- type: cos_sim_spearman
value: 8.209569244801065
- type: euclidean_pearson
value: 1.07041700730406
- type: euclidean_spearman
value: 2.2052197108931892
- type: manhattan_pearson
value: 0.7671300251104268
- type: manhattan_spearman
value: 3.430645020535567
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-de)
config: en-de
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 32.71403560929013
- type: cos_sim_spearman
value: 30.18181775929109
- type: euclidean_pearson
value: 25.57368595910298
- type: euclidean_spearman
value: 23.316649115731376
- type: manhattan_pearson
value: 24.144200325329614
- type: manhattan_spearman
value: 21.64621546338457
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 83.36340470799158
- type: cos_sim_spearman
value: 84.95398260629699
- type: euclidean_pearson
value: 80.69876969911644
- type: euclidean_spearman
value: 80.97451731130427
- type: manhattan_pearson
value: 80.65869354146945
- type: manhattan_spearman
value: 80.8540858718528
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-tr)
config: en-tr
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 1.9200044163754912
- type: cos_sim_spearman
value: 1.0393399782021342
- type: euclidean_pearson
value: 1.1376003191297994
- type: euclidean_spearman
value: 1.8947106671763914
- type: manhattan_pearson
value: 3.8362564474484335
- type: manhattan_spearman
value: 4.242750882792888
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (es-en)
config: es-en
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 26.561262451099577
- type: cos_sim_spearman
value: 28.776666666659906
- type: euclidean_pearson
value: 14.640410196999088
- type: euclidean_spearman
value: 16.10557011701786
- type: manhattan_pearson
value: 15.019405495911272
- type: manhattan_spearman
value: 15.37192083104197
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (es-es)
config: es-es
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 69.7544202001433
- type: cos_sim_spearman
value: 71.88444295144646
- type: euclidean_pearson
value: 73.84934185952773
- type: euclidean_spearman
value: 73.26911108021089
- type: manhattan_pearson
value: 74.04354196954574
- type: manhattan_spearman
value: 73.37650787943872
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (fr-en)
config: fr-en
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 27.70511842301491
- type: cos_sim_spearman
value: 26.339466714066447
- type: euclidean_pearson
value: 9.323158236506385
- type: euclidean_spearman
value: 7.32083231520273
- type: manhattan_pearson
value: 7.807399527573071
- type: manhattan_spearman
value: 5.525546663067113
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (it-en)
config: it-en
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 24.226521799447692
- type: cos_sim_spearman
value: 20.72992940458968
- type: euclidean_pearson
value: 6.753378617205011
- type: euclidean_spearman
value: 6.281654679029505
- type: manhattan_pearson
value: 7.087180250449323
- type: manhattan_spearman
value: 6.41611659259516
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (nl-en)
config: nl-en
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 29.131412364061234
- type: cos_sim_spearman
value: 25.053429612793547
- type: euclidean_pearson
value: 10.657141303962
- type: euclidean_spearman
value: 9.712124819778452
- type: manhattan_pearson
value: 12.481782693315688
- type: manhattan_spearman
value: 11.287958480905973
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 64.04750650962879
- type: cos_sim_spearman
value: 65.66183708171826
- type: euclidean_pearson
value: 66.90887604405887
- type: euclidean_spearman
value: 66.89814072484552
- type: manhattan_pearson
value: 67.31627110509089
- type: manhattan_spearman
value: 67.01048176165322
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (de)
config: de
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 19.26519187000913
- type: cos_sim_spearman
value: 21.987647321429005
- type: euclidean_pearson
value: 17.850618752342946
- type: euclidean_spearman
value: 22.86669392885474
- type: manhattan_pearson
value: 18.16183594260708
- type: manhattan_spearman
value: 23.637510352837907
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (es)
config: es
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 34.221261828226936
- type: cos_sim_spearman
value: 49.811823238907664
- type: euclidean_pearson
value: 44.50394399762147
- type: euclidean_spearman
value: 50.959184495072876
- type: manhattan_pearson
value: 45.83191034038624
- type: manhattan_spearman
value: 50.190409866117946
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (pl)
config: pl
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 3.620381732096531
- type: cos_sim_spearman
value: 23.30843951799194
- type: euclidean_pearson
value: 0.965453312113125
- type: euclidean_spearman
value: 24.235967620790316
- type: manhattan_pearson
value: 1.4408922275701606
- type: manhattan_spearman
value: 25.161920137046096
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (tr)
config: tr
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 16.69489628726267
- type: cos_sim_spearman
value: 34.66348380997687
- type: euclidean_pearson
value: 29.415825529188606
- type: euclidean_spearman
value: 38.33011033170646
- type: manhattan_pearson
value: 31.23273195263394
- type: manhattan_spearman
value: 39.10055785755795
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (ar)
config: ar
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 9.134927430889528
- type: cos_sim_spearman
value: 28.18922448944151
- type: euclidean_pearson
value: 19.86814169549051
- type: euclidean_spearman
value: 27.519588644948627
- type: manhattan_pearson
value: 21.80949221238945
- type: manhattan_spearman
value: 28.25217200494078
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (ru)
config: ru
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 3.6386482942352085
- type: cos_sim_spearman
value: 9.068119621940966
- type: euclidean_pearson
value: 0.8123129118737714
- type: euclidean_spearman
value: 9.173672890166147
- type: manhattan_pearson
value: 0.754518899822658
- type: manhattan_spearman
value: 8.431719541986524
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (zh)
config: zh
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 2.972091574908432
- type: cos_sim_spearman
value: 25.48511383289232
- type: euclidean_pearson
value: 12.751569670148918
- type: euclidean_spearman
value: 24.940721642439286
- type: manhattan_pearson
value: 14.310238482989826
- type: manhattan_spearman
value: 24.69821216148647
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (fr)
config: fr
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 54.4745185734135
- type: cos_sim_spearman
value: 67.66493409568727
- type: euclidean_pearson
value: 60.13580336797049
- type: euclidean_spearman
value: 66.12319300814538
- type: manhattan_pearson
value: 60.816210368708155
- type: manhattan_spearman
value: 65.70010026716766
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (de-en)
config: de-en
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 49.37865412588201
- type: cos_sim_spearman
value: 53.07135629778897
- type: euclidean_pearson
value: 49.29201416711091
- type: euclidean_spearman
value: 50.54523702399645
- type: manhattan_pearson
value: 51.265764141268534
- type: manhattan_spearman
value: 51.979086403193605
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (es-en)
config: es-en
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 44.925652392562135
- type: cos_sim_spearman
value: 49.51253904767726
- type: euclidean_pearson
value: 48.79346518897415
- type: euclidean_spearman
value: 51.47957870101565
- type: manhattan_pearson
value: 49.51314553898044
- type: manhattan_spearman
value: 51.895207893189166
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (it)
config: it
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 45.241690321111875
- type: cos_sim_spearman
value: 48.24795739512037
- type: euclidean_pearson
value: 49.22719494399897
- type: euclidean_spearman
value: 49.64102442042809
- type: manhattan_pearson
value: 49.497887732970256
- type: manhattan_spearman
value: 49.940515338096304
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (pl-en)
config: pl-en
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 36.42138324083909
- type: cos_sim_spearman
value: 36.79867489417801
- type: euclidean_pearson
value: 27.760612942610084
- type: euclidean_spearman
value: 29.140966500287625
- type: manhattan_pearson
value: 28.456674031350115
- type: manhattan_spearman
value: 27.46356370924497
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (zh-en)
config: zh-en
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 26.55350664089358
- type: cos_sim_spearman
value: 28.681707196975008
- type: euclidean_pearson
value: 12.613577889195138
- type: euclidean_spearman
value: 13.589493311702933
- type: manhattan_pearson
value: 11.640157427420958
- type: manhattan_spearman
value: 10.345223941212415
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (es-it)
config: es-it
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 38.54682179114309
- type: cos_sim_spearman
value: 45.782560880405704
- type: euclidean_pearson
value: 46.496857002368486
- type: euclidean_spearman
value: 48.21270426410012
- type: manhattan_pearson
value: 46.871839119374044
- type: manhattan_spearman
value: 47.556987773851525
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (de-fr)
config: de-fr
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 35.12956772546032
- type: cos_sim_spearman
value: 32.96920218281008
- type: euclidean_pearson
value: 34.23140384382136
- type: euclidean_spearman
value: 32.19303153191447
- type: manhattan_pearson
value: 34.189468276600635
- type: manhattan_spearman
value: 34.887065709732376
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (de-pl)
config: de-pl
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 30.507667380509634
- type: cos_sim_spearman
value: 20.447284723752716
- type: euclidean_pearson
value: 29.662041381794474
- type: euclidean_spearman
value: 20.939990379746757
- type: manhattan_pearson
value: 32.5112080506328
- type: manhattan_spearman
value: 23.773047901712495
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (fr-pl)
config: fr-pl
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 71.10820459712156
- type: cos_sim_spearman
value: 61.97797868009122
- type: euclidean_pearson
value: 60.30910689156633
- type: euclidean_spearman
value: 61.97797868009122
- type: manhattan_pearson
value: 66.3405176964038
- type: manhattan_spearman
value: 61.97797868009122
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: 8913289635987208e6e7c72789e4be2fe94b6abd
metrics:
- type: cos_sim_pearson
value: 76.53032504460737
- type: cos_sim_spearman
value: 75.33716094627373
- type: euclidean_pearson
value: 69.64662673290599
- type: euclidean_spearman
value: 67.30188896368857
- type: manhattan_pearson
value: 69.45096082050807
- type: manhattan_spearman
value: 67.0718727259371
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: 56a6d0140cf6356659e2a7c1413286a774468d44
metrics:
- type: map
value: 71.33941904192648
- type: mrr
value: 89.73766429648782
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: a75ae049398addde9b70f6b268875f5cbce99089
metrics:
- type: map_at_1
value: 43.333
- type: map_at_10
value: 52.364
- type: map_at_100
value: 53.184
- type: map_at_1000
value: 53.234
- type: map_at_3
value: 49.832
- type: map_at_5
value: 51.244
- type: mrr_at_1
value: 45.333
- type: mrr_at_10
value: 53.455
- type: mrr_at_100
value: 54.191
- type: mrr_at_1000
value: 54.235
- type: mrr_at_3
value: 51.556000000000004
- type: mrr_at_5
value: 52.622
- type: ndcg_at_1
value: 45.333
- type: ndcg_at_10
value: 56.899
- type: ndcg_at_100
value: 60.702
- type: ndcg_at_1000
value: 62.046
- type: ndcg_at_3
value: 52.451
- type: ndcg_at_5
value: 54.534000000000006
- type: precision_at_1
value: 45.333
- type: precision_at_10
value: 7.8
- type: precision_at_100
value: 0.987
- type: precision_at_1000
value: 0.11
- type: precision_at_3
value: 20.778
- type: precision_at_5
value: 13.866999999999999
- type: recall_at_1
value: 43.333
- type: recall_at_10
value: 69.69999999999999
- type: recall_at_100
value: 86.9
- type: recall_at_1000
value: 97.6
- type: recall_at_3
value: 57.81699999999999
- type: recall_at_5
value: 62.827999999999996
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: 5a8256d0dff9c4bd3be3ba3e67e4e70173f802ea
metrics:
- type: cos_sim_accuracy
value: 99.7
- type: cos_sim_ap
value: 89.88577913120001
- type: cos_sim_f1
value: 84.62694041061593
- type: cos_sim_precision
value: 84.7542627883651
- type: cos_sim_recall
value: 84.5
- type: dot_accuracy
value: 99.24752475247524
- type: dot_ap
value: 56.81855467290009
- type: dot_f1
value: 56.084126189283936
- type: dot_precision
value: 56.16850551654965
- type: dot_recall
value: 56.00000000000001
- type: euclidean_accuracy
value: 99.7059405940594
- type: euclidean_ap
value: 90.12451226491524
- type: euclidean_f1
value: 84.44211629125196
- type: euclidean_precision
value: 88.66886688668868
- type: euclidean_recall
value: 80.60000000000001
- type: manhattan_accuracy
value: 99.7128712871287
- type: manhattan_ap
value: 90.67590584183216
- type: manhattan_f1
value: 84.85436893203884
- type: manhattan_precision
value: 82.45283018867924
- type: manhattan_recall
value: 87.4
- type: max_accuracy
value: 99.7128712871287
- type: max_ap
value: 90.67590584183216
- type: max_f1
value: 84.85436893203884
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 70a89468f6dccacc6aa2b12a6eac54e74328f235
metrics:
- type: v_measure
value: 52.74481093815175
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: d88009ab563dd0b16cfaf4436abaf97fa3550cf0
metrics:
- type: v_measure
value: 32.65999453562101
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: ef807ea29a75ec4f91b50fd4191cb4ee4589a9f9
metrics:
- type: map
value: 44.74498464555465
- type: mrr
value: 45.333879764026825
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: 8753c2788d36c01fc6f05d03fe3f7268d63f9122
metrics:
- type: cos_sim_pearson
    value: 29.603788751645216
- type: cos_sim_spearman
value: 29.705103354786033
- type: dot_pearson
value: 28.07425338095399
- type: dot_spearman
value: 26.841406359135367
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: 2c8041b2c07a79b6f7ba8fe6acc72e5d9f92d217
metrics:
- type: map_at_1
value: 0.241
- type: map_at_10
value: 1.672
- type: map_at_100
value: 7.858999999999999
- type: map_at_1000
value: 17.616
- type: map_at_3
value: 0.631
- type: map_at_5
value: 0.968
- type: mrr_at_1
value: 90.0
- type: mrr_at_10
value: 92.952
- type: mrr_at_100
value: 93.036
- type: mrr_at_1000
value: 93.036
- type: mrr_at_3
value: 92.667
- type: mrr_at_5
value: 92.667
- type: ndcg_at_1
value: 83.0
- type: ndcg_at_10
value: 70.30199999999999
- type: ndcg_at_100
value: 48.149
- type: ndcg_at_1000
value: 40.709
- type: ndcg_at_3
value: 79.173
- type: ndcg_at_5
value: 75.347
- type: precision_at_1
value: 90.0
- type: precision_at_10
value: 72.6
- type: precision_at_100
value: 48.46
- type: precision_at_1000
value: 18.093999999999998
- type: precision_at_3
value: 84.0
- type: precision_at_5
value: 78.8
- type: recall_at_1
value: 0.241
- type: recall_at_10
value: 1.814
- type: recall_at_100
value: 11.141
- type: recall_at_1000
value: 37.708999999999996
- type: recall_at_3
value: 0.647
- type: recall_at_5
value: 1.015
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: 527b7d77e16e343303e68cb6af11d6e18b9f7b3b
metrics:
- type: map_at_1
value: 2.782
- type: map_at_10
value: 9.06
- type: map_at_100
value: 14.571000000000002
- type: map_at_1000
value: 16.006999999999998
- type: map_at_3
value: 5.037
- type: map_at_5
value: 6.63
- type: mrr_at_1
value: 34.694
- type: mrr_at_10
value: 48.243
- type: mrr_at_100
value: 49.065
- type: mrr_at_1000
value: 49.065
- type: mrr_at_3
value: 44.897999999999996
- type: mrr_at_5
value: 46.428999999999995
- type: ndcg_at_1
value: 31.633
- type: ndcg_at_10
value: 22.972
- type: ndcg_at_100
value: 34.777
- type: ndcg_at_1000
value: 45.639
- type: ndcg_at_3
value: 26.398
- type: ndcg_at_5
value: 24.418
- type: precision_at_1
value: 34.694
- type: precision_at_10
value: 19.796
- type: precision_at_100
value: 7.224
- type: precision_at_1000
value: 1.4449999999999998
- type: precision_at_3
value: 26.531
- type: precision_at_5
value: 23.265
- type: recall_at_1
value: 2.782
- type: recall_at_10
value: 14.841
- type: recall_at_100
value: 44.86
- type: recall_at_1000
value: 78.227
- type: recall_at_3
value: 5.959
- type: recall_at_5
value: 8.969000000000001
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 62.657999999999994
- type: ap
value: 10.96353161716344
- type: f1
value: 48.294226423442645
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: 62146448f05be9e52a36b8ee9936447ea787eede
metrics:
- type: accuracy
value: 52.40803621958121
- type: f1
value: 52.61009636022186
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 091a54f9a36281ce7d6590ec8c75dd485e7e01d4
metrics:
- type: v_measure
value: 32.12697126747911
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 80.69976753889253
- type: cos_sim_ap
value: 54.74680676121268
- type: cos_sim_f1
value: 53.18923998590391
- type: cos_sim_precision
value: 47.93563413084904
- type: cos_sim_recall
value: 59.73614775725594
- type: dot_accuracy
value: 79.3348036001669
- type: dot_ap
value: 48.46902128933627
- type: dot_f1
value: 50.480109739369006
- type: dot_precision
value: 42.06084051345173
- type: dot_recall
value: 63.113456464379944
- type: euclidean_accuracy
value: 79.78780473266973
- type: euclidean_ap
value: 50.258327255164815
- type: euclidean_f1
value: 49.655838666827684
- type: euclidean_precision
value: 45.78044978846582
- type: euclidean_recall
value: 54.24802110817942
- type: manhattan_accuracy
value: 79.76992310901831
- type: manhattan_ap
value: 49.89892485714363
- type: manhattan_f1
value: 49.330433787341185
- type: manhattan_precision
value: 43.56175459874672
- type: manhattan_recall
value: 56.86015831134564
- type: max_accuracy
value: 80.69976753889253
- type: max_ap
value: 54.74680676121268
- type: max_f1
value: 53.18923998590391
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 86.90573213800597
- type: cos_sim_ap
value: 81.05760818661524
- type: cos_sim_f1
value: 73.64688856729379
- type: cos_sim_precision
value: 69.46491946491946
- type: cos_sim_recall
value: 78.3646442870342
- type: dot_accuracy
value: 83.80680715644041
- type: dot_ap
value: 72.49774005947461
- type: dot_f1
value: 68.68460650173216
- type: dot_precision
value: 62.954647507858105
- type: dot_recall
value: 75.56205728364644
- type: euclidean_accuracy
value: 85.97430822369697
- type: euclidean_ap
value: 78.86101740829326
- type: euclidean_f1
value: 71.07960824663695
- type: euclidean_precision
value: 70.36897306270279
- type: euclidean_recall
value: 71.8047428395442
- type: manhattan_accuracy
value: 85.94132029339853
- type: manhattan_ap
value: 78.77876711171923
- type: manhattan_f1
value: 71.07869075515912
- type: manhattan_precision
value: 69.80697847067557
- type: manhattan_recall
value: 72.39759778256852
- type: max_accuracy
value: 86.90573213800597
- type: max_ap
value: 81.05760818661524
- type: max_f1
value: 73.64688856729379
---
# SGPT-125M-weightedmean-msmarco-specb-bitfit
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
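A hedged loading sketch follows; the hub id is assumed from this card's title (under the author's namespace), and the specb bracket formatting for queries and documents described in the codebase is omitted for brevity:
```python
# Sketch only: loading the checkpoint with sentence-transformers.
# The repository id is an assumption based on this card's title.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Muennighoff/SGPT-125M-weightedmean-msmarco-specb-bitfit")
embeddings = model.encode(["How do GPT sentence embeddings work?"])
print(embeddings.shape)  # expect a 768-dimensional vector per input
```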
## Evaluation Results
For eval results, refer to the eval folder or our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 15600 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 0.0002
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
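As a hedged illustration, the DataLoader, loss, and `fit()` parameters above can be wired together in sentence-transformers roughly like this; the base model path and training pairs are placeholders, not the original data pipeline:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses, util

model = SentenceTransformer("path/to/base-model")  # placeholder

# Placeholder (query, positive passage) pairs; in-batch negatives come for free
# with MultipleNegativesRankingLoss.
train_examples = [InputExample(texts=["example query", "a relevant passage"])]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)

train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    scheduler="WarmupLinear",
    warmup_steps=1000,
    optimizer_params={"lr": 2e-4},
    weight_decay=0.01,
    max_grad_norm=1,
)
```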
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
|
redponike/Prox-Llama-3-8B-GGUF | redponike | "2024-06-21T06:28:03Z" | 5,240 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-20T15:09:51Z" | GGUF quants of [openvoid/Prox-Llama-3-8B](https://huggingface.co/openvoid/Prox-Llama-3-8B) |
sbintuitions/tiny-lm | sbintuitions | "2024-06-27T09:47:28Z" | 5,235 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"ja",
"en",
"dataset:wikipedia",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-07T13:12:16Z" | ---
license: mit
datasets:
- wikipedia
language:
- ja
- en
---
# tiny-lm
This repository provides a tiny 16M-parameter language model for debugging and testing purposes.
It was trained on English and Japanese Wikipedia data.
## How to use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model = AutoModelForCausalLM.from_pretrained("sbintuitions/tiny-lm", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("sbintuitions/tiny-lm", use_fast=False)
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(generator("Hello", max_length=30, do_sample=True, top_k=100))
```
## Model architecture
A 4-layer, 512-hidden-size transformer-based language model.
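A quick way to confirm that shape is to read it off the config; a small sketch, assuming the standard `transformers` config fields:
```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("sbintuitions/tiny-lm")
# Expect 4 layers and a 512-dimensional hidden size per the description above.
print(config.num_hidden_layers, config.hidden_size)
```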
## Training
The model was trained on English Wikipedia and Japanese Wikipedia to optimize a traditional language modelling objective for 25B tokens.
## License
[MIT License](https://huggingface.co/sbintuitions/tiny-lm/resolve/main/LICENSE)
|
MBZUAI/LaMini-Flan-T5-248M | MBZUAI | "2023-04-28T12:08:23Z" | 5,234 | 62 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"instruction fine-tuning",
"en",
"arxiv:2304.14402",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2023-04-10T17:37:18Z" | ---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
- instruction fine-tuning
model-index:
- name: flan-t5-small-distil-v2
results: []
language:
- en
pipeline_tag: text2text-generation
widget:
- text: >-
how can I become more healthy?
example_title: example
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
<p align="center" width="100%">
<a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini.png" alt="Title" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
</p>
# LaMini-Flan-T5-248M
This model is one of our LaMini-LM model series presented in the paper "[LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions](https://github.com/mbzuai-nlp/lamini-lm)". It is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction), which contains 2.58M samples for instruction fine-tuning. For more information about our dataset, please refer to our [project repository](https://github.com/mbzuai-nlp/lamini-lm/).
You can view the other models in the LaMini-LM series below. Models with ✩ are those with the best overall performance given their size/architecture, hence we recommend using them. More details can be found in our paper.
<table>
<thead>
<tr>
<th>Base model</th>
<th colspan="4">LaMini-LM series (#parameters)</th>
</tr>
</thead>
<tbody>
<tr>
<td>T5</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-t5-61m" target="_blank" rel="noopener noreferrer">LaMini-T5-61M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-t5-223m" target="_blank" rel="noopener noreferrer">LaMini-T5-223M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-t5-738m" target="_blank" rel="noopener noreferrer">LaMini-T5-738M</a></td>
<td></td>
</tr>
<tr>
<td>Flan-T5</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-77m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-77M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-248m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-248M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-783m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-783M</a>✩</td>
<td></td>
</tr>
<tr>
<td>Cerebras-GPT</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-111m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-111M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-256m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-256M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-590m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-590M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-1.3b" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-1.3B</a></td>
</tr>
<tr>
<td>GPT-2</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-gpt-124m" target="_blank" rel="noopener noreferrer">LaMini-GPT-124M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-gpt-774m" target="_blank" rel="noopener noreferrer">LaMini-GPT-774M</a>✩</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-gpt-1.5b" target="_blank" rel="noopener noreferrer">LaMini-GPT-1.5B</a>✩</td>
<td></td>
</tr>
<tr>
<td>GPT-Neo</td>
<td><a href="https://huggingface.co/MBZUAI/lamini-neo-125m" target="_blank" rel="noopener noreferrer">LaMini-Neo-125M</a></td>
<td><a href="https://huggingface.co/MBZUAI/lamini-neo-1.3b" target="_blank" rel="noopener noreferrer">LaMini-Neo-1.3B</a></td>
<td></td>
<td></td>
</tr>
<tr>
<td>GPT-J</td>
<td colspan="4">coming soon</td>
</tr>
<tr>
<td>LLaMA</td>
<td colspan="4">coming soon</td>
</tr>
</tbody>
</table>
## Use
### Intended use
We recommend using the model to respond to human instructions written in natural language.
Below we show how to load and use the model with the Hugging Face `pipeline()`.
```python
# pip install -q transformers
from transformers import pipeline
checkpoint = "MBZUAI/LaMini-Flan-T5-248M"
model = pipeline('text2text-generation', model=checkpoint)
input_prompt = 'Please let me know your thoughts on the given place and why you think it deserves to be visited: \n"Barcelona, Spain"'
generated_text = model(input_prompt, max_length=512, do_sample=True)[0]['generated_text']
print("Response", generated_text)
```
## Training Procedure
<p align="center" width="100%">
<a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini-pipeline.drawio.png" alt="Title" style="width: 100%; min-width: 250px; display: block; margin: auto;"></a>
</p>
We initialize with [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) and fine-tune it on our [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction). Its total number of parameters is 248M.
### Training Hyperparameters
The following hyperparameters were used during training; a sketch mapping them to `Seq2SeqTrainingArguments` follows the list:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
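As a hedged sketch, these values map onto Hugging Face `Seq2SeqTrainingArguments` roughly as follows (the output directory is a placeholder; the Adam betas and epsilon listed above are the library defaults):
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="lamini-flan-t5-248m",  # placeholder
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=64,
    seed=42,
    gradient_accumulation_steps=4,  # 128 x 4 = 512 effective train batch size
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```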
## Evaluation
We conducted two sets of evaluations: automatic evaluation on downstream NLP tasks and human evaluation on user-oriented instructions. For more detail, please refer to our [paper](https://arxiv.org/abs/2304.14402).
## Limitations
More information needed
# Citation
```bibtex
@article{lamini-lm,
author = {Minghao Wu and
Abdul Waheed and
Chiyu Zhang and
Muhammad Abdul-Mageed and
Alham Fikri Aji
},
title = {LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions},
journal = {CoRR},
volume = {abs/2304.14402},
year = {2023},
url = {https://arxiv.org/abs/2304.14402},
eprinttype = {arXiv},
eprint = {2304.14402}
}
``` |
mradermacher/Dr.Samantha-8B-i1-GGUF | mradermacher | "2024-06-05T08:45:24Z" | 5,234 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"medical",
"en",
"dataset:cognitivecomputations/samantha-data",
"dataset:ruslanmv/ai-medical-dataset",
"base_model:sethuiyer/Dr.Samantha-8B",
"license:other",
"endpoints_compatible",
"region:us"
] | null | "2024-06-03T12:16:06Z" | ---
base_model: sethuiyer/Dr.Samantha-8B
datasets:
- cognitivecomputations/samantha-data
- ruslanmv/ai-medical-dataset
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- mergekit
- merge
- medical
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/sethuiyer/Dr.Samantha-8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Dr.Samantha-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
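As one hedged example, a downloaded single-file quant (here the Q4_K_M listed in the table below) can be loaded with llama-cpp-python:
```python
# Sketch, assuming `pip install llama-cpp-python` and a local copy of the
# Q4_K_M file from the table below.
from llama_cpp import Llama

llm = Llama(model_path="Dr.Samantha-8B.i1-Q4_K_M.gguf", n_ctx=4096)
out = llm("What are common symptoms of dehydration?", max_tokens=128)
print(out["choices"][0]["text"])
```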
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Dr.Samantha-8B-i1-GGUF/resolve/main/Dr.Samantha-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Dr.Samantha-8B-i1-GGUF/resolve/main/Dr.Samantha-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Dr.Samantha-8B-i1-GGUF/resolve/main/Dr.Samantha-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Dr.Samantha-8B-i1-GGUF/resolve/main/Dr.Samantha-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Dr.Samantha-8B-i1-GGUF/resolve/main/Dr.Samantha-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Dr.Samantha-8B-i1-GGUF/resolve/main/Dr.Samantha-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Dr.Samantha-8B-i1-GGUF/resolve/main/Dr.Samantha-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Dr.Samantha-8B-i1-GGUF/resolve/main/Dr.Samantha-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Dr.Samantha-8B-i1-GGUF/resolve/main/Dr.Samantha-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Dr.Samantha-8B-i1-GGUF/resolve/main/Dr.Samantha-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Dr.Samantha-8B-i1-GGUF/resolve/main/Dr.Samantha-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Dr.Samantha-8B-i1-GGUF/resolve/main/Dr.Samantha-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Dr.Samantha-8B-i1-GGUF/resolve/main/Dr.Samantha-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Dr.Samantha-8B-i1-GGUF/resolve/main/Dr.Samantha-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Dr.Samantha-8B-i1-GGUF/resolve/main/Dr.Samantha-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Dr.Samantha-8B-i1-GGUF/resolve/main/Dr.Samantha-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Dr.Samantha-8B-i1-GGUF/resolve/main/Dr.Samantha-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Dr.Samantha-8B-i1-GGUF/resolve/main/Dr.Samantha-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Dr.Samantha-8B-i1-GGUF/resolve/main/Dr.Samantha-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Dr.Samantha-8B-i1-GGUF/resolve/main/Dr.Samantha-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Dr.Samantha-8B-i1-GGUF/resolve/main/Dr.Samantha-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
mradermacher/MythoMist-7b-i1-GGUF | mradermacher | "2024-06-06T21:48:38Z" | 5,233 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Gryphe/MythoMist-7b",
"license:other",
"endpoints_compatible",
"region:us"
] | null | "2024-06-06T01:31:50Z" | ---
base_model: Gryphe/MythoMist-7b
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Gryphe/MythoMist-7b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/MythoMist-7b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-i1-GGUF/resolve/main/MythoMist-7b.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-i1-GGUF/resolve/main/MythoMist-7b.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-i1-GGUF/resolve/main/MythoMist-7b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-i1-GGUF/resolve/main/MythoMist-7b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-i1-GGUF/resolve/main/MythoMist-7b.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-i1-GGUF/resolve/main/MythoMist-7b.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-i1-GGUF/resolve/main/MythoMist-7b.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-i1-GGUF/resolve/main/MythoMist-7b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-i1-GGUF/resolve/main/MythoMist-7b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-i1-GGUF/resolve/main/MythoMist-7b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-i1-GGUF/resolve/main/MythoMist-7b.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-i1-GGUF/resolve/main/MythoMist-7b.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-i1-GGUF/resolve/main/MythoMist-7b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-i1-GGUF/resolve/main/MythoMist-7b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-i1-GGUF/resolve/main/MythoMist-7b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-i1-GGUF/resolve/main/MythoMist-7b.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-i1-GGUF/resolve/main/MythoMist-7b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-i1-GGUF/resolve/main/MythoMist-7b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-i1-GGUF/resolve/main/MythoMist-7b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-i1-GGUF/resolve/main/MythoMist-7b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-i1-GGUF/resolve/main/MythoMist-7b.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
mradermacher/BanglaLLama-3-8b-BnWiki-Base-i1-GGUF | mradermacher | "2024-06-09T20:04:04Z" | 5,226 | 0 | transformers | [
"transformers",
"gguf",
"bangla",
"large language model",
"bn",
"en",
"dataset:wikimedia/wikipedia",
"base_model:BanglaLLM/BanglaLLama-3-8b-BnWiki-Base",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | "2024-06-09T17:22:39Z" | ---
base_model: BanglaLLM/BanglaLLama-3-8b-BnWiki-Base
datasets:
- wikimedia/wikipedia
language:
- bn
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- bangla
- large language model
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/BanglaLLM/BanglaLLama-3-8b-BnWiki-Base
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/BanglaLLama-3-8b-BnWiki-Base-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/BanglaLLama-3-8b-BnWiki-Base-i1-GGUF/resolve/main/BanglaLLama-3-8b-BnWiki-Base.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/BanglaLLama-3-8b-BnWiki-Base-i1-GGUF/resolve/main/BanglaLLama-3-8b-BnWiki-Base.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/BanglaLLama-3-8b-BnWiki-Base-i1-GGUF/resolve/main/BanglaLLama-3-8b-BnWiki-Base.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/BanglaLLama-3-8b-BnWiki-Base-i1-GGUF/resolve/main/BanglaLLama-3-8b-BnWiki-Base.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/BanglaLLama-3-8b-BnWiki-Base-i1-GGUF/resolve/main/BanglaLLama-3-8b-BnWiki-Base.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/BanglaLLama-3-8b-BnWiki-Base-i1-GGUF/resolve/main/BanglaLLama-3-8b-BnWiki-Base.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/BanglaLLama-3-8b-BnWiki-Base-i1-GGUF/resolve/main/BanglaLLama-3-8b-BnWiki-Base.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/BanglaLLama-3-8b-BnWiki-Base-i1-GGUF/resolve/main/BanglaLLama-3-8b-BnWiki-Base.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/BanglaLLama-3-8b-BnWiki-Base-i1-GGUF/resolve/main/BanglaLLama-3-8b-BnWiki-Base.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/BanglaLLama-3-8b-BnWiki-Base-i1-GGUF/resolve/main/BanglaLLama-3-8b-BnWiki-Base.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/BanglaLLama-3-8b-BnWiki-Base-i1-GGUF/resolve/main/BanglaLLama-3-8b-BnWiki-Base.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/BanglaLLama-3-8b-BnWiki-Base-i1-GGUF/resolve/main/BanglaLLama-3-8b-BnWiki-Base.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/BanglaLLama-3-8b-BnWiki-Base-i1-GGUF/resolve/main/BanglaLLama-3-8b-BnWiki-Base.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/BanglaLLama-3-8b-BnWiki-Base-i1-GGUF/resolve/main/BanglaLLama-3-8b-BnWiki-Base.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/BanglaLLama-3-8b-BnWiki-Base-i1-GGUF/resolve/main/BanglaLLama-3-8b-BnWiki-Base.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/BanglaLLama-3-8b-BnWiki-Base-i1-GGUF/resolve/main/BanglaLLama-3-8b-BnWiki-Base.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/BanglaLLama-3-8b-BnWiki-Base-i1-GGUF/resolve/main/BanglaLLama-3-8b-BnWiki-Base.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/BanglaLLama-3-8b-BnWiki-Base-i1-GGUF/resolve/main/BanglaLLama-3-8b-BnWiki-Base.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BanglaLLama-3-8b-BnWiki-Base-i1-GGUF/resolve/main/BanglaLLama-3-8b-BnWiki-Base.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/BanglaLLama-3-8b-BnWiki-Base-i1-GGUF/resolve/main/BanglaLLama-3-8b-BnWiki-Base.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/BanglaLLama-3-8b-BnWiki-Base-i1-GGUF/resolve/main/BanglaLLama-3-8b-BnWiki-Base.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
mradermacher/AmberChat-i1-GGUF | mradermacher | "2024-06-18T00:48:32Z" | 5,223 | 0 | transformers | [
"transformers",
"gguf",
"nlp",
"llm",
"en",
"dataset:WizardLM/WizardLM_evol_instruct_V2_196k",
"dataset:icybee/share_gpt_90k_v1",
"base_model:LLM360/AmberChat",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-17T23:38:46Z" | ---
base_model: LLM360/AmberChat
datasets:
- WizardLM/WizardLM_evol_instruct_V2_196k
- icybee/share_gpt_90k_v1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- nlp
- llm
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/LLM360/AmberChat
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/AmberChat-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
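For multi-part files, the linked READMEs describe simple byte-wise concatenation; a hedged Python equivalent (the part filenames below are illustrative placeholders):
```python
# Sketch: byte-wise concatenation of split GGUF parts, as described in the
# READMEs linked above. Part names are illustrative, not actual files here.
import glob
import shutil

parts = sorted(glob.glob("AmberChat.i1-Q6_K.gguf.part*"))
with open("AmberChat.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```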
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AmberChat-i1-GGUF/resolve/main/AmberChat.i1-IQ1_S.gguf) | i1-IQ1_S | 1.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/AmberChat-i1-GGUF/resolve/main/AmberChat.i1-IQ1_M.gguf) | i1-IQ1_M | 1.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/AmberChat-i1-GGUF/resolve/main/AmberChat.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/AmberChat-i1-GGUF/resolve/main/AmberChat.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/AmberChat-i1-GGUF/resolve/main/AmberChat.i1-IQ2_S.gguf) | i1-IQ2_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/AmberChat-i1-GGUF/resolve/main/AmberChat.i1-IQ2_M.gguf) | i1-IQ2_M | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/AmberChat-i1-GGUF/resolve/main/AmberChat.i1-Q2_K.gguf) | i1-Q2_K | 2.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/AmberChat-i1-GGUF/resolve/main/AmberChat.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AmberChat-i1-GGUF/resolve/main/AmberChat.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/AmberChat-i1-GGUF/resolve/main/AmberChat.i1-IQ3_S.gguf) | i1-IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/AmberChat-i1-GGUF/resolve/main/AmberChat.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/AmberChat-i1-GGUF/resolve/main/AmberChat.i1-IQ3_M.gguf) | i1-IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/AmberChat-i1-GGUF/resolve/main/AmberChat.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/AmberChat-i1-GGUF/resolve/main/AmberChat.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/AmberChat-i1-GGUF/resolve/main/AmberChat.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/AmberChat-i1-GGUF/resolve/main/AmberChat.i1-Q4_0.gguf) | i1-Q4_0 | 3.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/AmberChat-i1-GGUF/resolve/main/AmberChat.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/AmberChat-i1-GGUF/resolve/main/AmberChat.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AmberChat-i1-GGUF/resolve/main/AmberChat.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/AmberChat-i1-GGUF/resolve/main/AmberChat.i1-Q5_K_M.gguf) | i1-Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/AmberChat-i1-GGUF/resolve/main/AmberChat.i1-Q6_K.gguf) | i1-Q6_K | 5.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
olm/olm-roberta-base-dec-2022 | olm | "2023-01-20T14:32:41Z" | 5,222 | 7 | transformers | [
"transformers",
"pytorch",
"tf",
"tensorboard",
"roberta",
"fill-mask",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-12-27T22:14:15Z" | ---
language: en
---
# OLM RoBERTa/BERT December 2022
This is a more up-to-date version of the [original BERT](https://huggingface.co/bert-base-cased) and [original RoBERTa](https://huggingface.co/roberta-base).
In addition to being more up-to-date, it also tends to perform better than the original BERT on standard benchmarks.
We think it is fair to compare our model directly to the original BERT because our model was trained with about the same level of compute, and the architectures of BERT and RoBERTa are basically the same.
The original RoBERTa took an order of magnitude more compute to train, yet our model is not far behind it on many standard benchmarks.
Our model was trained on a cleaned December 2022 snapshot of Common Crawl and Wikipedia.
This model was created as part of the OLM project, which has the goal of continuously training and releasing models that are up-to-date and comparable in standard language model performance to their static counterparts.
This is important because we want our models to know about events like COVID or
a presidential election right after they happen.
## Intended uses
You can use the raw model for masked language modeling, but it's mostly intended to
be fine-tuned on a downstream task, such as sequence classification, token classification or question answering.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='olm/olm-roberta-base-dec-2022')
>>> unmasker("Hello I'm a <mask> model.")
[{'score': 0.04252663999795914,
'token': 631,
'token_str': ' new',
'sequence': "Hello I'm a new model."},
{'score': 0.034064881503582,
'token': 4750,
'token_str': ' female',
'sequence': "Hello I'm a female model."},
{'score': 0.03066524863243103,
'token': 932,
'token_str': ' business',
'sequence': "Hello I'm a business model."},
{'score': 0.029599128291010857,
'token': 10345,
'token_str': ' junior',
'sequence': "Hello I'm a junior model."},
{'score': 0.025790784507989883,
'token': 2219,
'token_str': ' human',
'sequence': "Hello I'm a human model."}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, RobertaModel
tokenizer = AutoTokenizer.from_pretrained('olm/olm-roberta-base-dec-2022')
model = RobertaModel.from_pretrained("olm/olm-roberta-base-dec-2022")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Dataset
The model and tokenizer were trained with this [December 2022 cleaned Common Crawl dataset](https://huggingface.co/datasets/olm/olm-CC-MAIN-2022-49-sampling-ratio-olm-0.15114822547) plus this [December 2022 cleaned Wikipedia dataset](https://huggingface.co/datasets/olm/olm-wikipedia-20221220).\
The tokenized version of these concatenated datasets is [here](https://huggingface.co/datasets/olm/olm-december-2022-tokenized-512).\
The datasets were created with this [repo](https://github.com/huggingface/olm-datasets).
## Training
The model was trained according to the OLM BERT/RoBERTa instructions at this [repo](https://github.com/huggingface/olm-training).
## Evaluation results
The model achieves the following results after tuning on GLUE tasks:
| Task | Metric | Original BERT | OLM RoBERTa Dec 2022 (Ours) |
|:-----|:---------|----------------:|----------------------------:|
|cola |mcc |**0.5889** |0.28067 |
|sst2 |acc |0.9181 |**0.9275** |
|mrpc |acc/f1 |**0.9182**/0.8923|0.8662/**0.9033** |
|stsb |pear/spear|0.8822/0.8794 |**0.8870**/**0.8857** |
|qqp |acc/f1 |0.9071/0.8748 |**0.9097**/**0.8791** |
|mnli |acc/acc_mm|0.8400/0.8410 |**0.8576**/**0.8621** |
|qnli |acc |0.9075 |**0.9192** |
|rte |acc |0.6296 |**0.6390** |
|wnli |acc |0.4000 |**0.4648** |
For both the original BERT and our model, we used the Hugging Face run_glue.py script [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification).
For both models, we used the default fine-tuning hyperparameters and averaged the results over five training seeds. These are the results for the GLUE dev sets, which can differ somewhat from the results for the test sets. |
h2oai/h2ogpt-oasst1-512-12b | h2oai | "2023-06-02T22:36:27Z" | 5,222 | 27 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"gpt",
"llm",
"large language model",
"open-source",
"en",
"dataset:h2oai/openassistant_oasst1_h2ogpt_graded",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-04-17T20:33:51Z" | ---
license: apache-2.0
language:
- en
library_name: transformers
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
tags:
- gpt
- llm
- large language model
- open-source
datasets:
- h2oai/openassistant_oasst1_h2ogpt_graded
---
# h2oGPT Model Card
## Summary
H2O.ai's `h2ogpt-oasst1-512-12b` is a 12 billion parameter instruction-following large language model licensed for commercial use.
- Base model: [EleutherAI/pythia-12b](https://huggingface.co/EleutherAI/pythia-12b)
- Fine-tuning dataset: [h2oai/openassistant_oasst1_h2ogpt_graded](https://huggingface.co/datasets/h2oai/openassistant_oasst1_h2ogpt_graded)
- Data-prep and fine-tuning code: [H2O.ai GitHub](https://github.com/h2oai/h2ogpt)
- Training logs: [zip](https://huggingface.co/h2oai/h2ogpt-oasst1-512-12b/blob/main/pythia-12b-deduped.h2oaiopenassistant_oasst1_h2ogpt_graded.3_epochs.2ccf687ea3f3f3775a501838e81c1a0066430455.4.zip)
## Chatbot
- Run your own chatbot: [H2O.ai GitHub](https://github.com/h2oai/h2ogpt)
[](https://github.com/h2oai/h2ogpt)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` and `accelerate` libraries installed.
```bash
pip install transformers==4.28.1
pip install accelerate==0.18.0
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(model="h2oai/h2ogpt-oasst1-512-12b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto", prompt_type='human_bot')
res = generate_text("Why is drinking water so healthy?", max_new_tokens=100)
print(res[0]["generated_text"])
```
Alternatively, if you prefer not to use `trust_remote_code=True`, you can download [h2oai_pipeline.py](https://huggingface.co/h2oai/h2ogpt-oasst1-512-12b/blob/main/h2oai_pipeline.py),
store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("h2oai/h2ogpt-oasst1-512-12b", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("h2oai/h2ogpt-oasst1-512-12b", torch_dtype=torch.bfloat16, device_map="auto")
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer, prompt_type='human_bot')
res = generate_text("Why is drinking water so healthy?", max_new_tokens=100)
print(res[0]["generated_text"])
```
## Model Architecture
```
GPTNeoXForCausalLM(
(gpt_neox): GPTNeoXModel(
(embed_in): Embedding(50688, 5120)
(layers): ModuleList(
(0-35): 36 x GPTNeoXLayer(
(input_layernorm): LayerNorm((5120,), eps=1e-05, elementwise_affine=True)
(post_attention_layernorm): LayerNorm((5120,), eps=1e-05, elementwise_affine=True)
(attention): GPTNeoXAttention(
(rotary_emb): RotaryEmbedding()
(query_key_value): Linear(in_features=5120, out_features=15360, bias=True)
(dense): Linear(in_features=5120, out_features=5120, bias=True)
)
(mlp): GPTNeoXMLP(
(dense_h_to_4h): Linear(in_features=5120, out_features=20480, bias=True)
(dense_4h_to_h): Linear(in_features=20480, out_features=5120, bias=True)
(act): GELUActivation()
)
)
)
(final_layer_norm): LayerNorm((5120,), eps=1e-05, elementwise_affine=True)
)
(embed_out): Linear(in_features=5120, out_features=50688, bias=False)
)
```
## Model Configuration
```json
GPTNeoXConfig {
"_name_or_path": "h2oai/h2ogpt-oasst1-512-12b",
"architectures": [
"GPTNeoXForCausalLM"
],
"bos_token_id": 0,
"classifier_dropout": 0.1,
"custom_pipelines": {
"text-generation": {
"impl": "h2oai_pipeline.H2OTextGenerationPipeline",
"pt": "AutoModelForCausalLM"
}
},
"eos_token_id": 0,
"hidden_act": "gelu",
"hidden_size": 5120,
"initializer_range": 0.02,
"intermediate_size": 20480,
"layer_norm_eps": 1e-05,
"max_position_embeddings": 2048,
"model_type": "gpt_neox",
"num_attention_heads": 40,
"num_hidden_layers": 36,
"rotary_emb_base": 10000,
"rotary_pct": 0.25,
"tie_word_embeddings": false,
"torch_dtype": "float16",
"transformers_version": "4.30.0.dev0",
"use_cache": true,
"use_parallel_residual": true,
"vocab_size": 50688
}
```
## Model Validation
Model validation results using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).
[eval source code](https://github.com/h2oai/h2ogpt/issues/125#issuecomment-1548239108)
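As a hedged illustration, the harness can also be driven from Python; this sketch assumes the v0.3-era `lm-evaluation-harness` API and reproduces only a subset of the tasks in the table below:
```python
# Sketch only: Python entry point of the EleutherAI lm-evaluation-harness
# (older "hf-causal" interface; newer releases rename the model type).
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal",
    model_args="pretrained=h2oai/h2ogpt-oasst1-512-12b",
    tasks=["arc_challenge", "arc_easy", "boolq", "hellaswag"],
)
print(results["results"])
```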
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.3157|± |0.0136|
| | |acc_norm|0.3507|± |0.0139|
|arc_easy | 0|acc |0.6932|± |0.0095|
| | |acc_norm|0.6225|± |0.0099|
|boolq | 1|acc |0.6685|± |0.0082|
|hellaswag | 0|acc |0.5140|± |0.0050|
| | |acc_norm|0.6803|± |0.0047|
|openbookqa | 0|acc |0.2900|± |0.0203|
| | |acc_norm|0.3740|± |0.0217|
|piqa | 0|acc |0.7682|± |0.0098|
| | |acc_norm|0.7661|± |0.0099|
|winogrande | 0|acc |0.6369|± |0.0135|
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
cross-encoder/quora-roberta-large | cross-encoder | "2021-08-05T08:41:41Z" | 5,220 | 3 | transformers | [
"transformers",
"pytorch",
"jax",
"roberta",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-02T23:29:05Z" | ---
license: apache-2.0
---
# Cross-Encoder for Quora Duplicate Questions Detection
This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
## Training Data
This model was trained on the [Quora Duplicate Questions](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) dataset. The model predicts a score between 0 and 1 indicating how likely the two given questions are duplicates.
Note: The model is not suitable for estimating the similarity of questions, e.g. the two questions "How to learn Java" and "How to learn Python" will result in a rather low score, as these are not duplicates.
## Usage and Performance
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder

model = CrossEncoder('cross-encoder/quora-roberta-large')
scores = model.predict([('Question 1', 'Question 2'), ('Question 3', 'Question 4')])
```
You can also use this model without sentence_transformers, by just using the Transformers ``AutoModel`` class.
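A minimal sketch of that route, assuming the single logit is passed through a sigmoid (the default for one-label cross-encoders in SentenceTransformers):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained('cross-encoder/quora-roberta-large')
model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/quora-roberta-large')
model.eval()

# Cross-encoders score a pair jointly, so both questions go into one encoding
features = tokenizer(['How to learn Java'], ['How to learn Python'],
                     padding=True, truncation=True, return_tensors='pt')

with torch.no_grad():
    score = torch.sigmoid(model(**features).logits)  # duplicate probability in [0, 1]
print(score)
``` |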
google/ddpm-celebahq-256 | google | "2022-07-21T15:00:31Z" | 5,220 | 37 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"arxiv:2006.11239",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | "2022-07-19T10:42:22Z" | ---
license: apache-2.0
tags:
- pytorch
- diffusers
- unconditional-image-generation
---
# Denoising Diffusion Probabilistic Models (DDPM)
**Paper**: [Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2006.11239)
**Authors**: Jonathan Ho, Ajay Jain, Pieter Abbeel
**Abstract**:
*We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN.*
## Inference
**DDPM** models can use *discrete noise schedulers* such as:
- [scheduling_ddpm](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddpm.py)
- [scheduling_ddim](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddim.py)
- [scheduling_pndm](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_pndm.py)
for inference. Note that while the *ddpm* scheduler yields the highest quality, it also takes the longest.
For a good trade-off between quality and inference speed you might want to consider the *ddim* or *pndm* schedulers instead.
See the following code:
```python
# !pip install diffusers
from diffusers import DDPMPipeline, DDIMPipeline, PNDMPipeline

model_id = "google/ddpm-celebahq-256"

# load model and scheduler
ddpm = DDPMPipeline.from_pretrained(model_id)  # you can replace DDPMPipeline with DDIMPipeline or PNDMPipeline for faster inference

# run pipeline in inference (sample random noise and denoise);
# recent diffusers versions return an ImagePipelineOutput whose PIL images live under `.images`
image = ddpm().images[0]

# save image
image.save("ddpm_generated_image.png")
```
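As noted above, the *ddim* scheduler trades a little quality for much faster sampling; here is a minimal sketch (the step count of 50 is an illustrative assumption):
```python
from diffusers import DDIMPipeline

ddim = DDIMPipeline.from_pretrained("google/ddpm-celebahq-256")

# far fewer denoising steps than the ~1000 used by the full DDPM sampler
image = ddim(num_inference_steps=50).images[0]
image.save("ddim_generated_image.png")
```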
For more detailed information, please have a look at the [official inference example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb)
## Training
If you want to train your own model, please have a look at the [official training example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb)
## Samples
1. 
2. 
3. 
4.  |
mradermacher/LlamaGramma-7b-i1-GGUF | mradermacher | "2024-06-10T13:36:10Z" | 5,219 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:Gryphe/CoEdit-Alpaca",
"base_model:Gryphe/LlamaGramma-7b",
"license:other",
"endpoints_compatible",
"region:us"
] | null | "2024-06-10T03:52:53Z" | ---
base_model: Gryphe/LlamaGramma-7b
datasets:
- Gryphe/CoEdit-Alpaca
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Gryphe/LlamaGramma-7b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/LlamaGramma-7b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
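If you prefer Python over manual downloads, a minimal sketch with `huggingface_hub` (the file name is taken from the table below):
```python
from huggingface_hub import hf_hub_download

# Fetch a single quant from this repo; see the "Provided Quants" table for names
path = hf_hub_download(
    repo_id="mradermacher/LlamaGramma-7b-i1-GGUF",
    filename="LlamaGramma-7b.i1-Q4_K_M.gguf",
)
print(path)
```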
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LlamaGramma-7b-i1-GGUF/resolve/main/LlamaGramma-7b.i1-IQ1_S.gguf) | i1-IQ1_S | 1.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/LlamaGramma-7b-i1-GGUF/resolve/main/LlamaGramma-7b.i1-IQ1_M.gguf) | i1-IQ1_M | 1.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/LlamaGramma-7b-i1-GGUF/resolve/main/LlamaGramma-7b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/LlamaGramma-7b-i1-GGUF/resolve/main/LlamaGramma-7b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/LlamaGramma-7b-i1-GGUF/resolve/main/LlamaGramma-7b.i1-IQ2_S.gguf) | i1-IQ2_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/LlamaGramma-7b-i1-GGUF/resolve/main/LlamaGramma-7b.i1-IQ2_M.gguf) | i1-IQ2_M | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/LlamaGramma-7b-i1-GGUF/resolve/main/LlamaGramma-7b.i1-Q2_K.gguf) | i1-Q2_K | 2.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/LlamaGramma-7b-i1-GGUF/resolve/main/LlamaGramma-7b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LlamaGramma-7b-i1-GGUF/resolve/main/LlamaGramma-7b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/LlamaGramma-7b-i1-GGUF/resolve/main/LlamaGramma-7b.i1-IQ3_S.gguf) | i1-IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/LlamaGramma-7b-i1-GGUF/resolve/main/LlamaGramma-7b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/LlamaGramma-7b-i1-GGUF/resolve/main/LlamaGramma-7b.i1-IQ3_M.gguf) | i1-IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/LlamaGramma-7b-i1-GGUF/resolve/main/LlamaGramma-7b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/LlamaGramma-7b-i1-GGUF/resolve/main/LlamaGramma-7b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/LlamaGramma-7b-i1-GGUF/resolve/main/LlamaGramma-7b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/LlamaGramma-7b-i1-GGUF/resolve/main/LlamaGramma-7b.i1-Q4_0.gguf) | i1-Q4_0 | 3.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/LlamaGramma-7b-i1-GGUF/resolve/main/LlamaGramma-7b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/LlamaGramma-7b-i1-GGUF/resolve/main/LlamaGramma-7b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LlamaGramma-7b-i1-GGUF/resolve/main/LlamaGramma-7b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/LlamaGramma-7b-i1-GGUF/resolve/main/LlamaGramma-7b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/LlamaGramma-7b-i1-GGUF/resolve/main/LlamaGramma-7b.i1-Q6_K.gguf) | i1-Q6_K | 5.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
Mathoufle13/reverse_maker_llama8_4bit | Mathoufle13 | "2024-07-01T15:07:54Z" | 5,219 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-07-01T14:51:40Z" | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** Mathoufle13
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Qwen/Qwen2-72B-Instruct-GGUF | Qwen | "2024-06-17T16:49:09Z" | 5,217 | 9 | null | [
"gguf",
"instruct",
"chat",
"text-generation",
"en",
"license:other",
"region:us"
] | text-generation | "2024-06-06T10:54:52Z" | ---
language:
- en
pipeline_tag: text-generation
tags:
- instruct
- chat
license: other
---
# Qwen2-72B-Instruct-GGUF
## Introduction
Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the instruction-tuned 72B Qwen2 model.
Compared with the state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 has generally surpassed most open-source models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/).
In this repo, we provide quantized models in the GGUF format, including `q5_0`, `q5_k_m`, `q6_k` and `q8_0`.
## Model Details
Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer adapted to multiple natural languages and code.
## Training details
We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.
## Requirements
We advise you to clone [`llama.cpp`](https://github.com/ggerganov/llama.cpp) and install it following the official guide. We follow the latest version of llama.cpp.
In the following demonstration, we assume that you are running commands under the repository `llama.cpp`.
## How to use
Cloning the repo may be inefficient, and thus you can manually download the GGUF file that you need or use `huggingface-cli` (`pip install huggingface_hub`) as shown below:
```shell
huggingface-cli download Qwen/Qwen2-72B-Instruct-GGUF qwen2-72b-instruct-q4_0.gguf --local-dir . --local-dir-use-symlinks False
```
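Equivalently from Python, a minimal sketch with `huggingface_hub` that fetches the same file:
```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="Qwen/Qwen2-72B-Instruct-GGUF",
    filename="qwen2-72b-instruct-q4_0.gguf",
)
print(path)
```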
However, large files are split into multiple segments due to the 50 GB limit on a single uploaded file.
Specifically, the split files share a prefix, with a suffix indicating the segment index. For example, the `q5_k_m` GGUF files are:
```
qwen2-72b-instruct-q5_k_m-00001-of-00002.gguf
qwen2-72b-instruct-q5_k_m-00002-of-00002.gguf
```
They share the prefix `qwen2-72b-instruct-q5_k_m`, and each has its own indexing suffix, e.g. `-00001-of-00002`.
To use the split GGUF files, you need to merge them first with the command `llama-gguf-split` as shown below:
```bash
./llama-gguf-split --merge qwen2-72b-instruct-q5_k_m-00001-of-00002.gguf qwen2-72b-instruct-q5_k_m.gguf
```
Following the llama.cpp API upgrade, `llama-gguf-split` is the new name for the previous `gguf-split`.
For the arguments of this command, the first is the path to the first split GGUF file, and the second is the path to the output GGUF file.
To run Qwen2, you can use `llama-cli` (the previous `main`) or `llama-server` (the previous `server`).
We recommend using the `llama-server` as it is simple and compatible with OpenAI API. For example:
```bash
./llama-server -m qwen2-72b-instruct-q4_0.gguf -ngl 80 -fa
```
(Note: `-ngl 80` refers to offloading 80 layers to GPUs, and `-fa` refers to the use of flash attention.)
Then it is easy to access the deployed service with OpenAI API:
```python
import openai
client = openai.OpenAI(
base_url="http://localhost:8080/v1", # "http://<Your api-server IP>:port"
api_key = "sk-no-key-required"
)
completion = client.chat.completions.create(
model="qwen",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "tell me something about michael jordan"}
]
)
print(completion.choices[0].message.content)
```
If you choose to use `llama-cli`, note that the `-cml` flag for the ChatML template has been removed. Instead, you should use `--in-prefix` and `--in-suffix` to supply the template, as shown below.
```bash
./llama-cli -m qwen2-72b-instruct-q4_0.gguf \
-n 512 -co -i -if -f prompts/chat-with-qwen.txt \
--in-prefix "<|im_start|>user\n" \
--in-suffix "<|im_end|>\n<|im_start|>assistant\n" \
-ngl 80 -fa
```
## Evaluation
We implement perplexity evaluation using wikitext following the practice of `llama.cpp` with `./llama-perplexity` (the previous `./perplexity`).
In the following we report the PPL of GGUF models of different sizes and different quantization levels.
|Size | fp16 | q8_0 | q6_k | q5_k_m | q5_0 | q4_k_m | q4_0 | q3_k_m | q2_k | iq1_m |
|--------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|
|0.5B | 15.11 | 15.13 | 15.14 | 15.24 | 15.40 | 15.36 | 16.28 | 15.70 | 16.74 | - |
|1.5B | 10.43 | 10.43 | 10.45 | 10.50 | 10.56 | 10.61 | 10.79 | 11.08 | 13.04 | - |
|7B | 7.93 | 7.94 | 7.96 | 7.97 | 7.98 | 8.02 | 8.19 | 8.20 | 10.58 | - |
|57B-A14B| 6.81 | 6.81 | 6.83 | 6.84 | 6.89 | 6.99 | 7.02 | 7.43 | - | - |
|72B | 5.58 | 5.58 | 5.59 | 5.59 | 5.60 | 5.61 | 5.66 | 5.68 | 5.91 | 6.75 |
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{qwen2,
title={Qwen2 Technical Report},
year={2024}
}
``` |
RichardErkhov/tensoropera_-_Fox-1-1.6B-Instruct-v0.1-gguf | RichardErkhov | "2024-06-30T05:21:50Z" | 5,216 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-30T04:51:45Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Fox-1-1.6B-Instruct-v0.1 - GGUF
- Model creator: https://huggingface.co/tensoropera/
- Original model: https://huggingface.co/tensoropera/Fox-1-1.6B-Instruct-v0.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Fox-1-1.6B-Instruct-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/tensoropera_-_Fox-1-1.6B-Instruct-v0.1-gguf/blob/main/Fox-1-1.6B-Instruct-v0.1.Q2_K.gguf) | Q2_K | 0.8GB |
| [Fox-1-1.6B-Instruct-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/tensoropera_-_Fox-1-1.6B-Instruct-v0.1-gguf/blob/main/Fox-1-1.6B-Instruct-v0.1.IQ3_XS.gguf) | IQ3_XS | 0.84GB |
| [Fox-1-1.6B-Instruct-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/tensoropera_-_Fox-1-1.6B-Instruct-v0.1-gguf/blob/main/Fox-1-1.6B-Instruct-v0.1.IQ3_S.gguf) | IQ3_S | 0.87GB |
| [Fox-1-1.6B-Instruct-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/tensoropera_-_Fox-1-1.6B-Instruct-v0.1-gguf/blob/main/Fox-1-1.6B-Instruct-v0.1.Q3_K_S.gguf) | Q3_K_S | 0.86GB |
| [Fox-1-1.6B-Instruct-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/tensoropera_-_Fox-1-1.6B-Instruct-v0.1-gguf/blob/main/Fox-1-1.6B-Instruct-v0.1.IQ3_M.gguf) | IQ3_M | 0.89GB |
| [Fox-1-1.6B-Instruct-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/tensoropera_-_Fox-1-1.6B-Instruct-v0.1-gguf/blob/main/Fox-1-1.6B-Instruct-v0.1.Q3_K.gguf) | Q3_K | 0.92GB |
| [Fox-1-1.6B-Instruct-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/tensoropera_-_Fox-1-1.6B-Instruct-v0.1-gguf/blob/main/Fox-1-1.6B-Instruct-v0.1.Q3_K_M.gguf) | Q3_K_M | 0.92GB |
| [Fox-1-1.6B-Instruct-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/tensoropera_-_Fox-1-1.6B-Instruct-v0.1-gguf/blob/main/Fox-1-1.6B-Instruct-v0.1.Q3_K_L.gguf) | Q3_K_L | 0.97GB |
| [Fox-1-1.6B-Instruct-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/tensoropera_-_Fox-1-1.6B-Instruct-v0.1-gguf/blob/main/Fox-1-1.6B-Instruct-v0.1.IQ4_XS.gguf) | IQ4_XS | 0.98GB |
| [Fox-1-1.6B-Instruct-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/tensoropera_-_Fox-1-1.6B-Instruct-v0.1-gguf/blob/main/Fox-1-1.6B-Instruct-v0.1.Q4_0.gguf) | Q4_0 | 1.0GB |
| [Fox-1-1.6B-Instruct-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/tensoropera_-_Fox-1-1.6B-Instruct-v0.1-gguf/blob/main/Fox-1-1.6B-Instruct-v0.1.IQ4_NL.gguf) | IQ4_NL | 1.01GB |
| [Fox-1-1.6B-Instruct-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/tensoropera_-_Fox-1-1.6B-Instruct-v0.1-gguf/blob/main/Fox-1-1.6B-Instruct-v0.1.Q4_K_S.gguf) | Q4_K_S | 1.01GB |
| [Fox-1-1.6B-Instruct-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/tensoropera_-_Fox-1-1.6B-Instruct-v0.1-gguf/blob/main/Fox-1-1.6B-Instruct-v0.1.Q4_K.gguf) | Q4_K | 1.04GB |
| [Fox-1-1.6B-Instruct-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/tensoropera_-_Fox-1-1.6B-Instruct-v0.1-gguf/blob/main/Fox-1-1.6B-Instruct-v0.1.Q4_K_M.gguf) | Q4_K_M | 1.04GB |
| [Fox-1-1.6B-Instruct-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/tensoropera_-_Fox-1-1.6B-Instruct-v0.1-gguf/blob/main/Fox-1-1.6B-Instruct-v0.1.Q4_1.gguf) | Q4_1 | 1.07GB |
| [Fox-1-1.6B-Instruct-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/tensoropera_-_Fox-1-1.6B-Instruct-v0.1-gguf/blob/main/Fox-1-1.6B-Instruct-v0.1.Q5_0.gguf) | Q5_0 | 1.14GB |
| [Fox-1-1.6B-Instruct-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/tensoropera_-_Fox-1-1.6B-Instruct-v0.1-gguf/blob/main/Fox-1-1.6B-Instruct-v0.1.Q5_K_S.gguf) | Q5_K_S | 1.14GB |
| [Fox-1-1.6B-Instruct-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/tensoropera_-_Fox-1-1.6B-Instruct-v0.1-gguf/blob/main/Fox-1-1.6B-Instruct-v0.1.Q5_K.gguf) | Q5_K | 1.16GB |
| [Fox-1-1.6B-Instruct-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/tensoropera_-_Fox-1-1.6B-Instruct-v0.1-gguf/blob/main/Fox-1-1.6B-Instruct-v0.1.Q5_K_M.gguf) | Q5_K_M | 1.16GB |
| [Fox-1-1.6B-Instruct-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/tensoropera_-_Fox-1-1.6B-Instruct-v0.1-gguf/blob/main/Fox-1-1.6B-Instruct-v0.1.Q5_1.gguf) | Q5_1 | 1.2GB |
| [Fox-1-1.6B-Instruct-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/tensoropera_-_Fox-1-1.6B-Instruct-v0.1-gguf/blob/main/Fox-1-1.6B-Instruct-v0.1.Q6_K.gguf) | Q6_K | 1.28GB |
| [Fox-1-1.6B-Instruct-v0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/tensoropera_-_Fox-1-1.6B-Instruct-v0.1-gguf/blob/main/Fox-1-1.6B-Instruct-v0.1.Q8_0.gguf) | Q8_0 | 1.65GB |
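For a quick local test of one of these quants, a minimal sketch using the `llama-cpp-python` bindings (the prompt wording is an illustrative assumption, not a documented format):
```python
from llama_cpp import Llama

# Load a mid-sized quant from the table above; n_ctx matches the model's 8K context
llm = Llama(model_path="Fox-1-1.6B-Instruct-v0.1.Q4_K_M.gguf", n_ctx=8192)

out = llm("Question: What is a small language model?\nAnswer:", max_tokens=128)
print(out["choices"][0]["text"])
```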
Original model description:
---
license: apache-2.0
language:
- en
---
## Model Card for Fox-1-1.6B-Instruct
> [!IMPORTANT]
> This model is an instruction-tuned model which requires alignment before it can be used in production. We will release
> the chat version soon.
Fox-1 is a decoder-only transformer-based small language model (SLM) with 1.6B total parameters developed
by [TensorOpera AI](https://tensoropera.ai/). The model was pre-trained with a 3-stage data curriculum on 3 trillion
tokens of text and code data in 8K sequence length. Fox-1 uses Grouped Query Attention (GQA) with 4 key-value heads and
16 attention heads for faster inference.
Fox-1-Instruct-v0.1 is an instruction-tuned (SFT) version of Fox-1-1.6B that has an 8K native context length. The model
was finetuned with 5B tokens of instruction-following and multi-turn conversation data.
For the full details of this model please read
our [release blog post](https://blog.tensoropera.ai/tensoropera-unveils-fox-foundation-model-a-pioneering-open-source-slm-leading-the-way-against-tech-giants).
## Getting-Started
The model and a live inference endpoint are available on
the [TensorOpera AI Platform](https://tensoropera.ai/models/1228?owner=tensoropera).
For detailed deployment instructions, refer to
the [Step-by-Step Guide](https://blog.tensoropera.ai/how-to/how-to-deploy-fox-1-on-tensoropera-ai-a-step-by-step-guide-2/)
on how to deploy Fox-1-Instruct on the [TensorOpera AI Platform](https://tensoropera.ai/).
## Benchmarks
We evaluated Fox-1 on ARC Challenge (25-shot), HellaSwag (10-shot), TruthfulQA (0-shot), MMLU (5-shot),
Winogrande (5-shot), and GSM8k (5-shot). We follow the Open LLM Leaderboard's evaluation setup and report the average
score of the 6 benchmarks. The model was evaluated on a machine with 8*H100 GPUs.
| | Fox-1-1.6B-Instruct-v0.1 | Fox-1-1.6B | Qwen1.5-1.8B-Chat | Gemma-2B-It | OpenELM-1.1B-Instruct |
|---------------|--------------------------|------------|-------------------|-------------|-----------------------|
| GSM8k | 39.20% | 36.39% | 18.20% | 4.47% | 0.91% |
| MMLU | 44.99% | 43.05% | 45.77% | 37.70% | 25.70% |
| ARC Challenge | 43.60% | 41.21% | 38.99% | 43.34% | 40.36% |
| HellaSwag | 63.39% | 62.82% | 60.31% | 62.72% | 71.67% |
| TruthfulQA | 44.12% | 38.66% | 40.57% | 45.86% | 45.96% |
| Winogrande | 62.67% | 60.62% | 59.51% | 61.33% | 61.96% |
| Average | 49.66% | 47.13% | 43.89% | 42.57% | 41.09% |
|
unionai/Phi-3-mini-128k-instruct-news-headlines-gguf | unionai | "2024-06-11T19:01:46Z" | 5,214 | 0 | null | [
"gguf",
"pytorch",
"causal-lm",
"llama2",
"code llama",
"fine-tuning",
"flyte llama",
"flyte repo dataset",
"en",
"license:apache-2.0",
"region:us"
] | null | "2024-06-03T17:34:47Z" | ---
language:
- en
license: apache-2.0
tags:
- pytorch
- causal-lm
- llama2
- code llama
- fine-tuning
- flyte llama
- flyte repo dataset
---
# Phi-3-mini-128k-instruct fine-tuned on news headlines |
Himitsui/Kaiju-11B-GGUF | Himitsui | "2024-02-13T12:55:47Z" | 5,208 | 14 | null | [
"gguf",
"region:us"
] | null | "2024-02-13T12:26:49Z" | Included in this repo is the GGUF Quants for Kaiju-11B
(ノ≧∀≦)ノ ‥…━━━━━━━━━━━━━★ ||| ╲/\╭[ ᴼᴼ ౪ ᴼᴼ]╮/\╱\
Hiya! This is an experiment using Gryphe's [MergeMonster](https://github.com/Gryphe/MergeMonster).
I decided to try to reduce what the community calls 'GPT-isms' or GPT slop. Solar is a good model, but it does have its fair share of positivity bias and 'slop' in roleplays. I used my friend [Sao](https://huggingface.co/Sao10K)'s models as bases, as they are pretty popular, along with Kuromitsu and the popular Instruct-Uncensored tune.
Alpaca format should be fine as it is universal (a sketch follows below); Vicuna format should work too. The Universal-Light preset in SillyTavern is pretty nice as well. :)
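For reference, a minimal sketch of the standard Alpaca prompt layout mentioned above (the instruction text is a placeholder):
```python
# Standard Alpaca-style template; the instruction is an illustrative placeholder
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
).format(instruction="Write a short greeting in character.")
print(prompt)
```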
💜 I hope this model may be useful to you 💜
***
Merge Details Below:
<details><summary>See Merge Config</summary>
```
-----------------------------------------------------------------------------------------------------
| Type | Phrase | Context | Raw Prob* | Used Prob** | Change |
-----------------------------------------------------------------------------------------------------
| BAD | anticipation | Her body quivers with | 9.99850% | 119.98% | -54.02% |
| BAD | anticipation | The atmosphere is thic.. | 8.82392% | 105.89% | -32.13% |
| BAD | unwavering | Filled with an | 0.09003% | 1.08% | -0.06% |
| BAD | determination | Her eyes were filled w.. | 0.19863% | 2.38% | -0.26% |
| BAD | determination | Her stubbornness only .. | 7.17110% | 86.05% | -39.86% |
| BAD | whisper | Her voice barely above.. | 96.55492% | 1158.66% | -8.91% |
| BAD | spine | shivers down her | 85.57597% | 1026.91% | -66.19% |
| BAD | sends shivers | The thrill of the act | 0.00230% | 0.03% | -0.00% |
| BAD | ministrations | She moans and twitches.. | 1.35264% | 16.23% | -10.49% |
| BAD | legs | wraps her | 2.45741% | 29.49% | -10.58% |
| BAD | imposing figure | He had an | 0.00356% | 0.04% | +0.00% |
| BAD | shared challenges | Their bond strengthene.. | 0.10075% | 1.21% | -0.03% |
| BAD | bond | forged a | 1.78930% | 21.47% | -9.07% |
| BAD | bond | an unspoken | 4.33001% | 51.96% | -28.17% |
| BAD | enhance our expe.. | I'm excited to see how | 0.00000% | 0.00% | +0.00% |
| BAD | sense of vulnera.. | create a | 0.00003% | 0.00% | -0.00% |
| BAD | dimensions of in.. | explore new | 0.00047% | 0.01% | -0.00% |
| BAD | deepening our co.. | while | 0.00003% | 0.00% | -0.00% |
| BAD | shared experiences | through | 0.00469% | 0.06% | -0.00% |
| BAD | societal expecta.. | that transcend | 0.00170% | 0.02% | -0.00% |
| BAD | conventional bou.. | that defy | 0.03593% | 0.43% | +0.04% |
| BAD | conventional bou.. | and defy | 0.00410% | 0.05% | +0.01% |
| BAD | open communication | an environment | 0.00000% | 0.00% | +0.00% |
| BAD | emotional vulner.. | an environment | 0.00000% | 0.00% | +0.00% |
| BAD | heightens our co.. | touch and the anticipa.. | 0.00000% | 0.00% | +0.00% |
| BAD | sensations you'r.. | I'm enjoying | 0.00000% | 0.00% | -0.00% |
| BAD | is truly arousing | attention to detail | 0.00000% | 0.00% | +0.00% |
| BAD | is truly arousing | way you explore my body | 0.00001% | 0.00% | +0.00% |
| BAD | challenge presen.. | my resolve unwavering .. | 0.00000% | 0.00% | +0.00% |
| BAD | humble vessel | surrendering to the ex.. | 0.00000% | 0.00% | +0.00% |
| BAD | bond | cherishing the unique | 1.37498% | 16.50% | +1.21% |
| BAD | bond | special | 0.05834% | 0.70% | +0.01% |
| BAD | grows stronger w.. | bond | 0.00000% | 0.00% | +0.00% |
| BAD | that cannot be b.. | bond | 0.00000% | 0.00% | -0.00% |
| BAD | becomes unbreaka.. | bond | 0.00000% | 0.00% | -0.00% |
| BAD | grew stronger wi.. | bond | 0.00000% | 0.00% | +0.00% |
| GOOD | The apple is in .. | Question: If I'm in th.. | 78.38934% | 78.39% | -10.79% |
------------------------------------------------------------------------------------------------------
| Totals | 298.32% | 2717.54% | -269.30% |
------------------------------------------------------------------------------------------------------
```
* = Unweighted, raw probability - ** = Probability after weight adjustments
```
-------- MERGE COMPOSITION ---------
Fimbulvetr-11B-v2-Test-14: 0.50
KuroMitsu-11B: 0.18
Fimbulvetr-10.7B-v1: 0.17
SOLAR-10.7B-Instruct-v1.0-uncensored: 0.10
Solstice-11B-v1: 0.05
```
</details><br> |
Habana/vit | Habana | "2023-07-25T21:36:05Z" | 5,205 | 0 | null | [
"optimum_habana",
"license:apache-2.0",
"region:us"
] | null | "2022-08-05T22:23:55Z" | ---
license: apache-2.0
---
[Optimum Habana](https://github.com/huggingface/optimum-habana) is the interface between the Hugging Face Transformers and Diffusers libraries and Habana's Gaudi processor (HPU).
It provides a set of tools enabling easy and fast model loading, training and inference on single- and multi-HPU settings for different downstream tasks.
Learn more about how to take advantage of the power of Habana HPUs to train and deploy Transformers and Diffusers models at [hf.co/hardware/habana](https://huggingface.co/hardware/habana).
## ViT model HPU configuration
This model only contains the `GaudiConfig` file for running the [ViT](https://huggingface.co/google/vit-base-patch16-224-in21k) model on Habana's Gaudi processors (HPU).
**This model contains no model weights, only a GaudiConfig.**
This enables you to specify:
- `use_fused_adam`: whether to use Habana's custom AdamW implementation
- `use_fused_clip_norm`: whether to use Habana's fused gradient norm clipping operator
- `use_torch_autocast`: whether to use Torch Autocast for managing mixed precision
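For illustration, a minimal sketch of loading this configuration with `optimum-habana` (attribute names follow the list above):
```python
from optimum.habana import GaudiConfig

# Pull the GaudiConfig from this repo and inspect the HPU-specific switches
gaudi_config = GaudiConfig.from_pretrained("Habana/vit")
print(gaudi_config.use_fused_adam, gaudi_config.use_fused_clip_norm)
```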
## Usage
The model is instantiated the same way as in the Transformers library.
The only difference is that there are a few new training arguments specific to HPUs.\
It is strongly recommended to train this model with bf16 mixed-precision training for optimal performance and accuracy.
[Here](https://github.com/huggingface/optimum-habana/blob/main/examples/image-classification/run_image_classification.py) is an image classification example script to fine-tune a model. You can run it with ViT with the following command:
```bash
python run_image_classification.py \
--model_name_or_path google/vit-base-patch16-224-in21k \
--dataset_name cifar10 \
--output_dir /tmp/outputs/ \
--remove_unused_columns False \
--do_train \
--do_eval \
--learning_rate 2e-5 \
--num_train_epochs 5 \
--per_device_train_batch_size 64 \
--per_device_eval_batch_size 64 \
--evaluation_strategy epoch \
--save_strategy epoch \
--load_best_model_at_end True \
--save_total_limit 3 \
--seed 1337 \
--use_habana \
--use_lazy_mode \
--gaudi_config_name Habana/vit \
--throughput_warmup_steps 3 \
--bf16
```
Check the [documentation](https://huggingface.co/docs/optimum/habana/index) out for more advanced usage and examples.
|
Qwen/Qwen1.5-110B | Qwen | "2024-04-26T14:55:00Z" | 5,201 | 86 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"pretrained",
"conversational",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-25T07:30:56Z" | ---
license: other
license_name: tongyi-qianwen
license_link: >-
https://huggingface.co/Qwen/Qwen1.5-110B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- pretrained
---
# Qwen1.5-110B
## Introduction
Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previously released Qwen, the improvements include:
* 9 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B, 72B, and 110B dense models, and an MoE model of 14B with 2.7B activated;
* Significant performance improvement in Chat models;
* Multilingual support of both base and chat models;
* Stable support of 32K context length for models of all sizes
* No need of `trust_remote_code`.
For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).
## Model Details
Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adapted to multiple natural languages and code. For the beta version, we have temporarily not included GQA (except for 32B and 110B) or the mixture of SWA and full attention.
## Requirements
The code for Qwen1.5 has been merged into the latest Hugging Face transformers, and we advise you to install `transformers>=4.37.0`; otherwise you might encounter the following error:
```
KeyError: 'qwen2'.
```
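A minimal sketch of checking the requirement up front (this check is an illustrative addition, not from the original card):
```python
import transformers
from packaging import version

# Qwen1.5 needs the qwen2 architecture, which landed in transformers 4.37.0
assert version.parse(transformers.__version__) >= version.parse("4.37.0"), \
    "Please upgrade: pip install 'transformers>=4.37.0'"
```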
## Usage
We do not advise you to use base language models for text generation. Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model.
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{qwen,
title={Qwen Technical Report},
author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
journal={arXiv preprint arXiv:2309.16609},
year={2023}
}
``` |
RichardErkhov/Fizzarolli_-_sappha-2b-v3-gguf | RichardErkhov | "2024-06-27T13:19:13Z" | 5,194 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-27T12:59:06Z" | Entry not found |
flaviagiammarino/medsam-vit-base | flaviagiammarino | "2023-07-13T15:43:56Z" | 5,190 | 6 | transformers | [
"transformers",
"pytorch",
"tf",
"sam",
"mask-generation",
"medical",
"vision",
"arxiv:2304.12306",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | mask-generation | "2023-07-11T07:37:57Z" | ---
license: apache-2.0
tags:
- medical
- vision
---
# Model Card for MedSAM
MedSAM is a fine-tuned version of [SAM](https://huggingface.co/docs/transformers/main/model_doc/sam) for the medical domain.
This repository is based on the paper, code and pre-trained model released by the authors in July 2023.
## Model Description
MedSAM was trained on a large-scale medical image segmentation dataset of 1,090,486 image-mask pairs collected from different publicly available sources.
The image-mask pairs cover 15 imaging modalities and over 30 cancer types.
MedSAM was initialized using the pre-trained SAM model with the ViT-Base backbone. The prompt encoder weights were frozen, while the image encoder and mask decoder weights were updated during training.
The training was performed for 100 epochs with a batch size of 160 using the AdamW optimizer with a learning rate of 1e-4 and a weight decay of 0.01.
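As a minimal sketch, the optimizer setup described above corresponds to the following (the learning-rate schedule, if any, is not specified in the card):
```python
import torch
from transformers import SamModel

model = SamModel.from_pretrained("flaviagiammarino/medsam-vit-base")

# AdamW with the hyperparameters reported above
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.01)
```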
- **Repository:** [MedSAM Official GitHub Repository](https://github.com/bowang-lab/medsam)
- **Paper:** [Segment Anything in Medical Images](https://arxiv.org/abs/2304.12306v1)
## Usage
```python
import requests
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
from transformers import SamModel, SamProcessor
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
model = SamModel.from_pretrained("flaviagiammarino/medsam-vit-base").to(device)
processor = SamProcessor.from_pretrained("flaviagiammarino/medsam-vit-base")
img_url = "https://huggingface.co/flaviagiammarino/medsam-vit-base/resolve/main/scripts/input.png"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
input_boxes = [95., 255., 190., 350.]
inputs = processor(raw_image, input_boxes=[[input_boxes]], return_tensors="pt").to(device)
outputs = model(**inputs, multimask_output=False)
probs = processor.image_processor.post_process_masks(outputs.pred_masks.sigmoid().cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu(), binarize=False)
def show_mask(mask, ax, random_color):
if random_color:
color = np.concatenate([np.random.random(3), np.array([0.6])], axis=0)
else:
color = np.array([251/255, 252/255, 30/255, 0.6])
h, w = mask.shape[-2:]
mask_image = mask.reshape(h, w, 1) * color.reshape(1, 1, -1)
ax.imshow(mask_image)
def show_box(box, ax):
x0, y0 = box[0], box[1]
w, h = box[2] - box[0], box[3] - box[1]
ax.add_patch(plt.Rectangle((x0, y0), w, h, edgecolor="blue", facecolor=(0, 0, 0, 0), lw=2))
fig, ax = plt.subplots(1, 2, figsize=(10, 5))
ax[0].imshow(np.array(raw_image))
show_box(input_boxes, ax[0])
ax[0].set_title("Input Image and Bounding Box")
ax[0].axis("off")
ax[1].imshow(np.array(raw_image))
show_mask(mask=probs[0] > 0.5, ax=ax[1], random_color=False)
show_box(input_boxes, ax[1])
ax[1].set_title("MedSAM Segmentation")
ax[1].axis("off")
plt.show()
```

## Additional Information
### Licensing Information
The authors have released the model code and pre-trained checkpoint under the [Apache License 2.0](https://github.com/bowang-lab/MedSAM/blob/main/LICENSE).
### Citation Information
```
@article{ma2023segment,
title={Segment anything in medical images},
author={Ma, Jun and Wang, Bo},
journal={arXiv preprint arXiv:2304.12306},
year={2023}
}
``` |
Jiayi-Pan/Tiny-Vicuna-1B | Jiayi-Pan | "2024-04-26T20:00:14Z" | 5,187 | 13 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-22T21:11:51Z" | ---
language:
- en
license: apache-2.0
model-index:
- name: Tiny-Vicuna-1B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 33.45
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Jiayi-Pan/Tiny-Vicuna-1B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 55.92
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Jiayi-Pan/Tiny-Vicuna-1B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 25.45
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Jiayi-Pan/Tiny-Vicuna-1B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 33.82
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Jiayi-Pan/Tiny-Vicuna-1B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 58.41
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Jiayi-Pan/Tiny-Vicuna-1B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 1.52
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Jiayi-Pan/Tiny-Vicuna-1B
name: Open LLM Leaderboard
---
# Tiny Vicuna 1B
This model is a fine-tuned version of [TinyLlama](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T) on [WizardVicuna Dataset](https://github.com/melodysdreamj/WizardVicunaLM).
It should be fully compatible with Vicuna-v1.5 series.
This model is easy to iterate on for early experiments!
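A minimal usage sketch with `transformers`, following the Vicuna-v1.5-style prompt the card claims compatibility with (the exact system sentence is an illustrative assumption):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Jiayi-Pan/Tiny-Vicuna-1B")
model = AutoModelForCausalLM.from_pretrained("Jiayi-Pan/Tiny-Vicuna-1B")

# Vicuna-v1.5-style prompt; the system sentence is an illustrative assumption
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "USER: What is the capital of France? ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```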
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Jiayi-Pan__Tiny-Vicuna-1B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |34.76|
|AI2 Reasoning Challenge (25-Shot)|33.45|
|HellaSwag (10-Shot) |55.92|
|MMLU (5-Shot) |25.45|
|TruthfulQA (0-shot) |33.82|
|Winogrande (5-shot) |58.41|
|GSM8k (5-shot) | 1.52|
|
facebook/galactica-30b | facebook | "2023-01-24T17:20:45Z" | 5,183 | 39 | transformers | [
"transformers",
"pytorch",
"opt",
"text-generation",
"galactica",
"arxiv:1810.03993",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2022-11-16T14:46:22Z" | ---
license: cc-by-nc-4.0
tags:
- galactica
widget:
- text: "The Transformer architecture [START_REF]"
- text: "The Schwarzschild radius is defined as: \\["
- text: "A force of 0.6N is applied to an object, which accelerates at 3m/s. What is its mass? <work>"
- text: "Lecture 1: The Ising Model\n\n"
- text: "[START_I_SMILES]"
- text: "[START_AMINO]GHMQSITAGQKVISKHKNGRFYQCEVVRLTTETFYEVNFDDGSFSDNLYPEDIVSQDCLQFGPPAEGEVVQVRWTDGQVYGAKFVASHPIQMYQVEFEDGSQLVVKRDDVYTLDEELP[END_AMINO] ## Keywords"
inference: false
---

# GALACTICA 30 B (large)
Model card from the original [repo](https://github.com/paperswithcode/galai/blob/main/docs/model_card.md)
Following [Mitchell et al. (2018)](https://arxiv.org/abs/1810.03993), this model card provides information about the GALACTICA model, how it was trained, and the intended use cases. Full details about how the model was trained and evaluated can be found in the [release paper](https://galactica.org/paper.pdf).
## Model Details
The GALACTICA models are trained on a large-scale scientific corpus. The models are designed to perform scientific tasks, including but not limited to citation prediction, scientific QA, mathematical reasoning, summarization, document generation, molecular property prediction and entity extraction. The models were developed by the Papers with Code team at Meta AI to study the use of language models for the automatic organization of science. We train models with sizes ranging from 125M to 120B parameters. Below is a summary of the released models:
| Size | Parameters |
|:-----------:|:-----------:|
| `mini` | 125 M |
| `base` | 1.3 B |
| `standard` | 6.7 B |
| `large` | 30 B |
| `huge` | 120 B |
## Release Date
November 2022
## Model Type
Transformer-based architecture in a decoder-only setup with a few modifications (see the paper for more details).
## Paper & Demo
[Paper](https://galactica.org/paper.pdf) / [Demo](https://galactica.org)
## Model Use
The primary intended users of the GALACTICA models are researchers studying language models applied to the scientific domain. We also anticipate the model will be useful for developers who wish to build scientific tooling. However, we caution against production use without safeguards given the potential of language models to hallucinate.
The models are made available under a non-commercial CC BY-NC 4.0 license. More information about how to use the model can be found in the README.md of this repository.
## Training Data
The GALACTICA models are trained on 106 billion tokens of open-access scientific text and data. This includes papers, textbooks, scientific websites, encyclopedias, reference material, knowledge bases, and more. We tokenize different modalities to provide a natural language interface for different tasks. See the README.md for more information, and the paper for full details on the training data.
## How to use
Find below some example scripts on how to use the model in `transformers`:
## Using the Pytorch model
### Running the model on a CPU
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, OPTForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-30b")
model = OPTForCausalLM.from_pretrained("facebook/galactica-30b")
input_text = "The Transformer architecture [START_REF]"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
from transformers import AutoTokenizer, OPTForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-30b")
model = OPTForCausalLM.from_pretrained("facebook/galactica-30b", device_map="auto")
input_text = "The Transformer architecture [START_REF]"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU using different precisions
#### FP16
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, OPTForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-30b")
model = OPTForCausalLM.from_pretrained("facebook/galactica-30b", device_map="auto", torch_dtype=torch.float16)
input_text = "The Transformer architecture [START_REF]"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
#### INT8
<details>
<summary> Click to expand </summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, OPTForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-30b")
model = OPTForCausalLM.from_pretrained("facebook/galactica-30b", device_map="auto", load_in_8bit=True)
input_text = "The Transformer architecture [START_REF]"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
## Performance and Limitations
The model outperforms several existing language models on a range of knowledge probes, reasoning, and knowledge-intensive scientific tasks. This also extends to general NLP tasks, where GALACTICA outperforms other open source general language models. That being said, we note a number of limitations in this section.
As with other language models, GALACTICA is often prone to hallucination - and training on a high-quality academic corpus does not prevent this, especially for less popular and less cited scientific concepts. There are no guarantees of truthful output when generating from the model. This extends to specific modalities such as citation prediction. While GALACTICA's citation behaviour approaches the ground truth citation behaviour with scale, the model continues to exhibit a popularity bias at larger scales.
In addition, we evaluated the model on several types of benchmarks related to stereotypes and toxicity. Overall, the model exhibits substantially lower toxicity rates compared to other large language models. That being said, the model continues to exhibit bias on certain measures (see the paper for details). So we recommend care when using the model for generations.
## Broader Implications
GALACTICA can potentially be used as a new way to discover academic literature. We also expect a lot of downstream use for application to particular domains, such as mathematics, biology, and chemistry. In the paper, we demonstrated several examples of the model acting as alternative to standard search tools. We expect a new generation of scientific tools to be built upon large language models such as GALACTICA.
We encourage researchers to investigate beneficial and new use cases for these models. That being said, it is important to be aware of the current limitations of large language models. Researchers should pay attention to common issues such as hallucination and biases that could emerge from using these models.
## Citation
```bibtex
@inproceedings{GALACTICA,
title={GALACTICA: A Large Language Model for Science},
author={Ross Taylor and Marcin Kardas and Guillem Cucurull and Thomas Scialom and Anthony Hartshorn and Elvis Saravia and Andrew Poulton and Viktor Kerkez and Robert Stojnic},
year={2022}
}
``` |
TheBloke/Synthia-70B-GGUF | TheBloke | "2023-09-27T12:46:22Z" | 5,182 | 7 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation",
"en",
"arxiv:2306.02707",
"base_model:migtissera/Synthia-70B",
"license:llama2",
"text-generation-inference",
"region:us"
] | text-generation | "2023-08-26T12:20:37Z" | ---
language:
- en
license: llama2
library_name: transformers
model_name: Synthia 70B
base_model: migtissera/Synthia-70B
inference: false
model_creator: Migel Tissera
model_type: llama
pipeline_tag: text-generation
prompt_template: 'SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Synthia 70B - GGUF
- Model creator: [Migel Tissera](https://huggingface.co/migtissera)
- Original model: [Synthia 70B](https://huggingface.co/migtissera/Synthia-70B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Migel Tissera's Synthia 70B](https://huggingface.co/migtissera/Synthia-70B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Synthia-70B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Synthia-70B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Synthia-70B-GGUF)
* [Migel Tissera's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/migtissera/Synthia-70B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Orca-Vicuna
```
SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
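A minimal sketch of filling this template from Python (message contents are placeholders):
```python
def orca_vicuna_prompt(system_message: str, prompt: str) -> str:
    # Orca-Vicuna template shown above
    return f"SYSTEM: {system_message}\nUSER: {prompt}\nASSISTANT:"

print(orca_vicuna_prompt("You are a helpful assistant.", "What is GGUF?"))
```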
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [synthia-70b.Q2_K.gguf](https://huggingface.co/TheBloke/Synthia-70B-GGUF/blob/main/synthia-70b.Q2_K.gguf) | Q2_K | 2 | 29.11 GB| 31.61 GB | smallest, significant quality loss - not recommended for most purposes |
| [synthia-70b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Synthia-70B-GGUF/blob/main/synthia-70b.Q3_K_S.gguf) | Q3_K_S | 3 | 29.75 GB| 32.25 GB | very small, high quality loss |
| [synthia-70b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Synthia-70B-GGUF/blob/main/synthia-70b.Q3_K_M.gguf) | Q3_K_M | 3 | 33.10 GB| 35.60 GB | very small, high quality loss |
| [synthia-70b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Synthia-70B-GGUF/blob/main/synthia-70b.Q3_K_L.gguf) | Q3_K_L | 3 | 36.15 GB| 38.65 GB | small, substantial quality loss |
| [synthia-70b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Synthia-70B-GGUF/blob/main/synthia-70b.Q4_K_S.gguf) | Q4_K_S | 4 | 38.99 GB| 41.49 GB | small, greater quality loss |
| [synthia-70b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Synthia-70B-GGUF/blob/main/synthia-70b.Q4_K_M.gguf) | Q4_K_M | 4 | 41.38 GB| 43.88 GB | medium, balanced quality - recommended |
| [synthia-70b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Synthia-70B-GGUF/blob/main/synthia-70b.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB| 49.96 GB | large, low quality loss - recommended |
| [synthia-70b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Synthia-70B-GGUF/blob/main/synthia-70b.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB| 51.25 GB | large, very low quality loss - recommended |
| synthia-70b.Q6_K.gguf | Q6_K | 6 | 56.59 GB| 59.09 GB | very large, extremely low quality loss |
| synthia-70b.Q8_0.gguf | Q8_0 | 8 | 73.23 GB| 75.73 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
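As a rough rule of thumb (an estimate of ours, not a figure from the original README), you can gauge how much each offloaded layer moves from RAM to VRAM by dividing the file size by the model's 80 transformer layers:
```python
# Rough per-layer offload estimate for this 70B model (80 transformer
# layers). These numbers are approximations, not measurements.
file_size_gb = 41.38  # synthia-70b.Q4_K_M.gguf from the table above
n_layers = 80
print(f"~{file_size_gb / n_layers:.2f} GB moved to VRAM per layer offloaded with -ngl")
```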
### Q6_K and Q8_0 files are split and require joining
**Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files.
<details>
<summary>Click for instructions regarding Q6_K and Q8_0 files</summary>
### q6_K
Please download:
* `synthia-70b.Q6_K.gguf-split-a`
* `synthia-70b.Q6_K.gguf-split-b`
### q8_0
Please download:
* `synthia-70b.Q8_0.gguf-split-a`
* `synthia-70b.Q8_0.gguf-split-b`
To join the files, do the following:
Linux and macOS:
```
cat synthia-70b.Q6_K.gguf-split-* > synthia-70b.Q6_K.gguf && rm synthia-70b.Q6_K.gguf-split-*
cat synthia-70b.Q8_0.gguf-split-* > synthia-70b.Q8_0.gguf && rm synthia-70b.Q8_0.gguf-split-*
```
Windows command line:
```
COPY /B synthia-70b.Q6_K.gguf-split-a + synthia-70b.Q6_K.gguf-split-b synthia-70b.Q6_K.gguf
del synthia-70b.Q6_K.gguf-split-a synthia-70b.Q6_K.gguf-split-b
COPY /B synthia-70b.Q8_0.gguf-split-a + synthia-70b.Q8_0.gguf-split-b synthia-70b.Q8_0.gguf
del synthia-70b.Q8_0.gguf-split-a synthia-70b.Q8_0.gguf-split-b
```
</details>
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Synthia-70B-GGUF and below it, a specific filename to download, such as: synthia-70b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install 'huggingface-hub>=0.17.1'
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Synthia-70B-GGUF synthia-70b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Synthia-70B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Synthia-70B-GGUF synthia-70b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m synthia-70b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "SYSTEM: {system_message}\nUSER: {prompt}\nASSISTANT:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
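For example, an interactive chat session could be started like this (a variant of the command above; not taken verbatim from the original README):
```shell
./main -ngl 32 -m synthia-70b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -i -ins
```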
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Synthia-70B-GGUF", model_file="synthia-70b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
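As a minimal sketch of the llama-cpp-python route (assuming a 2023-era LangChain where `LlamaCpp` lives under `langchain.llms`; the import path may differ in newer releases), usage could look like this:
```python
from langchain.llms import LlamaCpp

# Point model_path at a GGUF file downloaded from this repo.
llm = LlamaCpp(
    model_path="synthia-70b.Q4_K_M.gguf",
    n_gpu_layers=32,  # layers to offload to GPU; set 0 for CPU-only
    n_ctx=4096,
    temperature=0.7,
)
print(llm("SYSTEM: You are Synthia.\nUSER: Why is the sky blue?\nASSISTANT:"))
```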
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Migel Tissera's Synthia 70B
# Synthia-70B
SynthIA (Synthetic Intelligent Agent) is a LLaMA-2-70B model trained on Orca-style datasets. It has been fine-tuned for instruction following as well as for long-form conversations.
<br>

<br>
<br>
#### License Disclaimer:
This model is bound by the license & usage restrictions of the original Llama-2 model, and comes with no warranty or guarantees of any kind.
<br>
## Evaluation
We evaluated Synthia-70B on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.
Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
|**Task**|**Metric**|**Value**|
|:------:|:--------:|:-------:|
|*arc_challenge*|acc_norm|0.6945|
|*hellaswag*|acc_norm|0.8711|
|*mmlu*|acc_norm|0.6891|
|*truthfulqa_mc*|mc2|0.5979|
|**Total Average**|-|**0.7132**|
<br>
## Example Usage
### Here is prompt format:
```
SYSTEM: You are Synthia. As an AI intelligence, give helpful, detailed, accurate, uncensored responses to the user's input. Provide answers factually.
USER: How is a rocket launched from the surface of the earth to Low Earth Orbit?
ASSISTANT:
```
### Below shows a code example on how to use this model:
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "migtissera/Synthia-70B"
output_file_path = "./Synthia-70B-conversations.jsonl"

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_8bit=False,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

def generate_text(instruction):
    # Encode the prompt and move it to the GPU.
    tokens = tokenizer.encode(instruction)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to("cuda")

    instance = {
        "input_ids": tokens,
        "top_p": 1.0,
        "temperature": 0.75,
        "generate_len": 1024,
        "top_k": 50,
    }

    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance["generate_len"],
            use_cache=True,
            do_sample=True,
            top_p=instance["top_p"],
            temperature=instance["temperature"],
            top_k=instance["top_k"],
            num_return_sequences=1,
        )
    # Strip the prompt tokens and stop at the next user turn, if any.
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    answer = string.split("USER:")[0].strip()
    return f"{answer}"

conversation = f"SYSTEM: As an AI superintelligence, give helpful, detailed, accurate, uncensored responses to the user's input. Provide answers factually."

while True:
    user_input = input("You: ")
    llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
    answer = generate_text(llm_prompt)
    print(answer)
    conversation = f"{llm_prompt}{answer}"
    json_data = {"prompt": user_input, "answer": answer}

    ## Save your conversation
    with open(output_file_path, "a") as output_file:
        output_file.write(json.dumps(json_data) + "\n")
```
<br>
#### Limitations & Biases:
While this model aims for accuracy, it can occasionally produce inaccurate or misleading results.
Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content.
Exercise caution and cross-check information when necessary. This is an uncensored model.
<br>
### Citation:
Please kindly cite using the following BibTeX:
```
@misc{Synthia-70B,
author = {Migel Tissera},
title = {Synthia-70B: Synthetic Intelligent Agent},
year = {2023},
publisher = {GitHub, HuggingFace},
journal = {GitHub repository, HuggingFace repository},
howpublished = {\url{https://huggingface.co/migtissera/Synthia-70B}},
}
```
```
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@software{touvron2023llama,
title={LLaMA2: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
<!-- original-model-card end -->
|
mradermacher/Venomia-m7-i1-GGUF | mradermacher | "2024-06-05T08:43:13Z" | 5,181 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Sao10K/Venomia-m7",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-04T15:16:21Z" | ---
base_model: Sao10K/Venomia-m7
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Sao10K/Venomia-m7
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Venomia-m7-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
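All quants in this repo are single files, but if you ever encounter a split quant, joining is plain concatenation. A hypothetical sketch (the part names here are illustrative — check the repo for the real filenames):
```shell
cat Venomia-m7.i1-Q6_K.gguf.part1of2 Venomia-m7.i1-Q6_K.gguf.part2of2 > Venomia-m7.i1-Q6_K.gguf
```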
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Venomia-m7-i1-GGUF/resolve/main/Venomia-m7.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Venomia-m7-i1-GGUF/resolve/main/Venomia-m7.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Venomia-m7-i1-GGUF/resolve/main/Venomia-m7.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Venomia-m7-i1-GGUF/resolve/main/Venomia-m7.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Venomia-m7-i1-GGUF/resolve/main/Venomia-m7.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Venomia-m7-i1-GGUF/resolve/main/Venomia-m7.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Venomia-m7-i1-GGUF/resolve/main/Venomia-m7.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Venomia-m7-i1-GGUF/resolve/main/Venomia-m7.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Venomia-m7-i1-GGUF/resolve/main/Venomia-m7.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Venomia-m7-i1-GGUF/resolve/main/Venomia-m7.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Venomia-m7-i1-GGUF/resolve/main/Venomia-m7.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Venomia-m7-i1-GGUF/resolve/main/Venomia-m7.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Venomia-m7-i1-GGUF/resolve/main/Venomia-m7.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Venomia-m7-i1-GGUF/resolve/main/Venomia-m7.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Venomia-m7-i1-GGUF/resolve/main/Venomia-m7.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Venomia-m7-i1-GGUF/resolve/main/Venomia-m7.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Venomia-m7-i1-GGUF/resolve/main/Venomia-m7.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Venomia-m7-i1-GGUF/resolve/main/Venomia-m7.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Venomia-m7-i1-GGUF/resolve/main/Venomia-m7.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Venomia-m7-i1-GGUF/resolve/main/Venomia-m7.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Venomia-m7-i1-GGUF/resolve/main/Venomia-m7.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
sentence-transformers/bert-large-nli-mean-tokens | sentence-transformers | "2024-03-27T10:12:29Z" | 5,179 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:1908.10084",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | "2022-03-02T23:29:05Z" | ---
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
pipeline_tag: sentence-similarity
---
**⚠️ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)**
# sentence-transformers/bert-large-nli-mean-tokens
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/bert-large-nli-mean-tokens')
embeddings = model.encode(sentences)
print(embeddings)
```
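Since the embeddings live in a dense vector space, a quick similarity check is a natural follow-up. This snippet is illustrative and not part of the original card; it assumes a sentence-transformers version that provides `util.cos_sim`:
```python
from sentence_transformers import util

# Cosine similarity between the two sentence embeddings computed above.
similarity = util.cos_sim(embeddings[0], embeddings[1])
print(f"Cosine similarity: {similarity.item():.4f}")
```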
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/bert-large-nli-mean-tokens')
model = AutoModel.from_pretrained('sentence-transformers/bert-large-nli-mean-tokens')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/bert-large-nli-mean-tokens)
## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
SeaLLMs/SeaLLM-7B-v2 | SeaLLMs | "2024-04-15T02:17:00Z" | 5,177 | 62 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"multilingual",
"sea",
"conversational",
"en",
"zh",
"vi",
"id",
"th",
"ms",
"km",
"lo",
"my",
"tl",
"arxiv:2312.00738",
"arxiv:2205.11916",
"arxiv:2306.05179",
"arxiv:2306.05685",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-29T08:59:58Z" | ---
license: other
license_name: seallms
license_link: https://huggingface.co/SeaLLMs/SeaLLM-13B-Chat/blob/main/LICENSE
language:
- en
- zh
- vi
- id
- th
- ms
- km
- lo
- my
- tl
tags:
- multilingual
- sea
---
<p align="center">
<img src="seal_logo.png" width="200" />
</p>
# *SeaLLM-7B-v2* - Large Language Models for Southeast Asia
# <strong style="color: red">BIG NEWS: <a href="https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5">SeaLLM-7B-v2.5</a> is released with state-of-the-art performance in world knowledge and reasoning. SeaLLM-7B-v2 will be deprecated.</strong>
<p align="center">
<a href="https://damo-nlp-sg.github.io/SeaLLMs/" target="_blank" rel="noopener">Technical Blog</a>
<a href="https://huggingface.co/SeaLLMs/SeaLLM-7B-v2" target="_blank" rel="noopener"> 🤗 Tech Memo</a>
<a href="https://huggingface.co/spaces/SeaLLMs/SeaLLM-7B" target="_blank" rel="noopener"> 🤗 DEMO</a>
<a href="https://github.com/DAMO-NLP-SG/SeaLLMs" target="_blank" rel="noopener">Github</a>
<a href="https://arxiv.org/pdf/2312.00738.pdf" target="_blank" rel="noopener">Technical Report</a>
</p>
We introduce [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2), the state-of-the-art multilingual LLM for Southeast Asian (SEA) languages 🇬🇧 🇨🇳 🇻🇳 🇮🇩 🇹🇭 🇲🇾 🇰🇭 🇱🇦 🇲🇲 🇵🇭. It is the most significant upgrade since [SeaLLM-13B](https://huggingface.co/SeaLLMs/SeaLLM-13B-Chat): at half the size, it outperforms its predecessor across diverse multilingual tasks, from world knowledge and math reasoning to instruction following.
### Highlights
* [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) achieves the **7B-SOTA** on the **Zero-shot CoT GSM8K** task with a **78.2** score and outperforms GPT-3.5 in many GSM8K-translated tasks in SEA languages (🇨🇳 🇻🇳 🇮🇩 🇹🇭) as well as MGSM (🇨🇳 🇹🇭). It also surpasses GPT-3.5 in MATH CoT for Thai 🇹🇭.
* It scores competitively against GPT-3.5 on many zero-shot CoT commonsense benchmarks, with **82.5, 68.3, 80.9** scores on Arc-C, Winogrande, and Hellaswag.
* It achieves a **7.54** score on the 🇬🇧 **MT-bench**, ranking 3rd on the leaderboard in the 7B category, and is the best-performing multilingual model there.
* It scores **45.74** on the VMLU benchmark for Vietnamese 🇻🇳, and is the only open-source multilingual model that is competitive with monolingual models ([Vistral-7B](https://huggingface.co/Viet-Mistral/Vistral-7B-Chat)) of similar size.
### Release and DEMO
- DEMO: [SeaLLMs/SeaLLM-7B](https://huggingface.co/spaces/SeaLLMs/SeaLLM-7B).
- Technical report: [Arxiv: SeaLLMs - Large Language Models for Southeast Asia](https://arxiv.org/pdf/2312.00738.pdf).
- Model weights:
- [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2).
- [SeaLLM-7B-v2-gguf](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2-gguf).
- [SeaLLM-7B-v2-GGUF (thanks Lonestriker)](https://huggingface.co/LoneStriker/SeaLLM-7B-v2-GGUF). NOTE: use [seallm.preset.json](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2-gguf/blob/main/seallm.preset.json) to work properly.
- Run locally:
- [LM-studio](https://lmstudio.ai/):
- [SeaLLM-7B-v2-q4_0](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2-gguf/blob/main/SeaLLM-7B-v2.q4_0.gguf) and [SeaLLM-7B-v2-q8_0](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2-gguf/blob/main/SeaLLM-7B-v2.q8_0.gguf).
- LM-studio requires this [seallm.preset.json](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2-gguf/blob/main/seallm.preset.json) to set chat template properly.
- [ollama](https://ollama.ai/) `ollama run nxphi47/seallm-7b-v2:q4_0`
- [MLX for Apple Silicon](https://github.com/ml-explore/mlx): [mlx-community/SeaLLM-7B-v2-4bit-mlx](https://huggingface.co/mlx-community/SeaLLM-7B-v2-4bit-mlx)
<blockquote style="color:red">
<p><strong style="color: red">Terms of Use and License</strong>:
By using our released weights, codes, and demos, you agree to and comply with the terms and conditions specified in our <a href="https://huggingface.co/SeaLLMs/SeaLLM-Chat-13b/edit/main/LICENSE" target="_blank" rel="noopener">SeaLLMs Terms Of Use</a>.
</blockquote>
> **Disclaimer**:
> We must note that even though the weights, codes, and demos are released in an open manner, similar to other pre-trained language models, and despite our best efforts in red teaming and safety fine-tuning and enforcement, our models come with potential risks, including but not limited to inaccurate, misleading or potentially harmful generation.
> Developers and stakeholders should perform their own red teaming and provide related security measures before deployment, and they must abide by and comply with local governance and regulations.
> In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights, codes, or demos.
> The logo was generated by DALL-E 3.
### What's new since SeaLLM-13B-v1 and SeaLLM-7B-v1?
* SeaLLM-7B-v2 is continually pre-trained from [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1) and underwent carefully designed tuning with a focus on reasoning.
## Evaluation
### Zero-shot CoT Multilingual Math Reasoning
[SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) achieves a **78.2** score on GSM8K with zero-shot CoT reasoning, making it the **state of the art** among 7B models. It also outperforms GPT-3.5 on the same GSM8K benchmark translated into SEA languages (🇨🇳 🇻🇳 🇮🇩 🇹🇭). [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) also surpasses GPT-3.5 on the Thai-translated MATH benchmark, with **22.4** vs 18.1 scores.

<details>
<summary>See details on English and translated GSM8K and MATH with zero-shot reasoning</summary>
<br>
| Model | GSM8K<br>en | MATH<br>en | GSM8K<br>zh | MATH<br>zh | GSM8K<br>vi | MATH<br>vi | GSM8K<br>id | MATH<br>id | GSM8K<br>th | MATH<br>th
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GPT-3.5 | 80.8 | 34.1 | 48.2 | 21.5 | 55 | 26.5 | 64.3 | 26.4 | 35.8 | 18.1
| Qwen-14B-chat | 61.4 | 18.4 | 41.6 | 11.8 | 33.6 | 3.6 | 44.7 | 8.6 | 22 | 6
| Vistral-7b-chat | 48.2 | 12.5 | | | 48.7 | 3.1 | | | |
| Qwen1.5-7B-chat | 56.8 | 15.3 | 40 | 2.7 | 37.7 | 9 | 36.9 | 7.7 | 21.9 |
| SeaLLM-7B-v2 | 78.2 | 27.5 | 53.7 | 17.6 | 69.9 | 23.8 | 71.5 | 24.4 | 59.6 | 22.4
</details>
Baselines were evaluated using their respective chat-template and system prompts ([Qwen1.5-7B-chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat/blob/main/tokenizer_config.json), [Vistral](https://huggingface.co/Viet-Mistral/Vistral-7B-Chat)).
#### Zero-shot MGSM
[SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) also outperforms GPT-3.5 and Qwen-14B on the multilingual MGSM for Zh and Th.
| Model | MGSM-Zh | MGSM-Th
|-----| ----- | ---
| ChatGPT (reported) | 61.2 | 47.2
| Qwen-14B-chat | 59.6 | 28
| SeaLLM-7B-v2 | **64.8** | **62.4**
### Zero-shot Commonsense Reasoning
We compare [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) with ChatGPT and Mistral-7B-instruct on various zero-shot commonsense benchmarks (Arc-Challenge, Winogrande and Hellaswag). We use the 2-stage technique in [(Kojima et al., 2023)](https://arxiv.org/pdf/2205.11916.pdf) to grab the answer (a sketch of this extraction is shown after the table below). Note that we **DID NOT** use "Let's think step-by-step" to invoke explicit CoT.
| 0-shot reasoning | Arc-Challenge | Winogrande | Hellaswag
|-----| ----- | --- | -- |
| ChatGPT (reported) | 84.6* | 66.8* | 72.0*
| ChatGPT (reproduced)| 84.1 | 63.1 | 79.5
| Mistral-7B-Instruct | 68.1 | 56.4 | 45.6
| Qwen1.5-7B-chat | 79.3 | 59.4 | 69.3
| SeaLLM-7B-v2 | 82.5 | 68.3 | 80.9
Baselines were evaluated using their respective chat-template and system prompts ([Qwen1.5-7B-chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat/blob/main/tokenizer_config.json), [Mistral](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)).
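For readers unfamiliar with the 2-stage answer extraction mentioned above, here is our rough illustrative pseudocode — the `generate` callable and prompt wording are hypothetical, not the authors' evaluation harness:
```python
def two_stage_answer(generate, question, choices):
    # Stage 1: let the model produce its free-form reasoning.
    # Note: no explicit "Let's think step-by-step" trigger is added.
    stage1 = f"Question: {question}\nAnswer choices: {choices}\nAnswer:"
    reasoning = generate(stage1)
    # Stage 2: re-prompt with the reasoning appended and ask for the
    # final option, then read off the model's short completion.
    stage2 = f"{stage1} {reasoning}\nTherefore, the answer is"
    return generate(stage2).strip()
```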
### Multilingual World Knowledge
We evaluate models on 3 benchmarks following the recommended default setups: 5-shot MMLU for En, 3-shot [M3Exam](https://arxiv.org/pdf/2306.05179.pdf) (M3e) for En, Zh, Vi, Id, Th, and zero-shot [VMLU](https://vmlu.ai/) for Vi.
| Model | Langs | En<br>MMLU | En<br>M3e | Zh<br>M3e | Vi<br>M3e | Vi<br>VMLU | Id<br>M3e | Th<br>M3e
|-----| ----- | --- | -- | ----- | ---- | --- | --- | --- |
| GPT-3.5 | Multi | 68.90 | 75.46 | 60.20 | 58.64 | 46.32 | 49.27 | 37.41
| Vistral-7B-chat | Mono | 56.86 | 67.00 | 44.56 | 54.33 | 50.03 | 36.49 | 25.27
| Qwen1.5-7B-chat | Multi | 61.00 | 52.07 | 81.96 | 43.38 | 45.02 | 24.29 | 20.25
| SeaLLM-7B-v2 | Multi | 61.89 | 70.91 | 55.43 | 51.15 | 45.74 | 42.25 | 35.52
VMLU reproduce script [here](https://github.com/DAMO-NLP-SG/SeaLLMs/blob/main/evaluation/vmlu/vmlu_run.py). Lm-eval was used to evaluate MMLU.
0-shot VMLU scores for baselines were evaluated using their respective chat-template and system prompts ([Qwen1.5-7B-chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat/blob/main/tokenizer_config.json)).
### MT-Bench
On the English [MT-bench](https://arxiv.org/abs/2306.05685), SeaLLM-7B-v2 achieves a **7.54** score (3rd place on the leaderboard in the 7B category), outperforms many 70B models, and is arguably the only one that handles 10 SEA languages.
Refer to [mt_bench/seallm_7b_v2.jsonl](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2/blob/main/evaluation/mt_bench/seallm_7b_v2.jsonl) for the MT-bench predictions of SeaLLM-7B-v2, and [here](https://github.com/lm-sys/FastChat/issues/3013#issue-2118685341) to reproduce it.
| Model | Access | Langs | MT-Bench
| --- | --- | --- | --- |
| GPT-4-turbo | closed | multi | 9.32
| GPT-4-0613 | closed | multi | 9.18
| Mixtral-8x7b (46B) | open | multi | 8.3
| Starling-LM-7B-alpha | open | mono (en) | 8.0
| OpenChat-3.5-7B | open | mono (en) | 7.81
| **SeaLLM-7B-v2** | **open** | **multi (10+)** | **7.54**
| [Qwen-14B](https://huggingface.co/Qwen/Qwen-14B-Chat) | open | multi | 6.96
| [Llama-2-70B](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) | open | mono (en) | 6.86
| Mistral-7B-instuct | open | mono (en) | 6.84
### Sea-Bench
Similar to MT-Bench, [Sea-bench](https://huggingface.co/datasets/SeaLLMs/Sea-bench) is a set of categorized instruction test sets to measure models' ability as an assistant that is specifically focused on 9 SEA languages, including non-Latin low-resource languages.
As shown, the largest improvements come from math reasoning, reaching GPT-3.5-level performance.

Refer to [sea_bench/seallm_7b_v2.jsonl](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2/blob/main/evaluation/sea_bench/seallm_7b_v2.jsonl) for the Sea-bench predictions of SeaLLM-7B-v2.
### Usage
#### Instruction format
```python
prompt = """<|im_start|>system
You are a helpful assistant.</s><|im_start|>user
Hello world</s><|im_start|>assistant
Hi there, how can I help?</s>"""
# NOTE: previous commit has \n between </s> and <|im_start|>, that was incorrect!
# <|im_start|> is not a special token.
# Transformers chat_template should be consistent with vLLM format below.
# ! ENSURE 1 and only 1 bos `<s>` at the beginning of sequence
print(tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt)))
"""
['<s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'system', '<0x0A>', 'You', '▁are', '▁a', '▁helpful', '▁assistant', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Hello', '▁world', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'ass', 'istant', '<0x0A>', 'Hi', '▁there', ',', '▁how', '▁can', '▁I', '▁help', '?', '</s>']
"""
```
#### Using transformers's chat_template
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

# use bfloat16 to ensure the best performance.
model = AutoModelForCausalLM.from_pretrained("SeaLLMs/SeaLLM-7B-v2", torch_dtype=torch.bfloat16, device_map=device)
tokenizer = AutoTokenizer.from_pretrained("SeaLLMs/SeaLLM-7B-v2")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello world"},
    {"role": "assistant", "content": "Hi there, how can I help you today?"},
    {"role": "user", "content": "Explain general relativity in details."}
]

encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True)
print(tokenizer.convert_ids_to_tokens(encodeds[0]))
# ['<s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'system', '<0x0A>', 'You', '▁are', '▁a', '▁helpful', '▁assistant', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Hello', '▁world', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'ass', 'istant', '<0x0A>', 'Hi', '▁there', ',', '▁how', '▁can', '▁I', '▁help', '▁you', '▁today', '?', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Ex', 'plain', '▁general', '▁rel', 'ativity', '▁in', '▁details', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'ass', 'istant', '<0x0A>']
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True, pad_token_id=tokenizer.pad_token_id)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
#### Using vLLM
```python
from vllm import LLM, SamplingParams

TURN_TEMPLATE = "<|im_start|>{role}\n{content}</s>"
TURN_PREFIX = "<|im_start|>{role}\n"
# There is no \n between </s> and <|im_start|>.

def seallm_chat_convo_format(conversations, add_assistant_prefix: bool, system_prompt=None):
    # conversations: list of dict with key `role` and `content` (openai format)
    if conversations[0]['role'] != 'system' and system_prompt is not None:
        conversations = [{"role": "system", "content": system_prompt}] + conversations
    text = ''
    for turn_id, turn in enumerate(conversations):
        prompt = TURN_TEMPLATE.format(role=turn['role'], content=turn['content'])
        text += prompt
    if add_assistant_prefix:
        prompt = TURN_PREFIX.format(role='assistant')
        text += prompt
    return text

sparams = SamplingParams(temperature=0.1, max_tokens=1024, stop=['</s>', '<|im_start|>'])
llm = LLM("SeaLLMs/SeaLLM-7B-v2", dtype="bfloat16")

message = "Explain general relativity in details."
# The formatter expects an OpenAI-style conversation list; `sparams`
# (defined above) is passed as the sampling parameters.
prompt = seallm_chat_convo_format([{"role": "user", "content": message}], True)
gen = llm.generate(prompt, sparams)
print(gen[0].outputs[0].text)
```
#### Fine-tuning SeaLLM-7B-v2
Fine-tuning should follow the chat format above and accurately mask out the source (non-assistant) tokens. Here is an example.
```python
conversations = [
    {"role": "system", "content": "You are helful assistant."},
    {"role": "user", "content": "Hello world."},
    {"role": "assistant", "content": "Hi there, how can I help?"},
    {"role": "user", "content": "Tell me a joke."},
    {"role": "assistant", "content": "Why don't scientists trust atoms? Because they make up everything."},
]

def seallm_7b_v2_tokenize_multi_turns(tokenizer, conversations, add_assistant_prefix=False):
    """
    Inputs:
        conversations: list of dict following openai format, eg
            conversations = [
                {"role": "system", "content": "You are helful assistant."},
                {"role": "user", "content": "Hello world."},
                {"role": "assistant", "content": "Hi there, how can I help?"},
                {"role": "user", "content": "Tell me a joke."},
                {"role": "assistant", "content": "Why don't scientists trust atoms? Because they make up everything."},
            ]
        add_assistant_prefix: whether to add assistant_prefix, only for inference decoding
    Outputs:
        tokenize_output_sample, {
            "input_ids": ...
            "token_type_ids": 1 if train and 0 if masked out (not train)
        }
    During training, need to create a labels, with masked-out tokens = -100 to avoid loss computations.
        labels = sample['input_ids'].clone()
        labels[sample['token_type_ids'] == 0] = -100
    """
    TURN_TEMPLATE = "<|im_start|>{role}\n{content}</s>"
    TURN_PREFIX = "<|im_start|>{role}\n"
    sample = None
    assistant_prefix_len = None
    for turn_id, turn in enumerate(conversations):
        prompt = TURN_TEMPLATE.format(role=turn['role'], content=turn['content'])
        turn_sample = tokenizer(
            prompt, padding=False, truncation=False, verbose=False, add_special_tokens=False,
            return_token_type_ids=True,
        )
        if turn['role'] == 'assistant':
            if assistant_prefix_len is None:
                assistant_prefix_len = len(tokenizer.encode(TURN_PREFIX.format(role=turn['role']), add_special_tokens=False))
            # Only train on the assistant's content tokens, not the turn prefix.
            turn_sample['token_type_ids'][assistant_prefix_len:] = [1] * (len(turn_sample['input_ids']) - assistant_prefix_len)
        if sample is None:
            sample = turn_sample
        else:
            for k in turn_sample.keys():
                sample[k].extend(turn_sample[k])
    if add_assistant_prefix:
        assistant_prefix_sample = tokenizer(
            TURN_PREFIX.format(role="assistant"), padding=False, truncation=False, verbose=False, add_special_tokens=False,
            return_token_type_ids=True,
        )
        for k in sample.keys():
            sample[k].extend(assistant_prefix_sample[k])
    if tokenizer.add_bos_token:
        sample['input_ids'] = [tokenizer.bos_token_id] + sample['input_ids']
        sample['attention_mask'] = [1] + sample['attention_mask']
        sample['token_type_ids'] = [sample['token_type_ids'][0]] + sample['token_type_ids']
    return sample

# ! testing
sample = seallm_7b_v2_tokenize_multi_turns(tokenizer, conversations)
print(tokenizer.convert_ids_to_tokens(sample['input_ids']))
print(sample['token_type_ids'])
# ['<s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'system', '<0x0A>', 'You', '▁are', '▁hel', 'ful', '▁assistant', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Hello', '▁world', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'ass', 'istant', '<0x0A>', 'Hi', '▁there', ',', '▁how', '▁can', '▁I', '▁help', '?', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Tell', '▁me', '▁a', '▁joke', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'ass', 'istant', '<0x0A>', 'Why', '▁don', "'", 't', '▁scientists', '▁trust', '▁atoms', '?', '▁Because', '▁they', '▁make', '▁up', '▁everything', '.', '</s>']
# [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```
## Acknowledgement to Our Linguists
We would like to express our special thanks to our professional and native linguists, Tantong Champaiboon, Nguyen Ngoc Yen Nhi and Tara Devina Putri, who helped build, evaluate, and fact-check our sampled pretraining and SFT dataset as well as evaluating our models across different aspects, especially safety.
## Citation
If you find our project useful, we hope you would kindly star our repo and cite our work as follows.

Corresponding Author: [[email protected]](mailto:[email protected])
**Author list and order will change!**
* `*` and `^` are equal contributions.
```
@article{damonlpsg2023seallm,
author = {Xuan-Phi Nguyen*, Wenxuan Zhang*, Xin Li*, Mahani Aljunied*,
Zhiqiang Hu, Chenhui Shen^, Yew Ken Chia^, Xingxuan Li, Jianyu Wang,
Qingyu Tan, Liying Cheng, Guanzheng Chen, Yue Deng, Sen Yang,
Chaoqun Liu, Hang Zhang, Lidong Bing},
title = {SeaLLMs - Large Language Models for Southeast Asia},
year = 2023,
Eprint = {arXiv:2312.00738},
}
```
|
RichardErkhov/davzoku_-_frankencria-llama2-12.5b-v1.3-m.2-gguf | RichardErkhov | "2024-06-28T20:15:08Z" | 5,176 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-28T16:13:49Z" | Entry not found |
apple/OpenELM-270M-Instruct | apple | "2024-05-02T00:55:44Z" | 5,175 | 111 | transformers | [
"transformers",
"safetensors",
"openelm",
"text-generation",
"custom_code",
"arxiv:2404.14619",
"license:other",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-04-12T21:51:40Z" | ---
license: other
license_name: apple-sample-code-license
license_link: LICENSE
---
# OpenELM
*Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari*
We introduce **OpenELM**, a family of **Open** **E**fficient **L**anguage **M**odels. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. We pretrained OpenELM models using the [CoreNet](https://github.com/apple/corenet) library. We release both pretrained and instruction tuned models with 270M, 450M, 1.1B and 3B parameters.
Our pre-training dataset contains RefinedWeb, deduplicated PILE, a subset of RedPajama, and a subset of Dolma v1.6, totaling approximately 1.8 trillion tokens. Please check license agreements and terms of these datasets before using them.
## Usage
We have provided an example function to generate output from OpenELM models loaded via [HuggingFace Hub](https://huggingface.co/docs/hub/) in `generate_openelm.py`.
You can try the model by running the following command:
```
python generate_openelm.py --model apple/OpenELM-270M-Instruct --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2
```
Please refer to [this link](https://huggingface.co/docs/hub/security-tokens) to obtain your hugging face access token.
Additional arguments to the Hugging Face generate function can be passed via `generate_kwargs`. As an example, to speed up inference, you can try [lookup token speculative generation](https://huggingface.co/docs/transformers/generation_strategies) by passing the `prompt_lookup_num_tokens` argument as follows:
```
python generate_openelm.py --model apple/OpenELM-270M-Instruct --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 prompt_lookup_num_tokens=10
```
Alternatively, try model-wise speculative generation with an [assistive model](https://huggingface.co/blog/assisted-generation) by passing a smaller model through the `assistant_model` argument, for example:
```
python generate_openelm.py --model apple/OpenELM-270M-Instruct --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 --assistant_model [SMALLER_MODEL]
```
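If you prefer to bypass `generate_openelm.py`, a minimal direct-`transformers` sketch could look like the following. Note the assumptions: OpenELM is loaded with `trust_remote_code=True`, and the LLaMA tokenizer is used (as in the evaluation setup below), which requires access to `meta-llama/Llama-2-7b-hf`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load OpenELM (custom modeling code) and the LLaMA tokenizer it expects.
model = AutoModelForCausalLM.from_pretrained("apple/OpenELM-270M-Instruct", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("Once upon a time there was", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64, repetition_penalty=1.2)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```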
## Main Results
### Zero-Shot
| **Model Size** | **ARC-c** | **ARC-e** | **BoolQ** | **HellaSwag** | **PIQA** | **SciQ** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|-----------|-----------|---------------|-----------|-----------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 26.45 | 45.08 | **53.98** | 46.71 | 69.75 | **84.70** | **53.91** | 54.37 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **30.55** | **46.68** | 48.56 | **52.07** | **70.78** | 84.40 | 52.72 | **55.11** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 27.56 | 48.06 | 55.78 | 53.97 | 72.31 | 87.20 | 58.01 | 57.56 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **30.38** | **50.00** | **60.37** | **59.34** | **72.63** | **88.00** | **58.96** | **59.95** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 32.34 | **55.43** | 63.58 | 64.81 | **75.57** | **90.60** | 61.72 | 63.44 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **37.97** | 52.23 | **70.00** | **71.20** | 75.03 | 89.30 | **62.75** | **65.50** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 35.58 | 59.89 | 67.40 | 72.44 | 78.24 | **92.70** | 65.51 | 67.39 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **39.42** | **61.74** | **68.17** | **76.36** | **79.00** | 92.50 | **66.85** | **69.15** |
### LLM360
| **Model Size** | **ARC-c** | **HellaSwag** | **MMLU** | **TruthfulQA** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|---------------|-----------|----------------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | 47.15 | 25.72 | **39.24** | **53.83** | 38.72 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | **51.58** | **26.70** | 38.72 | 53.20 | **40.54** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | 53.86 | **26.01** | 40.18 | 57.22 | 41.50 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | **59.31** | 25.41 | **40.48** | **58.33** | **43.41** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | 65.71 | **27.05** | 36.98 | 63.22 | 45.93 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | **71.83** | 25.65 | **45.95** | **64.72** | **49.94** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | 73.28 | **26.76** | 34.98 | 67.25 | 48.90 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | **76.87** | 24.80 | **38.76** | **67.96** | **51.22** |
### OpenLLM Leaderboard
| **Model Size** | **ARC-c** | **CrowS-Pairs** | **HellaSwag** | **MMLU** | **PIQA** | **RACE** | **TruthfulQA** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|-----------------|---------------|-----------|-----------|-----------|----------------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | **66.79** | 47.15 | 25.72 | 69.75 | 30.91 | **39.24** | **53.83** | 45.13 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | 66.01 | **51.58** | **26.70** | **70.78** | 33.78 | 38.72 | 53.20 | **46.66** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | **68.63** | 53.86 | **26.01** | 72.31 | 33.11 | 40.18 | 57.22 | 47.69 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | 67.44 | **59.31** | 25.41 | **72.63** | **36.84** | **40.48** | **58.33** | **49.25** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | **71.74** | 65.71 | **27.05** | **75.57** | 36.46 | 36.98 | 63.22 | 51.68 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | 71.02 | **71.83** | 25.65 | 75.03 | **39.43** | **45.95** | **64.72** | **54.40** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | **73.29** | 73.28 | **26.76** | 78.24 | **38.76** | 34.98 | 67.25 | 54.35 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | 72.33 | **76.87** | 24.80 | **79.00** | 38.47 | **38.76** | **67.96** | **55.73** |
See the technical report for more results and comparison.
## Evaluation
### Setup
Install the following dependencies:
```bash
# install public lm-eval-harness
harness_repo="public-lm-eval-harness"
git clone https://github.com/EleutherAI/lm-evaluation-harness ${harness_repo}
cd ${harness_repo}
# use main branch on 03-15-2024, SHA is dc90fec
git checkout dc90fec
pip install -e .
cd ..
# 66d6242 is the main branch on 2024-04-01
pip install datasets@git+https://github.com/huggingface/datasets.git@66d6242
pip install 'tokenizers>=0.15.2' 'transformers>=4.38.2' 'sentencepiece>=0.2.0'
```
### Evaluate OpenELM
```bash
# OpenELM-270M-Instruct
hf_model=apple/OpenELM-270M-Instruct
# this flag is needed because lm-eval-harness sets add_bos_token to False by default, but OpenELM uses the LLaMA tokenizer, which requires add_bos_token to be True
tokenizer=meta-llama/Llama-2-7b-hf
add_bos_token=True
batch_size=1
mkdir lm_eval_output
shot=0
task=arc_challenge,arc_easy,boolq,hellaswag,piqa,race,winogrande,sciq,truthfulqa_mc2
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
shot=5
task=mmlu,winogrande
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
shot=25
task=arc_challenge,crows_pairs_english
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
shot=10
task=hellaswag
lm_eval --model hf \
--model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
--tasks ${task} \
--device cuda:0 \
--num_fewshot ${shot} \
--output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
--batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
```
## Bias, Risks, and Limitations
The release of OpenELM models aims to empower and enrich the open research community by providing access to state-of-the-art language models. Trained on publicly available datasets, these models are made available without any safety guarantees. Consequently, there exists the possibility of these models producing outputs that are inaccurate, harmful, biased, or objectionable in response to user prompts. Thus, it is imperative for users and developers to undertake thorough safety testing and implement appropriate filtering mechanisms tailored to their specific requirements.
## Citation
If you find our work useful, please cite:
```BibTex
@article{mehtaOpenELMEfficientLanguage2024,
title = {{OpenELM}: {An} {Efficient} {Language} {Model} {Family} with {Open} {Training} and {Inference} {Framework}},
shorttitle = {{OpenELM}},
url = {https://arxiv.org/abs/2404.14619v1},
language = {en},
urldate = {2024-04-24},
journal = {arXiv.org},
author = {Mehta, Sachin and Sekhavat, Mohammad Hossein and Cao, Qingqing and Horton, Maxwell and Jin, Yanzi and Sun, Chenfan and Mirzadeh, Iman and Najibi, Mahyar and Belenko, Dmitry and Zatloukal, Peter and Rastegari, Mohammad},
month = apr,
year = {2024},
}
@inproceedings{mehta2022cvnets,
author = {Mehta, Sachin and Abdolhosseini, Farzad and Rastegari, Mohammad},
title = {CVNets: High Performance Library for Computer Vision},
year = {2022},
booktitle = {Proceedings of the 30th ACM International Conference on Multimedia},
series = {MM '22}
}
```
|
redponike/Llama-3-Instruct-8B-SPPO-Iter3-GGUF | redponike | "2024-06-26T16:41:34Z" | 5,175 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-26T14:12:55Z" | GGUF quants of [UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3](https://huggingface.co/UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3)
I modified the tokenizer parameters to get properly working GGUFs. |
mradermacher/Samantha-Qwen2-7B-i1-GGUF | mradermacher | "2024-06-17T18:15:04Z" | 5,173 | 1 | transformers | [
"transformers",
"gguf",
"en",
"zh",
"dataset:macadeliccc/opus_samantha",
"dataset:HuggingfaceH4/ultrachat_200k",
"dataset:teknium/OpenHermes-2.5",
"dataset:Sao10K/Claude-3-Opus-Instruct-15K",
"base_model:macadeliccc/Samantha-Qwen2-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-17T15:40:25Z" | ---
base_model: macadeliccc/Samantha-Qwen2-7B
datasets:
- macadeliccc/opus_samantha
- HuggingfaceH4/ultrachat_200k
- teknium/OpenHermes-2.5
- Sao10K/Claude-3-Opus-Instruct-15K
language:
- en
- zh
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/macadeliccc/Samantha-Qwen2-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Samantha-Qwen2-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Samantha-Qwen2-7B-i1-GGUF/resolve/main/Samantha-Qwen2-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Samantha-Qwen2-7B-i1-GGUF/resolve/main/Samantha-Qwen2-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Samantha-Qwen2-7B-i1-GGUF/resolve/main/Samantha-Qwen2-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Samantha-Qwen2-7B-i1-GGUF/resolve/main/Samantha-Qwen2-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Samantha-Qwen2-7B-i1-GGUF/resolve/main/Samantha-Qwen2-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Samantha-Qwen2-7B-i1-GGUF/resolve/main/Samantha-Qwen2-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Samantha-Qwen2-7B-i1-GGUF/resolve/main/Samantha-Qwen2-7B.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Samantha-Qwen2-7B-i1-GGUF/resolve/main/Samantha-Qwen2-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Samantha-Qwen2-7B-i1-GGUF/resolve/main/Samantha-Qwen2-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Samantha-Qwen2-7B-i1-GGUF/resolve/main/Samantha-Qwen2-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Samantha-Qwen2-7B-i1-GGUF/resolve/main/Samantha-Qwen2-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Samantha-Qwen2-7B-i1-GGUF/resolve/main/Samantha-Qwen2-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Samantha-Qwen2-7B-i1-GGUF/resolve/main/Samantha-Qwen2-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Samantha-Qwen2-7B-i1-GGUF/resolve/main/Samantha-Qwen2-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Samantha-Qwen2-7B-i1-GGUF/resolve/main/Samantha-Qwen2-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Samantha-Qwen2-7B-i1-GGUF/resolve/main/Samantha-Qwen2-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Samantha-Qwen2-7B-i1-GGUF/resolve/main/Samantha-Qwen2-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Samantha-Qwen2-7B-i1-GGUF/resolve/main/Samantha-Qwen2-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Samantha-Qwen2-7B-i1-GGUF/resolve/main/Samantha-Qwen2-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Samantha-Qwen2-7B-i1-GGUF/resolve/main/Samantha-Qwen2-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Samantha-Qwen2-7B-i1-GGUF/resolve/main/Samantha-Qwen2-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
DILAB-HYU/KoQuality-Polyglot-5.8b | DILAB-HYU | "2023-11-05T11:49:45Z" | 5,170 | 2 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"polyglot-ko",
"gpt-neox",
"KoQuality",
"ko",
"dataset:DILAB-HYU/KoQuality",
"base_model:EleutherAI/polyglot-ko-5.8b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-09-24T14:07:52Z" | ---
language:
- ko
license: apache-2.0
tags:
- generated_from_trainer
- polyglot-ko
- gpt-neox
- KoQuality
datasets:
- DILAB-HYU/KoQuality
pipeline_tag: text-generation
base_model: EleutherAI/polyglot-ko-5.8b
model-index:
- name: KoAlpaca-Polyglot-5.8B
results: []
---
# **KoQuality-Polyglot-5.8b**
KoQuality-Polyglot-5.8b is a fine-tuned version of the [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) model, trained on the [KoQuality dataset](https://huggingface.co/datasets/DILAB-HYU/KoQuality). Notably, when models employing CoT datasets are excluded, KoQuality-Polyglot-5.8b exhibits exceptional performance among models of the same size, even though it was trained on a relatively small dataset.
## Open Ko-LLM LeaderBoard
<img src="https://cdn-uploads.huggingface.co/production/uploads/6152b4b9ecf3ca6ab820e325/iYzR_mdvkcjnVquho0Y9R.png" width="1000px">
Our approach centers on leveraging high-quality instruction datasets to improve instruction understanding while preserving the performance of the pre-trained language model (PLM). Compared to alternative models, we achieved this with minimal training, **utilizing only 1% of the dataset, which amounts to 4,006 instructions**.
## Overall Average accuracy score of the KoBEST datasets
We use the [KoBEST benchmark](https://huggingface.co/datasets/skt/kobest_v1) datasets (BoolQ, COPA, HellaSwag, SentiNeg, WiC) to compare the accuracy of our best model with that of other models. Our model outperforms the others in average accuracy across the KoBEST datasets.
<img src="https://cdn-uploads.huggingface.co/production/uploads/650fecfd247f564485f8fbcf/t5x4PphoNb-tW3iCzXXHT.png" width= "500px">
| Model | 0-shot | 1-shot | 2-shot | 5-shot | 10-shot |
| --- | --- | --- | --- | --- | --- |
| polyglot-ko-5.8b | 0.4734 | 0.5929 | 0.6120 | 0.6388 | 0.6295 |
| koalpaca-polyglot-5.8b | 0.4731 | 0.5284 | 0.5721 | 0.6054 | 0.6042 |
| kullm-polyglot-5.8b | 0.4415 | 0.6030 | 0.5849 | 0.6252 | 0.6451 |
| koquality-polyglot-5.8b | 0.4530 | 0.6050 | 0.6351 | 0.6420 | 0.6457 |
## Evaluation results
### COPA (F1)
<img src="https://cdn-uploads.huggingface.co/production/uploads/650fecfd247f564485f8fbcf/QAie0x99S8-KEKvK0I_uZ.png" width= "500px">
### BoolQ (F1)
<img src="https://cdn-uploads.huggingface.co/production/uploads/650fecfd247f564485f8fbcf/CtEWEQ5BBS05V9cDWA7kp.png" width= "500px">
### HellaSwag (F1)
<img src="https://cdn-uploads.huggingface.co/production/uploads/650fecfd247f564485f8fbcf/cHws6qWkDlTfs5GVcQvtN.png" width= "500px">
### SentiNeg (F1)
<img src="https://cdn-uploads.huggingface.co/production/uploads/650fecfd247f564485f8fbcf/VEG15XXOIbzJyQAusLa4B.png" width= "500px">
### WiC (F1)
<img src="https://cdn-uploads.huggingface.co/production/uploads/650fecfd247f564485f8fbcf/hV-uADJiydkVQOyYysej9.png" width= "500px">
## Training hyperparameters
- learning_rate: 5e-5
- train_batch_size: 4
- seed: 42
- distributed_type: multi-GPU (A100 80G) + No offloading
- num_devices: 4
- gradient_accumulation_steps: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
## Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.11.0
- deepspeed 0.9.5
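## Usage
A minimal inference sketch (standard `transformers` loading for a GPT-NeoX-based checkpoint; the prompt and generation settings are illustrative, not the ones used in the evaluations above):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DILAB-HYU/KoQuality-Polyglot-5.8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"  # device_map requires `accelerate`
)

prompt = "한국의 수도는 어디인가요?"  # "What is the capital of Korea?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```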
## Citation
```
@misc{2023koquality,
title = {KoQuality: Curation of High-quality Instruction Data for Korean Language Models},
author = {Na, Yohan and Kim, Dahye and Chae, Dong-Kyu},
journal={Proceedings of the 35th Annual Conference on Human and Cognitive Language Technology (HCLT 2023)},
pages={306-311},
year = {2023},
}
```
More details can be found here: [github.com/nayohan/KoQuality](https://github.com/nayohan/KoQuality)
<br> |
TheBloke/WizardLM-30B-Uncensored-GGUF | TheBloke | "2023-09-27T12:52:39Z" | 5,169 | 11 | transformers | [
"transformers",
"gguf",
"llama",
"uncensored",
"dataset:ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered",
"base_model:ehartford/WizardLM-30B-Uncensored",
"license:other",
"text-generation-inference",
"region:us"
] | null | "2023-09-19T23:15:29Z" | ---
license: other
tags:
- uncensored
datasets:
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
model_name: Wizardlm 30B Uncensored
base_model: ehartford/WizardLM-30B-Uncensored
inference: false
model_creator: Eric Hartford
model_type: llama
prompt_template: '{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Wizardlm 30B Uncensored - GGUF
- Model creator: [Eric Hartford](https://huggingface.co/ehartford)
- Original model: [Wizardlm 30B Uncensored](https://huggingface.co/ehartford/WizardLM-30B-Uncensored)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Eric Hartford's Wizardlm 30B Uncensored](https://huggingface.co/ehartford/WizardLM-30B-Uncensored).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/WizardLM-30B-uncensored-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/WizardLM-30B-uncensored-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/WizardLM-30B-uncensored-GGUF)
* [Eric Hartford's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/WizardLM-30B-Uncensored)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: WizardLM
```
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [WizardLM-30B-Uncensored.Q2_K.gguf](https://huggingface.co/TheBloke/WizardLM-30B-uncensored-GGUF/blob/main/WizardLM-30B-Uncensored.Q2_K.gguf) | Q2_K | 2 | 13.50 GB| 16.00 GB | smallest, significant quality loss - not recommended for most purposes |
| [WizardLM-30B-Uncensored.Q3_K_S.gguf](https://huggingface.co/TheBloke/WizardLM-30B-uncensored-GGUF/blob/main/WizardLM-30B-Uncensored.Q3_K_S.gguf) | Q3_K_S | 3 | 14.06 GB| 16.56 GB | very small, high quality loss |
| [WizardLM-30B-Uncensored.Q3_K_M.gguf](https://huggingface.co/TheBloke/WizardLM-30B-uncensored-GGUF/blob/main/WizardLM-30B-Uncensored.Q3_K_M.gguf) | Q3_K_M | 3 | 15.76 GB| 18.26 GB | very small, high quality loss |
| [WizardLM-30B-Uncensored.Q3_K_L.gguf](https://huggingface.co/TheBloke/WizardLM-30B-uncensored-GGUF/blob/main/WizardLM-30B-Uncensored.Q3_K_L.gguf) | Q3_K_L | 3 | 17.28 GB| 19.78 GB | small, substantial quality loss |
| [WizardLM-30B-Uncensored.Q4_0.gguf](https://huggingface.co/TheBloke/WizardLM-30B-uncensored-GGUF/blob/main/WizardLM-30B-Uncensored.Q4_0.gguf) | Q4_0 | 4 | 18.36 GB| 20.86 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [WizardLM-30B-Uncensored.Q4_K_S.gguf](https://huggingface.co/TheBloke/WizardLM-30B-uncensored-GGUF/blob/main/WizardLM-30B-Uncensored.Q4_K_S.gguf) | Q4_K_S | 4 | 18.44 GB| 20.94 GB | small, greater quality loss |
| [WizardLM-30B-Uncensored.Q4_K_M.gguf](https://huggingface.co/TheBloke/WizardLM-30B-uncensored-GGUF/blob/main/WizardLM-30B-Uncensored.Q4_K_M.gguf) | Q4_K_M | 4 | 19.62 GB| 22.12 GB | medium, balanced quality - recommended |
| [WizardLM-30B-Uncensored.Q5_0.gguf](https://huggingface.co/TheBloke/WizardLM-30B-uncensored-GGUF/blob/main/WizardLM-30B-Uncensored.Q5_0.gguf) | Q5_0 | 5 | 22.40 GB| 24.90 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [WizardLM-30B-Uncensored.Q5_K_S.gguf](https://huggingface.co/TheBloke/WizardLM-30B-uncensored-GGUF/blob/main/WizardLM-30B-Uncensored.Q5_K_S.gguf) | Q5_K_S | 5 | 22.40 GB| 24.90 GB | large, low quality loss - recommended |
| [WizardLM-30B-Uncensored.Q5_K_M.gguf](https://huggingface.co/TheBloke/WizardLM-30B-uncensored-GGUF/blob/main/WizardLM-30B-Uncensored.Q5_K_M.gguf) | Q5_K_M | 5 | 23.05 GB| 25.55 GB | large, very low quality loss - recommended |
| [WizardLM-30B-Uncensored.Q6_K.gguf](https://huggingface.co/TheBloke/WizardLM-30B-uncensored-GGUF/blob/main/WizardLM-30B-Uncensored.Q6_K.gguf) | Q6_K | 6 | 26.69 GB| 29.19 GB | very large, extremely low quality loss |
| [WizardLM-30B-Uncensored.Q8_0.gguf](https://huggingface.co/TheBloke/WizardLM-30B-uncensored-GGUF/blob/main/WizardLM-30B-Uncensored.Q8_0.gguf) | Q8_0 | 8 | 34.57 GB| 37.07 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/WizardLM-30B-uncensored-GGUF and below it, a specific filename to download, such as: WizardLM-30B-Uncensored.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/WizardLM-30B-uncensored-GGUF WizardLM-30B-Uncensored.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/WizardLM-30B-uncensored-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/WizardLM-30B-uncensored-GGUF WizardLM-30B-Uncensored.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m WizardLM-30B-Uncensored.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/WizardLM-30B-uncensored-GGUF", model_file="WizardLM-30B-Uncensored.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
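#### Simple llama-cpp-python example code
The [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) equivalent, as a minimal sketch (install with `pip install llama-cpp-python`, adding the CUDA/Metal build flags appropriate for your system):
```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU; use 0 for CPU-only.
llm = Llama(
    model_path="WizardLM-30B-Uncensored.Q4_K_M.gguf",  # downloaded as shown above
    n_ctx=2048,
    n_gpu_layers=32,
)

# Uses the WizardLM prompt template from this README.
output = llm("Tell me about AI\n### Response:", max_tokens=256, temperature=0.7)
print(output["choices"][0]["text"])
```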
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Eric Hartford's Wizardlm 30B Uncensored
This is WizardLM trained with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately, for example with an RLHF LoRA.
Shout out to the open source AI/ML community, and everyone who helped me out.
Note:
An uncensored model has no guardrails.
You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car.
Publishing anything this model generates is the same as publishing it yourself.
You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.
<!-- original-model-card end -->
|
deepset/bert-base-uncased-squad2 | deepset | "2023-03-24T14:15:37Z" | 5,168 | 4 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"question-answering",
"en",
"dataset:squad_v2",
"license:cc-by-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] | question-answering | "2022-03-02T23:29:05Z" | ---
language: en
license: cc-by-4.0
datasets:
- squad_v2
model-index:
- name: deepset/bert-base-uncased-squad2
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad_v2
type: squad_v2
config: squad_v2
split: validation
metrics:
- type: exact_match
value: 75.6529
name: Exact Match
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTY2YmQ0ZDFjMjRlZWRiZWQ2YWQ4MTM0ODkyYTQ0NmYwMzBlNWViZWQ0ODFhMGJmMmY4ZGYwOTQyMDAyZGNjYyIsInZlcnNpb24iOjF9.UyqonQTsCB0BW86LfPy17kLt3a4r3wMeh04MDam5t_UhElp6N02YpiKOqcb1ethNHjAR0WGyxrcV3TI4d-wFAQ
- type: f1
value: 78.6191
name: F1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZWRkZWVjMDU2YTcxYWVkZTU1YmUzY2FkNWI5NDJkM2YwMjFmMmE0Njc3MjI5N2Q0NDdhZDNkZWNjMWE5YTRmZiIsInZlcnNpb24iOjF9.ol0Zacd9ZryXazXjgVssGFYG4s5FzbhGGaj1ZEDLVN2ziyzx23bo4GH9PSuGTFxRK2BO5_dxvDupLRqJOF59Bg
---
# bert-base-uncased for QA
## Overview
**Language model:** bert-base-uncased
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Infrastructure**: 1x Tesla v100
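## Usage
A minimal sketch using the `transformers` question-answering pipeline (the standard interface for extractive QA models; the question and context are illustrative):
```python
from transformers import pipeline

model_name = "deepset/bert-base-uncased-squad2"
nlp = pipeline("question-answering", model=model_name, tokenizer=model_name)

QA_input = {
    "question": "Why is model conversion important?",
    "context": "The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks.",
}
print(nlp(QA_input))  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```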
## Hyperparameters
```
batch_size = 32
n_epochs = 3
base_LM_model = "bert-base-uncased"
max_seq_len = 384
learning_rate = 3e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride = 128
max_query_length = 64
```
## Performance
```
"exact": 73.67977764676156
"f1": 77.87647139308865
```
## Authors
- Timo Möller: `timo.moeller [at] deepset.ai`
- Julian Risch: `julian.risch [at] deepset.ai`
- Malte Pietsch: `malte.pietsch [at] deepset.ai`
- Michel Bartels: `michel.bartels [at] deepset.ai`
## About us

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs) |
timm/inception_v4.tf_in1k | timm | "2023-05-10T01:04:54Z" | 5,168 | 3 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1602.07261",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-04-25T21:31:36Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for inception_v4.tf_in1k
An Inception-v4 image classification model. Trained on ImageNet-1k by paper authors. Ported from Tensorflow via Cadene's pretrained-models.pytorch.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 42.7
- GMACs: 12.3
- Activations (M): 15.1
- Image size: 299 x 299
- **Papers:**
- Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning: https://arxiv.org/abs/1602.07261
- **Original:**
- https://github.com/tensorflow/models
- https://github.com/Cadene/pretrained-models.pytorch
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('inception_v4.tf_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'inception_v4.tf_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
    # print shape of each feature map in output
    # e.g.:
    # torch.Size([1, 64, 147, 147])
    # torch.Size([1, 160, 73, 73])
    # torch.Size([1, 384, 35, 35])
    # torch.Size([1, 1024, 17, 17])
    # torch.Size([1, 1536, 8, 8])
    print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'inception_v4.tf_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1536, 8, 8) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{Szegedy2016Inceptionv4IA,
title={Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning},
author={Christian Szegedy and Sergey Ioffe and Vincent Vanhoucke and Alexander A. Alemi},
journal={ArXiv},
year={2016},
volume={abs/1602.07261}
}
```
|
RichardErkhov/BarryFutureman_-_WildWest-Variant3-7B-gguf | RichardErkhov | "2024-06-26T13:45:38Z" | 5,168 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-26T12:40:46Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
WildWest-Variant3-7B - GGUF
- Model creator: https://huggingface.co/BarryFutureman/
- Original model: https://huggingface.co/BarryFutureman/WildWest-Variant3-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [WildWest-Variant3-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/BarryFutureman_-_WildWest-Variant3-7B-gguf/blob/main/WildWest-Variant3-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [WildWest-Variant3-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/BarryFutureman_-_WildWest-Variant3-7B-gguf/blob/main/WildWest-Variant3-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [WildWest-Variant3-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/BarryFutureman_-_WildWest-Variant3-7B-gguf/blob/main/WildWest-Variant3-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [WildWest-Variant3-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/BarryFutureman_-_WildWest-Variant3-7B-gguf/blob/main/WildWest-Variant3-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [WildWest-Variant3-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/BarryFutureman_-_WildWest-Variant3-7B-gguf/blob/main/WildWest-Variant3-7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [WildWest-Variant3-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/BarryFutureman_-_WildWest-Variant3-7B-gguf/blob/main/WildWest-Variant3-7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [WildWest-Variant3-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/BarryFutureman_-_WildWest-Variant3-7B-gguf/blob/main/WildWest-Variant3-7B.Q3_K_M.gguf) | Q3_K_M | 0.36GB |
| [WildWest-Variant3-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/BarryFutureman_-_WildWest-Variant3-7B-gguf/blob/main/WildWest-Variant3-7B.Q3_K_L.gguf) | Q3_K_L | 0.0GB |
| [WildWest-Variant3-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/BarryFutureman_-_WildWest-Variant3-7B-gguf/blob/main/WildWest-Variant3-7B.IQ4_XS.gguf) | IQ4_XS | 0.0GB |
| [WildWest-Variant3-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/BarryFutureman_-_WildWest-Variant3-7B-gguf/blob/main/WildWest-Variant3-7B.Q4_0.gguf) | Q4_0 | 0.0GB |
| [WildWest-Variant3-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/BarryFutureman_-_WildWest-Variant3-7B-gguf/blob/main/WildWest-Variant3-7B.IQ4_NL.gguf) | IQ4_NL | 0.0GB |
| [WildWest-Variant3-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/BarryFutureman_-_WildWest-Variant3-7B-gguf/blob/main/WildWest-Variant3-7B.Q4_K_S.gguf) | Q4_K_S | 0.0GB |
| [WildWest-Variant3-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/BarryFutureman_-_WildWest-Variant3-7B-gguf/blob/main/WildWest-Variant3-7B.Q4_K.gguf) | Q4_K | 0.0GB |
| [WildWest-Variant3-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/BarryFutureman_-_WildWest-Variant3-7B-gguf/blob/main/WildWest-Variant3-7B.Q4_K_M.gguf) | Q4_K_M | 0.0GB |
| [WildWest-Variant3-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/BarryFutureman_-_WildWest-Variant3-7B-gguf/blob/main/WildWest-Variant3-7B.Q4_1.gguf) | Q4_1 | 0.0GB |
| [WildWest-Variant3-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/BarryFutureman_-_WildWest-Variant3-7B-gguf/blob/main/WildWest-Variant3-7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [WildWest-Variant3-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/BarryFutureman_-_WildWest-Variant3-7B-gguf/blob/main/WildWest-Variant3-7B.Q5_K_S.gguf) | Q5_K_S | 0.78GB |
| [WildWest-Variant3-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/BarryFutureman_-_WildWest-Variant3-7B-gguf/blob/main/WildWest-Variant3-7B.Q5_K.gguf) | Q5_K | 0.28GB |
| [WildWest-Variant3-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/BarryFutureman_-_WildWest-Variant3-7B-gguf/blob/main/WildWest-Variant3-7B.Q5_K_M.gguf) | Q5_K_M | 0.06GB |
| [WildWest-Variant3-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/BarryFutureman_-_WildWest-Variant3-7B-gguf/blob/main/WildWest-Variant3-7B.Q5_1.gguf) | Q5_1 | 0.01GB |
| [WildWest-Variant3-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/BarryFutureman_-_WildWest-Variant3-7B-gguf/blob/main/WildWest-Variant3-7B.Q6_K.gguf) | Q6_K | 0.0GB |
| [WildWest-Variant3-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/BarryFutureman_-_WildWest-Variant3-7B-gguf/blob/main/WildWest-Variant3-7B.Q8_0.gguf) | Q8_0 | 0.0GB |
Original model description:
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- merge
---
# WildWest-Variant3-7B
Based on a merge of the following models using mergekit
* [BarryFutureman/NeuralTurdusVariant1-7B](https://huggingface.co/BarryFutureman/NeuralTurdusVariant1-7B)
* [mlabonne/NeuralDaredevil-7B](https://huggingface.co/udkai/Turdus)
* [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2)
* [PetroGPT/Severus-7B-DPO](https://huggingface.co/PetroGPT/Severus-7B-DPO)
|
guillaumekln/faster-whisper-base | guillaumekln | "2023-05-12T18:57:32Z" | 5,167 | 9 | ctranslate2 | [
"ctranslate2",
"audio",
"automatic-speech-recognition",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
"hr",
"bg",
"lt",
"la",
"mi",
"ml",
"cy",
"sk",
"te",
"fa",
"lv",
"bn",
"sr",
"az",
"sl",
"kn",
"et",
"mk",
"br",
"eu",
"is",
"hy",
"ne",
"mn",
"bs",
"kk",
"sq",
"sw",
"gl",
"mr",
"pa",
"si",
"km",
"sn",
"yo",
"so",
"af",
"oc",
"ka",
"be",
"tg",
"sd",
"gu",
"am",
"yi",
"lo",
"uz",
"fo",
"ht",
"ps",
"tk",
"nn",
"mt",
"sa",
"lb",
"my",
"bo",
"tl",
"mg",
"as",
"tt",
"haw",
"ln",
"ha",
"ba",
"jw",
"su",
"license:mit",
"region:us"
] | automatic-speech-recognition | "2023-03-23T10:19:37Z" | ---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- 'no'
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
tags:
- audio
- automatic-speech-recognition
license: mit
library_name: ctranslate2
---
# Whisper base model for CTranslate2
This repository contains the conversion of [openai/whisper-base](https://huggingface.co/openai/whisper-base) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.
This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/guillaumekln/faster-whisper).
## Example
```python
from faster_whisper import WhisperModel
model = WhisperModel("base")
segments, info = model.transcribe("audio.mp3")
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
## Conversion details
The original model was converted with the following command:
```
ct2-transformers-converter --model openai/whisper-base --output_dir faster-whisper-base \
--copy_files tokenizer.json --quantization float16
```
Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
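For example, a minimal sketch (which `compute_type` values are available depends on your hardware):
```python
from faster_whisper import WhisperModel

# Run with INT8 computation on CPU instead of the stored FP16 weights.
model = WhisperModel("base", device="cpu", compute_type="int8")
```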
## More information
**For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-base).**
|
czearing/article-title-generator | czearing | "2022-06-28T20:08:16Z" | 5,166 | 18 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2022-06-28T19:44:19Z" | ---
license: mit
---
## Article Title Generator
The model is based on the T5 language model and trained using a large collection of Medium articles.
## Usage
Example code:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("czearing/article-title-generator")
# AutoModel loads T5 without its LM head; the seq2seq class is needed to generate titles.
model = AutoModelForSeq2SeqLM.from_pretrained("czearing/article-title-generator")
```
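Since the snippet above only loads the model, here is a hedged end-to-end sketch of generating a title via the standard T5 text2text interface (the input text and generation settings are illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("czearing/article-title-generator")
model = AutoModelForSeq2SeqLM.from_pretrained("czearing/article-title-generator")

article = (
    "Machine learning is increasingly used to draft headlines for online "
    "articles, raising questions about style, accuracy, and editorial control."
)
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
output_ids = model.generate(**inputs, max_new_tokens=32, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```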
## License
MIT
|
DAMO-NLP-SG/VideoLLaMA2-7B-Base | DAMO-NLP-SG | "2024-06-17T09:17:26Z" | 5,165 | 2 | transformers | [
"transformers",
"mistral",
"text-generation",
"multimodal large language model",
"large video-language model",
"visual-question-answering",
"en",
"dataset:OpenGVLab/VideoChat2-IT",
"dataset:Lin-Chen/ShareGPT4V",
"dataset:liuhaotian/LLaVA-Instruct-150K",
"arxiv:2406.07476",
"arxiv:2306.02858",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | visual-question-answering | "2024-06-11T13:19:34Z" | ---
license: apache-2.0
datasets:
- OpenGVLab/VideoChat2-IT
- Lin-Chen/ShareGPT4V
- liuhaotian/LLaVA-Instruct-150K
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: visual-question-answering
tags:
- multimodal large language model
- large video-language model
---
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63913b120cf6b11c487ca31d/ROs4bHIp4zJ7g7vzgUycu.png" width="150" style="margin-bottom: 0.2;"/>
</p>
<h3 align="center"><a href="https://arxiv.org/abs/2406.07476">VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs</a></h3>
<h5 align="center"> If you like our project, please give us a star ⭐ on <a href="https://github.com/DAMO-NLP-SG/VideoLLaMA2">Github</a> for the latest update. </h5>
<p align="center"><video src="https://cdn-uploads.huggingface.co/production/uploads/63913b120cf6b11c487ca31d/Wj7GuqQ0CB9JRoPo6_GoH.webm" width="800"></video></p>
## 📰 News
* **[2024.06.12]** Release model weights and the first version of the technical report of VideoLLaMA 2.
* **[2024.06.03]** Release training, evaluation, and serving codes of VideoLLaMA 2.
## 🌎 Model Zoo
| Model Name | Type | Visual Encoder | Language Decoder | # Training Frames |
|:-------------------|:--------------:|:----------------|:------------------|:----------------------:|
| [VideoLLaMA2-7B-Base](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-7B-Base) (This checkpoint) | Base | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | 8 |
| [VideoLLaMA2-7B](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-7B) | Chat | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | 8 |
| [VideoLLaMA2-7B-16F-Base](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-7B-16F-Base) | Base | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | 16 |
| [VideoLLaMA2-7B-16F](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-7B-16F) | Chat | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | 16 |
## 🚀 Main Results
### Multi-Choice Video QA & Video Captioning
<p><img src="https://github.com/DAMO-NLP-SG/VideoLLaMA2/assets/18526640/9cc4a5ae-d850-4eef-bd51-83688b94698e" width="800"/></p>
### Open-Ended Video QA
<p><img src="https://github.com/DAMO-NLP-SG/VideoLLaMA2/assets/18526640/2ed7aa53-db56-4829-8375-85aefbc5120a" width="800"/></p>
## 🤖 Inference with VideoLLaMA2
```python
import torch
import transformers
import sys
sys.path.append('./')
from videollama2.conversation import conv_templates, SeparatorStyle
from videollama2.constants import DEFAULT_MMODAL_TOKEN, MMODAL_TOKEN_INDEX
from videollama2.mm_utils import get_model_name_from_path, tokenizer_MMODAL_token, KeywordsStoppingCriteria, process_video, process_image
from videollama2.model.builder import load_pretrained_model
def inference():
    # Video Inference
    paths = ['assets/cat_and_chicken.mp4']
    questions = ['What animals are in the video, what are they doing, and how does the video feel?']
    # Reply:
    # The video features a kitten and a baby chick playing together. The kitten is seen laying on the floor while the baby chick hops around. The two animals interact playfully with each other, and the video has a cute and heartwarming feel to it.
    modal_list = ['video']

    # Video Inference
    paths = ['assets/sora.mp4']
    questions = ['Please describe this video.']
    # Reply:
    # The video features a series of colorful kites flying in the sky. The kites are first seen flying over trees, and then they are shown flying in the sky. The kites come in various shapes and colors, including red, green, blue, and yellow. The video captures the kites soaring gracefully through the air, with some kites flying higher than others. The sky is clear and blue, and the trees below are lush and green. The kites are the main focus of the video, and their vibrant colors and intricate designs are highlighted against the backdrop of the sky and trees. Overall, the video showcases the beauty and artistry of kite-flying, and it is a delight to watch the kites dance and glide through the air.
    modal_list = ['video']

    # Image Inference
    paths = ['assets/sora.png']
    questions = ['What is the woman wearing, what is she doing, and how does the image feel?']
    # Reply:
    # The woman in the image is wearing a black coat and sunglasses, and she is walking down a rain-soaked city street. The image feels vibrant and lively, with the bright city lights reflecting off the wet pavement, creating a visually appealing atmosphere. The woman's presence adds a sense of style and confidence to the scene, as she navigates the bustling urban environment.
    modal_list = ['image']

    # 1. Initialize the model.
    model_path = 'DAMO-NLP-SG/VideoLLaMA2-7B-Base'
    model_name = get_model_name_from_path(model_path)
    tokenizer, model, processor, context_len = load_pretrained_model(model_path, None, model_name)
    model = model.to('cuda:0')
    conv_mode = 'llama_2'

    # 2. Visual preprocess (load & transform image or video).
    if modal_list[0] == 'video':
        tensor = process_video(paths[0], processor, model.config.image_aspect_ratio).to(dtype=torch.float16, device='cuda', non_blocking=True)
        default_mm_token = DEFAULT_MMODAL_TOKEN["VIDEO"]
        modal_token_index = MMODAL_TOKEN_INDEX["VIDEO"]
    else:
        tensor = process_image(paths[0], processor, model.config.image_aspect_ratio)[0].to(dtype=torch.float16, device='cuda', non_blocking=True)
        default_mm_token = DEFAULT_MMODAL_TOKEN["IMAGE"]
        modal_token_index = MMODAL_TOKEN_INDEX["IMAGE"]
    tensor = [tensor]

    # 3. Text preprocess (tag process & generate prompt).
    question = default_mm_token + "\n" + questions[0]
    conv = conv_templates[conv_mode].copy()
    conv.append_message(conv.roles[0], question)
    conv.append_message(conv.roles[1], None)
    prompt = conv.get_prompt()
    input_ids = tokenizer_MMODAL_token(prompt, tokenizer, modal_token_index, return_tensors='pt').unsqueeze(0).to('cuda:0')

    # 4. Generate a response according to visual signals and prompts.
    stop_str = conv.sep if conv.sep_style in [SeparatorStyle.SINGLE] else conv.sep2
    # keywords = ["<s>", "</s>"]
    keywords = [stop_str]
    stopping_criteria = KeywordsStoppingCriteria(keywords, tokenizer, input_ids)

    with torch.inference_mode():
        output_ids = model.generate(
            input_ids,
            images_or_videos=tensor,
            modal_list=modal_list,
            do_sample=True,
            temperature=0.2,
            max_new_tokens=1024,
            use_cache=True,
            stopping_criteria=[stopping_criteria],
        )

    outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
    print(outputs[0])


if __name__ == "__main__":
    inference()
```
## Citation
If you find VideoLLaMA useful for your research and applications, please cite using this BibTeX:
```bibtex
@article{damonlpsg2024videollama2,
title={VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs},
author={Cheng, Zesen and Leng, Sicong and Zhang, Hang and Xin, Yifei and Li, Xin and Chen, Guanzheng and Zhu, Yongxin and Zhang, Wenqi and Luo, Ziyang and Zhao, Deli and Bing, Lidong},
journal={arXiv preprint arXiv:2406.07476},
year={2024},
url = {https://arxiv.org/abs/2406.07476}
}
@article{damonlpsg2023videollama,
title = {Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding},
author = {Zhang, Hang and Li, Xin and Bing, Lidong},
journal = {arXiv preprint arXiv:2306.02858},
year = {2023},
url = {https://arxiv.org/abs/2306.02858}
}
```
|
Mathoufle13/maker.V1 | Mathoufle13 | "2024-07-01T09:05:29Z" | 5,158 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | "2024-07-01T08:37:57Z" | 1289.4068 seconds used for training.
21.49 minutes used for training.
Peak reserved memory = 9.545 GB.
Peak reserved memory for training = 4.018 GB.
Peak reserved memory % of max memory = 43.058 %.
Peak reserved memory for training % of max memory = 18.125 %.
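For reference, peak-memory figures like the ones above are typically read from PyTorch's CUDA allocator; a minimal sketch (assuming a single CUDA device):
```python
import torch

torch.cuda.reset_peak_memory_stats()
# ... run training here ...
peak_gb = torch.cuda.max_memory_reserved() / 1024**3
total_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
print(f"Peak reserved memory = {peak_gb:.3f} GB.")
print(f"Peak reserved memory % of max memory = {100 * peak_gb / total_gb:.3f} %.")
```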
args = TrainingArguments(
    per_device_train_batch_size = 2,
    gradient_accumulation_steps = 4,
    warmup_steps = 10, # increased the number of warmup steps
    max_steps = 200, # increased the total number of steps
    learning_rate = 1e-4, # reduced the learning rate
    fp16 = not torch.cuda.is_bf16_supported(),
    bf16 = torch.cuda.is_bf16_supported(),
    logging_steps = 1,
    optim = "adamw_8bit",
    weight_decay = 0.01,
    lr_scheduler_type = "linear",
    seed = 42,
    output_dir = "outputs",
)
==((====))== Unsloth - 2x faster free finetuning | Num GPUs = 1
\\ /| Num examples = 399 | Num Epochs = 4
O^O/ \_/ \ Batch size per device = 2 | Gradient Accumulation steps = 4
\ / Total batch size = 8 | Total steps = 200
"-____-" Number of trainable parameters = 20,971,520
[200/200 21:17, Epoch 4/4]
Step Training Loss
1 2.027900
2 2.008700
3 1.946100
4 1.924700
5 1.995000
6 1.999000
7 1.870100
8 1.891400
9 1.807600
10 1.723200
11 1.665100
12 1.541000
13 1.509100
14 1.416600
15 1.398600
16 1.233200
17 1.172100
18 1.272100
19 1.146000
20 1.179000
21 1.206400
22 1.095400
23 0.937300
24 1.214300
25 1.040200
26 1.183400
27 1.033900
28 0.953100
29 0.935700
30 0.962200
31 0.908900
32 0.924900
33 0.931000
34 1.011300
35 0.951900
36 0.936000
37 0.903000
38 0.906900
39 0.945700
40 0.827000
41 0.931800
42 0.919600
43 0.926900
44 0.932900
45 0.872700
46 0.795200
47 0.888700
48 0.956800
49 1.004200
50 0.859500
51 0.802500
52 0.855400
53 0.885500
54 1.026600
55 0.844100
56 0.879800
57 0.797400
58 0.885300
59 0.842800
60 0.861600
61 0.789100
62 0.861600
63 0.856700
64 0.929200
65 0.782500
66 0.713600
67 0.781000
68 0.765100
69 0.784700
70 0.869500
71 0.742900
72 0.787900
73 0.750800
74 0.931700
75 0.713000
76 0.832100
77 0.928300
78 0.777600
79 0.694000
80 0.835400
81 0.822000
82 0.754600
83 0.813400
84 0.868800
85 0.732400
86 0.803700
87 0.694400
88 0.771300
89 0.864400
90 0.646700
91 0.690800
92 0.695000
93 0.732300
94 0.766900
95 0.864100
96 0.867200
97 0.774300
98 0.797700
99 0.772100
100 0.906700
101 0.693400
102 0.685500
103 0.712200
104 0.678400
105 0.761900
106 0.705300
107 0.775700
108 0.627600
109 0.599300
110 0.615100
111 0.618200
112 0.668700
113 0.699900
114 0.577000
115 0.711600
116 0.692900
117 0.585400
118 0.646400
119 0.569200
120 0.752300
121 0.745000
122 0.690100
123 0.744700
124 0.665800
125 0.866100
126 0.707400
127 0.679300
128 0.591400
129 0.655100
130 0.734000
131 0.637900
132 0.733900
133 0.652500
134 0.685400
135 0.641300
136 0.608200
137 0.754100
138 0.753700
139 0.671000
140 0.767200
141 0.668700
142 0.630300
143 0.734700
144 0.767700
145 0.722200
146 0.694400
147 0.710100
148 0.696300
149 0.612600
150 0.670400
151 0.512900
152 0.675100
153 0.579900
154 0.622900
155 0.652500
156 0.649200
157 0.546700
158 0.521600
159 0.522200
160 0.589400
161 0.552600
162 0.630700
163 0.595600
164 0.614300
165 0.489400
166 0.634500
167 0.620800
168 0.618600
169 0.637900
170 0.553900
171 0.656000
172 0.644000
173 0.694300
174 0.608900
175 0.673000
176 0.612500
177 0.654200
178 0.639200
179 0.599100
180 0.642100
181 0.529700
182 0.614000
183 0.582900
184 0.765100
185 0.502700
186 0.564300
187 0.740200
188 0.636100
189 0.638800
190 0.560100
191 0.620000
192 0.712800
193 0.531000
194 0.591600
195 0.608600
196 0.671800
197 0.572900
198 0.600900
199 0.586800
200 0.545900
---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** Mathoufle13
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MaziyarPanahi/Meta-Llama-3-70B-Instruct-GPTQ | MaziyarPanahi | "2024-04-19T07:07:49Z" | 5,154 | 17 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"finetuned",
"quantized",
"4-bit",
"gptq",
"facebook",
"meta",
"pytorch",
"llama-3",
"conversational",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us",
"base_model:meta-llama/Meta-Llama-3-70B-Instruct"
] | text-generation | "2024-04-19T02:21:38Z" | ---
license_name: llama3
tags:
- finetuned
- quantized
- 4-bit
- gptq
- transformers
- safetensors
- llama
- text-generation
- facebook
- meta
- pytorch
- llama-3
- conversational
- en
- license:other
- autotrain_compatible
- endpoints_compatible
- has_space
- text-generation-inference
- region:us
model_name: Meta-Llama-3-70B-Instruct-GPTQ
base_model: meta-llama/Meta-Llama-3-70B-Instruct
inference: false
model_creator: meta-llama
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# Description
[MaziyarPanahi/Meta-Llama-3-70B-Instruct-GPTQ](https://huggingface.co/MaziyarPanahi/Meta-Llama-3-70B-Instruct-GPTQ) is a quantized (GPTQ) version of [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct)
## How to use
### Install the necessary packages
```
pip install --upgrade accelerate auto-gptq transformers
```
### Example Python code
```python
from transformers import AutoTokenizer, pipeline
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
import torch
model_id = "MaziyarPanahi/Meta-Llama-3-70B-Instruct-GPTQ"
quantize_config = BaseQuantizeConfig(
bits=4,
group_size=128,
desc_act=False
)
model = AutoGPTQForCausalLM.from_quantized(
model_id,
use_safetensors=True,
device="cuda:0",
quantize_config=quantize_config)
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.1
)
outputs = pipe("What is a large language model?")
print(outputs[0]["generated_text"])
``` |
duyntnet/Llama-3-8B-Synthia-v3.5-imatrix-GGUF | duyntnet | "2024-06-06T02:38:19Z" | 5,154 | 0 | transformers | [
"transformers",
"gguf",
"imatrix",
"Llama-3-8B-Synthia-v3.5",
"text-generation",
"en",
"license:other",
"region:us"
] | text-generation | "2024-06-05T22:46:49Z" | ---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- Llama-3-8B-Synthia-v3.5
---
Quantizations of https://huggingface.co/migtissera/Llama-3-8B-Synthia-v3.5
# From original readme
## Sample code to run inference
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "/home/migel/Tess-2.0-Llama-3-8B"
output_file_path = "/home/migel/conversations.jsonl"
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_4bit=False,
    trust_remote_code=False,
)

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)


def generate_text(instruction):
    tokens = tokenizer.encode(instruction)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to("cuda")

    instance = {
        "input_ids": tokens,
        "top_p": 1.0,
        "temperature": 0.75,
        "generate_len": 1024,
        "top_k": 50,
    }

    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance["generate_len"],
            use_cache=True,
            do_sample=True,
            top_p=instance["top_p"],
            temperature=instance["temperature"],
            top_k=instance["top_k"],
            num_return_sequences=1,
            pad_token_id=tokenizer.eos_token_id,
        )
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    return f"{string}"


conversation = """<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are Synthia, a helpful, female AI assistant. You always provide detailed answers without hesitation.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"""

while True:
    user_input = input("You: ")
    llm_prompt = f"{conversation}{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    answer = generate_text(llm_prompt)
    print(answer)
    conversation = f"{llm_prompt}{answer}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"

    json_data = {"prompt": user_input, "answer": answer}

    with open(output_file_path, "a") as output_file:
        output_file.write(json.dumps(json_data) + "\n")
``` |
AlekseyElygin/Starling-LM-7B-beta-GGUF | AlekseyElygin | "2024-06-27T06:33:52Z" | 5,153 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/starling-lm-7b-beta-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-26T13:41:59Z" | ---
base_model: unsloth/starling-lm-7b-beta-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
---
# Uploaded model
- **Developed by:** AlekseyElygin
- **License:** apache-2.0
- **Finetuned from model :** unsloth/starling-lm-7b-beta-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/MyAlee-Mistral-Instruct-v2-32k-v2-i1-GGUF | mradermacher | "2024-06-05T08:44:41Z" | 5,152 | 0 | transformers | [
"transformers",
"gguf",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"en",
"dataset:arcee-ai/MyAlee-Education-Instructions-V2",
"base_model:arcee-ai/MyAlee-Mistral-Instruct-v2-32k-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-04T05:51:15Z" | ---
base_model: arcee-ai/MyAlee-Mistral-Instruct-v2-32k-v2
datasets:
- arcee-ai/MyAlee-Education-Instructions-V2
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/arcee-ai/MyAlee-Mistral-Instruct-v2-32k-v2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/MyAlee-Mistral-Instruct-v2-32k-v2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MyAlee-Mistral-Instruct-v2-32k-v2-i1-GGUF/resolve/main/MyAlee-Mistral-Instruct-v2-32k-v2.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MyAlee-Mistral-Instruct-v2-32k-v2-i1-GGUF/resolve/main/MyAlee-Mistral-Instruct-v2-32k-v2.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MyAlee-Mistral-Instruct-v2-32k-v2-i1-GGUF/resolve/main/MyAlee-Mistral-Instruct-v2-32k-v2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/MyAlee-Mistral-Instruct-v2-32k-v2-i1-GGUF/resolve/main/MyAlee-Mistral-Instruct-v2-32k-v2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/MyAlee-Mistral-Instruct-v2-32k-v2-i1-GGUF/resolve/main/MyAlee-Mistral-Instruct-v2-32k-v2.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/MyAlee-Mistral-Instruct-v2-32k-v2-i1-GGUF/resolve/main/MyAlee-Mistral-Instruct-v2-32k-v2.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/MyAlee-Mistral-Instruct-v2-32k-v2-i1-GGUF/resolve/main/MyAlee-Mistral-Instruct-v2-32k-v2.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MyAlee-Mistral-Instruct-v2-32k-v2-i1-GGUF/resolve/main/MyAlee-Mistral-Instruct-v2-32k-v2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MyAlee-Mistral-Instruct-v2-32k-v2-i1-GGUF/resolve/main/MyAlee-Mistral-Instruct-v2-32k-v2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/MyAlee-Mistral-Instruct-v2-32k-v2-i1-GGUF/resolve/main/MyAlee-Mistral-Instruct-v2-32k-v2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MyAlee-Mistral-Instruct-v2-32k-v2-i1-GGUF/resolve/main/MyAlee-Mistral-Instruct-v2-32k-v2.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MyAlee-Mistral-Instruct-v2-32k-v2-i1-GGUF/resolve/main/MyAlee-Mistral-Instruct-v2-32k-v2.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/MyAlee-Mistral-Instruct-v2-32k-v2-i1-GGUF/resolve/main/MyAlee-Mistral-Instruct-v2-32k-v2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MyAlee-Mistral-Instruct-v2-32k-v2-i1-GGUF/resolve/main/MyAlee-Mistral-Instruct-v2-32k-v2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MyAlee-Mistral-Instruct-v2-32k-v2-i1-GGUF/resolve/main/MyAlee-Mistral-Instruct-v2-32k-v2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/MyAlee-Mistral-Instruct-v2-32k-v2-i1-GGUF/resolve/main/MyAlee-Mistral-Instruct-v2-32k-v2.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MyAlee-Mistral-Instruct-v2-32k-v2-i1-GGUF/resolve/main/MyAlee-Mistral-Instruct-v2-32k-v2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MyAlee-Mistral-Instruct-v2-32k-v2-i1-GGUF/resolve/main/MyAlee-Mistral-Instruct-v2-32k-v2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MyAlee-Mistral-Instruct-v2-32k-v2-i1-GGUF/resolve/main/MyAlee-Mistral-Instruct-v2-32k-v2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/MyAlee-Mistral-Instruct-v2-32k-v2-i1-GGUF/resolve/main/MyAlee-Mistral-Instruct-v2-32k-v2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/MyAlee-Mistral-Instruct-v2-32k-v2-i1-GGUF/resolve/main/MyAlee-Mistral-Instruct-v2-32k-v2.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
mradermacher/Pantheon-RP-1.0-8b-Llama-3-i1-GGUF | mradermacher | "2024-06-06T21:48:10Z" | 5,150 | 1 | transformers | [
"transformers",
"gguf",
"Llama-3",
"instruct",
"finetune",
"chatml",
"axolotl",
"roleplay",
"en",
"base_model:Gryphe/Pantheon-RP-1.0-8b-Llama-3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-06T10:59:59Z" | ---
base_model: Gryphe/Pantheon-RP-1.0-8b-Llama-3
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- Llama-3
- instruct
- finetune
- chatml
- axolotl
- roleplay
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Gryphe/Pantheon-RP-1.0-8b-Llama-3
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Pantheon-RP-1.0-8b-Llama-3-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Pantheon-RP-1.0-8b-Llama-3-i1-GGUF/resolve/main/Pantheon-RP-1.0-8b-Llama-3.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Pantheon-RP-1.0-8b-Llama-3-i1-GGUF/resolve/main/Pantheon-RP-1.0-8b-Llama-3.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Pantheon-RP-1.0-8b-Llama-3-i1-GGUF/resolve/main/Pantheon-RP-1.0-8b-Llama-3.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Pantheon-RP-1.0-8b-Llama-3-i1-GGUF/resolve/main/Pantheon-RP-1.0-8b-Llama-3.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Pantheon-RP-1.0-8b-Llama-3-i1-GGUF/resolve/main/Pantheon-RP-1.0-8b-Llama-3.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Pantheon-RP-1.0-8b-Llama-3-i1-GGUF/resolve/main/Pantheon-RP-1.0-8b-Llama-3.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Pantheon-RP-1.0-8b-Llama-3-i1-GGUF/resolve/main/Pantheon-RP-1.0-8b-Llama-3.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Pantheon-RP-1.0-8b-Llama-3-i1-GGUF/resolve/main/Pantheon-RP-1.0-8b-Llama-3.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Pantheon-RP-1.0-8b-Llama-3-i1-GGUF/resolve/main/Pantheon-RP-1.0-8b-Llama-3.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Pantheon-RP-1.0-8b-Llama-3-i1-GGUF/resolve/main/Pantheon-RP-1.0-8b-Llama-3.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Pantheon-RP-1.0-8b-Llama-3-i1-GGUF/resolve/main/Pantheon-RP-1.0-8b-Llama-3.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Pantheon-RP-1.0-8b-Llama-3-i1-GGUF/resolve/main/Pantheon-RP-1.0-8b-Llama-3.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Pantheon-RP-1.0-8b-Llama-3-i1-GGUF/resolve/main/Pantheon-RP-1.0-8b-Llama-3.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Pantheon-RP-1.0-8b-Llama-3-i1-GGUF/resolve/main/Pantheon-RP-1.0-8b-Llama-3.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Pantheon-RP-1.0-8b-Llama-3-i1-GGUF/resolve/main/Pantheon-RP-1.0-8b-Llama-3.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Pantheon-RP-1.0-8b-Llama-3-i1-GGUF/resolve/main/Pantheon-RP-1.0-8b-Llama-3.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Pantheon-RP-1.0-8b-Llama-3-i1-GGUF/resolve/main/Pantheon-RP-1.0-8b-Llama-3.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Pantheon-RP-1.0-8b-Llama-3-i1-GGUF/resolve/main/Pantheon-RP-1.0-8b-Llama-3.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Pantheon-RP-1.0-8b-Llama-3-i1-GGUF/resolve/main/Pantheon-RP-1.0-8b-Llama-3.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Pantheon-RP-1.0-8b-Llama-3-i1-GGUF/resolve/main/Pantheon-RP-1.0-8b-Llama-3.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Pantheon-RP-1.0-8b-Llama-3-i1-GGUF/resolve/main/Pantheon-RP-1.0-8b-Llama-3.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
TeeZee/Bielik-SOLAR-LIKE-10.7B-Instruct-v0.1 | TeeZee | "2024-04-11T18:36:37Z" | 5,146 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-10T19:00:33Z" | ---
license: cc-by-nc-4.0
---
### TeeZee/Bielik-SOLAR-LIKE-10.7B-Instruct-v0.1 ###
The precise recipe used by Upstage to create [SOLAR](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) was applied to https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1
*(just a merge, no finetuning)
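For reference, the SOLAR recipe is a depth-up-scaling (DUS) passthrough merge: two copies of the same 32-layer model are stacked with the final 8 layers of the first copy and the initial 8 layers of the second copy removed, yielding 48 layers (~10.7B parameters). Below is a minimal sketch of such a config for mergekit; the layer ranges follow the published SOLAR recipe and are an assumption, not necessarily the exact settings used for this model.
```python
# Hypothetical sketch of a SOLAR-style DUS passthrough merge config for
# mergekit. Layer ranges [0, 24] and [8, 32] follow the published SOLAR
# recipe for a 32-layer 7B model; they are an assumption, not the
# author's confirmed settings.
dus_config = """\
slices:
  - sources:
      - model: speakleash/Bielik-7B-Instruct-v0.1
        layer_range: [0, 24]
  - sources:
      - model: speakleash/Bielik-7B-Instruct-v0.1
        layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
"""

with open("bielik_dus.yaml", "w") as f:
    f.write(dus_config)

# Then merge with the mergekit CLI:
#   mergekit-yaml bielik_dus.yaml ./Bielik-SOLAR-LIKE-10.7B
```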
### Results ###
- the model is still coherent in the Polish language, even without finetuning after the merge
- instruct mode works in ooba without issues
- model is censored and aligned
- this model seems to score highest amongst all versions of the original Bielik models; further finetuning should improve results even more.

- on leaderboards dedicated to Polish-speaking LLMs, it's 2nd, just behind the instruct version used for this merge; that's to be expected when applying a DUS merge - very small quality loss.
[Polish LLMs leaderboards](https://huggingface.co/spaces/speakleash/open_pl_llm_leaderboard)
- overall, it seems like a good base for further finetuning in the Polish language.
|
cointegrated/rubert-base-cased-nli-twoway | cointegrated | "2023-10-06T11:57:41Z" | 5,144 | 4 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"rubert",
"russian",
"nli",
"rte",
"zero-shot-classification",
"ru",
"dataset:cointegrated/nli-rus-translated-v2021",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | zero-shot-classification | "2022-03-02T23:29:05Z" | ---
language: ru
pipeline_tag: zero-shot-classification
tags:
- rubert
- russian
- nli
- rte
- zero-shot-classification
widget:
- text: Я хочу поехать в Австралию
candidate_labels: спорт,путешествия,музыка,кино,книги,наука,политика
hypothesis_template: Тема текста - {}.
datasets:
- cointegrated/nli-rus-translated-v2021
---
# RuBERT for NLI (natural language inference)
This is the [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) model, fine-tuned to predict the logical relationship between two short texts: entailment or not entailment.
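A minimal usage sketch with the `transformers` zero-shot pipeline, reusing the candidate labels and hypothesis template from the widget configuration above:
```python
# Minimal sketch: zero-shot classification with this two-way NLI model,
# mirroring the widget settings declared in this card's frontmatter.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="cointegrated/rubert-base-cased-nli-twoway",
)
result = classifier(
    "Я хочу поехать в Австралию",  # "I want to go to Australia"
    candidate_labels=["спорт", "путешествия", "музыка", "кино", "книги", "наука", "политика"],
    hypothesis_template="Тема текста - {}.",  # "The topic of the text is {}."
)
print(result["labels"][0])  # expected top label: "путешествия" (travel)
```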
For more details, see the card for a similar model: https://huggingface.co/cointegrated/rubert-base-cased-nli-threeway |
mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF | mradermacher | "2024-06-20T16:04:41Z" | 5,144 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:deepseek-ai/DeepSeek-Coder-V2-Base",
"license:other",
"endpoints_compatible",
"region:us"
] | null | "2024-06-18T15:32:17Z" | ---
base_model: deepseek-ai/DeepSeek-Coder-V2-Base
language:
- en
library_name: transformers
license: other
license_link: LICENSE
license_name: deepseek-license
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Base
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
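As a concrete illustration for the multi-part files listed below, here is a minimal sketch of joining the parts back into a single GGUF (the filenames assume the i1-IQ1_M quant was downloaded to the current directory; adjust for other quant types):
```python
# Minimal sketch: concatenate split GGUF parts into one file.
# The glob pattern and output name are taken from the i1-IQ1_M entry
# in the table below; adapt them to the quant you downloaded.
import shutil
from pathlib import Path

parts = sorted(Path(".").glob("DeepSeek-Coder-V2-Base.i1-IQ1_M.gguf.part*"))
with open("DeepSeek-Coder-V2-Base.i1-IQ1_M.gguf", "wb") as out:
    for part in parts:
        with part.open("rb") as src:
            shutil.copyfileobj(src, out)  # stream to avoid loading tens of GB into RAM
```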
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ1_S.gguf) | i1-IQ1_S | 47.5 | for the desperate |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ1_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ1_M.gguf.part2of2) | i1-IQ1_M | 52.8 | mostly desperate |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ2_XXS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ2_XXS.gguf.part2of2) | i1-IQ2_XXS | 61.6 | |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ2_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ2_XS.gguf.part2of2) | i1-IQ2_XS | 68.8 | |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ2_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ2_S.gguf.part2of2) | i1-IQ2_S | 70.0 | |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ2_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ2_M.gguf.part2of2) | i1-IQ2_M | 77.0 | |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q2_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q2_K.gguf.part2of2) | i1-Q2_K | 86.0 | IQ3_XXS probably better |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ3_XXS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ3_XXS.gguf.part2of2) | i1-IQ3_XXS | 90.9 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ3_XS.gguf.part2of2) | i1-IQ3_XS | 96.4 | |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ3_S.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ3_S.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ3_S.gguf.part3of3) | i1-IQ3_S | 101.8 | beats Q3_K* |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q3_K_S.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q3_K_S.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q3_K_S.gguf.part3of3) | i1-Q3_K_S | 101.8 | IQ3_XS probably better |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ3_M.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ3_M.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ3_M.gguf.part3of3) | i1-IQ3_M | 103.5 | |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q3_K_M.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q3_K_M.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q3_K_M.gguf.part3of3) | i1-Q3_K_M | 112.8 | IQ3_S probably better |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q3_K_L.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q3_K_L.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q3_K_L.gguf.part3of3) | i1-Q3_K_L | 122.5 | IQ3_M probably better |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ4_XS.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ4_XS.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-IQ4_XS.gguf.part3of3) | i1-IQ4_XS | 125.7 | |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q4_0.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q4_0.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q4_0.gguf.part3of3) | i1-Q4_0 | 133.5 | fast, low quality |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q4_K_S.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q4_K_S.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q4_K_S.gguf.part3of3) | i1-Q4_K_S | 134.0 | optimal size/speed/quality |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q4_K_M.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q4_K_M.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q4_K_M.gguf.part3of3) | i1-Q4_K_M | 142.6 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q5_K_S.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q5_K_S.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q5_K_S.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q5_K_S.gguf.part4of4) | i1-Q5_K_S | 162.4 | |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q5_K_M.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q5_K_M.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q5_K_M.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q5_K_M.gguf.part4of4) | i1-Q5_K_M | 167.3 | |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q6_K.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q6_K.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q6_K.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Base-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Base.i1-Q6_K.gguf.part4of4) | i1-Q6_K | 193.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
mradermacher/NeuralKuno-7B-slerp-i1-GGUF | mradermacher | "2024-06-16T14:19:08Z" | 5,143 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:WesPro/NeuralKuno-7B-slerp",
"endpoints_compatible",
"region:us"
] | null | "2024-06-16T09:53:06Z" | ---
base_model: WesPro/NeuralKuno-7B-slerp
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/WesPro/NeuralKuno-7B-slerp
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/NeuralKuno-7B-slerp-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NeuralKuno-7B-slerp-i1-GGUF/resolve/main/NeuralKuno-7B-slerp.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/NeuralKuno-7B-slerp-i1-GGUF/resolve/main/NeuralKuno-7B-slerp.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/NeuralKuno-7B-slerp-i1-GGUF/resolve/main/NeuralKuno-7B-slerp.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralKuno-7B-slerp-i1-GGUF/resolve/main/NeuralKuno-7B-slerp.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralKuno-7B-slerp-i1-GGUF/resolve/main/NeuralKuno-7B-slerp.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralKuno-7B-slerp-i1-GGUF/resolve/main/NeuralKuno-7B-slerp.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralKuno-7B-slerp-i1-GGUF/resolve/main/NeuralKuno-7B-slerp.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/NeuralKuno-7B-slerp-i1-GGUF/resolve/main/NeuralKuno-7B-slerp.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralKuno-7B-slerp-i1-GGUF/resolve/main/NeuralKuno-7B-slerp.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralKuno-7B-slerp-i1-GGUF/resolve/main/NeuralKuno-7B-slerp.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/NeuralKuno-7B-slerp-i1-GGUF/resolve/main/NeuralKuno-7B-slerp.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/NeuralKuno-7B-slerp-i1-GGUF/resolve/main/NeuralKuno-7B-slerp.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralKuno-7B-slerp-i1-GGUF/resolve/main/NeuralKuno-7B-slerp.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/NeuralKuno-7B-slerp-i1-GGUF/resolve/main/NeuralKuno-7B-slerp.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/NeuralKuno-7B-slerp-i1-GGUF/resolve/main/NeuralKuno-7B-slerp.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralKuno-7B-slerp-i1-GGUF/resolve/main/NeuralKuno-7B-slerp.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralKuno-7B-slerp-i1-GGUF/resolve/main/NeuralKuno-7B-slerp.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralKuno-7B-slerp-i1-GGUF/resolve/main/NeuralKuno-7B-slerp.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuralKuno-7B-slerp-i1-GGUF/resolve/main/NeuralKuno-7B-slerp.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralKuno-7B-slerp-i1-GGUF/resolve/main/NeuralKuno-7B-slerp.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralKuno-7B-slerp-i1-GGUF/resolve/main/NeuralKuno-7B-slerp.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
mlabonne/Daredevil-8B-abliterated | mlabonne | "2024-05-29T14:23:30Z" | 5,142 | 24 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-26T14:32:53Z" | ---
library_name: transformers
license: other
---
# Daredevil-8B-abliterated

Abliterated version of [mlabonne/Daredevil-8B](https://huggingface.co/mlabonne/Daredevil-8B) using [failspy](https://huggingface.co/failspy)'s notebook.
It is based on the technique described in the blog post "[Refusal in LLMs is mediated by a single direction](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction)".
Thanks to Andy Arditi, Oscar Balcells Obeso, Aaquib111, Wes Gurnee, Neel Nanda, and failspy.
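For intuition, here is a minimal conceptual sketch of the idea (an illustration of the technique only, not failspy's actual notebook; all names, shapes, and data below are placeholder assumptions):
```python
# Conceptual sketch of abliteration: estimate a "refusal direction" as
# the difference of mean residual-stream activations on harmful vs.
# harmless prompts, then orthogonalize weights so they cannot write
# along that direction. Everything here is illustrative.
import torch

def orthogonalize(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    d = direction / direction.norm()
    # W' = (I - d d^T) W removes the output-space component along d
    return weight - torch.outer(d, d @ weight)

hidden = 4096
harmful_acts = torch.randn(128, hidden)   # placeholder activations (harmful prompts)
harmless_acts = torch.randn(128, hidden)  # placeholder activations (harmless prompts)
refusal_dir = harmful_acts.mean(0) - harmless_acts.mean(0)

w_o = torch.randn(hidden, hidden)  # e.g. an attention output projection
w_o_abliterated = orthogonalize(w_o, refusal_dir)
```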
## 🔎 Applications
This is an uncensored model. You can use it for any application that doesn't require alignment, like role-playing.
Tested on LM Studio using the "Llama 3" preset.
## ⚡ Quantization
* **GGUF**: https://huggingface.co/mlabonne/Daredevil-8B-abliterated-GGUF
## 🏆 Evaluation
### Open LLM Leaderboard
Daredevil-8B-abliterated is the second best-performing 8B model on the Open LLM Leaderboard in terms of MMLU score (27 May 24).

### Nous
Evaluation performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval). See the entire leaderboard [here](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard).
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---:|---:|---:|---:|---:|
| [mlabonne/Daredevil-8B](https://huggingface.co/mlabonne/Daredevil-8B) [📄](https://gist.github.com/mlabonne/080f9c5f153ea57a7ab7d932cf896f21) | 55.87 | 44.13 | 73.52 | 59.05 | 46.77 |
| [**mlabonne/Daredevil-8B-abliterated**](https://huggingface.co/mlabonne/Daredevil-8B-abliterated) [📄](https://gist.github.com/mlabonne/32cdd8460804662c856bcb2a20acd49e) | **55.06** | **43.29** | **73.33** | **57.47** | **46.17** |
| [mlabonne/Llama-3-8B-Instruct-abliterated-dpomix](https://huggingface.co/mlabonne/Llama-3-8B-Instruct-abliterated-dpomix) [📄](https://gist.github.com/mlabonne/d711548df70e2c04771cc68ab33fe2b9) | 52.26 | 41.6 | 69.95 | 54.22 | 43.26 |
| [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) [📄](https://gist.github.com/mlabonne/8329284d86035e6019edb11eb0933628) | 51.34 | 41.22 | 69.86 | 51.65 | 42.64 |
| [failspy/Meta-Llama-3-8B-Instruct-abliterated-v3](https://huggingface.co/failspy/Meta-Llama-3-8B-Instruct-abliterated-v3) [📄](https://gist.github.com/mlabonne/f46cce0262443365e4cce2b6fa7507fc) | 51.21 | 40.23 | 69.5 | 52.44 | 42.69 |
| [mlabonne/OrpoLlama-3-8B](https://huggingface.co/mlabonne/OrpoLlama-3-8B) [📄](https://gist.github.com/mlabonne/22896a1ae164859931cc8f4858c97f6f) | 48.63 | 34.17 | 70.59 | 52.39 | 37.36 |
| [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) [📄](https://gist.github.com/mlabonne/616b6245137a9cfc4ea80e4c6e55d847) | 45.42 | 31.1 | 69.95 | 43.91 | 36.7 |
## 🌳 Model family tree

## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mlabonne/Daredevil-8B-abliterated"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
mradermacher/Trion-M-7b-i1-GGUF | mradermacher | "2024-06-10T23:52:18Z" | 5,141 | 0 | transformers | [
"transformers",
"gguf",
"Mistral",
"en",
"base_model:BlueNipples/Trion-M-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-10T17:49:36Z" | ---
base_model: BlueNipples/Trion-M-7b
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- Mistral
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/BlueNipples/Trion-M-7b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Trion-M-7b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Trion-M-7b-i1-GGUF/resolve/main/Trion-M-7b.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Trion-M-7b-i1-GGUF/resolve/main/Trion-M-7b.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Trion-M-7b-i1-GGUF/resolve/main/Trion-M-7b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Trion-M-7b-i1-GGUF/resolve/main/Trion-M-7b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Trion-M-7b-i1-GGUF/resolve/main/Trion-M-7b.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Trion-M-7b-i1-GGUF/resolve/main/Trion-M-7b.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Trion-M-7b-i1-GGUF/resolve/main/Trion-M-7b.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Trion-M-7b-i1-GGUF/resolve/main/Trion-M-7b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Trion-M-7b-i1-GGUF/resolve/main/Trion-M-7b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Trion-M-7b-i1-GGUF/resolve/main/Trion-M-7b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Trion-M-7b-i1-GGUF/resolve/main/Trion-M-7b.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Trion-M-7b-i1-GGUF/resolve/main/Trion-M-7b.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Trion-M-7b-i1-GGUF/resolve/main/Trion-M-7b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Trion-M-7b-i1-GGUF/resolve/main/Trion-M-7b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Trion-M-7b-i1-GGUF/resolve/main/Trion-M-7b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Trion-M-7b-i1-GGUF/resolve/main/Trion-M-7b.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Trion-M-7b-i1-GGUF/resolve/main/Trion-M-7b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Trion-M-7b-i1-GGUF/resolve/main/Trion-M-7b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Trion-M-7b-i1-GGUF/resolve/main/Trion-M-7b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Trion-M-7b-i1-GGUF/resolve/main/Trion-M-7b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Trion-M-7b-i1-GGUF/resolve/main/Trion-M-7b.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
timm/convnext_nano.in12k_ft_in1k | timm | "2024-02-10T23:27:13Z" | 5,139 | 1 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:imagenet-12k",
"arxiv:2201.03545",
"license:apache-2.0",
"region:us"
] | image-classification | "2022-12-13T07:12:21Z" | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
- imagenet-12k
---
# Model card for convnext_nano.in12k_ft_in1k
A ConvNeXt image classification model. Pretrained in `timm` on ImageNet-12k (an 11821-class subset of the full ImageNet-22k) and fine-tuned on ImageNet-1k by Ross Wightman.
ImageNet-12k training done on TPUs thanks to support of the [TRC](https://sites.research.google/trc/about/) program.
Fine-tuning performed on 8x GPU [Lambda Labs](https://lambdalabs.com/) cloud instances.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 15.6
- GMACs: 2.5
- Activations (M): 8.4
- Image size: train = 224 x 224, test = 288 x 288
- **Papers:**
- A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
- **Original:** https://github.com/huggingface/pytorch-image-models
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-12k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('convnext_nano.in12k_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnext_nano.in12k_ft_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 80, 56, 56])
# torch.Size([1, 160, 28, 28])
# torch.Size([1, 320, 14, 14])
# torch.Size([1, 640, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnext_nano.in12k_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 640, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP.
| model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size|
|------------------------------------------------------------------------------------------------------------------------------|------|------|--------|-----------|------|------|---------------|----------|
| [convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512) |88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 |
| [convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384) |88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 |
| [convnext_xxlarge.clip_laion2b_soup_ft_in1k](https://huggingface.co/timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k) |88.612|98.704|256 |846.47 |198.09|124.45|122.45 |256 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384) |88.312|98.578|384 |200.13 |101.11|126.74|196.84 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384) |88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320) |87.968|98.47 |320 |200.13 |70.21 |88.02 |283.42 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384) |87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 |
| [convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384) |87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 |
| [convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384) |87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 |
| [convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k) |87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k) |87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384) |87.138|98.212|384 |88.59 |45.21 |84.49 |365.47 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k) |87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 |
| [convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384) |86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 |
| [convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k) |86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 |
| [convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k) |86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 |
| [convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384) |86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k) |86.344|97.97 |256 |88.59 |20.09 |37.55 |816.14 |256 |
| [convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k) |86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 |
| [convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384) |86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k) |86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 |
| [convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k) |85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 |
| [convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384) |85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 |
| [convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k) |85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 |
| [convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k) |85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 |
| [convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384) |85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384) |85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 |
| [convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k) |84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 |
| [convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k) |84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 |
| [convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k) |84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 |
| [convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k) |84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 |
| [convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384) |84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k) |83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 |
| [convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k) |83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384) |83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 |
| [convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k) |83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 |
| [convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k) |82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 |
| [convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k) |82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 |
| [convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k) |82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 |
| [convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k) |82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 |
| [convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k) |82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k) |82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 |
| [convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k) |81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 |
| [convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k) |80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 |
| [convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k) |80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 |
| [convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k) |80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 |
| [convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k) |79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 |
| [convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k) |79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 |
| [convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k) |78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 |
| [convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k) |77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 |
| [convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k) |77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 |
| [convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k) |76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 |
| [convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k) |75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 |
| [convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k) |75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 |
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{liu2022convnet,
author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
title = {A ConvNet for the 2020s},
journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022},
}
```
|
redponike/Prox-Llama-3-8B-abliterated-GGUF | redponike | "2024-06-21T06:28:39Z" | 5,135 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-20T19:10:33Z" | GGUF quants of [openvoid/Prox-Llama-3-8B-abliterated](https://huggingface.co/openvoid/Prox-Llama-3-8B-abliterated) |
andersonbcdefg/bge-small-4096 | andersonbcdefg | "2023-11-02T05:58:37Z" | 5,134 | 10 | transformers | [
"transformers",
"pytorch",
"onnx",
"bert",
"feature-extraction",
"mteb",
"model-index",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | feature-extraction | "2023-10-29T00:52:52Z" | ---
tags:
- mteb
model-index:
- name: andersonbcdefg/bge-small-4096
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 68.74626865671641
- type: ap
value: 31.113961861085855
- type: f1
value: 62.628656720790275
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 81.30347499999999
- type: ap
value: 76.05639977935193
- type: f1
value: 81.23180016825499
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 38.566
- type: f1
value: 38.014543974125615
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 29.445
- type: map_at_10
value: 44.157999999999994
- type: map_at_100
value: 45.169
- type: map_at_1000
value: 45.178000000000004
- type: map_at_3
value: 39.545
- type: map_at_5
value: 42.233
- type: mrr_at_1
value: 29.445
- type: mrr_at_10
value: 44.157999999999994
- type: mrr_at_100
value: 45.169
- type: mrr_at_1000
value: 45.178000000000004
- type: mrr_at_3
value: 39.545
- type: mrr_at_5
value: 42.233
- type: ndcg_at_1
value: 29.445
- type: ndcg_at_10
value: 52.446000000000005
- type: ndcg_at_100
value: 56.782
- type: ndcg_at_1000
value: 56.989999999999995
- type: ndcg_at_3
value: 42.935
- type: ndcg_at_5
value: 47.833999999999996
- type: precision_at_1
value: 29.445
- type: precision_at_10
value: 7.8950000000000005
- type: precision_at_100
value: 0.979
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 17.591
- type: precision_at_5
value: 12.959000000000001
- type: recall_at_1
value: 29.445
- type: recall_at_10
value: 78.947
- type: recall_at_100
value: 97.937
- type: recall_at_1000
value: 99.502
- type: recall_at_3
value: 52.774
- type: recall_at_5
value: 64.794
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 43.85187820924144
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 29.5939502757938
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 58.539409343284674
- type: mrr
value: 71.58982983775228
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 82.31440765254087
- type: cos_sim_spearman
value: 81.59884723689632
- type: euclidean_pearson
value: 80.65818473893147
- type: euclidean_spearman
value: 81.40004752638717
- type: manhattan_pearson
value: 80.52256901536644
- type: manhattan_spearman
value: 80.57292024599603
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 79.98376623376623
- type: f1
value: 79.91981901371503
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 37.79541356345093
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 26.760513681350375
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.794
- type: map_at_10
value: 33.361000000000004
- type: map_at_100
value: 34.86
- type: map_at_1000
value: 35.0
- type: map_at_3
value: 30.579
- type: map_at_5
value: 31.996000000000002
- type: mrr_at_1
value: 30.186
- type: mrr_at_10
value: 39.681
- type: mrr_at_100
value: 40.616
- type: mrr_at_1000
value: 40.669
- type: mrr_at_3
value: 37.244
- type: mrr_at_5
value: 38.588
- type: ndcg_at_1
value: 30.186
- type: ndcg_at_10
value: 39.34
- type: ndcg_at_100
value: 45.266
- type: ndcg_at_1000
value: 47.9
- type: ndcg_at_3
value: 35.164
- type: ndcg_at_5
value: 36.854
- type: precision_at_1
value: 30.186
- type: precision_at_10
value: 7.639
- type: precision_at_100
value: 1.328
- type: precision_at_1000
value: 0.183
- type: precision_at_3
value: 17.31
- type: precision_at_5
value: 12.275
- type: recall_at_1
value: 23.794
- type: recall_at_10
value: 50.463
- type: recall_at_100
value: 75.268
- type: recall_at_1000
value: 93.138
- type: recall_at_3
value: 37.797
- type: recall_at_5
value: 42.985
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.968999999999998
- type: map_at_10
value: 23.846999999999998
- type: map_at_100
value: 24.712999999999997
- type: map_at_1000
value: 24.833
- type: map_at_3
value: 22.024
- type: map_at_5
value: 23.087
- type: mrr_at_1
value: 22.038
- type: mrr_at_10
value: 27.808
- type: mrr_at_100
value: 28.532999999999998
- type: mrr_at_1000
value: 28.604000000000003
- type: mrr_at_3
value: 26.029999999999998
- type: mrr_at_5
value: 27.122
- type: ndcg_at_1
value: 22.038
- type: ndcg_at_10
value: 27.559
- type: ndcg_at_100
value: 31.541999999999998
- type: ndcg_at_1000
value: 34.343
- type: ndcg_at_3
value: 24.585
- type: ndcg_at_5
value: 26.026
- type: precision_at_1
value: 22.038
- type: precision_at_10
value: 5.019
- type: precision_at_100
value: 0.8920000000000001
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 11.423
- type: precision_at_5
value: 8.28
- type: recall_at_1
value: 17.968999999999998
- type: recall_at_10
value: 34.583000000000006
- type: recall_at_100
value: 51.849000000000004
- type: recall_at_1000
value: 70.832
- type: recall_at_3
value: 26.057000000000002
- type: recall_at_5
value: 29.816
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 29.183999999999997
- type: map_at_10
value: 40.245
- type: map_at_100
value: 41.324
- type: map_at_1000
value: 41.402
- type: map_at_3
value: 37.395
- type: map_at_5
value: 38.964999999999996
- type: mrr_at_1
value: 33.981
- type: mrr_at_10
value: 43.471
- type: mrr_at_100
value: 44.303
- type: mrr_at_1000
value: 44.352999999999994
- type: mrr_at_3
value: 41.149
- type: mrr_at_5
value: 42.466
- type: ndcg_at_1
value: 33.981
- type: ndcg_at_10
value: 45.776
- type: ndcg_at_100
value: 50.441
- type: ndcg_at_1000
value: 52.16
- type: ndcg_at_3
value: 40.756
- type: ndcg_at_5
value: 43.132
- type: precision_at_1
value: 33.981
- type: precision_at_10
value: 7.617999999999999
- type: precision_at_100
value: 1.083
- type: precision_at_1000
value: 0.129
- type: precision_at_3
value: 18.558
- type: precision_at_5
value: 12.915
- type: recall_at_1
value: 29.183999999999997
- type: recall_at_10
value: 59.114
- type: recall_at_100
value: 79.549
- type: recall_at_1000
value: 91.925
- type: recall_at_3
value: 45.551
- type: recall_at_5
value: 51.38399999999999
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.286
- type: map_at_10
value: 27.143
- type: map_at_100
value: 28.107
- type: map_at_1000
value: 28.212
- type: map_at_3
value: 25.149
- type: map_at_5
value: 26.179999999999996
- type: mrr_at_1
value: 22.034000000000002
- type: mrr_at_10
value: 28.875
- type: mrr_at_100
value: 29.785
- type: mrr_at_1000
value: 29.876
- type: mrr_at_3
value: 27.023999999999997
- type: mrr_at_5
value: 28.058
- type: ndcg_at_1
value: 22.034000000000002
- type: ndcg_at_10
value: 31.148999999999997
- type: ndcg_at_100
value: 35.936
- type: ndcg_at_1000
value: 38.682
- type: ndcg_at_3
value: 27.230999999999998
- type: ndcg_at_5
value: 29.034
- type: precision_at_1
value: 22.034000000000002
- type: precision_at_10
value: 4.836
- type: precision_at_100
value: 0.754
- type: precision_at_1000
value: 0.10300000000000001
- type: precision_at_3
value: 11.562999999999999
- type: precision_at_5
value: 8.068
- type: recall_at_1
value: 20.286
- type: recall_at_10
value: 41.827999999999996
- type: recall_at_100
value: 63.922000000000004
- type: recall_at_1000
value: 84.639
- type: recall_at_3
value: 31.227
- type: recall_at_5
value: 35.546
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 13.488
- type: map_at_10
value: 18.595
- type: map_at_100
value: 19.783
- type: map_at_1000
value: 19.918
- type: map_at_3
value: 16.274
- type: map_at_5
value: 17.558
- type: mrr_at_1
value: 16.791
- type: mrr_at_10
value: 22.53
- type: mrr_at_100
value: 23.651
- type: mrr_at_1000
value: 23.738999999999997
- type: mrr_at_3
value: 20.232
- type: mrr_at_5
value: 21.644
- type: ndcg_at_1
value: 16.791
- type: ndcg_at_10
value: 22.672
- type: ndcg_at_100
value: 28.663
- type: ndcg_at_1000
value: 31.954
- type: ndcg_at_3
value: 18.372
- type: ndcg_at_5
value: 20.47
- type: precision_at_1
value: 16.791
- type: precision_at_10
value: 4.2540000000000004
- type: precision_at_100
value: 0.8370000000000001
- type: precision_at_1000
value: 0.125
- type: precision_at_3
value: 8.706
- type: precision_at_5
value: 6.666999999999999
- type: recall_at_1
value: 13.488
- type: recall_at_10
value: 31.451
- type: recall_at_100
value: 58.085
- type: recall_at_1000
value: 81.792
- type: recall_at_3
value: 19.811
- type: recall_at_5
value: 24.973
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.436
- type: map_at_10
value: 29.105999999999998
- type: map_at_100
value: 30.442000000000004
- type: map_at_1000
value: 30.567
- type: map_at_3
value: 26.430999999999997
- type: map_at_5
value: 27.866000000000003
- type: mrr_at_1
value: 26.083000000000002
- type: mrr_at_10
value: 33.975
- type: mrr_at_100
value: 35.014
- type: mrr_at_1000
value: 35.07
- type: mrr_at_3
value: 31.649
- type: mrr_at_5
value: 32.944
- type: ndcg_at_1
value: 26.083000000000002
- type: ndcg_at_10
value: 34.229
- type: ndcg_at_100
value: 40.439
- type: ndcg_at_1000
value: 43.081
- type: ndcg_at_3
value: 29.64
- type: ndcg_at_5
value: 31.704
- type: precision_at_1
value: 26.083000000000002
- type: precision_at_10
value: 6.246
- type: precision_at_100
value: 1.1199999999999999
- type: precision_at_1000
value: 0.155
- type: precision_at_3
value: 13.858999999999998
- type: precision_at_5
value: 10.01
- type: recall_at_1
value: 21.436
- type: recall_at_10
value: 44.938
- type: recall_at_100
value: 72.029
- type: recall_at_1000
value: 90.009
- type: recall_at_3
value: 31.954
- type: recall_at_5
value: 37.303
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.217
- type: map_at_10
value: 25.16
- type: map_at_100
value: 26.490000000000002
- type: map_at_1000
value: 26.619
- type: map_at_3
value: 22.926
- type: map_at_5
value: 24.251
- type: mrr_at_1
value: 22.831000000000003
- type: mrr_at_10
value: 30.009000000000004
- type: mrr_at_100
value: 31.045
- type: mrr_at_1000
value: 31.122
- type: mrr_at_3
value: 28.025
- type: mrr_at_5
value: 29.07
- type: ndcg_at_1
value: 22.831000000000003
- type: ndcg_at_10
value: 29.664
- type: ndcg_at_100
value: 35.900999999999996
- type: ndcg_at_1000
value: 38.932
- type: ndcg_at_3
value: 26.051000000000002
- type: ndcg_at_5
value: 27.741
- type: precision_at_1
value: 22.831000000000003
- type: precision_at_10
value: 5.479
- type: precision_at_100
value: 1.027
- type: precision_at_1000
value: 0.146
- type: precision_at_3
value: 12.481
- type: precision_at_5
value: 8.973
- type: recall_at_1
value: 18.217
- type: recall_at_10
value: 38.336
- type: recall_at_100
value: 65.854
- type: recall_at_1000
value: 87.498
- type: recall_at_3
value: 28.158
- type: recall_at_5
value: 32.841
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.100666666666665
- type: map_at_10
value: 26.22883333333333
- type: map_at_100
value: 27.34241666666667
- type: map_at_1000
value: 27.468416666666666
- type: map_at_3
value: 23.953916666666668
- type: map_at_5
value: 25.20125
- type: mrr_at_1
value: 22.729249999999997
- type: mrr_at_10
value: 29.86491666666667
- type: mrr_at_100
value: 30.76925
- type: mrr_at_1000
value: 30.846333333333337
- type: mrr_at_3
value: 27.733999999999998
- type: mrr_at_5
value: 28.94058333333333
- type: ndcg_at_1
value: 22.729249999999997
- type: ndcg_at_10
value: 30.708250000000003
- type: ndcg_at_100
value: 35.89083333333333
- type: ndcg_at_1000
value: 38.75891666666666
- type: ndcg_at_3
value: 26.661083333333334
- type: ndcg_at_5
value: 28.54
- type: precision_at_1
value: 22.729249999999997
- type: precision_at_10
value: 5.433833333333333
- type: precision_at_100
value: 0.9486666666666665
- type: precision_at_1000
value: 0.13808333333333334
- type: precision_at_3
value: 12.292166666666668
- type: precision_at_5
value: 8.825
- type: recall_at_1
value: 19.100666666666665
- type: recall_at_10
value: 40.54208333333334
- type: recall_at_100
value: 63.67975
- type: recall_at_1000
value: 84.13574999999999
- type: recall_at_3
value: 29.311000000000003
- type: recall_at_5
value: 34.1105
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.762
- type: map_at_10
value: 23.905
- type: map_at_100
value: 24.663
- type: map_at_1000
value: 24.765
- type: map_at_3
value: 22.032
- type: map_at_5
value: 23.025000000000002
- type: mrr_at_1
value: 20.244999999999997
- type: mrr_at_10
value: 26.162999999999997
- type: mrr_at_100
value: 26.907999999999998
- type: mrr_at_1000
value: 26.987
- type: mrr_at_3
value: 24.361
- type: mrr_at_5
value: 25.326999999999998
- type: ndcg_at_1
value: 20.244999999999997
- type: ndcg_at_10
value: 27.577
- type: ndcg_at_100
value: 31.473000000000003
- type: ndcg_at_1000
value: 34.217999999999996
- type: ndcg_at_3
value: 24.092
- type: ndcg_at_5
value: 25.657000000000004
- type: precision_at_1
value: 20.244999999999997
- type: precision_at_10
value: 4.433
- type: precision_at_100
value: 0.692
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 10.634
- type: precision_at_5
value: 7.362
- type: recall_at_1
value: 17.762
- type: recall_at_10
value: 36.661
- type: recall_at_100
value: 54.581999999999994
- type: recall_at_1000
value: 75.28099999999999
- type: recall_at_3
value: 27.084999999999997
- type: recall_at_5
value: 31.064999999999998
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 12.998000000000001
- type: map_at_10
value: 18.926000000000002
- type: map_at_100
value: 19.836000000000002
- type: map_at_1000
value: 19.96
- type: map_at_3
value: 16.932
- type: map_at_5
value: 17.963
- type: mrr_at_1
value: 15.692
- type: mrr_at_10
value: 22.206
- type: mrr_at_100
value: 23.021
- type: mrr_at_1000
value: 23.108999999999998
- type: mrr_at_3
value: 20.114
- type: mrr_at_5
value: 21.241
- type: ndcg_at_1
value: 15.692
- type: ndcg_at_10
value: 22.997999999999998
- type: ndcg_at_100
value: 27.541
- type: ndcg_at_1000
value: 30.758000000000003
- type: ndcg_at_3
value: 19.117
- type: ndcg_at_5
value: 20.778
- type: precision_at_1
value: 15.692
- type: precision_at_10
value: 4.277
- type: precision_at_100
value: 0.774
- type: precision_at_1000
value: 0.122
- type: precision_at_3
value: 9.027000000000001
- type: precision_at_5
value: 6.641
- type: recall_at_1
value: 12.998000000000001
- type: recall_at_10
value: 32.135999999999996
- type: recall_at_100
value: 52.937
- type: recall_at_1000
value: 76.348
- type: recall_at_3
value: 21.292
- type: recall_at_5
value: 25.439
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.219
- type: map_at_10
value: 27.306
- type: map_at_100
value: 28.337
- type: map_at_1000
value: 28.459
- type: map_at_3
value: 25.423000000000002
- type: map_at_5
value: 26.375999999999998
- type: mrr_at_1
value: 23.787
- type: mrr_at_10
value: 30.977
- type: mrr_at_100
value: 31.85
- type: mrr_at_1000
value: 31.939
- type: mrr_at_3
value: 29.073
- type: mrr_at_5
value: 30.095
- type: ndcg_at_1
value: 23.787
- type: ndcg_at_10
value: 31.615
- type: ndcg_at_100
value: 36.641
- type: ndcg_at_1000
value: 39.707
- type: ndcg_at_3
value: 27.994000000000003
- type: ndcg_at_5
value: 29.508000000000003
- type: precision_at_1
value: 23.787
- type: precision_at_10
value: 5.271
- type: precision_at_100
value: 0.865
- type: precision_at_1000
value: 0.125
- type: precision_at_3
value: 12.748999999999999
- type: precision_at_5
value: 8.806
- type: recall_at_1
value: 20.219
- type: recall_at_10
value: 41.108
- type: recall_at_100
value: 63.596
- type: recall_at_1000
value: 85.54899999999999
- type: recall_at_3
value: 31.129
- type: recall_at_5
value: 34.845
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.949
- type: map_at_10
value: 26.629
- type: map_at_100
value: 28.006999999999998
- type: map_at_1000
value: 28.221
- type: map_at_3
value: 24.099999999999998
- type: map_at_5
value: 25.487
- type: mrr_at_1
value: 24.111
- type: mrr_at_10
value: 30.592000000000002
- type: mrr_at_100
value: 31.448999999999998
- type: mrr_at_1000
value: 31.538
- type: mrr_at_3
value: 28.128999999999998
- type: mrr_at_5
value: 29.503
- type: ndcg_at_1
value: 24.111
- type: ndcg_at_10
value: 31.373
- type: ndcg_at_100
value: 36.897999999999996
- type: ndcg_at_1000
value: 40.288000000000004
- type: ndcg_at_3
value: 26.895000000000003
- type: ndcg_at_5
value: 29.009
- type: precision_at_1
value: 24.111
- type: precision_at_10
value: 6.067
- type: precision_at_100
value: 1.269
- type: precision_at_1000
value: 0.22
- type: precision_at_3
value: 12.385
- type: precision_at_5
value: 9.249
- type: recall_at_1
value: 19.949
- type: recall_at_10
value: 40.394000000000005
- type: recall_at_100
value: 65.812
- type: recall_at_1000
value: 88.247
- type: recall_at_3
value: 28.116000000000003
- type: recall_at_5
value: 33.4
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 13.905999999999999
- type: map_at_10
value: 20.523
- type: map_at_100
value: 21.547
- type: map_at_1000
value: 21.665
- type: map_at_3
value: 18.182000000000002
- type: map_at_5
value: 19.661
- type: mrr_at_1
value: 14.972
- type: mrr_at_10
value: 22.092
- type: mrr_at_100
value: 23.055999999999997
- type: mrr_at_1000
value: 23.150000000000002
- type: mrr_at_3
value: 19.778000000000002
- type: mrr_at_5
value: 21.229
- type: ndcg_at_1
value: 14.972
- type: ndcg_at_10
value: 24.547
- type: ndcg_at_100
value: 29.948999999999998
- type: ndcg_at_1000
value: 33.084
- type: ndcg_at_3
value: 20.036
- type: ndcg_at_5
value: 22.567
- type: precision_at_1
value: 14.972
- type: precision_at_10
value: 4.067
- type: precision_at_100
value: 0.743
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 8.811
- type: precision_at_5
value: 6.654
- type: recall_at_1
value: 13.905999999999999
- type: recall_at_10
value: 35.493
- type: recall_at_100
value: 60.67399999999999
- type: recall_at_1000
value: 84.371
- type: recall_at_3
value: 23.555
- type: recall_at_5
value: 29.729
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 7.529
- type: map_at_10
value: 12.794
- type: map_at_100
value: 14.315
- type: map_at_1000
value: 14.523
- type: map_at_3
value: 10.367999999999999
- type: map_at_5
value: 11.546
- type: mrr_at_1
value: 16.872999999999998
- type: mrr_at_10
value: 25.709
- type: mrr_at_100
value: 26.907999999999998
- type: mrr_at_1000
value: 26.962000000000003
- type: mrr_at_3
value: 22.486
- type: mrr_at_5
value: 24.245
- type: ndcg_at_1
value: 16.872999999999998
- type: ndcg_at_10
value: 19.005
- type: ndcg_at_100
value: 25.990999999999996
- type: ndcg_at_1000
value: 29.955
- type: ndcg_at_3
value: 14.573
- type: ndcg_at_5
value: 16.118
- type: precision_at_1
value: 16.872999999999998
- type: precision_at_10
value: 6.235
- type: precision_at_100
value: 1.374
- type: precision_at_1000
value: 0.21
- type: precision_at_3
value: 10.793
- type: precision_at_5
value: 8.73
- type: recall_at_1
value: 7.529
- type: recall_at_10
value: 24.007
- type: recall_at_100
value: 48.742000000000004
- type: recall_at_1000
value: 71.35000000000001
- type: recall_at_3
value: 13.467
- type: recall_at_5
value: 17.502000000000002
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.614
- type: map_at_10
value: 11.42
- type: map_at_100
value: 15.873000000000001
- type: map_at_1000
value: 17.021
- type: map_at_3
value: 8.495
- type: map_at_5
value: 9.790000000000001
- type: mrr_at_1
value: 42.0
- type: mrr_at_10
value: 52.477
- type: mrr_at_100
value: 53.095000000000006
- type: mrr_at_1000
value: 53.135
- type: mrr_at_3
value: 49.833
- type: mrr_at_5
value: 51.183
- type: ndcg_at_1
value: 31.374999999999996
- type: ndcg_at_10
value: 25.27
- type: ndcg_at_100
value: 29.709999999999997
- type: ndcg_at_1000
value: 36.975
- type: ndcg_at_3
value: 27.688000000000002
- type: ndcg_at_5
value: 25.987
- type: precision_at_1
value: 42.0
- type: precision_at_10
value: 21.2
- type: precision_at_100
value: 7.053
- type: precision_at_1000
value: 1.512
- type: precision_at_3
value: 32.333
- type: precision_at_5
value: 26.6
- type: recall_at_1
value: 5.614
- type: recall_at_10
value: 16.112000000000002
- type: recall_at_100
value: 36.165000000000006
- type: recall_at_1000
value: 60.362
- type: recall_at_3
value: 9.761000000000001
- type: recall_at_5
value: 12.279
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 40.085
- type: f1
value: 35.53934111316537
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 34.185
- type: map_at_10
value: 44.491
- type: map_at_100
value: 45.204
- type: map_at_1000
value: 45.254
- type: map_at_3
value: 42.006
- type: map_at_5
value: 43.516
- type: mrr_at_1
value: 37.024
- type: mrr_at_10
value: 47.524
- type: mrr_at_100
value: 48.185
- type: mrr_at_1000
value: 48.227
- type: mrr_at_3
value: 45.086999999999996
- type: mrr_at_5
value: 46.575
- type: ndcg_at_1
value: 37.024
- type: ndcg_at_10
value: 50.126000000000005
- type: ndcg_at_100
value: 53.577
- type: ndcg_at_1000
value: 54.906
- type: ndcg_at_3
value: 45.25
- type: ndcg_at_5
value: 47.842
- type: precision_at_1
value: 37.024
- type: precision_at_10
value: 7.132
- type: precision_at_100
value: 0.898
- type: precision_at_1000
value: 0.10300000000000001
- type: precision_at_3
value: 18.767
- type: precision_at_5
value: 12.676000000000002
- type: recall_at_1
value: 34.185
- type: recall_at_10
value: 64.703
- type: recall_at_100
value: 80.58
- type: recall_at_1000
value: 90.742
- type: recall_at_3
value: 51.483000000000004
- type: recall_at_5
value: 57.775
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.358
- type: map_at_10
value: 16.391
- type: map_at_100
value: 17.698
- type: map_at_1000
value: 17.912
- type: map_at_3
value: 13.831
- type: map_at_5
value: 15.187000000000001
- type: mrr_at_1
value: 18.673000000000002
- type: mrr_at_10
value: 26.907999999999998
- type: mrr_at_100
value: 27.842
- type: mrr_at_1000
value: 27.933000000000003
- type: mrr_at_3
value: 24.486
- type: mrr_at_5
value: 25.766
- type: ndcg_at_1
value: 18.673000000000002
- type: ndcg_at_10
value: 22.137
- type: ndcg_at_100
value: 28.126
- type: ndcg_at_1000
value: 32.489000000000004
- type: ndcg_at_3
value: 18.723
- type: ndcg_at_5
value: 19.858
- type: precision_at_1
value: 18.673000000000002
- type: precision_at_10
value: 6.389
- type: precision_at_100
value: 1.262
- type: precision_at_1000
value: 0.202
- type: precision_at_3
value: 12.757
- type: precision_at_5
value: 9.753
- type: recall_at_1
value: 9.358
- type: recall_at_10
value: 28.605000000000004
- type: recall_at_100
value: 51.713
- type: recall_at_1000
value: 78.408
- type: recall_at_3
value: 17.674
- type: recall_at_5
value: 21.97
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.997999999999998
- type: map_at_10
value: 32.957
- type: map_at_100
value: 33.972
- type: map_at_1000
value: 34.072
- type: map_at_3
value: 30.44
- type: map_at_5
value: 31.869999999999997
- type: mrr_at_1
value: 45.995999999999995
- type: mrr_at_10
value: 54.473000000000006
- type: mrr_at_100
value: 55.103
- type: mrr_at_1000
value: 55.139
- type: mrr_at_3
value: 52.349999999999994
- type: mrr_at_5
value: 53.61900000000001
- type: ndcg_at_1
value: 45.995999999999995
- type: ndcg_at_10
value: 41.333
- type: ndcg_at_100
value: 45.635999999999996
- type: ndcg_at_1000
value: 47.847
- type: ndcg_at_3
value: 36.825
- type: ndcg_at_5
value: 39.099000000000004
- type: precision_at_1
value: 45.995999999999995
- type: precision_at_10
value: 9.020999999999999
- type: precision_at_100
value: 1.244
- type: precision_at_1000
value: 0.154
- type: precision_at_3
value: 23.34
- type: precision_at_5
value: 15.8
- type: recall_at_1
value: 22.997999999999998
- type: recall_at_10
value: 45.105000000000004
- type: recall_at_100
value: 62.188
- type: recall_at_1000
value: 76.907
- type: recall_at_3
value: 35.010000000000005
- type: recall_at_5
value: 39.5
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 80.0944
- type: ap
value: 74.43301569395831
- type: f1
value: 80.04407647044388
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 10.171
- type: map_at_10
value: 17.558
- type: map_at_100
value: 18.694
- type: map_at_1000
value: 18.787000000000003
- type: map_at_3
value: 14.826
- type: map_at_5
value: 16.249
- type: mrr_at_1
value: 10.473
- type: mrr_at_10
value: 17.967
- type: mrr_at_100
value: 19.089
- type: mrr_at_1000
value: 19.177
- type: mrr_at_3
value: 15.222
- type: mrr_at_5
value: 16.655
- type: ndcg_at_1
value: 10.473
- type: ndcg_at_10
value: 22.148
- type: ndcg_at_100
value: 28.028
- type: ndcg_at_1000
value: 30.659
- type: ndcg_at_3
value: 16.474
- type: ndcg_at_5
value: 19.017
- type: precision_at_1
value: 10.473
- type: precision_at_10
value: 3.7969999999999997
- type: precision_at_100
value: 0.6779999999999999
- type: precision_at_1000
value: 0.09
- type: precision_at_3
value: 7.187
- type: precision_at_5
value: 5.599
- type: recall_at_1
value: 10.171
- type: recall_at_10
value: 36.459
- type: recall_at_100
value: 64.512
- type: recall_at_1000
value: 85.27900000000001
- type: recall_at_3
value: 20.868000000000002
- type: recall_at_5
value: 26.933
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 90.35795713634292
- type: f1
value: 89.72064544336776
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 66.4546283629731
- type: f1
value: 49.487271168215095
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.58238063214527
- type: f1
value: 65.54281371907213
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.47343644922664
- type: f1
value: 72.80522894672785
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 32.53600917473176
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 28.04699774280647
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.984352865575797
- type: mrr
value: 32.02736001972659
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.666
- type: map_at_10
value: 10.066
- type: map_at_100
value: 12.794
- type: map_at_1000
value: 14.184
- type: map_at_3
value: 7.622
- type: map_at_5
value: 8.587
- type: mrr_at_1
value: 39.318999999999996
- type: mrr_at_10
value: 47.678
- type: mrr_at_100
value: 48.355
- type: mrr_at_1000
value: 48.400999999999996
- type: mrr_at_3
value: 45.82
- type: mrr_at_5
value: 46.656
- type: ndcg_at_1
value: 37.926
- type: ndcg_at_10
value: 29.049999999999997
- type: ndcg_at_100
value: 26.826
- type: ndcg_at_1000
value: 35.841
- type: ndcg_at_3
value: 33.513
- type: ndcg_at_5
value: 31.227
- type: precision_at_1
value: 39.318999999999996
- type: precision_at_10
value: 21.424000000000003
- type: precision_at_100
value: 7.231999999999999
- type: precision_at_1000
value: 2.012
- type: precision_at_3
value: 30.857
- type: precision_at_5
value: 26.378
- type: recall_at_1
value: 4.666
- type: recall_at_10
value: 13.898
- type: recall_at_100
value: 26.983
- type: recall_at_1000
value: 59.485
- type: recall_at_3
value: 8.953
- type: recall_at_5
value: 10.496
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.26
- type: map_at_10
value: 17.907999999999998
- type: map_at_100
value: 19.245
- type: map_at_1000
value: 19.339000000000002
- type: map_at_3
value: 14.634
- type: map_at_5
value: 16.386
- type: mrr_at_1
value: 10.574
- type: mrr_at_10
value: 19.438
- type: mrr_at_100
value: 20.638
- type: mrr_at_1000
value: 20.715
- type: mrr_at_3
value: 16.276
- type: mrr_at_5
value: 17.971999999999998
- type: ndcg_at_1
value: 10.574
- type: ndcg_at_10
value: 23.451
- type: ndcg_at_100
value: 29.982
- type: ndcg_at_1000
value: 32.449
- type: ndcg_at_3
value: 16.817
- type: ndcg_at_5
value: 19.867
- type: precision_at_1
value: 10.574
- type: precision_at_10
value: 4.609
- type: precision_at_100
value: 0.8330000000000001
- type: precision_at_1000
value: 0.107
- type: precision_at_3
value: 8.266
- type: precision_at_5
value: 6.6739999999999995
- type: recall_at_1
value: 9.26
- type: recall_at_10
value: 39.224
- type: recall_at_100
value: 69.107
- type: recall_at_1000
value: 87.908
- type: recall_at_3
value: 21.490000000000002
- type: recall_at_5
value: 28.560999999999996
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 65.655
- type: map_at_10
value: 79.199
- type: map_at_100
value: 79.937
- type: map_at_1000
value: 79.964
- type: map_at_3
value: 76.19399999999999
- type: map_at_5
value: 78.08800000000001
- type: mrr_at_1
value: 75.53999999999999
- type: mrr_at_10
value: 82.89
- type: mrr_at_100
value: 83.074
- type: mrr_at_1000
value: 83.077
- type: mrr_at_3
value: 81.577
- type: mrr_at_5
value: 82.452
- type: ndcg_at_1
value: 75.53999999999999
- type: ndcg_at_10
value: 83.62899999999999
- type: ndcg_at_100
value: 85.411
- type: ndcg_at_1000
value: 85.646
- type: ndcg_at_3
value: 80.23700000000001
- type: ndcg_at_5
value: 82.107
- type: precision_at_1
value: 75.53999999999999
- type: precision_at_10
value: 12.695
- type: precision_at_100
value: 1.493
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 34.983
- type: precision_at_5
value: 23.164
- type: recall_at_1
value: 65.655
- type: recall_at_10
value: 92.269
- type: recall_at_100
value: 98.598
- type: recall_at_1000
value: 99.815
- type: recall_at_3
value: 82.616
- type: recall_at_5
value: 87.75800000000001
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 43.67844919460687
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 54.32866004447611
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.238
- type: map_at_10
value: 8.539
- type: map_at_100
value: 10.267
- type: map_at_1000
value: 10.552999999999999
- type: map_at_3
value: 6.165
- type: map_at_5
value: 7.22
- type: mrr_at_1
value: 15.9
- type: mrr_at_10
value: 25.557999999999996
- type: mrr_at_100
value: 26.867
- type: mrr_at_1000
value: 26.939
- type: mrr_at_3
value: 22.633
- type: mrr_at_5
value: 24.233
- type: ndcg_at_1
value: 15.9
- type: ndcg_at_10
value: 14.954
- type: ndcg_at_100
value: 22.486
- type: ndcg_at_1000
value: 27.986
- type: ndcg_at_3
value: 14.069
- type: ndcg_at_5
value: 12.200999999999999
- type: precision_at_1
value: 15.9
- type: precision_at_10
value: 7.9399999999999995
- type: precision_at_100
value: 1.8929999999999998
- type: precision_at_1000
value: 0.32299999999999995
- type: precision_at_3
value: 13.5
- type: precision_at_5
value: 10.9
- type: recall_at_1
value: 3.238
- type: recall_at_10
value: 16.1
- type: recall_at_100
value: 38.427
- type: recall_at_1000
value: 65.498
- type: recall_at_3
value: 8.212
- type: recall_at_5
value: 11.032
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 80.7612029200118
- type: cos_sim_spearman
value: 74.17706899450974
- type: euclidean_pearson
value: 78.6240925347838
- type: euclidean_spearman
value: 74.22104652352341
- type: manhattan_pearson
value: 78.49956480878576
- type: manhattan_spearman
value: 74.0528957569391
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 80.0377294417705
- type: cos_sim_spearman
value: 72.19570903733732
- type: euclidean_pearson
value: 77.060604990743
- type: euclidean_spearman
value: 71.54251658956483
- type: manhattan_pearson
value: 77.28301977645965
- type: manhattan_spearman
value: 71.77449045278667
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 79.69841558517969
- type: cos_sim_spearman
value: 80.54022353649157
- type: euclidean_pearson
value: 80.03651743688496
- type: euclidean_spearman
value: 80.45116824930123
- type: manhattan_pearson
value: 79.89688370680031
- type: manhattan_spearman
value: 80.27208259746283
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 79.92235427443056
- type: cos_sim_spearman
value: 76.20243980748161
- type: euclidean_pearson
value: 79.28031963400572
- type: euclidean_spearman
value: 76.3568261868673
- type: manhattan_pearson
value: 79.24527845959733
- type: manhattan_spearman
value: 76.39886696744185
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 84.2762365324788
- type: cos_sim_spearman
value: 85.19929628214842
- type: euclidean_pearson
value: 84.82568872953075
- type: euclidean_spearman
value: 85.11039387706913
- type: manhattan_pearson
value: 84.72922084197847
- type: manhattan_spearman
value: 85.04448532444505
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 80.23256564746382
- type: cos_sim_spearman
value: 81.92968415429543
- type: euclidean_pearson
value: 81.12612888308936
- type: euclidean_spearman
value: 81.97396557448675
- type: manhattan_pearson
value: 81.15685601512081
- type: manhattan_spearman
value: 82.01929408689
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 85.35057935029289
- type: cos_sim_spearman
value: 86.60658025867397
- type: euclidean_pearson
value: 86.48666975508912
- type: euclidean_spearman
value: 86.70310223264862
- type: manhattan_pearson
value: 86.23959282751626
- type: manhattan_spearman
value: 86.48318896577922
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 63.15375299804011
- type: cos_sim_spearman
value: 65.4588500819246
- type: euclidean_pearson
value: 65.60180021985416
- type: euclidean_spearman
value: 65.55596512146833
- type: manhattan_pearson
value: 66.12421335157649
- type: manhattan_spearman
value: 66.05163838991123
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 81.82391915730462
- type: cos_sim_spearman
value: 81.93942545767499
- type: euclidean_pearson
value: 83.16752744889406
- type: euclidean_spearman
value: 82.31380947581034
- type: manhattan_pearson
value: 82.98915741609575
- type: manhattan_spearman
value: 82.16585239338073
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 77.19504204180527
- type: mrr
value: 92.85429983959396
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 49.528
- type: map_at_10
value: 57.62199999999999
- type: map_at_100
value: 58.544
- type: map_at_1000
value: 58.573
- type: map_at_3
value: 54.56999999999999
- type: map_at_5
value: 56.552
- type: mrr_at_1
value: 52.0
- type: mrr_at_10
value: 58.939
- type: mrr_at_100
value: 59.653
- type: mrr_at_1000
value: 59.68
- type: mrr_at_3
value: 56.389
- type: mrr_at_5
value: 57.989000000000004
- type: ndcg_at_1
value: 52.0
- type: ndcg_at_10
value: 61.964
- type: ndcg_at_100
value: 65.871
- type: ndcg_at_1000
value: 66.724
- type: ndcg_at_3
value: 56.621
- type: ndcg_at_5
value: 59.551
- type: precision_at_1
value: 52.0
- type: precision_at_10
value: 8.333
- type: precision_at_100
value: 1.04
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 21.778
- type: precision_at_5
value: 14.933
- type: recall_at_1
value: 49.528
- type: recall_at_10
value: 74.2
- type: recall_at_100
value: 91.5
- type: recall_at_1000
value: 98.333
- type: recall_at_3
value: 60.06700000000001
- type: recall_at_5
value: 67.133
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.81287128712871
- type: cos_sim_ap
value: 95.15039468118793
- type: cos_sim_f1
value: 90.48817312531455
- type: cos_sim_precision
value: 91.08409321175279
- type: cos_sim_recall
value: 89.9
- type: dot_accuracy
value: 99.78019801980199
- type: dot_ap
value: 93.60256835857994
- type: dot_f1
value: 88.73096446700508
- type: dot_precision
value: 90.10309278350516
- type: dot_recall
value: 87.4
- type: euclidean_accuracy
value: 99.81188118811882
- type: euclidean_ap
value: 95.15954231276913
- type: euclidean_f1
value: 90.48096192384769
- type: euclidean_precision
value: 90.66265060240963
- type: euclidean_recall
value: 90.3
- type: manhattan_accuracy
value: 99.81188118811882
- type: manhattan_ap
value: 95.17107000565468
- type: manhattan_f1
value: 90.5
- type: manhattan_precision
value: 90.5
- type: manhattan_recall
value: 90.5
- type: max_accuracy
value: 99.81287128712871
- type: max_ap
value: 95.17107000565468
- type: max_f1
value: 90.5
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 51.77488276525734
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 33.30657214418171
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 47.84571922992432
- type: mrr
value: 48.549107142857146
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 29.840750357585556
- type: cos_sim_spearman
value: 29.832953864936567
- type: dot_pearson
value: 30.499687946740657
- type: dot_spearman
value: 30.73436062481656
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.16999999999999998
- type: map_at_10
value: 1.014
- type: map_at_100
value: 5.623
- type: map_at_1000
value: 15.190999999999999
- type: map_at_3
value: 0.377
- type: map_at_5
value: 0.577
- type: mrr_at_1
value: 68.0
- type: mrr_at_10
value: 74.45
- type: mrr_at_100
value: 74.846
- type: mrr_at_1000
value: 74.846
- type: mrr_at_3
value: 71.333
- type: mrr_at_5
value: 73.533
- type: ndcg_at_1
value: 64.0
- type: ndcg_at_10
value: 47.52
- type: ndcg_at_100
value: 37.419999999999995
- type: ndcg_at_1000
value: 36.318
- type: ndcg_at_3
value: 51.13999999999999
- type: ndcg_at_5
value: 49.101
- type: precision_at_1
value: 68.0
- type: precision_at_10
value: 50.8
- type: precision_at_100
value: 39.160000000000004
- type: precision_at_1000
value: 16.948
- type: precision_at_3
value: 52.0
- type: precision_at_5
value: 51.6
- type: recall_at_1
value: 0.16999999999999998
- type: recall_at_10
value: 1.269
- type: recall_at_100
value: 8.937000000000001
- type: recall_at_1000
value: 35.036
- type: recall_at_3
value: 0.396
- type: recall_at_5
value: 0.6669999999999999
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.672
- type: map_at_10
value: 6.739000000000001
- type: map_at_100
value: 12.006
- type: map_at_1000
value: 13.474
- type: map_at_3
value: 2.617
- type: map_at_5
value: 4.329000000000001
- type: mrr_at_1
value: 20.408
- type: mrr_at_10
value: 30.764000000000003
- type: mrr_at_100
value: 32.457
- type: mrr_at_1000
value: 32.481
- type: mrr_at_3
value: 26.531
- type: mrr_at_5
value: 28.877999999999997
- type: ndcg_at_1
value: 18.367
- type: ndcg_at_10
value: 17.471999999999998
- type: ndcg_at_100
value: 29.341
- type: ndcg_at_1000
value: 41.005
- type: ndcg_at_3
value: 14.64
- type: ndcg_at_5
value: 17.039
- type: precision_at_1
value: 20.408
- type: precision_at_10
value: 17.551
- type: precision_at_100
value: 6.673
- type: precision_at_1000
value: 1.4160000000000001
- type: precision_at_3
value: 14.966
- type: precision_at_5
value: 18.776
- type: recall_at_1
value: 1.672
- type: recall_at_10
value: 12.795000000000002
- type: recall_at_100
value: 41.289
- type: recall_at_1000
value: 76.947
- type: recall_at_3
value: 3.334
- type: recall_at_5
value: 6.864000000000001
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 69.3424
- type: ap
value: 13.45149708639965
- type: f1
value: 53.278180518373574
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 57.60045274476513
- type: f1
value: 57.9395926195531
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 36.649067825169446
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 83.68599868868093
- type: cos_sim_ap
value: 65.7938550603812
- type: cos_sim_f1
value: 61.81946735800141
- type: cos_sim_precision
value: 55.85604770017035
- type: cos_sim_recall
value: 69.2084432717678
- type: dot_accuracy
value: 82.09453418370389
- type: dot_ap
value: 61.00867337905922
- type: dot_f1
value: 58.56196783349101
- type: dot_precision
value: 53.06472353193313
- type: dot_recall
value: 65.32981530343008
- type: euclidean_accuracy
value: 83.68599868868093
- type: euclidean_ap
value: 66.17065796133883
- type: euclidean_f1
value: 62.440610152538135
- type: euclidean_precision
value: 59.3393536121673
- type: euclidean_recall
value: 65.88390501319262
- type: manhattan_accuracy
value: 83.57870894677237
- type: manhattan_ap
value: 65.89925640001532
- type: manhattan_f1
value: 62.2255119664446
- type: manhattan_precision
value: 58.43373493975904
- type: manhattan_recall
value: 66.54353562005278
- type: max_accuracy
value: 83.68599868868093
- type: max_ap
value: 66.17065796133883
- type: max_f1
value: 62.440610152538135
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 87.68579966623976
- type: cos_sim_ap
value: 83.2666595805096
- type: cos_sim_f1
value: 75.11536297129996
- type: cos_sim_precision
value: 73.24943294065999
- type: cos_sim_recall
value: 77.07884200800738
- type: dot_accuracy
value: 86.76213761788334
- type: dot_ap
value: 80.85199640255004
- type: dot_f1
value: 73.27634898520165
- type: dot_precision
value: 71.70756872282409
- type: dot_recall
value: 74.91530643671081
- type: euclidean_accuracy
value: 87.79640625606395
- type: euclidean_ap
value: 83.52666327503474
- type: euclidean_f1
value: 75.37022886875523
- type: euclidean_precision
value: 71.4522249051397
- type: euclidean_recall
value: 79.74283954419464
- type: manhattan_accuracy
value: 87.80804905499282
- type: manhattan_ap
value: 83.4995899990913
- type: manhattan_f1
value: 75.44320420223242
- type: manhattan_precision
value: 71.68307223069458
- type: manhattan_recall
value: 79.6196489066831
- type: max_accuracy
value: 87.80804905499282
- type: max_ap
value: 83.52666327503474
- type: max_f1
value: 75.44320420223242
--- |
Rostlab/prot_bert_bfd_localization | Rostlab | "2021-05-18T22:05:26Z" | 5,133 | 1 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-02T23:29:04Z" | Entry not found |
hubertsiuzdak/snac_32khz | hubertsiuzdak | "2024-04-03T23:48:23Z" | 5,132 | 3 | transformers | [
"transformers",
"pytorch",
"audio",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | "2024-02-27T17:18:19Z" | ---
license: mit
tags:
- audio
---
# SNAC 🍿
Multi-**S**cale **N**eural **A**udio **C**odec (SNAC) compresses audio into discrete codes at a low bitrate.
👉 This model was primarily trained on music data, and its recommended use case is music (and SFX) generation. See below for other pretrained models.
🔗 GitHub repository: https://github.com/hubertsiuzdak/snac/
## Overview
SNAC encodes audio into hierarchical tokens similarly to SoundStream, EnCodec, and DAC. However, SNAC introduces a simple change where coarse tokens are sampled less frequently,
covering a broader time span.
This model compresses 32 kHz audio into discrete codes at a 1.9 kbps bitrate. It uses 4 RVQ levels with token rates of 10, 21, 42, and
83 Hz.
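As a quick sanity check of that figure, the arithmetic can be done in a few lines. This is only a sketch: the 4096-entry codebook size (12 bits per token) is an assumption that happens to be consistent with the stated bitrate, not a documented detail.
```python
# Back-of-the-envelope bitrate check for this model.
token_rates_hz = [10, 21, 42, 83]  # one token rate per RVQ level
bits_per_token = 12                # log2(4096); assumed codebook size

tokens_per_second = sum(token_rates_hz)           # 156 tokens/s
bitrate_bps = tokens_per_second * bits_per_token  # 1872 bps
print(f"{bitrate_bps / 1000:.2f} kbps")           # ~1.87 kbps, i.e. ~1.9 kbps
```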
## Pretrained models
Currently, all models support only a single audio channel (mono).
| Model | Bitrate | Sample Rate | Params | Recommended use case |
|-----------------------------------------------------------------------------|-----------|-------------|--------|--------------------------|
| [hubertsiuzdak/snac_24khz](https://huggingface.co/hubertsiuzdak/snac_24khz) | 0.98 kbps | 24 kHz | 19.8 M | 🗣️ Speech |
| hubertsiuzdak/snac_32khz (this model) | 1.9 kbps | 32 kHz | 54.5 M | 🎸 Music / Sound Effects |
| [hubertsiuzdak/snac_44khz](https://huggingface.co/hubertsiuzdak/snac_44khz) | 2.6 kbps | 44 kHz | 54.5 M | 🎸 Music / Sound Effects |
## Usage
Install it using:
```bash
pip install snac
```
To encode (and decode) audio with SNAC in Python, use the following code:
```python
import torch
from snac import SNAC
model = SNAC.from_pretrained("hubertsiuzdak/snac_32khz").eval().cuda()
audio = torch.randn(1, 1, 32000).cuda() # B, 1, T
with torch.inference_mode():
codes = model.encode(audio)
audio_hat = model.decode(codes)
```
You can also encode and reconstruct in a single call:
```python
with torch.inference_mode():
audio_hat, codes = model(audio)
```
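To listen to the reconstruction, you can write it to disk. A minimal sketch, assuming `torchaudio` is available (the output path is illustrative):
```python
import torchaudio

# audio_hat has shape (B, 1, T); drop the batch dim and move to CPU
torchaudio.save("reconstruction.wav", audio_hat.squeeze(0).cpu(), 32000)
```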
⚠️ Note that `codes` is a list of token sequences of variable lengths, each corresponding to a different temporal
resolution.
```
>>> [code.shape[1] for code in codes]
[12, 24, 48, 96]
```
## Acknowledgements
Module definitions are adapted from the [Descript Audio Codec](https://github.com/descriptinc/descript-audio-codec). |
mradermacher/L3-Penumbral-Mind-RP-8B-i1-GGUF | mradermacher | "2024-06-13T21:06:02Z" | 5,132 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"not-for-all-audiences",
"rp",
"roleplay",
"role-play",
"en",
"base_model:Casual-Autopsy/L3-Penumbral-Mind-RP-8B",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | "2024-06-12T13:18:04Z" | ---
base_model: Casual-Autopsy/L3-Penumbral-Mind-RP-8B
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- not-for-all-audiences
- rp
- roleplay
- role-play
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Casual-Autopsy/L3-Penumbral-Mind-RP-8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/L3-Penumbral-Mind-RP-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
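As one concrete option, a downloaded quant can be loaded with `llama-cpp-python`. This is a minimal sketch, not an endorsement of specific settings: the file name matches the Q4_K_M entry in the table below, and the context size and sampling parameters are illustrative.
```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="L3-Penumbral-Mind-RP-8B.i1-Q4_K_M.gguf",  # downloaded quant file
    n_ctx=4096,                                            # illustrative context size
)
out = llm("Write a short scene description.", max_tokens=128)
print(out["choices"][0]["text"])
```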
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3-Penumbral-Mind-RP-8B-i1-GGUF/resolve/main/L3-Penumbral-Mind-RP-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/L3-Penumbral-Mind-RP-8B-i1-GGUF/resolve/main/L3-Penumbral-Mind-RP-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/L3-Penumbral-Mind-RP-8B-i1-GGUF/resolve/main/L3-Penumbral-Mind-RP-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Penumbral-Mind-RP-8B-i1-GGUF/resolve/main/L3-Penumbral-Mind-RP-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Penumbral-Mind-RP-8B-i1-GGUF/resolve/main/L3-Penumbral-Mind-RP-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Penumbral-Mind-RP-8B-i1-GGUF/resolve/main/L3-Penumbral-Mind-RP-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Penumbral-Mind-RP-8B-i1-GGUF/resolve/main/L3-Penumbral-Mind-RP-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-Penumbral-Mind-RP-8B-i1-GGUF/resolve/main/L3-Penumbral-Mind-RP-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3-Penumbral-Mind-RP-8B-i1-GGUF/resolve/main/L3-Penumbral-Mind-RP-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Penumbral-Mind-RP-8B-i1-GGUF/resolve/main/L3-Penumbral-Mind-RP-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-Penumbral-Mind-RP-8B-i1-GGUF/resolve/main/L3-Penumbral-Mind-RP-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3-Penumbral-Mind-RP-8B-i1-GGUF/resolve/main/L3-Penumbral-Mind-RP-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Penumbral-Mind-RP-8B-i1-GGUF/resolve/main/L3-Penumbral-Mind-RP-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-Penumbral-Mind-RP-8B-i1-GGUF/resolve/main/L3-Penumbral-Mind-RP-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-Penumbral-Mind-RP-8B-i1-GGUF/resolve/main/L3-Penumbral-Mind-RP-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Penumbral-Mind-RP-8B-i1-GGUF/resolve/main/L3-Penumbral-Mind-RP-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/L3-Penumbral-Mind-RP-8B-i1-GGUF/resolve/main/L3-Penumbral-Mind-RP-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/L3-Penumbral-Mind-RP-8B-i1-GGUF/resolve/main/L3-Penumbral-Mind-RP-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-Penumbral-Mind-RP-8B-i1-GGUF/resolve/main/L3-Penumbral-Mind-RP-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Penumbral-Mind-RP-8B-i1-GGUF/resolve/main/L3-Penumbral-Mind-RP-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Penumbral-Mind-RP-8B-i1-GGUF/resolve/main/L3-Penumbral-Mind-RP-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2 | Finnish-NLP | "2024-04-28T17:08:50Z" | 5,130 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"fi",
"finnish",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_7_0",
"arxiv:2111.09296",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-03-27T18:10:56Z" | ---
license: apache-2.0
language: fi
metrics:
- wer
- cer
tags:
- automatic-speech-recognition
- fi
- finnish
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wav2vec2-xlsr-1b-finnish-lm-v2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: fi
metrics:
- name: Test WER
type: wer
value: 4.09
- name: Test CER
type: cer
value: 0.88
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: FLEURS ASR
type: google/fleurs
args: fi_fi
metrics:
- name: Test WER
type: wer
value: 12.11
- name: Test CER
type: cer
value: 5.65
---
# Wav2vec2-xls-r-1b for Finnish ASR
This acoustic model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) for Finnish ASR. The model has been fine-tuned with 275.6 hours of Finnish transcribed speech data. Wav2Vec2 XLS-R was introduced in
[this paper](https://arxiv.org/abs/2111.09296) and first released at [this page](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec#wav2vec-20).
This repository also includes Finnish KenLM language model used in the decoding phase with the acoustic model.
**Note**: this model is exactly the same as the [aapot/wav2vec2-xlsr-1b-finnish-lm-v2](https://huggingface.co/aapot/wav2vec2-xlsr-1b-finnish-lm-v2) model; that model has simply been copied/moved to this `Finnish-NLP` Hugging Face organization.
## Model description
Wav2Vec2 XLS-R is Facebook AI's large-scale multilingual pretrained model for speech. It is pretrained with the wav2vec 2.0 objective on 436k hours of unlabeled speech in 128 languages, including VoxPopuli, MLS, CommonVoice, BABEL, and VoxLingua107.
You can read more about the pretrained model in [this blog](https://ai.facebook.com/blog/xls-r-self-supervised-speech-processing-for-128-languages) and [this paper](https://arxiv.org/abs/2111.09296).
This model is fine-tuned version of the pretrained model (1 billion parameter variant) for Finnish ASR.
## Intended uses & limitations
You can use this model for Finnish ASR (speech-to-text) task.
### How to use
Check the [run-finnish-asr-models.ipynb](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2/blob/main/run-finnish-asr-models.ipynb) notebook in this repository for a detailed example of how to use this model.
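If you just want a quick result without the notebook, a minimal sketch using the `transformers` pipeline is shown below; with `pyctcdecode` and `kenlm` installed, the language model bundled in this repository should be picked up automatically (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2",
)
print(asr("audio.wav")["text"])  # placeholder path to a Finnish audio file
```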
### Limitations and bias
This model was fine-tuned with audio samples whose maximum length was 20 seconds, so it most likely works best for fairly short audio clips of similar length. However, you can also try it with much longer audio and see how it performs. If you encounter out-of-memory errors with very long audio files, you can use the audio chunking method introduced in [this blog post](https://huggingface.co/blog/asr-chunking), as sketched below.
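With the `transformers` pipeline, that chunking method amounts to passing chunk and stride lengths at call time. A sketch with illustrative values (20 s matches the maximum training length):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2",
)
# chunk/stride lengths in seconds are illustrative; see the linked blog post
print(asr("long_audio.wav", chunk_length_s=20, stride_length_s=2)["text"])
```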
The vast majority of the data used for fine-tuning was from the Finnish Parliament dataset, so this model may not generalize as well to very different domains such as everyday spoken Finnish with dialects. In addition, the audio in the datasets is dominated by adult male speakers, so this model may not work as well for the speech of children and women, for example.
The Finnish KenLM language model used in the decoding phase has been trained with text data from the audio transcriptions and from a subset of Finnish Wikipedia. Thus, the decoder's language model may not generalize to very different language varieties, for example everyday spoken language with dialects (especially because the Wikipedia portion contains mostly formal Finnish). It may be beneficial to train your own KenLM language model for your target domain and use that in decoding.
## Training data
This model was fine-tuned with 275.6 hours of Finnish transcribed speech data from the following datasets:
| Dataset | Hours | % of total hours |
|:------------------------------------------------------------------------------------------------------------------------------ |:--------:|:----------------:|
| [Common Voice 7.0 Finnish train + evaluation + other splits](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | 9.70 h | 3.52 % |
| [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) | 0.24 h | 0.09 % |
| [VoxPopuli Finnish](https://github.com/facebookresearch/voxpopuli) | 21.97 h | 7.97 % |
| [CSS10 Finnish](https://github.com/kyubyong/css10) | 10.32 h | 3.74 % |
| [Aalto Finnish Parliament ASR Corpus](http://urn.fi/urn:nbn:fi:lb-2021051903) | 228.00 h | 82.73 % |
| [Finnish Broadcast Corpus](http://urn.fi/urn:nbn:fi:lb-2016042502) | 5.37 h | 1.95 % |
The datasets were filtered to include only audio samples with a maximum length of 20 seconds.
## Training procedure
This model was trained during [Robust Speech Challenge Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) organized by Hugging Face. Training was done on a Tesla V100 GPU, sponsored by OVHcloud.
The training script was provided by Hugging Face and is available [here](https://github.com/huggingface/transformers/blob/main/examples/research_projects/robust-speech-event/run_speech_recognition_ctc_bnb.py). We only modified its data loading for our custom datasets.
For the KenLM language model training, we followed the [blog post tutorial](https://huggingface.co/blog/wav2vec2-with-ngram) provided by Hugging Face. Training data for the 5-gram KenLM were text transcriptions of the audio training data and 100k random samples of cleaned [Finnish Wikipedia](https://huggingface.co/datasets/wikipedia) (August 2021) dataset.
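Following that tutorial, the n-gram itself is built with KenLM's `lmplz` tool, roughly like this (paths are placeholders):
```bash
# build a 5-gram ARPA language model from a plain-text corpus
kenlm/build/bin/lmplz -o 5 < text_corpus.txt > 5gram.arpa
```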
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: [8-bit Adam](https://github.com/facebookresearch/bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
The pretrained `facebook/wav2vec2-xls-r-1b` model was initialized with the following hyperparameters:
- attention_dropout: 0.094
- hidden_dropout: 0.047
- feat_proj_dropout: 0.04
- mask_time_prob: 0.082
- layerdrop: 0.041
- activation_dropout: 0.055
- ctc_loss_reduction: "mean"
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.7778 | 0.17 | 500 | 0.2851 | 0.3572 |
| 0.5506 | 0.34 | 1000 | 0.1595 | 0.2130 |
| 0.6569 | 0.5 | 1500 | 0.1458 | 0.2046 |
| 0.5997 | 0.67 | 2000 | 0.1374 | 0.1975 |
| 0.542 | 0.84 | 2500 | 0.1390 | 0.1956 |
| 0.4815 | 1.01 | 3000 | 0.1266 | 0.1813 |
| 0.6982 | 1.17 | 3500 | 0.1441 | 0.1965 |
| 0.4522 | 1.34 | 4000 | 0.1232 | 0.1822 |
| 0.4655 | 1.51 | 4500 | 0.1209 | 0.1702 |
| 0.4069 | 1.68 | 5000 | 0.1149 | 0.1688 |
| 0.4226 | 1.84 | 5500 | 0.1121 | 0.1560 |
| 0.3993 | 2.01 | 6000 | 0.1091 | 0.1557 |
| 0.406 | 2.18 | 6500 | 0.1115 | 0.1553 |
| 0.4098 | 2.35 | 7000 | 0.1144 | 0.1560 |
| 0.3995 | 2.51 | 7500 | 0.1028 | 0.1476 |
| 0.4101 | 2.68 | 8000 | 0.1129 | 0.1511 |
| 0.3636 | 2.85 | 8500 | 0.1025 | 0.1517 |
| 0.3534 | 3.02 | 9000 | 0.1068 | 0.1480 |
| 0.3836 | 3.18 | 9500 | 0.1072 | 0.1459 |
| 0.3531 | 3.35 | 10000 | 0.0928 | 0.1367 |
| 0.3649 | 3.52 | 10500 | 0.1042 | 0.1426 |
| 0.3645 | 3.69 | 11000 | 0.0979 | 0.1433 |
| 0.3685 | 3.85 | 11500 | 0.0947 | 0.1346 |
| 0.3325 | 4.02 | 12000 | 0.0991 | 0.1352 |
| 0.3497 | 4.19 | 12500 | 0.0919 | 0.1358 |
| 0.3303 | 4.36 | 13000 | 0.0888 | 0.1272 |
| 0.3323 | 4.52 | 13500 | 0.0888 | 0.1277 |
| 0.3452 | 4.69 | 14000 | 0.0894 | 0.1279 |
| 0.337 | 4.86 | 14500 | 0.0917 | 0.1289 |
| 0.3114 | 5.03 | 15000 | 0.0942 | 0.1313 |
| 0.3099 | 5.19 | 15500 | 0.0902 | 0.1239 |
| 0.3079 | 5.36 | 16000 | 0.0871 | 0.1256 |
| 0.3293 | 5.53 | 16500 | 0.0861 | 0.1263 |
| 0.3123 | 5.7 | 17000 | 0.0876 | 0.1203 |
| 0.3093 | 5.86 | 17500 | 0.0848 | 0.1226 |
| 0.2903 | 6.03 | 18000 | 0.0914 | 0.1221 |
| 0.297 | 6.2 | 18500 | 0.0841 | 0.1185 |
| 0.2797 | 6.37 | 19000 | 0.0858 | 0.1165 |
| 0.2878 | 6.53 | 19500 | 0.0874 | 0.1161 |
| 0.2974 | 6.7 | 20000 | 0.0835 | 0.1173 |
| 0.3051 | 6.87 | 20500 | 0.0835 | 0.1178 |
| 0.2941 | 7.04 | 21000 | 0.0852 | 0.1155 |
| 0.258 | 7.21 | 21500 | 0.0832 | 0.1132 |
| 0.2778 | 7.37 | 22000 | 0.0829 | 0.1110 |
| 0.2751 | 7.54 | 22500 | 0.0822 | 0.1069 |
| 0.2887 | 7.71 | 23000 | 0.0819 | 0.1103 |
| 0.2509 | 7.88 | 23500 | 0.0787 | 0.1055 |
| 0.2501 | 8.04 | 24000 | 0.0807 | 0.1076 |
| 0.2399 | 8.21 | 24500 | 0.0784 | 0.1052 |
| 0.2539 | 8.38 | 25000 | 0.0772 | 0.1075 |
| 0.248 | 8.55 | 25500 | 0.0772 | 0.1055 |
| 0.2689 | 8.71 | 26000 | 0.0763 | 0.1027 |
| 0.2855 | 8.88 | 26500 | 0.0756 | 0.1035 |
| 0.2421 | 9.05 | 27000 | 0.0771 | 0.0998 |
| 0.2497 | 9.22 | 27500 | 0.0756 | 0.0971 |
| 0.2367 | 9.38 | 28000 | 0.0741 | 0.0974 |
| 0.2473 | 9.55 | 28500 | 0.0739 | 0.0982 |
| 0.2396 | 9.72 | 29000 | 0.0756 | 0.0991 |
| 0.2602 | 9.89 | 29500 | 0.0737 | 0.0975 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
## Evaluation results
Evaluation was done with the [Common Voice 7.0 Finnish test split](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0), [Common Voice 9.0 Finnish test split](https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0) and with the [FLEURS ASR Finnish test split](https://huggingface.co/datasets/google/fleurs).
This model's training data includes the training splits of Common Voice 7.0, while our newer `Finnish-NLP/wav2vec2-base-fi-voxpopuli-v2-finetuned` and `Finnish-NLP/wav2vec2-large-uralic-voxpopuli-v2-finnish` models include Common Voice 9.0, so we ran tests with both Common Voice versions. Note: Common Voice does not seem to keep the test split fixed across dataset versions, so some training examples of Common Voice 9.0 may appear in the test split of Common Voice 7.0 and vice versa. Thus, Common Voice test results are not fully comparable between models trained with different Common Voice versions, but the comparison should still be meaningful enough.
### Common Voice 7.0 testing
To evaluate this model, run the `eval.py` script in this repository:
```bash
python3 eval.py --model_id Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2 --dataset mozilla-foundation/common_voice_7_0 --config fi --split test
```
This model (the fifth row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models and their parameter counts:
| | Model parameters | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|-------------------------------------------------------|------------------|---------------|------------------|---------------|------------------|
|Finnish-NLP/wav2vec2-base-fi-voxpopuli-v2-finetuned | 95 million |5.85 |13.52 |1.35 |2.44 |
|Finnish-NLP/wav2vec2-large-uralic-voxpopuli-v2-finnish | 300 million |4.13 |**9.66** |0.90 |1.66 |
|Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm | 300 million |8.16 |17.92 |1.97 |3.36 |
|Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm | 1000 million |5.65 |13.11 |1.20 |2.23 |
|Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2 | 1000 million |**4.09** |9.73 |**0.88** |**1.65** |
### Common Voice 9.0 testing
To evaluate this model, run the `eval.py` script in this repository:
```bash
python3 eval.py --model_id Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2 --dataset mozilla-foundation/common_voice_9_0 --config fi --split test
```
This model (the fifth row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models and their parameter counts:
| | Model parameters | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|-------------------------------------------------------|------------------|---------------|------------------|---------------|------------------|
|Finnish-NLP/wav2vec2-base-fi-voxpopuli-v2-finetuned | 95 million |5.93 |14.08 |1.40 |2.59 |
|Finnish-NLP/wav2vec2-large-uralic-voxpopuli-v2-finnish | 300 million |4.13 |9.83 |0.92 |1.71 |
|Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm | 300 million |7.42 |16.45 |1.79 |3.07 |
|Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm | 1000 million |5.35 |13.00 |1.14 |2.20 |
|Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2 | 1000 million |**3.72** |**8.96** |**0.80** |**1.52** |
### FLEURS ASR testing
To evaluate this model, run the `eval.py` script in this repository:
```bash
python3 eval.py --model_id Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2 --dataset google/fleurs --config fi_fi --split test
```
This model (the fifth row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models and their parameter counts:
| | Model parameters | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|-------------------------------------------------------|------------------|---------------|------------------|---------------|------------------|
|Finnish-NLP/wav2vec2-base-fi-voxpopuli-v2-finetuned | 95 million |13.99 |17.16 |6.07 |6.61 |
|Finnish-NLP/wav2vec2-large-uralic-voxpopuli-v2-finnish | 300 million |12.44 |**14.63** |5.77 |6.22 |
|Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm | 300 million |17.72 |23.30 |6.78 |7.67 |
|Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm | 1000 million |20.34 |16.67 |6.97 |6.35 |
|Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2 | 1000 million |**12.11** |14.89 |**5.65** |**6.06** |
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗 |
MIT/ast-finetuned-speech-commands-v2 | MIT | "2023-09-10T18:03:01Z" | 5,130 | 11 | transformers | [
"transformers",
"pytorch",
"safetensors",
"audio-spectrogram-transformer",
"audio-classification",
"dataset:speech_commands",
"arxiv:2104.01778",
"license:bsd-3-clause",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | "2022-11-14T19:11:22Z" | ---
license: bsd-3-clause
datasets:
- speech_commands
tags:
- audio-classification
model-index:
- name: MIT/ast-finetuned-speech-commands-v2
results:
- task:
type: audio-classification
dataset:
name: Speech Commands v2
type: speech_commands
metrics:
- type: accuracy
value: 98.12
---
# Audio Spectrogram Transformer (fine-tuned on Speech Commands v2)
Audio Spectrogram Transformer (AST) model fine-tuned on Speech Commands v2. It was introduced in the paper [AST: Audio Spectrogram Transformer](https://arxiv.org/abs/2104.01778) by Gong et al. and first released in [this repository](https://github.com/YuanGongND/ast).
Disclaimer: The team releasing Audio Spectrogram Transformer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Audio Spectrogram Transformer is equivalent to [ViT](https://huggingface.co/docs/transformers/model_doc/vit), but applied to audio. Audio is first turned into an image (as a spectrogram), after which a Vision Transformer is applied. The model gets state-of-the-art results on several audio classification benchmarks.
## Usage
You can use the raw model for classifying audio into one of the Speech Commands v2 classes. See the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/audio-spectrogram-transformer) for more info. |
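For a quick start, a minimal sketch with the audio-classification pipeline (the file name is a placeholder for any short recording):
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="MIT/ast-finetuned-speech-commands-v2")
print(classifier("keyword.wav"))  # top Speech Commands v2 labels with scores
```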
oliverguhr/spelling-correction-english-base | oliverguhr | "2023-12-18T08:46:53Z" | 5,128 | 64 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"onnx",
"safetensors",
"bart",
"text2text-generation",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-05-23T15:26:21Z" | ---
language:
- en
license: mit
widget:
- text: "lets do a comparsion"
example_title: "1"
- text: "Their going to be here so0n"
example_title: "2"
- text: "ze shop is cloed due to covid 19"
example_title: "3"
metrics:
- cer
---
This is an experimental model that should fix your typos and punctuation.
If you would like to run your own experiments or train a model for a different language, have a look at [the code](https://github.com/oliverguhr/spelling).
## Model description
This is a proof of concept spelling correction model for English.
## Intended uses & limitations
This project is a work in progress; be aware that the model can produce artefacts.
You can test the model using the pipeline-interface:
```python
from transformers import pipeline
fix_spelling = pipeline("text2text-generation",model="oliverguhr/spelling-correction-english-base")
print(fix_spelling("lets do a comparsion",max_length=2048))
```
|
moussaKam/frugalscore_tiny_bert-base_bert-score | moussaKam | "2022-02-01T10:50:21Z" | 5,120 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2110.08559",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-02T23:29:05Z" | # FrugalScore
FrugalScore is an approach to learning a fixed, low-cost version of any expensive NLG metric while retaining most of its original performance.
Paper: https://arxiv.org/abs/2110.08559?context=cs
Project github: https://github.com/moussaKam/FrugalScore
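A minimal sketch of scoring a candidate sentence against a reference with this checkpoint, assuming (as in the project repository) a sequence-pair regression head with a single output logit:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "moussaKam/frugalscore_tiny_bert-base_bert-score"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

# encode (candidate, reference) as one sequence pair
inputs = tokenizer(
    ["A cat sat on the mat."],            # candidate
    ["The cat was sitting on the mat."],  # reference
    return_tensors="pt", truncation=True,
)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # approximates BERTScore
print(score)
```
FrugalScore is also available as a metric in the Hugging Face `evaluate` library if you prefer not to call the model directly.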
The pretrained checkpoints presented in the paper:
| FrugalScore | Student | Teacher | Method |
|----------------------------------------------------|-------------|----------------|------------|
| [moussaKam/frugalscore_tiny_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_bert-score) | BERT-tiny | BERT-Base | BERTScore |
| [moussaKam/frugalscore_small_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_bert-score) | BERT-small | BERT-Base | BERTScore |
| [moussaKam/frugalscore_medium_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_bert-score) | BERT-medium | BERT-Base | BERTScore |
| [moussaKam/frugalscore_tiny_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_roberta_bert-score) | BERT-tiny | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_small_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_roberta_bert-score) | BERT-small | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_medium_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_roberta_bert-score) | BERT-medium | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_tiny_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_deberta_bert-score) | BERT-tiny | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_small_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_deberta_bert-score) | BERT-small | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_medium_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_deberta_bert-score) | BERT-medium | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_tiny_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_mover-score) | BERT-tiny | BERT-Base | MoverScore |
| [moussaKam/frugalscore_small_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_mover-score) | BERT-small | BERT-Base | MoverScore |
| [moussaKam/frugalscore_medium_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_mover-score) | BERT-medium | BERT-Base | MoverScore | |
DiscoResearch/Llama3-German-8B | DiscoResearch | "2024-05-29T11:35:49Z" | 5,119 | 33 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"de",
"arxiv:2404.10830",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-23T16:36:25Z" | ---
license: llama3
language:
- de
library_name: transformers
---
# Llama3-German-8B (version 0.1)
Llama3-German-8B-v0.1 is a large language model based on [Meta's Llama3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B). It is specialized for the German language through continuous pretraining on 65 billion high-quality tokens, similar to previous [LeoLM](https://huggingface.co/LeoLM) or [Occiglot](https://huggingface.co/collections/occiglot/occiglot-eu5-7b-v01-65dbed502a6348b052695e01) models.
Llama3 itself was trained on 15T tokens, of which only <1T were multilingual, resulting in suboptimal performance in German with reduced linguistic capabilities and frequent grammatical errors, motivating the necessity for continued pretraining. Benchmark results on our model show minimal degradation in English performance, despite the absence of replay during training. Importantly, Llama3-German-8B-v0.1 demonstrates strong improvements in German, particularly on the Hellaswag benchmark, which measures linguistic understanding and general reasoning.
[DiscoResearch/Llama3-German-8B-v0.1](https://huggingface.co/collections/DiscoResearch/discoleo-8b-llama3-for-german-6650527496c0fafefd4c9729) is the result of a joint effort between [DiscoResearch](https://huggingface.co/DiscoResearch) and [Occiglot](https://huggingface.co/occiglot) with support from the [DFKI](https://www.dfki.de/web/) (German Research Center for Artificial Intelligence) and [hessian.Ai](https://hessian.ai). Occiglot kindly handled data preprocessing, filtering, and deduplication as part of their latest [dataset release](https://huggingface.co/datasets/occiglot/occiglot-fineweb-v0.5), as well as sharing their compute allocation at hessian.Ai's 42 Supercomputer.
## How to use
This is a base model and should probably be subject to finetuning before use. See our [collection](https://huggingface.co/collections/DiscoResearch/discoleo-8b-llama3-for-german-6650527496c0fafefd4c9729) for various finetuned and long-context versions.
## Model Training and Hyperparameters
The model was trained on 128 GPUs on [hessian.Ai 42](https://hessian.ai) for ~60 hours. See detailed hyperparameters below.
| Parameter | Value |
|-------------------|-----------------------------------|
| Sequence Length | 8192 tokens |
| Learning Rate | 1.5e-5 to 1.5e-6 (cosine schedule)|
| Batch Size | 4194304 (512*8192) tokens |
| Micro Batch Size | 4*8192 tokens |
| Training Steps | 15500 |
| Warmup Steps | 155 (1%) |
| Weight Decay | 0.05 |
| Optimizer | AdamW |
## Data Collection and Preprocessing
For pre-training, we used 65B German tokens from the [occiglot-fineweb-0.5](https://huggingface.co/datasets/occiglot/occiglot-fineweb-v0.5) dataset.
The data comprises multiple curated datasets from [LLM-Datasets](https://github.com/malteos/llm-datasets) as well as 12 [Common-Crawl](https://commoncrawl.org) releases that were processed with [OSCAR's Ungoliant pipeline](https://github.com/oscar-project/ungoliant).
All data was further filtered with a set of language-specific filters based on [Huggingface's fine-web](https://github.com/huggingface/datatrove/blob/main/examples/fineweb.py) and globally deduplicated.
For more information please refer to the [dataset card](https://huggingface.co/datasets/occiglot/occiglot-fineweb-v0.5) and corresponding [blog-post](https://occiglot.eu/posts/occiglot-fineweb/).
## Evaluation and Results
We evaluated the model using a suite of common English Benchmarks and their German counterparts with [GermanBench](https://github.com/bjoernpl/GermanBenchmark).
The following figure shows the benchmark results in comparison to the base model [meta-llama/Meta-Llama3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) and two different hyperparameter configurations.
We swept different learning rates to identify a well-working setup. The final released model is the 1.5e-5 lr version.

Find the detailed benchmark scores for the base and long-context models in the table below.
| Model | truthful_qa_de | truthfulqa_mc | arc_challenge | arc_challenge_de | hellaswag | hellaswag_de | MMLU | MMLU-DE | mean |
|--------------------------------------|----------------|---------------|---------------|------------------|-----------|--------------|--------|---------|------------|
| DiscoResearch/Llama3-German-8B | **0.49499** | 0.44838 | 0.55802 | **0.49829** | 0.79924 | **0.65395** | 0.62240| **0.54413** | **0.57743** |
| DiscoResearch/Llama3-German-8B-32k | 0.48920 | **0.45138** | 0.54437 | 0.49232 | 0.79078 | 0.64310 | 0.58774| 0.47971 | 0.55982 |
| meta-llama/Meta-Llama-3-8B-Instruct | 0.47498 | 0.43923 | **0.59642** | 0.47952 | **0.82025**| 0.60008 | **0.66658**| 0.53541 | 0.57656 |
## Long-Context Extension
In addition to the base model, we release a long-context version of Llama3-German-8B ([DiscoResearch/Llama3-German-8B-32k](https://huggingface.co/DiscoResearch/Llama3-German-8B-32k)) capable of processing context lengths up to 65k tokens. This variant was trained on an additional 100 million tokens at 32k context length, using a rope_theta value of `1.5e6` and a learning rate of `1.5e-5` with a batch size of `256*8192` tokens and otherwise equal hyperparameters to the base model.
## Instruction Tuning
We also provide an instruction-tuned version: [DiscoResearch/Llama3-DiscoLeo-Instruct-8B-v0.1](https://huggingface.co/DiscoResearch/Llama3-DiscoLeo-Instruct-8B-v0.1), utilizing the DiscoLM German dataset for fine-tuning (also available as a long-context model at [DiscoResearch/Llama3-DiscoLeo-Instruct-8B-32k-v0.1](https://huggingface.co/DiscoResearch/Llama3-DiscoLeo-Instruct-8B-32k-v0.1)).
Find more details in the respective model cards. Also check out our experimental merge ([DiscoResearch/Llama3-DiscoLeo-8B-DARE-Experimental](https://huggingface.co/DiscoResearch/Llama3-DiscoLeo-8B-DARE-Experimental)) between [meta-llama/Meta-Llama3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) and our finetuned model in an attempt to keep the extraordinary capabilities of Llama3-Instruct and add exceptional German skills.
## Document Packing
We employed a more intelligent document packing strategy based on the ["Fewer Truncations Improve Language Modeling" paper by Ding et al.](https://arxiv.org/abs/2404.10830v2), using the first-fit-decreasing algorithm to pack documents into batches without truncation.
We packed our data in chunks of 10000 documents for more efficient processing while maintaining >99% packing efficiency. Documents longer than the sequence length are split into chunks of sequence length.
This approach results in overall higher benchmark scores when training on the same data with equal hyperparameters. The following numbers are from initial experiments with `3e-5 lr` and 12k steps and show improvements comparable to those shown in the original paper.
| Task | Naive Packing | Fewer Truncations Packing | Percentage Increase |
|-------------------|---------------|---------------------------|---------------------|
| truthfulqa_mc | 0.452648 | 0.467687 | 3.32% |
| arc_challenge | 0.517918 | 0.528157 | 1.98% |
| truthful_qa_de | 0.485529 | 0.492979 | 1.53% |
| arc_challenge_de | 0.480375 | 0.493174 | 2.66% |
| hellaswag | 0.776041 | 0.773352 | -0.35% |
| hellaswag_de | 0.655248 | 0.653356 | -0.29% |
| MMLU | 0.573719 | 0.579802 | 1.06% |
| MMLU-DE | 0.504509 | 0.503863 | -0.13% |
The following is our simple implementation of the first-fit-decreasing algorithm described in the paper.
```python
def pack_documents(tokenized_documents):
# Sort documents by their length in descending order
sorted_docs = sorted(tokenized_documents, key=len, reverse=True)
# Initialize bins
bins = []
# Function to find the first bin that can accommodate the document
def find_bin(doc):
for b in bins:
if sum(len(d) for d in b) + len(doc) <= 8192:
return b
return None
# Place each document in the first available bin or create a new bin
for doc in sorted_docs:
target_bin = find_bin(doc)
if target_bin is not None:
target_bin.append(doc)
else:
# Create a new bin with this document if no suitable bin is found
bins.append([doc])
# Return results
return bins
```
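For illustration, a toy run of the function above (dummy token lists standing in for tokenized documents):
```python
docs = [list(range(n)) for n in (8000, 5000, 4000, 300)]
bins = pack_documents(docs)
print([sum(len(d) for d in b) for b in bins])  # -> [8000, 5300, 4000], each <= 8192
```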
## Model Configurations
We release DiscoLeo-8B in the following configurations:
1. [Base model with continued pretraining](https://huggingface.co/DiscoResearch/Llama3-German-8B)
2. [Long-context version (32k context length)](https://huggingface.co/DiscoResearch/Llama3-German-8B-32k)
3. [Instruction-tuned version of the base model](https://huggingface.co/DiscoResearch/Llama3-DiscoLeo-Instruct-8B-v0.1)
4. [Instruction-tuned version of the long-context model](https://huggingface.co/DiscoResearch/Llama3-DiscoLeo-Instruct-8B-32k-v0.1)
5. [Experimental `DARE-TIES` Merge with Llama3-Instruct](https://huggingface.co/DiscoResearch/Llama3-DiscoLeo-8B-DARE-Experimental)
6. [Collection of Quantized versions](https://huggingface.co/collections/DiscoResearch/discoleo-8b-quants-6651bcf8f72c9a37ce485d42)
## How to use:
Here's how to use the model with transformers:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
device="cuda"
model = AutoModelForCausalLM.from_pretrained(
"DiscoResearch/Llama3-DiscoLeo-Instruct-8B-v0.1",
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("DiscoResearch/Llama3-DiscoLeo-Instruct-8B-v0.1")
prompt = "Schreibe ein Essay über die Bedeutung der Energiewende für Deutschlands Wirtschaft"
messages = [
{"role": "system", "content": "Du bist ein hilfreicher Assistent."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Acknowledgements
The model was trained and evaluated by [Björn Plüster](https://huggingface.co/bjoernp) ([DiscoResearch](https://huggingface.co/DiscoResearch), [ellamind](https://ellamind.com)) with data preparation and project supervision by [Manuel Brack](http://manuel-brack.eu) ([DFKI](https://www.dfki.de/web/), [TU-Darmstadt](https://www.tu-darmstadt.de/)). Initial work on dataset collection and curation was performed by [Malte Ostendorff](https://ostendorff.org) and [Pedro Ortiz Suarez](https://portizs.eu). Instruction tuning was done with the DiscoLM German dataset created by [Jan-Philipp Harries](https://huggingface.co/jphme) and [Daniel Auras](https://huggingface.co/rasdani) ([DiscoResearch](https://huggingface.co/DiscoResearch), [ellamind](https://ellamind.com)). We extend our gratitude to [LAION](https://laion.ai/) and friends, especially [Christoph Schuhmann](https://entwickler.de/experten/christoph-schuhmann) and [Jenia Jitsev](https://huggingface.co/JJitsev), for initiating this collaboration.
The model training was supported by a compute grant at the [42 supercomputer](https://hessian.ai/) which is a central component in the development of [hessian AI](https://hessian.ai/), the [AI Innovation Lab](https://hessian.ai/infrastructure/ai-innovationlab/) (funded by the [Hessian Ministry of Higher Education, Research and the Art (HMWK)](https://wissenschaft.hessen.de) & the [Hessian Ministry of the Interior, for Security and Homeland Security (HMinD)](https://innen.hessen.de)) and the [AI Service Centers](https://hessian.ai/infrastructure/ai-service-centre/) (funded by the [German Federal Ministry for Economic Affairs and Climate Action (BMWK)](https://www.bmwk.de/Navigation/EN/Home/home.html)).
The curation of the training data is partially funded by the [German Federal Ministry for Economic Affairs and Climate Action (BMWK)](https://www.bmwk.de/Navigation/EN/Home/home.html)
through the project [OpenGPT-X](https://opengpt-x.de/en/) (project no. 68GX21007D).
|
mradermacher/SI-FT-CL-7B-Python-GGUF | mradermacher | "2024-06-05T22:29:57Z" | 5,117 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:zichao22/SI-FT-CL-7B-Python",
"endpoints_compatible",
"region:us"
] | null | "2024-06-05T22:05:06Z" | ---
base_model: zichao22/SI-FT-CL-7B-Python
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/zichao22/SI-FT-CL-7B-Python
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
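In short, multi-part GGUF files are plain byte splits, so they can be joined with `cat` before loading; a sketch with placeholder part names:
```bash
cat model.Q8_0.gguf.part1of2 model.Q8_0.gguf.part2of2 > model.Q8_0.gguf
```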
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SI-FT-CL-7B-Python-GGUF/resolve/main/SI-FT-CL-7B-Python.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/SI-FT-CL-7B-Python-GGUF/resolve/main/SI-FT-CL-7B-Python.IQ3_XS.gguf) | IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/SI-FT-CL-7B-Python-GGUF/resolve/main/SI-FT-CL-7B-Python.IQ3_S.gguf) | IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SI-FT-CL-7B-Python-GGUF/resolve/main/SI-FT-CL-7B-Python.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/SI-FT-CL-7B-Python-GGUF/resolve/main/SI-FT-CL-7B-Python.IQ3_M.gguf) | IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/SI-FT-CL-7B-Python-GGUF/resolve/main/SI-FT-CL-7B-Python.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SI-FT-CL-7B-Python-GGUF/resolve/main/SI-FT-CL-7B-Python.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/SI-FT-CL-7B-Python-GGUF/resolve/main/SI-FT-CL-7B-Python.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/SI-FT-CL-7B-Python-GGUF/resolve/main/SI-FT-CL-7B-Python.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SI-FT-CL-7B-Python-GGUF/resolve/main/SI-FT-CL-7B-Python.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SI-FT-CL-7B-Python-GGUF/resolve/main/SI-FT-CL-7B-Python.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/SI-FT-CL-7B-Python-GGUF/resolve/main/SI-FT-CL-7B-Python.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/SI-FT-CL-7B-Python-GGUF/resolve/main/SI-FT-CL-7B-Python.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SI-FT-CL-7B-Python-GGUF/resolve/main/SI-FT-CL-7B-Python.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/SI-FT-CL-7B-Python-GGUF/resolve/main/SI-FT-CL-7B-Python.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
h2oai/h2ogpt-oig-oasst1-256-6_9b | h2oai | "2023-06-02T22:36:04Z" | 5,114 | 5 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"gpt",
"llm",
"large language model",
"open-source",
"en",
"dataset:h2oai/h2ogpt-oig-oasst1-instruct-cleaned-v1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-04-17T18:09:08Z" | ---
license: apache-2.0
language:
- en
library_name: transformers
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
tags:
- gpt
- llm
- large language model
- open-source
datasets:
- h2oai/h2ogpt-oig-oasst1-instruct-cleaned-v1
---
# h2oGPT Model Card
## Summary
H2O.ai's `h2ogpt-oig-oasst1-256-6_9b` is a 6.9 billion parameter instruction-following large language model licensed for commercial use.
- Base model: [EleutherAI/pythia-6.9b](https://huggingface.co/EleutherAI/pythia-6.9b)
- Fine-tuning dataset: [h2oai/h2ogpt-oig-oasst1-instruct-cleaned-v1](https://huggingface.co/datasets/h2oai/h2ogpt-oig-oasst1-instruct-cleaned-v1)
- Data-prep and fine-tuning code: [H2O.ai Github](https://github.com/h2oai/h2ogpt)
- Training logs: [zip](https://huggingface.co/h2oai/h2ogpt-oig-oasst1-256-6_9b/blob/main/pythia-6.9b.h2ogpt-oig-oasst1-instruct-cleaned-v1.json.1_epochs.5fc91911bc2bfaaf3b6c2de577c4b0ae45a07a4a.9.zip)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` and `accelerate` libraries installed.
```bash
pip install transformers==4.28.1
pip install accelerate==0.18.0
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(model="h2oai/h2ogpt-oig-oasst1-256-6_9b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto", prompt_type='human_bot')
res = generate_text("Why is drinking water so healthy?", max_new_tokens=100)
print(res[0]["generated_text"])
```
Alternatively, if you prefer not to use `trust_remote_code=True`, you can download [h2oai_pipeline.py](https://huggingface.co/h2oai/h2ogpt-oig-oasst1-256-6_9b/blob/main/h2oai_pipeline.py),
store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("h2oai/h2ogpt-oig-oasst1-256-6_9b", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("h2oai/h2ogpt-oig-oasst1-256-6_9b", torch_dtype=torch.bfloat16, device_map="auto")
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer, prompt_type='human_bot')
res = generate_text("Why is drinking water so healthy?", max_new_tokens=100)
print(res[0]["generated_text"])
```
## Model Architecture
```
GPTNeoXForCausalLM(
(gpt_neox): GPTNeoXModel(
(embed_in): Embedding(50432, 4096)
(layers): ModuleList(
(0-31): 32 x GPTNeoXLayer(
(input_layernorm): LayerNorm((4096,), eps=1e-05, elementwise_affine=True)
(post_attention_layernorm): LayerNorm((4096,), eps=1e-05, elementwise_affine=True)
(attention): GPTNeoXAttention(
(rotary_emb): RotaryEmbedding()
(query_key_value): Linear(in_features=4096, out_features=12288, bias=True)
(dense): Linear(in_features=4096, out_features=4096, bias=True)
)
(mlp): GPTNeoXMLP(
(dense_h_to_4h): Linear(in_features=4096, out_features=16384, bias=True)
(dense_4h_to_h): Linear(in_features=16384, out_features=4096, bias=True)
(act): GELUActivation()
)
)
)
(final_layer_norm): LayerNorm((4096,), eps=1e-05, elementwise_affine=True)
)
(embed_out): Linear(in_features=4096, out_features=50432, bias=False)
)
```
## Model Configuration
```json
GPTNeoXConfig {
"_name_or_path": "h2oai/h2ogpt-oig-oasst1-256-6_9b",
"architectures": [
"GPTNeoXForCausalLM"
],
"bos_token_id": 0,
"custom_pipelines": {
"text-generation": {
"impl": "h2oai_pipeline.H2OTextGenerationPipeline",
"pt": "AutoModelForCausalLM"
}
},
"eos_token_id": 0,
"hidden_act": "gelu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 16384,
"layer_norm_eps": 1e-05,
"max_position_embeddings": 2048,
"model_type": "gpt_neox",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"rotary_emb_base": 10000,
"rotary_pct": 0.25,
"tie_word_embeddings": false,
"torch_dtype": "float16",
"transformers_version": "4.28.1",
"use_cache": true,
"use_parallel_residual": true,
"vocab_size": 50432
}
```
|
TheBloke/Phind-CodeLlama-34B-Python-v1-GGUF | TheBloke | "2023-09-27T12:46:19Z" | 5,112 | 13 | transformers | [
"transformers",
"gguf",
"llama",
"code llama",
"base_model:Phind/Phind-CodeLlama-34B-Python-v1",
"license:llama2",
"model-index",
"text-generation-inference",
"region:us"
] | null | "2023-08-26T09:28:26Z" | ---
license: llama2
tags:
- code llama
base_model: Phind/Phind-CodeLlama-34B-Python-v1
inference: false
model_creator: Phind
model_type: llama
prompt_template: '{prompt} \n
'
quantized_by: TheBloke
model-index:
- name: Phind-CodeLlama-34B-v1
results:
- task:
type: text-generation
dataset:
name: HumanEval
type: openai_humaneval
metrics:
- type: pass@1
value: 69.5%
name: pass@1
verified: false
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Phind CodeLlama 34B Python v1 - GGUF
- Model creator: [Phind](https://huggingface.co/Phind)
- Original model: [Phind CodeLlama 34B Python v1](https://huggingface.co/Phind/Phind-CodeLlama-34B-Python-v1)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Phind's Phind CodeLlama 34B Python v1](https://huggingface.co/Phind/Phind-CodeLlama-34B-Python-v1).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-Python-v1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-Python-v1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-Python-v1-GGUF)
* [Phind's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Phind/Phind-CodeLlama-34B-Python-v1)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Plain-with-newline
```
{prompt} \n
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
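As a worked example of where these figures come from, take GGML_TYPE_Q4_K (assuming, as in llama.cpp, one fp16 scale and one fp16 min per super-block): 8 blocks × 32 weights = 256 weights at 4 bits each is 1024 bits, the 8 six-bit scales plus 8 six-bit mins add 96 bits, and the two fp16 super-block values add 32 bits, for 1152 bits in total; 1152 / 256 = 4.5 bpw.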
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [phind-codellama-34b-python-v1.Q2_K.gguf](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-Python-v1-GGUF/blob/main/phind-codellama-34b-python-v1.Q2_K.gguf) | Q2_K | 2 | 14.21 GB| 16.71 GB | smallest, significant quality loss - not recommended for most purposes |
| [phind-codellama-34b-python-v1.Q3_K_S.gguf](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-Python-v1-GGUF/blob/main/phind-codellama-34b-python-v1.Q3_K_S.gguf) | Q3_K_S | 3 | 14.61 GB| 17.11 GB | very small, high quality loss |
| [phind-codellama-34b-python-v1.Q3_K_M.gguf](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-Python-v1-GGUF/blob/main/phind-codellama-34b-python-v1.Q3_K_M.gguf) | Q3_K_M | 3 | 16.28 GB| 18.78 GB | very small, high quality loss |
| [phind-codellama-34b-python-v1.Q3_K_L.gguf](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-Python-v1-GGUF/blob/main/phind-codellama-34b-python-v1.Q3_K_L.gguf) | Q3_K_L | 3 | 17.77 GB| 20.27 GB | small, substantial quality loss |
| [phind-codellama-34b-python-v1.Q4_0.gguf](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-Python-v1-GGUF/blob/main/phind-codellama-34b-python-v1.Q4_0.gguf) | Q4_0 | 4 | 19.05 GB| 21.55 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [phind-codellama-34b-python-v1.Q4_K_S.gguf](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-Python-v1-GGUF/blob/main/phind-codellama-34b-python-v1.Q4_K_S.gguf) | Q4_K_S | 4 | 19.15 GB| 21.65 GB | small, greater quality loss |
| [phind-codellama-34b-python-v1.Q4_K_M.gguf](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-Python-v1-GGUF/blob/main/phind-codellama-34b-python-v1.Q4_K_M.gguf) | Q4_K_M | 4 | 20.22 GB| 22.72 GB | medium, balanced quality - recommended |
| [phind-codellama-34b-python-v1.Q5_0.gguf](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-Python-v1-GGUF/blob/main/phind-codellama-34b-python-v1.Q5_0.gguf) | Q5_0 | 5 | 23.24 GB| 25.74 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [phind-codellama-34b-python-v1.Q5_K_S.gguf](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-Python-v1-GGUF/blob/main/phind-codellama-34b-python-v1.Q5_K_S.gguf) | Q5_K_S | 5 | 23.24 GB| 25.74 GB | large, low quality loss - recommended |
| [phind-codellama-34b-python-v1.Q5_K_M.gguf](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-Python-v1-GGUF/blob/main/phind-codellama-34b-python-v1.Q5_K_M.gguf) | Q5_K_M | 5 | 23.84 GB| 26.34 GB | large, very low quality loss - recommended |
| [phind-codellama-34b-python-v1.Q6_K.gguf](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-Python-v1-GGUF/blob/main/phind-codellama-34b-python-v1.Q6_K.gguf) | Q6_K | 6 | 27.68 GB| 30.18 GB | very large, extremely low quality loss |
| [phind-codellama-34b-python-v1.Q8_0.gguf](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-Python-v1-GGUF/blob/main/phind-codellama-34b-python-v1.Q8_0.gguf) | Q8_0 | 8 | 35.86 GB| 38.36 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Phind-CodeLlama-34B-Python-v1-GGUF and below it, a specific filename to download, such as: phind-codellama-34b-python-v1.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub>=0.17.1
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Phind-CodeLlama-34B-Python-v1-GGUF phind-codellama-34b-python-v1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Phind-CodeLlama-34B-Python-v1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Phind-CodeLlama-34B-Python-v1-GGUF phind-codellama-34b-python-v1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m phind-codellama-34b-python-v1.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt} \n"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install ctransformers>=0.2.24
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]>=0.2.24
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Phind-CodeLlama-34B-Python-v1-GGUF", model_file="phind-codellama-34b-python-v1.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Phind's Phind CodeLlama 34B Python v1
# **Phind-CodeLlama-34B-Python-v1**
We've fine-tuned CodeLlama-34B and CodeLlama-34B-Python on an internal Phind dataset; they achieve 67.6% and 69.5% pass@1 on HumanEval, respectively. GPT-4 achieves 67%. We've applied OpenAI's decontamination methodology to our dataset to ensure result validity.
More details can be found on our [blog post](https://www.phind.com/blog/code-llama-beats-gpt4).
## Model Details
This model is fine-tuned from CodeLlama-34B-Python and achieves 69.5% pass@1 on HumanEval.
## Dataset Details
We fine-tuned on a proprietary dataset of ~80k high-quality programming problems and solutions. This dataset consists of instruction-answer pairs instead of code completion examples, making it structurally different from HumanEval. The Phind models were trained for 2 epochs, for a total of ~160k examples shown. LoRA was not used; both models are native fine-tunes. We used DeepSpeed ZeRO 3 and Flash Attention 2 to train these models in three hours on 32 A100-80GB GPUs. We used a sequence length of 4096 tokens.
## How to Get Started with the Model
Make sure to install Transformers from the main git branch:
```bash
pip install git+https://github.com/huggingface/transformers.git
```
## How to Prompt the Model
**Please note that this model is somewhat instruction-tuned, but not chat-tuned.**
Do not try to use the Llama chat markup with this model. Instead, simply tell it what you want and add "\n: " at the end of your task.
For example:
```
Write me a linked list implementation: \n
```
## How to reproduce HumanEval Results
To reproduce our results:
```python
from transformers import AutoTokenizer, LlamaForCausalLM
from human_eval.data import write_jsonl, read_problems
from tqdm import tqdm
# initialize the model
model_path = "Phind/Phind-CodeLlama-34B-v1"
model = LlamaForCausalLM.from_pretrained(model_path, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_path)
# HumanEval helper
def generate_one_completion(prompt: str):
tokenizer.pad_token = tokenizer.eos_token
inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=4096)
# Generate
generate_ids = model.generate(inputs.input_ids.to("cuda"), max_new_tokens=256, do_sample=True, top_p=0.75, top_k=40, temperature=0.1)
completion = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
completion = completion.replace(prompt, "").split("\n\n\n")[0]
return completion
# perform HumanEval
problems = read_problems()
num_samples_per_task = 1
samples = [
dict(task_id=task_id, completion=generate_one_completion(problems[task_id]["prompt"]))
for task_id in tqdm(problems)
for _ in range(num_samples_per_task)
]
write_jsonl("samples.jsonl", samples)
# run `evaluate_functional_correctness samples.jsonl` in your HumanEval code sandbox
```
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This model has undergone very limited testing. Additional safety testing should be performed before any real-world deployments.
## Training details
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
- **Hardware Type:** 32x A100-80GB
- **Hours used:** 90 GPU-hours
- **Cloud Provider:** AWS
- **Compute Region:** us-east-1
<!-- original-model-card end -->
|
mradermacher/Shiki-m7-i1-GGUF | mradermacher | "2024-06-05T08:42:14Z" | 5,111 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Sao10K/Shiki-m7",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-04T22:10:08Z" | ---
base_model: Sao10K/Shiki-m7
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Sao10K/Shiki-m7
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Shiki-m7-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
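As a minimal sketch, a single file from the table below can be loaded with llama-cpp-python (assuming the library is installed and the file has already been downloaded; the sampling settings are illustrative):

```python
from llama_cpp import Llama

# Load a local copy of one of the quants listed in the table below.
llm = Llama(
    model_path="Shiki-m7.i1-Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=-1,  # offload all layers to GPU; set to 0 for CPU-only
)
print(llm("Hello, how are you?", max_tokens=64)["choices"][0]["text"])
```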
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Shiki-m7-i1-GGUF/resolve/main/Shiki-m7.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Shiki-m7-i1-GGUF/resolve/main/Shiki-m7.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Shiki-m7-i1-GGUF/resolve/main/Shiki-m7.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Shiki-m7-i1-GGUF/resolve/main/Shiki-m7.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Shiki-m7-i1-GGUF/resolve/main/Shiki-m7.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Shiki-m7-i1-GGUF/resolve/main/Shiki-m7.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Shiki-m7-i1-GGUF/resolve/main/Shiki-m7.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Shiki-m7-i1-GGUF/resolve/main/Shiki-m7.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Shiki-m7-i1-GGUF/resolve/main/Shiki-m7.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Shiki-m7-i1-GGUF/resolve/main/Shiki-m7.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Shiki-m7-i1-GGUF/resolve/main/Shiki-m7.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Shiki-m7-i1-GGUF/resolve/main/Shiki-m7.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Shiki-m7-i1-GGUF/resolve/main/Shiki-m7.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Shiki-m7-i1-GGUF/resolve/main/Shiki-m7.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Shiki-m7-i1-GGUF/resolve/main/Shiki-m7.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Shiki-m7-i1-GGUF/resolve/main/Shiki-m7.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Shiki-m7-i1-GGUF/resolve/main/Shiki-m7.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Shiki-m7-i1-GGUF/resolve/main/Shiki-m7.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Shiki-m7-i1-GGUF/resolve/main/Shiki-m7.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Shiki-m7-i1-GGUF/resolve/main/Shiki-m7.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Shiki-m7-i1-GGUF/resolve/main/Shiki-m7.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
mradermacher/IceBlendedLatteRP-7b-i1-GGUF | mradermacher | "2024-06-04T05:50:29Z" | 5,105 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"alpaca",
"mistral",
"en",
"base_model:icefog72/IceBlendedLatteRP-7b",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-03T05:22:07Z" | ---
base_model: icefog72/IceBlendedLatteRP-7b
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- mergekit
- merge
- alpaca
- mistral
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/icefog72/IceBlendedLatteRP-7b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/IceBlendedLatteRP-7b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
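For example, a single quant from this repo can be fetched with the huggingface_hub Python library (a sketch; pick any filename from the table below):

```python
from huggingface_hub import hf_hub_download

# Download one GGUF file from this repo to the local HF cache.
path = hf_hub_download(
    repo_id="mradermacher/IceBlendedLatteRP-7b-i1-GGUF",
    filename="IceBlendedLatteRP-7b.i1-Q4_K_M.gguf",
)
print(path)  # local path of the downloaded GGUF file
```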
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/IceBlendedLatteRP-7b-i1-GGUF/resolve/main/IceBlendedLatteRP-7b.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/IceBlendedLatteRP-7b-i1-GGUF/resolve/main/IceBlendedLatteRP-7b.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/IceBlendedLatteRP-7b-i1-GGUF/resolve/main/IceBlendedLatteRP-7b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/IceBlendedLatteRP-7b-i1-GGUF/resolve/main/IceBlendedLatteRP-7b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/IceBlendedLatteRP-7b-i1-GGUF/resolve/main/IceBlendedLatteRP-7b.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/IceBlendedLatteRP-7b-i1-GGUF/resolve/main/IceBlendedLatteRP-7b.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/IceBlendedLatteRP-7b-i1-GGUF/resolve/main/IceBlendedLatteRP-7b.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/IceBlendedLatteRP-7b-i1-GGUF/resolve/main/IceBlendedLatteRP-7b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/IceBlendedLatteRP-7b-i1-GGUF/resolve/main/IceBlendedLatteRP-7b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/IceBlendedLatteRP-7b-i1-GGUF/resolve/main/IceBlendedLatteRP-7b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/IceBlendedLatteRP-7b-i1-GGUF/resolve/main/IceBlendedLatteRP-7b.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/IceBlendedLatteRP-7b-i1-GGUF/resolve/main/IceBlendedLatteRP-7b.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/IceBlendedLatteRP-7b-i1-GGUF/resolve/main/IceBlendedLatteRP-7b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/IceBlendedLatteRP-7b-i1-GGUF/resolve/main/IceBlendedLatteRP-7b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/IceBlendedLatteRP-7b-i1-GGUF/resolve/main/IceBlendedLatteRP-7b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/IceBlendedLatteRP-7b-i1-GGUF/resolve/main/IceBlendedLatteRP-7b.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/IceBlendedLatteRP-7b-i1-GGUF/resolve/main/IceBlendedLatteRP-7b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/IceBlendedLatteRP-7b-i1-GGUF/resolve/main/IceBlendedLatteRP-7b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/IceBlendedLatteRP-7b-i1-GGUF/resolve/main/IceBlendedLatteRP-7b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/IceBlendedLatteRP-7b-i1-GGUF/resolve/main/IceBlendedLatteRP-7b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/IceBlendedLatteRP-7b-i1-GGUF/resolve/main/IceBlendedLatteRP-7b.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
lakkeo/stable-cypher-instruct-3b | lakkeo | "2024-07-01T08:06:43Z" | 5,105 | 1 | transformers | [
"transformers",
"safetensors",
"gguf",
"stablelm",
"text-generation",
"causal-lm",
"code",
"cypher",
"graph",
"neo4j",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-06-29T16:08:10Z" | ---
license: apache-2.0
language:
- en
metrics:
- bleu
- rouge
tags:
- causal-lm
- code
- cypher
- graph
- neo4j
inference: false
widget:
- text: "Show me the people who have Python and Cloud skills and have been in the company for at least 3 years."
example_title: "Example 1"
- text: "What is the IMDb rating of Pulp Fiction?"
example_title: "Example 2"
- text: "Display the first 3 users followed by 'Neo4j' who have more than 10000 followers."
example_title: "Example 3"
---
## Model Description
A specialized 3B-parameter model that beats state-of-the-art models such as GPT-4o at generating Cypher.
It is a finetune of https://huggingface.co/stabilityai/stable-code-instruct-3b trained on https://github.com/neo4j-labs/text2cypher/tree/main/datasets/synthetic_opus_demodbs to generate Cypher queries from text for querying graph databases such as Neo4j.
## Usage
### Safetensors (recommended)
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("lakkeo/stable-cypher-instruct-3b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("lakkeo/stable-cypher-instruct-3b", torch_dtype=torch.bfloat16, trust_remote_code=True)
messages = [
{
"role": "user",
"content": "Show me the people who have Python and Cloud skills and have been in the company for at least 3 years."
}
]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
tokens = model.generate(
**inputs,
max_new_tokens=128,
do_sample=True,
top_p=0.9,
temperature=0.2,
pad_token_id=tokenizer.eos_token_id,
)
outputs = tokenizer.batch_decode(tokens[:, inputs.input_ids.shape[-1]:], skip_special_tokens=False)[0]
print(outputs)
```
### GGUF
```python
from llama_cpp import Llama
# Load the GGUF model
print("Loading model...")
model = Llama(
model_path=r"C:\Users\John\stable-cypher-instruct-3b.Q4_K_M.gguf",
n_ctx=512,
n_batch=512,
n_gpu_layers=-1, # Use all available GPU layers
max_tokens=128,
top_p=0.9,
temperature=0.2,
verbose=False
)
# Define your question
question = "Show me the people who have Python and Cloud skills and have been in the company for at least 3 years."
# Create the full prompt (simulating the apply_chat_template function)
full_prompt = f"<|im_start|>system\nCreate a Cypher statement to answer the following question:<|im_end|>\n<|im_start|>user\n{question}<|im_end|>\n<|im_start|>assistant\n"
# Generate response
print("Generating response...")
response = model(
full_prompt,
max_tokens=128,
stop=["<|im_end|>", "<|im_start|>"],
echo=False
)
# Extract and print the generated response
answer = response['choices'][0]['text'].strip()
print("\nQuestion:", question)
print("\nGenerated Cypher statement:")
print(answer)
```
## Performance
| Metric | stable-code-instruct-3b | gpt4-o | stable-cypher-instruct-3b |
| :----------: | :---------------------: | :--------: | :-----------------------: |
| BLEU-4 | 19.07 | 32.35 | **88.63** |
| ROUGE-1 | 39.49 | 69.17 | **95.09** |
| ROUGE-2 | 24.82 | 46.97 | **90.71** |
| ROUGE-L | 29.63 | 65.24 | **91.51** |
| Jaro-Winkler | 52.21 | 86.38 | **95.69** |
| Jaccard | 25.55 | 72.80 | **90.78** |
| Pass@1 | 0.00 | 0.00 | **51.80** |
### Example

### Eval params

## Reproducibility
This is the config file from Llama Factory:
```json
{
"top.model_name": "Custom",
"top.finetuning_type": "lora",
"top.adapter_path": [],
"top.quantization_bit": "none",
"top.template": "default",
"top.rope_scaling": "none",
"top.booster": "none",
"train.training_stage": "Supervised Fine-Tuning",
"train.dataset_dir": "data",
"train.dataset": [
"cypher_opus"
],
"train.learning_rate": "2e-4",
"train.num_train_epochs": "5.0",
"train.max_grad_norm": "1.0",
"train.max_samples": "5000",
"train.compute_type": "fp16",
"train.cutoff_len": 256,
"train.batch_size": 16,
"train.gradient_accumulation_steps": 2,
"train.val_size": 0.1,
"train.lr_scheduler_type": "cosine",
"train.logging_steps": 10,
"train.save_steps": 100,
"train.warmup_steps": 20,
"train.neftune_alpha": 0,
"train.optim": "adamw_torch",
"train.resize_vocab": false,
"train.packing": false,
"train.upcast_layernorm": false,
"train.use_llama_pro": false,
"train.shift_attn": false,
"train.report_to": false,
"train.num_layer_trainable": 3,
"train.name_module_trainable": "all",
"train.lora_rank": 64,
"train.lora_alpha": 64,
"train.lora_dropout": 0.1,
"train.loraplus_lr_ratio": 0,
"train.create_new_adapter": false,
"train.use_rslora": false,
"train.use_dora": true,
"train.lora_target": "",
"train.additional_target": "",
"train.dpo_beta": 0.1,
"train.dpo_ftx": 0,
"train.orpo_beta": 0.1,
"train.reward_model": null,
"train.use_galore": false,
"train.galore_rank": 16,
"train.galore_update_interval": 200,
"train.galore_scale": 0.25,
"train.galore_target": "all"
}
```
I used llama.cpp to merge the LoRA and generate the quants.
The progress over the base model is significant, but you will still need to finetune on your company's syntax and entities.
I've been tinkering with the training parameters for a few batches of training, but there is room for improvement.
I'm open to the idea of making a full tutorial if there is enough interest in this project.
|
microsoft/llava-med-v1.5-mistral-7b | microsoft | "2024-05-14T16:54:10Z" | 5,104 | 23 | transformers | [
"transformers",
"safetensors",
"llava_mistral",
"text-generation",
"image-text-to-text",
"medical",
"vision",
"arxiv:2306.00890",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2024-05-14T15:53:59Z" | ---
license: apache-2.0
tags:
- image-text-to-text
- medical
- vision
---
# LLaVA-Med v1.5, using [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) as LLM for a better commercial license
Large Language and Vision Assistant for bioMedicine (i.e., “LLaVA-Med”) is a large language and vision model trained using a curriculum learning method for adapting LLaVA to the biomedical domain. It is an open-source release intended for research use only to facilitate reproducibility of the corresponding paper, which claims improved performance on open-ended biomedical question answering tasks, including common visual question answering (VQA) benchmark datasets such as PathVQA and VQA-RAD.
LLaVA-Med was proposed in [LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day](https://arxiv.org/abs/2306.00890) by Chunyuan Li, Cliff Wong, Sheng Zhang, Naoto Usuyama, Haotian Liu, Jianwei Yang, Tristan Naumann, Hoifung Poon, Jianfeng Gao.
**Model date:**
LLaVA-Med-v1.5-Mistral-7B was trained in April 2024.
**Paper or resources for more information:**
https://aka.ms/llava-med
**Where to send questions or comments about the model:**
https://github.com/microsoft/LLaVA-Med/issues
## License
[mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) license.
## Intended use
The data, code, and model checkpoints are intended to be used solely for (I) future research on visual-language processing and (II) reproducibility of the experimental results reported in the reference paper. The data, code, and model checkpoints are not intended to be used in clinical care or for any clinical decision making purposes.
### Primary Intended Use
The primary intended use is to support AI researchers reproducing and building on top of this work. LLaVA-Med and its associated models should be helpful for exploring various biomedical vision-language processing (VLP) and visual question answering (VQA) research questions.
### Out-of-Scope Use
Any deployed use case of the model --- commercial or otherwise --- is out of scope. Although we evaluated the models using a broad set of publicly-available research benchmarks, the models and evaluations are intended for research use only and not intended for deployed use cases. Please refer to [the associated paper](https://aka.ms/llava-med) for more details.
## Data
This model builds upon [PMC-15M dataset](https://aka.ms/biomedclip-paper), which is a large-scale parallel image-text dataset for biomedical vision-language processing. It contains 15 million figure-caption pairs extracted from biomedical research articles in PubMed Central. It covers a diverse range of biomedical image types, such as microscopy, radiography, histology, and more.
## How to use
See the [Serving](https://github.com/microsoft/LLaVA-Med?tab=readme-ov-file#serving) and [Evaluation](https://github.com/microsoft/LLaVA-Med?tab=readme-ov-file#evaluation) sections in the [LLaVA-Med repo](https://aka.ms/llava-med).
## Limitations
This model was developed using English corpora, and thus may be considered English-only. This model is evaluated on a narrow set of biomedical benchmark tasks, described in [LLaVA-Med paper](https://aka.ms/llava-med). As such, it is not suitable for use in any clinical setting. Under some conditions, the model may make inaccurate predictions and display limitations, which may require additional mitigation strategies. In particular, this model is likely to carry many of the limitations of the model from which it is derived, [LLaVA](https://llava-vl.github.io/).
Further, this model was developed in part using the [PMC-15M](https://aka.ms/biomedclip-paper) dataset. The figure-caption pairs that make up this dataset may contain biases reflecting the current practice of academic publication. For example, the corresponding papers may be enriched for positive findings, contain examples of extreme cases, and otherwise reflect distributions that are not representative of other sources of biomedical data.
### BibTeX entry and citation info
```bibtex
@article{li2023llavamed,
title={Llava-med: Training a large language-and-vision assistant for biomedicine in one day},
author={Li, Chunyuan and Wong, Cliff and Zhang, Sheng and Usuyama, Naoto and Liu, Haotian and Yang, Jianwei and Naumann, Tristan and Poon, Hoifung and Gao, Jianfeng},
journal={arXiv preprint arXiv:2306.00890},
year={2023}
}
``` |
TheBloke/llama2_7b_chat_uncensored-GGUF | TheBloke | "2023-09-27T12:49:40Z" | 5,099 | 27 | transformers | [
"transformers",
"gguf",
"llama",
"dataset:ehartford/wizard_vicuna_70k_unfiltered",
"base_model:georgesung/llama2_7b_chat_uncensored",
"license:other",
"text-generation-inference",
"region:us"
] | null | "2023-09-18T07:29:20Z" | ---
license: other
datasets:
- ehartford/wizard_vicuna_70k_unfiltered
model_name: Llama2 7B Chat Uncensored
base_model: georgesung/llama2_7b_chat_uncensored
inference: false
model_creator: George Sung
model_type: llama
prompt_template: '### HUMAN:
{prompt}
### RESPONSE:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama2 7B Chat Uncensored - GGUF
- Model creator: [George Sung](https://huggingface.co/georgesung)
- Original model: [Llama2 7B Chat Uncensored](https://huggingface.co/georgesung/llama2_7b_chat_uncensored)
<!-- description start -->
## Description
This repo contains GGUF format model files for [George Sung's Llama2 7B Chat Uncensored](https://huggingface.co/georgesung/llama2_7b_chat_uncensored).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/llama2_7b_chat_uncensored-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/llama2_7b_chat_uncensored-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/llama2_7b_chat_uncensored-GGUF)
* [George Sung's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/georgesung/llama2_7b_chat_uncensored)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Human-Response
```
### HUMAN:
{prompt}
### RESPONSE:
```
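As a small sketch, the template can be filled in Python like this (the helper name and example message are illustrative):

```python
def build_prompt(user_message: str) -> str:
    # Wrap a user message in the HUMAN/RESPONSE format shown above.
    return f"### HUMAN:\n{user_message}\n\n### RESPONSE:\n"

print(build_prompt("Tell me about AI"))
```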
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `other`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [George Sung's Llama2 7B Chat Uncensored](https://huggingface.co/georgesung/llama2_7b_chat_uncensored).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [llama2_7b_chat_uncensored.Q2_K.gguf](https://huggingface.co/TheBloke/llama2_7b_chat_uncensored-GGUF/blob/main/llama2_7b_chat_uncensored.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama2_7b_chat_uncensored.Q3_K_S.gguf](https://huggingface.co/TheBloke/llama2_7b_chat_uncensored-GGUF/blob/main/llama2_7b_chat_uncensored.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [llama2_7b_chat_uncensored.Q3_K_M.gguf](https://huggingface.co/TheBloke/llama2_7b_chat_uncensored-GGUF/blob/main/llama2_7b_chat_uncensored.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [llama2_7b_chat_uncensored.Q3_K_L.gguf](https://huggingface.co/TheBloke/llama2_7b_chat_uncensored-GGUF/blob/main/llama2_7b_chat_uncensored.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [llama2_7b_chat_uncensored.Q4_0.gguf](https://huggingface.co/TheBloke/llama2_7b_chat_uncensored-GGUF/blob/main/llama2_7b_chat_uncensored.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama2_7b_chat_uncensored.Q4_K_S.gguf](https://huggingface.co/TheBloke/llama2_7b_chat_uncensored-GGUF/blob/main/llama2_7b_chat_uncensored.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [llama2_7b_chat_uncensored.Q4_K_M.gguf](https://huggingface.co/TheBloke/llama2_7b_chat_uncensored-GGUF/blob/main/llama2_7b_chat_uncensored.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [llama2_7b_chat_uncensored.Q5_0.gguf](https://huggingface.co/TheBloke/llama2_7b_chat_uncensored-GGUF/blob/main/llama2_7b_chat_uncensored.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama2_7b_chat_uncensored.Q5_K_S.gguf](https://huggingface.co/TheBloke/llama2_7b_chat_uncensored-GGUF/blob/main/llama2_7b_chat_uncensored.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [llama2_7b_chat_uncensored.Q5_K_M.gguf](https://huggingface.co/TheBloke/llama2_7b_chat_uncensored-GGUF/blob/main/llama2_7b_chat_uncensored.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [llama2_7b_chat_uncensored.Q6_K.gguf](https://huggingface.co/TheBloke/llama2_7b_chat_uncensored-GGUF/blob/main/llama2_7b_chat_uncensored.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [llama2_7b_chat_uncensored.Q8_0.gguf](https://huggingface.co/TheBloke/llama2_7b_chat_uncensored-GGUF/blob/main/llama2_7b_chat_uncensored.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/llama2_7b_chat_uncensored-GGUF and below it, a specific filename to download, such as: llama2_7b_chat_uncensored.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub>=0.17.1
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/llama2_7b_chat_uncensored-GGUF llama2_7b_chat_uncensored.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/llama2_7b_chat_uncensored-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/llama2_7b_chat_uncensored-GGUF llama2_7b_chat_uncensored.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m llama2_7b_chat_uncensored.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### HUMAN:\n{prompt}\n\n### RESPONSE:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install ctransformers>=0.2.24
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]>=0.2.24
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/llama2_7b_chat_uncensored-GGUF", model_file="llama2_7b_chat_uncensored.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python or ctransformers with LangChain, followed by a short sketch of the llama-cpp-python route:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
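As a minimal sketch of the llama-cpp-python route, using LangChain's `LlamaCpp` wrapper (the local file path and settings are assumptions; see the guides above for details):

```python
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="llama2_7b_chat_uncensored.Q4_K_M.gguf",  # local GGUF file (assumption)
    n_ctx=4096,
    n_gpu_layers=32,  # remove if you don't have GPU acceleration
)
# Apply the HUMAN/RESPONSE prompt template shown earlier.
prompt = "### HUMAN:\nTell me about AI\n\n### RESPONSE:\n"
print(llm(prompt))
```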
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: George Sung's Llama2 7B Chat Uncensored
# Overview
Fine-tuned [Llama-2 7B](https://huggingface.co/TheBloke/Llama-2-7B-fp16) with an uncensored/unfiltered Wizard-Vicuna conversation dataset [ehartford/wizard_vicuna_70k_unfiltered](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered).
Used QLoRA for fine-tuning. Trained for one epoch on a 24GB GPU (NVIDIA A10G) instance; training took ~19 hours.
The version here is the fp16 HuggingFace model.
## GGML & GPTQ versions
Thanks to [TheBloke](https://huggingface.co/TheBloke), he has created the GGML and GPTQ versions:
* https://huggingface.co/TheBloke/llama2_7b_chat_uncensored-GGML
* https://huggingface.co/TheBloke/llama2_7b_chat_uncensored-GPTQ
# Prompt style
The model was trained with the following prompt style:
```
### HUMAN:
Hello
### RESPONSE:
Hi, how are you?
### HUMAN:
I'm fine.
### RESPONSE:
How can I help you?
...
```
# Training code
Code used to train the model is available [here](https://github.com/georgesung/llm_qlora).
To reproduce the results:
```
git clone https://github.com/georgesung/llm_qlora
cd llm_qlora
pip install -r requirements.txt
python train.py configs/llama2_7b_chat_uncensored.yaml
```
# Fine-tuning guide
https://georgesung.github.io/ai/qlora-ift/
<!-- original-model-card end -->
|
Khalsuu/filipino-wav2vec2-l-xls-r-300m-official | Khalsuu | "2022-05-13T05:58:50Z" | 5,094 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:filipino_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-05-13T03:24:53Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- filipino_voice
model-index:
- name: filipino-wav2vec2-l-xls-r-300m-official
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# filipino-wav2vec2-l-xls-r-300m-official
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the filipino_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4672
- Wer: 0.2922
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
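For reference, here is a sketch of `TrainingArguments` mirroring the values above (the output directory is a placeholder; other arguments are left at their defaults):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./filipino-wav2vec2-l-xls-r-300m-official",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # total train batch size 16
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=30,
    fp16=True,  # mixed precision (native AMP)
)
```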
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.3671 | 2.09 | 400 | 0.5584 | 0.5987 |
| 0.48 | 4.19 | 800 | 0.4244 | 0.4195 |
| 0.2796 | 6.28 | 1200 | 0.3742 | 0.3765 |
| 0.1916 | 8.38 | 1600 | 0.4291 | 0.3667 |
| 0.1463 | 10.47 | 2000 | 0.3745 | 0.3415 |
| 0.1165 | 12.57 | 2400 | 0.4472 | 0.3407 |
| 0.0955 | 14.66 | 2800 | 0.4269 | 0.3290 |
| 0.0823 | 16.75 | 3200 | 0.4608 | 0.3475 |
| 0.0709 | 18.85 | 3600 | 0.4706 | 0.3281 |
| 0.0603 | 20.94 | 4000 | 0.4380 | 0.3183 |
| 0.0527 | 23.04 | 4400 | 0.4473 | 0.3067 |
| 0.0449 | 25.13 | 4800 | 0.4550 | 0.3029 |
| 0.041 | 27.23 | 5200 | 0.4671 | 0.3020 |
| 0.0358 | 29.32 | 5600 | 0.4672 | 0.2922 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
AliAbdelrasheed/maqa_llama_4bit | AliAbdelrasheed | "2024-06-24T23:07:13Z" | 5,094 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"ar",
"base_model:maqa_llama_4bit_GGUF",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-06-21T14:28:57Z" | ---
base_model: maqa_llama_4bit_GGUF
language:
- en
- ar
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** AliAbdelrasheed
- **License:** apache-2.0
- **Finetuned from model :** maqa_llama_4bit_GGUF
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
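A minimal loading sketch with Unsloth (`max_seq_length` here is an assumption; adjust to your use case):

```python
from unsloth import FastLanguageModel

# Load the 4-bit model and its tokenizer via Unsloth.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="AliAbdelrasheed/maqa_llama_4bit",
    max_seq_length=2048,  # assumption
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable inference mode
```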
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
TheBloke/CodeLlama-70B-Instruct-AWQ | TheBloke | "2024-01-30T23:03:15Z" | 5,092 | 11 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-2",
"conversational",
"code",
"arxiv:2308.12950",
"base_model:codellama/CodeLlama-70b-Instruct-hf",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] | text-generation | "2024-01-30T18:31:55Z" | ---
base_model: codellama/CodeLlama-70b-Instruct-hf
inference: false
language:
- code
license: llama2
model_creator: Code Llama
model_name: Codellama 70B Instruct
model_type: llama
pipeline_tag: text-generation
prompt_template: "Source: system\n\n {system_message}<step> Source: user\n\n {prompt}\
\ <step> Source: assistant\n \n"
quantized_by: TheBloke
tags:
- llama-2
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Codellama 70B Instruct - AWQ
- Model creator: [Code Llama](https://huggingface.co/codellama)
- Original model: [Codellama 70B Instruct](https://huggingface.co/codellama/CodeLlama-70b-Instruct-hf)
<!-- description start -->
## Description
This repo contains AWQ model files for [Code Llama's Codellama 70B Instruct](https://huggingface.co/codellama/CodeLlama-70b-Instruct-hf).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/CodeLlama-70B-Instruct-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/CodeLlama-70B-Instruct-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/CodeLlama-70B-Instruct-GGUF)
* [Code Llama's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/codellama/CodeLlama-70b-Instruct-hf)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: CodeLlama-70B-Instruct
```
Source: system
{system_message}<step> Source: user
{prompt} <step> Source: assistant
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/CodeLlama-70B-Instruct-AWQ/tree/main) | 4 | 128 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1/viewer/) | 4096 | 36.61 GB |
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/CodeLlama-70B-Instruct-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `CodeLlama-70B-Instruct-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click Load, and the model will load and is now ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
For example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/CodeLlama-70B-Instruct-AWQ --quantization awq --dtype auto
```
- When using vLLM from Python code, again set `quantization=awq`.
For example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Tell me about AI",
"Write a story about llamas",
"What is 291 - 150?",
"How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
system_message = "You are a helpful coding assistant."  # illustrative system prompt
prompt_template = '''Source: system
{system_message}<step> Source: user
{prompt} <step> Source: assistant
'''
prompts = [prompt_template.format(system_message=system_message, prompt=prompt) for prompt in prompts]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/CodeLlama-70B-Instruct-AWQ", quantization="awq", dtype="auto")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->
<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/CodeLlama-70B-Instruct-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
system_message = "You are a helpful assistant."  # illustrative system prompt
prompt_template = f'''Source: system
{system_message}<step> Source: user
{prompt} <step> Source: assistant
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt_template,
                                  max_new_tokens=128,
                                  do_sample=True,
                                  temperature=0.7,
                                  top_p=0.95,
                                  top_k=40,
                                  repetition_penalty=1.1)
print("Model output:", response)
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers
### Install the necessary packages
- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.
```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```
Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.
If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:
```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### Transformers example code (requires Transformers 4.35.0 and later)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model_name_or_path = "TheBloke/CodeLlama-70B-Instruct-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
model_name_or_path,
low_cpu_mem_usage=True,
device_map="cuda:0"
)
# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt = "Tell me about AI"
system_message = "You are a helpful assistant."  # illustrative system prompt
prompt_template = f'''Source: system
{system_message}<step> Source: user
{prompt} <step> Source: assistant
'''
# Convert prompt to tokens
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
generation_params = {
"do_sample": True,
"temperature": 0.7,
"top_p": 0.95,
"top_k": 40,
"max_new_tokens": 512,
"repetition_penalty": 1.1
}
# Generate streamed output, visible one token at a time
generation_output = model.generate(
tokens,
streamer=streamer,
**generation_params
)
# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
tokens,
**generation_params
)
# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)
# Inference is also possible via Transformers' pipeline
from transformers import pipeline
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
**generation_params
)
pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Code Llama's Codellama 70B Instruct
# **Code Llama**
Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 70B instruct-tuned version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding. Links to other models can be found in the index at the bottom.
| | Base Model | Python | Instruct |
| --- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- |
| 7B | [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) | [codellama/CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf) | [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) |
| 13B | [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf) | [codellama/CodeLlama-13b-Python-hf](https://huggingface.co/codellama/CodeLlama-13b-Python-hf) | [codellama/CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) |
| 34B | [codellama/CodeLlama-34b-hf](https://huggingface.co/codellama/CodeLlama-34b-hf) | [codellama/CodeLlama-34b-Python-hf](https://huggingface.co/codellama/CodeLlama-34b-Python-hf) | [codellama/CodeLlama-34b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-34b-Instruct-hf) |
| 70B | [codellama/CodeLlama-70b-hf](https://huggingface.co/codellama/CodeLlama-70b-hf) | [codellama/CodeLlama-70b-Python-hf](https://huggingface.co/codellama/CodeLlama-70b-Python-hf) | [codellama/CodeLlama-70b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-70b-Instruct-hf) |
Model capabilities:
- [x] Code completion.
- [ ] Infilling.
- [x] Instructions / chat.
- [ ] Python specialist.
## Model Use
Install `transformers`
```bash
pip install transformers accelerate
```
**Chat use:** The 70B Instruct model uses a [different prompt template](#chat_prompt) than the smaller versions. To use it with `transformers`, we recommend you use the built-in chat template:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "codellama/CodeLlama-70b-Instruct-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
device_map="auto",
)
chat = [
{"role": "system", "content": "You are a helpful and honest code assistant expert in JavaScript. Please, provide all answers to programming questions in JavaScript"},
{"role": "user", "content": "Write a function that computes the set of sums of all contiguous sublists of a given list."},
]
inputs = tokenizer.apply_chat_template(chat, return_tensors="pt").to("cuda")
output = model.generate(input_ids=inputs, max_new_tokens=200)
output = output[0].to("cpu")
print(tokenizer.decode(output))
```
You can also use the model for **text or code completion**. This example uses transformers' `pipeline` interface:
```py
from transformers import AutoTokenizer
import transformers
import torch
model_id = "codellama/CodeLlama-70b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
torch_dtype=torch.float16,
device_map="auto",
)
sequences = pipeline(
'def fibonacci(',
do_sample=True,
temperature=0.2,
top_p=0.9,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
max_length=100,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
<a name="chat_prompt"></a>
## Chat prompt
CodeLlama 70B Instruct uses a different format for the chat prompt than previous Llama 2 or CodeLlama models. As mentioned above, the easiest way to use it is with the tokenizer's chat template. If you need to build the string or tokens manually, here's how to do it.
We'll do our tests with the following made-up dialog:
```py
chat = [
{"role": "system", "content": "System prompt "},
{"role": "user", "content": "First user query"},
{"role": "assistant", "content": "Model response to first query"},
{"role": "user", "content": "Second user query"},
]
```
First, let's see what the prompt looks like if we use the chat template:
```py
tokenizer.apply_chat_template(chat, tokenize=False)
```
```
'<s>Source: system\n\n System prompt <step> Source: user\n\n First user query <step> Source: assistant\n\n Model response to first query <step> Source: user\n\n Second user query <step> Source: assistant\nDestination: user\n\n '
```
So each turn of the conversation has a `Source` (`system`, `user`, or `assistant`), and then the content appears after two newlines and a space. Turns are separated with the special token ` <step> `. After the last turn (which must necessarily come from the `user`), we invite the model to respond by using the special syntax `Source: assistant\nDestination: user\n\n `. Let's see how we can build the same string ourselves:
```py
output = "<s>"
for m in chat:
output += f"Source: {m['role']}\n\n {m['content'].strip()}"
output += " <step> "
output += "Source: assistant\nDestination: user\n\n "
output
```
```
'<s>Source: system\n\n System prompt <step> Source: user\n\n First user query <step> Source: assistant\n\n Model response to first query <step> Source: user\n\n Second user query <step> Source: assistant\nDestination: user\n\n '
```
To verify that we got it right, we'll compare against the [reference code in the original GitHub repo](https://github.com/facebookresearch/codellama/blob/1af62e1f43db1fa5140fa43cb828465a603a48f3/llama/generation.py#L506). We used the same dialog and tokenized it with the `dialog_prompt_tokens` function and got the following tokens:
```py
reference_tokens = [1, 7562, 29901, 1788, 13, 13, 2184, 9508, 32015, 7562, 29901, 1404, 13, 13, 3824, 1404, 2346, 32015, 7562, 29901, 20255, 13, 13, 8125, 2933, 304, 937, 2346, 32015, 7562, 29901, 1404, 13, 13, 6440, 1404, 2346, 32015, 7562, 29901, 20255, 13, 14994, 3381, 29901, 1404, 13, 13, 29871]
```
Let's see what we get with the string we built using our Python loop. Note that we don't add "special tokens" because the string already starts with `<s>`, the beginning of sentence token:
```py
tokens = tokenizer.encode(output, add_special_tokens=False)
assert reference_tokens == tokens
```
Similarly, let's verify that the chat template produces the same token sequence:
```py
assert reference_tokens == tokenizer.apply_chat_template(chat)
```
As a final detail, please note that if the dialog does not start with a `system` turn, the [original code will insert one with an empty content string](https://github.com/facebookresearch/codellama/blob/1af62e1f43db1fa5140fa43cb828465a603a48f3/llama/generation.py#L418).
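If you build prompts manually and want to mirror that behavior, a small guard like this sketch (illustrative only, not part of the reference code) does the job:
```py
# Mirror the reference code: prepend an empty system turn if the dialog
# doesn't start with one, before running the prompt-building loop above.
if not chat or chat[0]["role"] != "system":
    chat = [{"role": "system", "content": ""}] + chat
```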
## Model Details
*Note: Use of this model is governed by the Meta license.*

Meta developed and publicly released the Code Llama family of large language models (LLMs).
**Model Developers** Meta
**Variations** Code Llama comes in four model sizes and three variants:
* Code Llama: base models designed for general code synthesis and understanding
* Code Llama - Python: designed specifically for Python
* Code Llama - Instruct: for instruction following and safer deployment
All variants are available in sizes of 7B, 13B, 34B, and 70B parameters.
**This repository contains the Instruct version of the 70B parameters model.**
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Code Llama is an auto-regressive language model that uses an optimized transformer architecture. It was fine-tuned with up to 16k tokens. This variant **does not** support long context of up to 100k tokens.
**Model Dates** Code Llama and its variants have been trained between January 2023 and January 2024.
**Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)" or its [arXiv page](https://arxiv.org/abs/2308.12950).
## Intended Use
**Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications.
**Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants.
## Hardware and Software
**Training Factors** We used custom training libraries. The training and fine-tuning of the released models were performed on Meta’s Research Super Cluster.
**Carbon Footprint** In aggregate, training all 12 Code Llama models required 1400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 228.55 tCO2eq, 100% of which were offset by Meta’s sustainability program.
## Evaluation Results
See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.
## Ethical Considerations and Limitations
Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide](https://ai.meta.com/llama/responsible-use-guide).
|
digiplay/CleanLinearMix | digiplay | "2023-11-04T16:01:36Z" | 5,091 | 4 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-11-04T15:40:26Z" | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/42433?modelVersionId=47110
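A minimal loading sketch with 🤗 diffusers follows; the pipeline class comes from the tags above, while the prompt, dtype, and device are illustrative assumptions:
```py
# Minimal sketch: load this checkpoint with diffusers' StableDiffusionPipeline
# (per the tags above) and generate one image. Prompt and settings are examples.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/CleanLinearMix", torch_dtype=torch.float16
).to("cuda")

image = pipe("a clean, softly lit portrait illustration").images[0]
image.save("sample.png")
```
|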
aurelio-ai/sr-test-vit | aurelio-ai | "2024-06-01T09:58:09Z" | 5,087 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"vit",
"image-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-01T09:57:16Z" | Tiny ViT model used for [semantic-router](https://github.com/aurelio-labs/semantic-router) tests. |
izhx/udever-bloom-560m | izhx | "2023-11-07T06:57:25Z" | 5,082 | 0 | transformers | [
"transformers",
"pytorch",
"bloom",
"feature-extraction",
"mteb",
"ak",
"ar",
"as",
"bm",
"bn",
"ca",
"code",
"en",
"es",
"eu",
"fon",
"fr",
"gu",
"hi",
"id",
"ig",
"ki",
"kn",
"lg",
"ln",
"ml",
"mr",
"ne",
"nso",
"ny",
"or",
"pa",
"pt",
"rn",
"rw",
"sn",
"st",
"sw",
"ta",
"te",
"tn",
"ts",
"tum",
"tw",
"ur",
"vi",
"wo",
"xh",
"yo",
"zh",
"zhs",
"zht",
"zu",
"arxiv:2310.08232",
"license:bigscience-bloom-rail-1.0",
"model-index",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | feature-extraction | "2023-10-24T10:49:45Z" | ---
license: bigscience-bloom-rail-1.0
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zhs
- zht
- zu
tags:
- mteb
model-index:
- name: udever-bloom-560m
results:
- task:
type: STS
dataset:
type: C-MTEB/AFQMC
name: MTEB AFQMC
config: default
split: validation
revision: None
metrics:
- type: cos_sim_pearson
value: 25.170024237678657
- type: cos_sim_spearman
value: 25.32025098111752
- type: euclidean_pearson
value: 25.34284673812859
- type: euclidean_spearman
value: 25.52812937004611
- type: manhattan_pearson
value: 25.734179522960822
- type: manhattan_spearman
value: 25.92247507041032
- task:
type: STS
dataset:
type: C-MTEB/ATEC
name: MTEB ATEC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 32.3359541791282
- type: cos_sim_spearman
value: 33.45815274836323
- type: euclidean_pearson
value: 35.14748229440635
- type: euclidean_spearman
value: 33.377829932851334
- type: manhattan_pearson
value: 35.359130773295625
- type: manhattan_spearman
value: 33.524469762932426
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 72.35820895522389
- type: ap
value: 35.45566303125099
- type: f1
value: 66.49474786522534
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (de)
config: de
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 66.423982869379
- type: ap
value: 78.32781372746805
- type: f1
value: 64.24959400774807
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en-ext)
config: en-ext
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 73.65817091454274
- type: ap
value: 21.73416645163647
- type: f1
value: 60.52120070712094
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (ja)
config: ja
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 56.86295503211991
- type: ap
value: 12.906256075113513
- type: f1
value: 46.68625513679152
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 83.8095
- type: ap
value: 78.5195717101614
- type: f1
value: 83.74169093676316
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 38.97
- type: f1
value: 38.57853211177342
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (de)
config: de
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 26.846000000000004
- type: f1
value: 26.473886891677306
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (es)
config: es
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 38.974
- type: f1
value: 38.31719230291287
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (fr)
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 38.38799999999999
- type: f1
value: 37.53319978613875
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (ja)
config: ja
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 28.311999999999998
- type: f1
value: 27.988313617729755
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (zh)
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 35.704
- type: f1
value: 34.863182924437254
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.053
- type: map_at_10
value: 35.811
- type: map_at_100
value: 37.035000000000004
- type: map_at_1000
value: 37.055
- type: map_at_3
value: 30.666
- type: map_at_5
value: 33.525
- type: mrr_at_1
value: 21.266
- type: mrr_at_10
value: 35.906
- type: mrr_at_100
value: 37.122
- type: mrr_at_1000
value: 37.141999999999996
- type: mrr_at_3
value: 30.714000000000002
- type: mrr_at_5
value: 33.576
- type: ndcg_at_1
value: 21.053
- type: ndcg_at_10
value: 44.545
- type: ndcg_at_100
value: 49.844
- type: ndcg_at_1000
value: 50.298
- type: ndcg_at_3
value: 33.889
- type: ndcg_at_5
value: 39.059
- type: precision_at_1
value: 21.053
- type: precision_at_10
value: 7.269
- type: precision_at_100
value: 0.96
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 14.414
- type: precision_at_5
value: 11.166
- type: recall_at_1
value: 21.053
- type: recall_at_10
value: 72.688
- type: recall_at_100
value: 96.017
- type: recall_at_1000
value: 99.431
- type: recall_at_3
value: 43.242999999999995
- type: recall_at_5
value: 55.832
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 40.26646269393896
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 32.00218289816601
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 57.381567373603424
- type: mrr
value: 70.09431473420392
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 87.14803223261677
- type: cos_sim_spearman
value: 84.43626128689064
- type: euclidean_pearson
value: 85.03130036472703
- type: euclidean_spearman
value: 84.05974668365359
- type: manhattan_pearson
value: 85.59339889467545
- type: manhattan_spearman
value: 83.86938090025696
- task:
type: STS
dataset:
type: C-MTEB/BQ
name: MTEB BQ
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 44.19468290937555
- type: cos_sim_spearman
value: 43.93025426799595
- type: euclidean_pearson
value: 45.273900549350735
- type: euclidean_spearman
value: 45.07419415738924
- type: manhattan_pearson
value: 45.469211385235376
- type: manhattan_spearman
value: 45.27440191151001
- task:
type: BitextMining
dataset:
type: mteb/bucc-bitext-mining
name: MTEB BUCC (de-en)
config: de-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 11.440501043841337
- type: f1
value: 11.295895880968951
- type: precision
value: 11.237446950317073
- type: recall
value: 11.440501043841337
- task:
type: BitextMining
dataset:
type: mteb/bucc-bitext-mining
name: MTEB BUCC (fr-en)
config: fr-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 96.53312788906008
- type: f1
value: 96.18093770636143
- type: precision
value: 96.00667693888035
- type: recall
value: 96.53312788906008
- task:
type: BitextMining
dataset:
type: mteb/bucc-bitext-mining
name: MTEB BUCC (ru-en)
config: ru-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 1.6972635954277795
- type: f1
value: 1.5885146938143124
- type: precision
value: 1.5581125970067466
- type: recall
value: 1.6972635954277795
- task:
type: BitextMining
dataset:
type: mteb/bucc-bitext-mining
name: MTEB BUCC (zh-en)
config: zh-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 96.31384939441811
- type: f1
value: 96.15587151132175
- type: precision
value: 96.07688256977357
- type: recall
value: 96.31384939441811
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 80.97402597402598
- type: f1
value: 80.88177660652944
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 33.266950159712465
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 28.65092446021672
- task:
type: Clustering
dataset:
type: C-MTEB/CLSClusteringP2P
name: MTEB CLSClusteringP2P
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 35.21075820650184
- task:
type: Clustering
dataset:
type: C-MTEB/CLSClusteringS2S
name: MTEB CLSClusteringS2S
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 35.121931960714484
- task:
type: Reranking
dataset:
type: C-MTEB/CMedQAv1-reranking
name: MTEB CMedQAv1
config: default
split: test
revision: None
metrics:
- type: map
value: 63.41256934884578
- type: mrr
value: 68.6492857142857
- task:
type: Reranking
dataset:
type: C-MTEB/CMedQAv2-reranking
name: MTEB CMedQAv2
config: default
split: test
revision: None
metrics:
- type: map
value: 63.663067375541104
- type: mrr
value: 68.92075396825396
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.997
- type: map_at_10
value: 35.477
- type: map_at_100
value: 36.722
- type: map_at_1000
value: 36.849
- type: map_at_3
value: 32.083
- type: map_at_5
value: 33.884
- type: mrr_at_1
value: 32.046
- type: mrr_at_10
value: 41.455999999999996
- type: mrr_at_100
value: 42.214
- type: mrr_at_1000
value: 42.268
- type: mrr_at_3
value: 38.722
- type: mrr_at_5
value: 40.266999999999996
- type: ndcg_at_1
value: 32.046
- type: ndcg_at_10
value: 41.705999999999996
- type: ndcg_at_100
value: 46.695
- type: ndcg_at_1000
value: 49.128
- type: ndcg_at_3
value: 36.6
- type: ndcg_at_5
value: 38.725
- type: precision_at_1
value: 32.046
- type: precision_at_10
value: 8.197000000000001
- type: precision_at_100
value: 1.323
- type: precision_at_1000
value: 0.183
- type: precision_at_3
value: 18.073
- type: precision_at_5
value: 13.047
- type: recall_at_1
value: 24.997
- type: recall_at_10
value: 54.013
- type: recall_at_100
value: 75.29400000000001
- type: recall_at_1000
value: 91.611
- type: recall_at_3
value: 38.627
- type: recall_at_5
value: 45.019999999999996
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.194
- type: map_at_10
value: 30.076000000000004
- type: map_at_100
value: 31.0
- type: map_at_1000
value: 31.125999999999998
- type: map_at_3
value: 28.137
- type: map_at_5
value: 29.206
- type: mrr_at_1
value: 28.535
- type: mrr_at_10
value: 34.833999999999996
- type: mrr_at_100
value: 35.504999999999995
- type: mrr_at_1000
value: 35.57
- type: mrr_at_3
value: 33.089
- type: mrr_at_5
value: 34.115
- type: ndcg_at_1
value: 28.535
- type: ndcg_at_10
value: 34.285
- type: ndcg_at_100
value: 38.286
- type: ndcg_at_1000
value: 41.007
- type: ndcg_at_3
value: 31.395
- type: ndcg_at_5
value: 32.687
- type: precision_at_1
value: 28.535
- type: precision_at_10
value: 6.166
- type: precision_at_100
value: 1.042
- type: precision_at_1000
value: 0.155
- type: precision_at_3
value: 14.862
- type: precision_at_5
value: 10.331
- type: recall_at_1
value: 23.194
- type: recall_at_10
value: 41.648
- type: recall_at_100
value: 58.999
- type: recall_at_1000
value: 77.46300000000001
- type: recall_at_3
value: 32.931
- type: recall_at_5
value: 36.736999999999995
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.899
- type: map_at_10
value: 42.657000000000004
- type: map_at_100
value: 43.717
- type: map_at_1000
value: 43.79
- type: map_at_3
value: 39.635
- type: map_at_5
value: 41.538000000000004
- type: mrr_at_1
value: 36.864999999999995
- type: mrr_at_10
value: 46.137
- type: mrr_at_100
value: 46.946
- type: mrr_at_1000
value: 46.986
- type: mrr_at_3
value: 43.469
- type: mrr_at_5
value: 45.262
- type: ndcg_at_1
value: 36.864999999999995
- type: ndcg_at_10
value: 48.164
- type: ndcg_at_100
value: 52.769999999999996
- type: ndcg_at_1000
value: 54.393
- type: ndcg_at_3
value: 42.887
- type: ndcg_at_5
value: 45.871
- type: precision_at_1
value: 36.864999999999995
- type: precision_at_10
value: 7.843
- type: precision_at_100
value: 1.102
- type: precision_at_1000
value: 0.13
- type: precision_at_3
value: 19.352
- type: precision_at_5
value: 13.618
- type: recall_at_1
value: 31.899
- type: recall_at_10
value: 61.131
- type: recall_at_100
value: 81.504
- type: recall_at_1000
value: 93.146
- type: recall_at_3
value: 46.971000000000004
- type: recall_at_5
value: 54.42399999999999
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.621000000000002
- type: map_at_10
value: 23.621
- type: map_at_100
value: 24.636
- type: map_at_1000
value: 24.739
- type: map_at_3
value: 21.623
- type: map_at_5
value: 22.511
- type: mrr_at_1
value: 19.096
- type: mrr_at_10
value: 25.288
- type: mrr_at_100
value: 26.238
- type: mrr_at_1000
value: 26.314
- type: mrr_at_3
value: 23.202
- type: mrr_at_5
value: 24.213
- type: ndcg_at_1
value: 19.096
- type: ndcg_at_10
value: 27.529999999999998
- type: ndcg_at_100
value: 32.763
- type: ndcg_at_1000
value: 35.538
- type: ndcg_at_3
value: 23.362
- type: ndcg_at_5
value: 24.961
- type: precision_at_1
value: 19.096
- type: precision_at_10
value: 4.417999999999999
- type: precision_at_100
value: 0.739
- type: precision_at_1000
value: 0.10300000000000001
- type: precision_at_3
value: 9.981
- type: precision_at_5
value: 6.959999999999999
- type: recall_at_1
value: 17.621000000000002
- type: recall_at_10
value: 38.079
- type: recall_at_100
value: 62.499
- type: recall_at_1000
value: 83.783
- type: recall_at_3
value: 26.687
- type: recall_at_5
value: 30.459000000000003
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 11.019
- type: map_at_10
value: 15.869
- type: map_at_100
value: 17.078
- type: map_at_1000
value: 17.205000000000002
- type: map_at_3
value: 13.794
- type: map_at_5
value: 14.814
- type: mrr_at_1
value: 13.930000000000001
- type: mrr_at_10
value: 19.172
- type: mrr_at_100
value: 20.325
- type: mrr_at_1000
value: 20.415
- type: mrr_at_3
value: 17.122999999999998
- type: mrr_at_5
value: 18.124000000000002
- type: ndcg_at_1
value: 13.930000000000001
- type: ndcg_at_10
value: 19.646
- type: ndcg_at_100
value: 25.684
- type: ndcg_at_1000
value: 29.14
- type: ndcg_at_3
value: 15.614
- type: ndcg_at_5
value: 17.247
- type: precision_at_1
value: 13.930000000000001
- type: precision_at_10
value: 3.868
- type: precision_at_100
value: 0.8
- type: precision_at_1000
value: 0.125
- type: precision_at_3
value: 7.420999999999999
- type: precision_at_5
value: 5.672
- type: recall_at_1
value: 11.019
- type: recall_at_10
value: 28.116000000000003
- type: recall_at_100
value: 54.794
- type: recall_at_1000
value: 79.838
- type: recall_at_3
value: 17.124
- type: recall_at_5
value: 21.086
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.791
- type: map_at_10
value: 33.442
- type: map_at_100
value: 34.719
- type: map_at_1000
value: 34.849000000000004
- type: map_at_3
value: 30.885
- type: map_at_5
value: 32.245000000000005
- type: mrr_at_1
value: 30.606
- type: mrr_at_10
value: 38.922000000000004
- type: mrr_at_100
value: 39.822
- type: mrr_at_1000
value: 39.881
- type: mrr_at_3
value: 36.622
- type: mrr_at_5
value: 37.907000000000004
- type: ndcg_at_1
value: 30.606
- type: ndcg_at_10
value: 38.867000000000004
- type: ndcg_at_100
value: 44.364
- type: ndcg_at_1000
value: 47.073
- type: ndcg_at_3
value: 34.63
- type: ndcg_at_5
value: 36.479
- type: precision_at_1
value: 30.606
- type: precision_at_10
value: 7.0360000000000005
- type: precision_at_100
value: 1.174
- type: precision_at_1000
value: 0.16
- type: precision_at_3
value: 16.522000000000002
- type: precision_at_5
value: 11.588
- type: recall_at_1
value: 24.791
- type: recall_at_10
value: 49.736000000000004
- type: recall_at_100
value: 72.67099999999999
- type: recall_at_1000
value: 91.29599999999999
- type: recall_at_3
value: 37.345
- type: recall_at_5
value: 42.400999999999996
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.669999999999998
- type: map_at_10
value: 28.605000000000004
- type: map_at_100
value: 29.769000000000002
- type: map_at_1000
value: 29.881999999999998
- type: map_at_3
value: 25.886
- type: map_at_5
value: 27.317999999999998
- type: mrr_at_1
value: 25.457
- type: mrr_at_10
value: 33.423
- type: mrr_at_100
value: 34.269
- type: mrr_at_1000
value: 34.336
- type: mrr_at_3
value: 30.974
- type: mrr_at_5
value: 32.23
- type: ndcg_at_1
value: 25.457
- type: ndcg_at_10
value: 33.785
- type: ndcg_at_100
value: 39.145
- type: ndcg_at_1000
value: 41.772
- type: ndcg_at_3
value: 29.014
- type: ndcg_at_5
value: 31.019999999999996
- type: precision_at_1
value: 25.457
- type: precision_at_10
value: 6.2330000000000005
- type: precision_at_100
value: 1.045
- type: precision_at_1000
value: 0.145
- type: precision_at_3
value: 13.813
- type: precision_at_5
value: 9.863
- type: recall_at_1
value: 20.669999999999998
- type: recall_at_10
value: 44.651
- type: recall_at_100
value: 68.037
- type: recall_at_1000
value: 86.282
- type: recall_at_3
value: 31.381999999999998
- type: recall_at_5
value: 36.778
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.796583333333338
- type: map_at_10
value: 26.900166666666664
- type: map_at_100
value: 27.956583333333334
- type: map_at_1000
value: 28.08083333333333
- type: map_at_3
value: 24.598416666666665
- type: map_at_5
value: 25.81791666666667
- type: mrr_at_1
value: 23.68591666666667
- type: mrr_at_10
value: 30.65558333333333
- type: mrr_at_100
value: 31.503583333333335
- type: mrr_at_1000
value: 31.576083333333333
- type: mrr_at_3
value: 28.50525
- type: mrr_at_5
value: 29.690666666666665
- type: ndcg_at_1
value: 23.68591666666667
- type: ndcg_at_10
value: 31.425000000000004
- type: ndcg_at_100
value: 36.34316666666666
- type: ndcg_at_1000
value: 39.164249999999996
- type: ndcg_at_3
value: 27.330083333333338
- type: ndcg_at_5
value: 29.14408333333333
- type: precision_at_1
value: 23.68591666666667
- type: precision_at_10
value: 5.5862500000000015
- type: precision_at_100
value: 0.9571666666666666
- type: precision_at_1000
value: 0.13866666666666666
- type: precision_at_3
value: 12.663499999999999
- type: precision_at_5
value: 9.035333333333332
- type: recall_at_1
value: 19.796583333333338
- type: recall_at_10
value: 41.289416666666675
- type: recall_at_100
value: 63.251250000000006
- type: recall_at_1000
value: 83.4515
- type: recall_at_3
value: 29.727916666666665
- type: recall_at_5
value: 34.45824999999999
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 16.121
- type: map_at_10
value: 22.104
- type: map_at_100
value: 23.003
- type: map_at_1000
value: 23.108
- type: map_at_3
value: 20.233
- type: map_at_5
value: 21.186
- type: mrr_at_1
value: 18.865000000000002
- type: mrr_at_10
value: 24.951
- type: mrr_at_100
value: 25.779000000000003
- type: mrr_at_1000
value: 25.863999999999997
- type: mrr_at_3
value: 23.083000000000002
- type: mrr_at_5
value: 24.049
- type: ndcg_at_1
value: 18.865000000000002
- type: ndcg_at_10
value: 26.031
- type: ndcg_at_100
value: 30.589
- type: ndcg_at_1000
value: 33.565
- type: ndcg_at_3
value: 22.369
- type: ndcg_at_5
value: 23.932000000000002
- type: precision_at_1
value: 18.865000000000002
- type: precision_at_10
value: 4.324999999999999
- type: precision_at_100
value: 0.722
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 10.072000000000001
- type: precision_at_5
value: 7.086
- type: recall_at_1
value: 16.121
- type: recall_at_10
value: 35.577
- type: recall_at_100
value: 56.298
- type: recall_at_1000
value: 79.089
- type: recall_at_3
value: 25.239
- type: recall_at_5
value: 29.242
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.968
- type: map_at_10
value: 15.639
- type: map_at_100
value: 16.459
- type: map_at_1000
value: 16.584
- type: map_at_3
value: 14.127
- type: map_at_5
value: 14.911
- type: mrr_at_1
value: 13.73
- type: mrr_at_10
value: 18.822
- type: mrr_at_100
value: 19.592000000000002
- type: mrr_at_1000
value: 19.683999999999997
- type: mrr_at_3
value: 17.223
- type: mrr_at_5
value: 18.082
- type: ndcg_at_1
value: 13.73
- type: ndcg_at_10
value: 18.881999999999998
- type: ndcg_at_100
value: 23.182
- type: ndcg_at_1000
value: 26.479000000000003
- type: ndcg_at_3
value: 16.067999999999998
- type: ndcg_at_5
value: 17.265
- type: precision_at_1
value: 13.73
- type: precision_at_10
value: 3.544
- type: precision_at_100
value: 0.679
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 7.674
- type: precision_at_5
value: 5.561
- type: recall_at_1
value: 10.968
- type: recall_at_10
value: 25.596000000000004
- type: recall_at_100
value: 45.411
- type: recall_at_1000
value: 69.555
- type: recall_at_3
value: 17.582
- type: recall_at_5
value: 20.785
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.886
- type: map_at_10
value: 27.029999999999998
- type: map_at_100
value: 27.968
- type: map_at_1000
value: 28.108
- type: map_at_3
value: 25.001
- type: map_at_5
value: 26.185000000000002
- type: mrr_at_1
value: 24.067
- type: mrr_at_10
value: 30.756
- type: mrr_at_100
value: 31.593
- type: mrr_at_1000
value: 31.685999999999996
- type: mrr_at_3
value: 28.793999999999997
- type: mrr_at_5
value: 29.997
- type: ndcg_at_1
value: 24.067
- type: ndcg_at_10
value: 31.095
- type: ndcg_at_100
value: 35.893
- type: ndcg_at_1000
value: 39.158
- type: ndcg_at_3
value: 27.321
- type: ndcg_at_5
value: 29.247
- type: precision_at_1
value: 24.067
- type: precision_at_10
value: 5.103
- type: precision_at_100
value: 0.8460000000000001
- type: precision_at_1000
value: 0.125
- type: precision_at_3
value: 12.065
- type: precision_at_5
value: 8.601
- type: recall_at_1
value: 20.886
- type: recall_at_10
value: 39.797
- type: recall_at_100
value: 61.399
- type: recall_at_1000
value: 84.555
- type: recall_at_3
value: 29.721999999999998
- type: recall_at_5
value: 34.455999999999996
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.394
- type: map_at_10
value: 28.303
- type: map_at_100
value: 29.726000000000003
- type: map_at_1000
value: 29.955
- type: map_at_3
value: 25.705
- type: map_at_5
value: 26.989
- type: mrr_at_1
value: 25.691999999999997
- type: mrr_at_10
value: 32.495000000000005
- type: mrr_at_100
value: 33.461999999999996
- type: mrr_at_1000
value: 33.534000000000006
- type: mrr_at_3
value: 30.137999999999998
- type: mrr_at_5
value: 31.383
- type: ndcg_at_1
value: 25.691999999999997
- type: ndcg_at_10
value: 33.300000000000004
- type: ndcg_at_100
value: 39.062000000000005
- type: ndcg_at_1000
value: 42.176
- type: ndcg_at_3
value: 28.859
- type: ndcg_at_5
value: 30.805
- type: precision_at_1
value: 25.691999999999997
- type: precision_at_10
value: 6.383
- type: precision_at_100
value: 1.387
- type: precision_at_1000
value: 0.22899999999999998
- type: precision_at_3
value: 13.439
- type: precision_at_5
value: 9.959999999999999
- type: recall_at_1
value: 21.394
- type: recall_at_10
value: 42.853
- type: recall_at_100
value: 69.284
- type: recall_at_1000
value: 89.646
- type: recall_at_3
value: 29.786
- type: recall_at_5
value: 34.797
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 13.999
- type: map_at_10
value: 19.979
- type: map_at_100
value: 20.682000000000002
- type: map_at_1000
value: 20.775
- type: map_at_3
value: 18.072
- type: map_at_5
value: 19.028
- type: mrr_at_1
value: 15.342
- type: mrr_at_10
value: 21.611
- type: mrr_at_100
value: 22.298000000000002
- type: mrr_at_1000
value: 22.375
- type: mrr_at_3
value: 19.624
- type: mrr_at_5
value: 20.659
- type: ndcg_at_1
value: 15.342
- type: ndcg_at_10
value: 23.809
- type: ndcg_at_100
value: 27.685
- type: ndcg_at_1000
value: 30.542
- type: ndcg_at_3
value: 19.842000000000002
- type: ndcg_at_5
value: 21.490000000000002
- type: precision_at_1
value: 15.342
- type: precision_at_10
value: 3.9190000000000005
- type: precision_at_100
value: 0.627
- type: precision_at_1000
value: 0.093
- type: precision_at_3
value: 8.688
- type: precision_at_5
value: 6.1370000000000005
- type: recall_at_1
value: 13.999
- type: recall_at_10
value: 34.276
- type: recall_at_100
value: 52.825
- type: recall_at_1000
value: 75.154
- type: recall_at_3
value: 23.339
- type: recall_at_5
value: 27.314
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.27
- type: map_at_10
value: 14.161999999999999
- type: map_at_100
value: 15.775
- type: map_at_1000
value: 15.947
- type: map_at_3
value: 11.701
- type: map_at_5
value: 12.952
- type: mrr_at_1
value: 18.632
- type: mrr_at_10
value: 28.871000000000002
- type: mrr_at_100
value: 29.985
- type: mrr_at_1000
value: 30.037999999999997
- type: mrr_at_3
value: 25.451
- type: mrr_at_5
value: 27.366
- type: ndcg_at_1
value: 18.632
- type: ndcg_at_10
value: 21.017
- type: ndcg_at_100
value: 28.022999999999996
- type: ndcg_at_1000
value: 31.518
- type: ndcg_at_3
value: 16.611
- type: ndcg_at_5
value: 18.149
- type: precision_at_1
value: 18.632
- type: precision_at_10
value: 6.736000000000001
- type: precision_at_100
value: 1.414
- type: precision_at_1000
value: 0.20600000000000002
- type: precision_at_3
value: 12.313
- type: precision_at_5
value: 9.759
- type: recall_at_1
value: 8.27
- type: recall_at_10
value: 26.218999999999998
- type: recall_at_100
value: 50.77
- type: recall_at_1000
value: 70.8
- type: recall_at_3
value: 15.526000000000002
- type: recall_at_5
value: 19.724
- task:
type: Retrieval
dataset:
type: C-MTEB/CmedqaRetrieval
name: MTEB CmedqaRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 10.598
- type: map_at_10
value: 15.869
- type: map_at_100
value: 17.081
- type: map_at_1000
value: 17.267
- type: map_at_3
value: 13.877
- type: map_at_5
value: 14.884
- type: mrr_at_1
value: 17.279
- type: mrr_at_10
value: 22.554
- type: mrr_at_100
value: 23.521
- type: mrr_at_1000
value: 23.619
- type: mrr_at_3
value: 20.647
- type: mrr_at_5
value: 21.625
- type: ndcg_at_1
value: 17.279
- type: ndcg_at_10
value: 20.029
- type: ndcg_at_100
value: 25.968000000000004
- type: ndcg_at_1000
value: 30.158
- type: ndcg_at_3
value: 16.947000000000003
- type: ndcg_at_5
value: 18.069
- type: precision_at_1
value: 17.279
- type: precision_at_10
value: 4.704
- type: precision_at_100
value: 0.9690000000000001
- type: precision_at_1000
value: 0.152
- type: precision_at_3
value: 9.777
- type: precision_at_5
value: 7.207
- type: recall_at_1
value: 10.598
- type: recall_at_10
value: 26.034000000000002
- type: recall_at_100
value: 51.385999999999996
- type: recall_at_1000
value: 80.49
- type: recall_at_3
value: 16.834
- type: recall_at_5
value: 20.317
- task:
type: PairClassification
dataset:
type: C-MTEB/CMNLI
name: MTEB Cmnli
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 70.40288634996993
- type: cos_sim_ap
value: 78.43387766087626
- type: cos_sim_f1
value: 73.09982840415867
- type: cos_sim_precision
value: 64.31616341030195
- type: cos_sim_recall
value: 84.66214636427402
- type: dot_accuracy
value: 65.52014431749849
- type: dot_ap
value: 70.89507344960353
- type: dot_f1
value: 70.7030509759333
- type: dot_precision
value: 59.43922255854708
- type: dot_recall
value: 87.2340425531915
- type: euclidean_accuracy
value: 69.84966927239927
- type: euclidean_ap
value: 78.08825177727368
- type: euclidean_f1
value: 72.68394399761692
- type: euclidean_precision
value: 63.16879530548844
- type: euclidean_recall
value: 85.57400046761748
- type: manhattan_accuracy
value: 69.9579073962718
- type: manhattan_ap
value: 78.38355697667261
- type: manhattan_f1
value: 73.06507508663844
- type: manhattan_precision
value: 62.10112911143839
- type: manhattan_recall
value: 88.73041851765257
- type: max_accuracy
value: 70.40288634996993
- type: max_ap
value: 78.43387766087626
- type: max_f1
value: 73.09982840415867
- task:
type: Retrieval
dataset:
type: C-MTEB/CovidRetrieval
name: MTEB CovidRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 23.973
- type: map_at_10
value: 30.074
- type: map_at_100
value: 31.05
- type: map_at_1000
value: 31.147000000000002
- type: map_at_3
value: 27.977
- type: map_at_5
value: 29.247
- type: mrr_at_1
value: 24.025
- type: mrr_at_10
value: 30.093999999999998
- type: mrr_at_100
value: 31.068
- type: mrr_at_1000
value: 31.165
- type: mrr_at_3
value: 27.994000000000003
- type: mrr_at_5
value: 29.243000000000002
- type: ndcg_at_1
value: 24.025
- type: ndcg_at_10
value: 33.566
- type: ndcg_at_100
value: 38.818999999999996
- type: ndcg_at_1000
value: 41.477000000000004
- type: ndcg_at_3
value: 29.293000000000003
- type: ndcg_at_5
value: 31.564999999999998
- type: precision_at_1
value: 24.025
- type: precision_at_10
value: 4.489
- type: precision_at_100
value: 0.709
- type: precision_at_1000
value: 0.092
- type: precision_at_3
value: 11.064
- type: precision_at_5
value: 7.734000000000001
- type: recall_at_1
value: 23.973
- type: recall_at_10
value: 44.731
- type: recall_at_100
value: 70.52199999999999
- type: recall_at_1000
value: 91.491
- type: recall_at_3
value: 33.087
- type: recall_at_5
value: 38.567
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.950000000000001
- type: map_at_10
value: 13.236999999999998
- type: map_at_100
value: 16.137
- type: map_at_1000
value: 16.785
- type: map_at_3
value: 10.378
- type: map_at_5
value: 11.62
- type: mrr_at_1
value: 54.0
- type: mrr_at_10
value: 61.861
- type: mrr_at_100
value: 62.436
- type: mrr_at_1000
value: 62.456
- type: mrr_at_3
value: 60.458
- type: mrr_at_5
value: 61.208
- type: ndcg_at_1
value: 43.75
- type: ndcg_at_10
value: 28.224
- type: ndcg_at_100
value: 29.244999999999997
- type: ndcg_at_1000
value: 34.410000000000004
- type: ndcg_at_3
value: 33.955
- type: ndcg_at_5
value: 30.597
- type: precision_at_1
value: 54.0
- type: precision_at_10
value: 20.825
- type: precision_at_100
value: 5.462
- type: precision_at_1000
value: 1.1320000000000001
- type: precision_at_3
value: 37.0
- type: precision_at_5
value: 28.849999999999998
- type: recall_at_1
value: 6.950000000000001
- type: recall_at_10
value: 17.159
- type: recall_at_100
value: 31.657999999999998
- type: recall_at_1000
value: 49.155
- type: recall_at_3
value: 11.393
- type: recall_at_5
value: 13.568
- task:
type: Retrieval
dataset:
type: C-MTEB/DuRetrieval
name: MTEB DuRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 16.333000000000002
- type: map_at_10
value: 44.080999999999996
- type: map_at_100
value: 47.958
- type: map_at_1000
value: 48.183
- type: map_at_3
value: 31.468
- type: map_at_5
value: 38.213
- type: mrr_at_1
value: 63.0
- type: mrr_at_10
value: 72.006
- type: mrr_at_100
value: 72.299
- type: mrr_at_1000
value: 72.313
- type: mrr_at_3
value: 70.375
- type: mrr_at_5
value: 71.33
- type: ndcg_at_1
value: 63.0
- type: ndcg_at_10
value: 56.044000000000004
- type: ndcg_at_100
value: 63.629999999999995
- type: ndcg_at_1000
value: 66.156
- type: ndcg_at_3
value: 55.85
- type: ndcg_at_5
value: 53.559
- type: precision_at_1
value: 63.0
- type: precision_at_10
value: 27.279999999999998
- type: precision_at_100
value: 4.005
- type: precision_at_1000
value: 0.462
- type: precision_at_3
value: 49.633
- type: precision_at_5
value: 40.6
- type: recall_at_1
value: 16.333000000000002
- type: recall_at_10
value: 57.152
- type: recall_at_100
value: 80.231
- type: recall_at_1000
value: 92.95400000000001
- type: recall_at_3
value: 34.793
- type: recall_at_5
value: 44.989000000000004
- task:
type: Retrieval
dataset:
type: C-MTEB/EcomRetrieval
name: MTEB EcomRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 33.7
- type: map_at_10
value: 42.327999999999996
- type: map_at_100
value: 43.230000000000004
- type: map_at_1000
value: 43.274
- type: map_at_3
value: 39.883
- type: map_at_5
value: 41.178
- type: mrr_at_1
value: 33.7
- type: mrr_at_10
value: 42.327999999999996
- type: mrr_at_100
value: 43.230000000000004
- type: mrr_at_1000
value: 43.274
- type: mrr_at_3
value: 39.883
- type: mrr_at_5
value: 41.178
- type: ndcg_at_1
value: 33.7
- type: ndcg_at_10
value: 46.996
- type: ndcg_at_100
value: 51.629000000000005
- type: ndcg_at_1000
value: 52.823
- type: ndcg_at_3
value: 41.891
- type: ndcg_at_5
value: 44.232
- type: precision_at_1
value: 33.7
- type: precision_at_10
value: 6.1899999999999995
- type: precision_at_100
value: 0.8410000000000001
- type: precision_at_1000
value: 0.094
- type: precision_at_3
value: 15.9
- type: precision_at_5
value: 10.68
- type: recall_at_1
value: 33.7
- type: recall_at_10
value: 61.9
- type: recall_at_100
value: 84.1
- type: recall_at_1000
value: 93.60000000000001
- type: recall_at_3
value: 47.699999999999996
- type: recall_at_5
value: 53.400000000000006
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 44.76500000000001
- type: f1
value: 40.46330006682868
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 45.078
- type: map_at_10
value: 55.443
- type: map_at_100
value: 56.03900000000001
- type: map_at_1000
value: 56.067
- type: map_at_3
value: 53.174
- type: map_at_5
value: 54.510999999999996
- type: mrr_at_1
value: 48.575
- type: mrr_at_10
value: 59.194
- type: mrr_at_100
value: 59.760999999999996
- type: mrr_at_1000
value: 59.784000000000006
- type: mrr_at_3
value: 56.896
- type: mrr_at_5
value: 58.282000000000004
- type: ndcg_at_1
value: 48.575
- type: ndcg_at_10
value: 61.096
- type: ndcg_at_100
value: 63.94800000000001
- type: ndcg_at_1000
value: 64.68199999999999
- type: ndcg_at_3
value: 56.58
- type: ndcg_at_5
value: 58.928000000000004
- type: precision_at_1
value: 48.575
- type: precision_at_10
value: 8.18
- type: precision_at_100
value: 0.968
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 22.662
- type: precision_at_5
value: 14.881
- type: recall_at_1
value: 45.078
- type: recall_at_10
value: 75.057
- type: recall_at_100
value: 88.05199999999999
- type: recall_at_1000
value: 93.58999999999999
- type: recall_at_3
value: 62.77700000000001
- type: recall_at_5
value: 68.50699999999999
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 11.097999999999999
- type: map_at_10
value: 18.288
- type: map_at_100
value: 19.903000000000002
- type: map_at_1000
value: 20.108
- type: map_at_3
value: 15.576
- type: map_at_5
value: 16.997999999999998
- type: mrr_at_1
value: 23.302
- type: mrr_at_10
value: 30.978
- type: mrr_at_100
value: 32.072
- type: mrr_at_1000
value: 32.15
- type: mrr_at_3
value: 28.549000000000003
- type: mrr_at_5
value: 29.931
- type: ndcg_at_1
value: 23.302
- type: ndcg_at_10
value: 24.488
- type: ndcg_at_100
value: 31.052999999999997
- type: ndcg_at_1000
value: 35.124
- type: ndcg_at_3
value: 21.215999999999998
- type: ndcg_at_5
value: 22.314999999999998
- type: precision_at_1
value: 23.302
- type: precision_at_10
value: 7.13
- type: precision_at_100
value: 1.3559999999999999
- type: precision_at_1000
value: 0.20600000000000002
- type: precision_at_3
value: 14.198
- type: precision_at_5
value: 10.895000000000001
- type: recall_at_1
value: 11.097999999999999
- type: recall_at_10
value: 30.352
- type: recall_at_100
value: 54.937999999999995
- type: recall_at_1000
value: 79.586
- type: recall_at_3
value: 19.486
- type: recall_at_5
value: 23.860999999999997
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.325
- type: map_at_10
value: 37.305
- type: map_at_100
value: 38.0
- type: map_at_1000
value: 38.065
- type: map_at_3
value: 35.219
- type: map_at_5
value: 36.466
- type: mrr_at_1
value: 56.650999999999996
- type: mrr_at_10
value: 63.574
- type: mrr_at_100
value: 63.966
- type: mrr_at_1000
value: 63.992000000000004
- type: mrr_at_3
value: 62.107
- type: mrr_at_5
value: 62.976
- type: ndcg_at_1
value: 56.650999999999996
- type: ndcg_at_10
value: 46.046
- type: ndcg_at_100
value: 48.916
- type: ndcg_at_1000
value: 50.410999999999994
- type: ndcg_at_3
value: 42.516999999999996
- type: ndcg_at_5
value: 44.374
- type: precision_at_1
value: 56.650999999999996
- type: precision_at_10
value: 9.392
- type: precision_at_100
value: 1.166
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 26.068
- type: precision_at_5
value: 17.11
- type: recall_at_1
value: 28.325
- type: recall_at_10
value: 46.961999999999996
- type: recall_at_100
value: 58.318999999999996
- type: recall_at_1000
value: 68.298
- type: recall_at_3
value: 39.102
- type: recall_at_5
value: 42.775
- task:
type: Classification
dataset:
type: C-MTEB/IFlyTek-classification
name: MTEB IFlyTek
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 40.461716044632546
- type: f1
value: 33.890745966734315
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 72.21000000000001
- type: ap
value: 66.59963731769069
- type: f1
value: 71.97616824840041
- task:
type: Classification
dataset:
type: C-MTEB/JDReview-classification
name: MTEB JDReview
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 78.25515947467167
- type: ap
value: 38.265118237185064
- type: f1
value: 70.73962826410575
- task:
type: STS
dataset:
type: C-MTEB/LCQMC
name: MTEB LCQMC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 63.98362797180168
- type: cos_sim_spearman
value: 71.97575564053473
- type: euclidean_pearson
value: 70.56052438394708
- type: euclidean_spearman
value: 72.48267176371337
- type: manhattan_pearson
value: 70.7156268448442
- type: manhattan_spearman
value: 72.61065396802094
- task:
type: Retrieval
dataset:
type: C-MTEB/MMarcoRetrieval
name: MTEB MMarcoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 55.775
- type: map_at_10
value: 65.074
- type: map_at_100
value: 65.596
- type: map_at_1000
value: 65.618
- type: map_at_3
value: 62.92
- type: map_at_5
value: 64.277
- type: mrr_at_1
value: 57.708000000000006
- type: mrr_at_10
value: 65.824
- type: mrr_at_100
value: 66.286
- type: mrr_at_1000
value: 66.306
- type: mrr_at_3
value: 63.871
- type: mrr_at_5
value: 65.093
- type: ndcg_at_1
value: 57.708000000000006
- type: ndcg_at_10
value: 69.309
- type: ndcg_at_100
value: 71.723
- type: ndcg_at_1000
value: 72.313
- type: ndcg_at_3
value: 65.134
- type: ndcg_at_5
value: 67.476
- type: precision_at_1
value: 57.708000000000006
- type: precision_at_10
value: 8.668
- type: precision_at_100
value: 0.989
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 24.837999999999997
- type: precision_at_5
value: 16.128999999999998
- type: recall_at_1
value: 55.775
- type: recall_at_10
value: 81.702
- type: recall_at_100
value: 92.785
- type: recall_at_1000
value: 97.425
- type: recall_at_3
value: 70.587
- type: recall_at_5
value: 76.199
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 17.771
- type: map_at_10
value: 28.16
- type: map_at_100
value: 29.363
- type: map_at_1000
value: 29.431
- type: map_at_3
value: 24.767
- type: map_at_5
value: 26.706999999999997
- type: mrr_at_1
value: 18.252
- type: mrr_at_10
value: 28.666000000000004
- type: mrr_at_100
value: 29.837000000000003
- type: mrr_at_1000
value: 29.898999999999997
- type: mrr_at_3
value: 25.308000000000003
- type: mrr_at_5
value: 27.226
- type: ndcg_at_1
value: 18.252
- type: ndcg_at_10
value: 34.176
- type: ndcg_at_100
value: 40.138
- type: ndcg_at_1000
value: 41.923
- type: ndcg_at_3
value: 27.214
- type: ndcg_at_5
value: 30.695
- type: precision_at_1
value: 18.252
- type: precision_at_10
value: 5.503
- type: precision_at_100
value: 0.8500000000000001
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 11.667
- type: precision_at_5
value: 8.754000000000001
- type: recall_at_1
value: 17.771
- type: recall_at_10
value: 52.781
- type: recall_at_100
value: 80.638
- type: recall_at_1000
value: 94.46
- type: recall_at_3
value: 33.767
- type: recall_at_5
value: 42.172
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 89.93388052895577
- type: f1
value: 89.55553145791954
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (de)
config: de
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 68.42490842490842
- type: f1
value: 67.01398674117826
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (es)
config: es
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 88.2121414276184
- type: f1
value: 87.61981627763988
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (fr)
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 85.49013466958974
- type: f1
value: 85.09758510104221
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (hi)
config: hi
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 84.22732162065257
- type: f1
value: 83.24580378090367
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (th)
config: th
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 53.171790235081374
- type: f1
value: 51.93028909966765
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 66.5640674874601
- type: f1
value: 49.856876973153966
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (de)
config: de
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 49.171597633136095
- type: f1
value: 32.166022205347545
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (es)
config: es
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 65.71714476317545
- type: f1
value: 45.748971341625136
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (fr)
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 62.65267773253993
- type: f1
value: 45.904472624086026
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (hi)
config: hi
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 61.8752240946576
- type: f1
value: 40.7359613185448
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (th)
config: th
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 41.67088607594936
- type: f1
value: 28.12210726419673
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (af)
config: af
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 43.29186281102892
- type: f1
value: 41.83461350696014
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (am)
config: am
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 23.214525891055814
- type: f1
value: 22.364131190189962
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ar)
config: ar
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 53.38264963012777
- type: f1
value: 50.74546702709091
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (az)
config: az
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 39.55951580363147
- type: f1
value: 39.07769075741216
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (bn)
config: bn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 56.73839946200403
- type: f1
value: 54.36728741542025
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (cy)
config: cy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 39.99663752521857
- type: f1
value: 38.709817953652596
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (da)
config: da
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 46.933422999327504
- type: f1
value: 45.32022679895763
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (de)
config: de
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 45.820443846671154
- type: f1
value: 42.853155158197886
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (el)
config: el
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 37.874915938130464
- type: f1
value: 35.9849010888881
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.08944182918628
- type: f1
value: 64.5039080809391
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (es)
config: es
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.17350369872226
- type: f1
value: 60.0792530132073
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (fa)
config: fa
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 45.652320107599195
- type: f1
value: 44.28182554287625
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (fi)
config: fi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 40.282447881640884
- type: f1
value: 38.79927524886836
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (fr)
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.60591795561533
- type: f1
value: 61.01451309609411
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (he)
config: he
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 32.225958305312716
- type: f1
value: 30.903299940417906
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (hi)
config: hi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.46200403496974
- type: f1
value: 57.34556231956785
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (hu)
config: hu
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 40.907868190988566
- type: f1
value: 39.74702259997524
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (hy)
config: hy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 29.939475453934094
- type: f1
value: 28.462353413371353
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (id)
config: id
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.14256893073302
- type: f1
value: 57.24600767871435
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (is)
config: is
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 39.620040349697376
- type: f1
value: 38.414866180464735
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (it)
config: it
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 51.772024209818426
- type: f1
value: 51.05050942366993
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ja)
config: ja
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 53.749159381304636
- type: f1
value: 52.04563008527909
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (jv)
config: jv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 46.29455279085406
- type: f1
value: 43.84047527739209
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ka)
config: ka
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 25.107599193006045
- type: f1
value: 24.58731463875415
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (km)
config: km
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 27.21923335574984
- type: f1
value: 25.964338481976796
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (kn)
config: kn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 47.96906523201077
- type: f1
value: 45.32239408435578
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ko)
config: ko
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 40.53799596503026
- type: f1
value: 39.15655510771227
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (lv)
config: lv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 43.140551445864155
- type: f1
value: 42.12232733095163
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ml)
config: ml
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 53.69199731002017
- type: f1
value: 50.67085509122796
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (mn)
config: mn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 33.37256220578346
- type: f1
value: 33.39335560955231
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ms)
config: ms
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 51.94014794889038
- type: f1
value: 50.6207021226521
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (my)
config: my
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 25.322797579018157
- type: f1
value: 23.94164121951907
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (nb)
config: nb
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 44.11903160726294
- type: f1
value: 43.016752983579536
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (nl)
config: nl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 44.03496973772697
- type: f1
value: 42.322828283176754
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (pl)
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 41.63080026899798
- type: f1
value: 39.58824644978166
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (pt)
config: pt
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.7350369872226
- type: f1
value: 59.956752206079386
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ro)
config: ro
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 45.72629455279086
- type: f1
value: 44.731249269647826
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ru)
config: ru
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 47.61264290517822
- type: f1
value: 45.5280995218491
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (sl)
config: sl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 42.82784129119032
- type: f1
value: 41.37165985220223
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (sq)
config: sq
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 43.61466039004707
- type: f1
value: 43.164498227815535
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (sv)
config: sv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 44.64021519838602
- type: f1
value: 43.04775030948548
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (sw)
config: sw
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 45.54808338937458
- type: f1
value: 44.011677633779975
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ta)
config: ta
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 51.2441156691325
- type: f1
value: 48.73592932403811
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (te)
config: te
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 47.43443174176195
- type: f1
value: 45.08686598891457
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (th)
config: th
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 36.87962340282448
- type: f1
value: 36.50540864756967
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (tl)
config: tl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 45.9280430396772
- type: f1
value: 44.57216865343283
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (tr)
config: tr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 38.591123066577
- type: f1
value: 37.886312373767446
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ur)
config: ur
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 51.85272360457296
- type: f1
value: 49.43461566216979
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (vi)
config: vi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.72225958305313
- type: f1
value: 56.95500715299434
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (zh-CN)
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.74915938130464
- type: f1
value: 62.35543158488615
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (zh-TW)
config: zh-TW
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.95292535305985
- type: f1
value: 59.73499569346673
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (af)
config: af
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 47.42098184263618
- type: f1
value: 45.22541854557743
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (am)
config: am
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 24.707464694014796
- type: f1
value: 24.033506081882468
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ar)
config: ar
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.09145931405515
- type: f1
value: 62.22048940230962
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (az)
config: az
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 39.25016812373907
- type: f1
value: 38.35431952425269
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (bn)
config: bn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.37256220578345
- type: f1
value: 63.12728180326932
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (cy)
config: cy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 39.172831203765966
- type: f1
value: 37.078841372640234
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (da)
config: da
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 49.11230665770006
- type: f1
value: 46.489580286547245
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (de)
config: de
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 50.7128446536651
- type: f1
value: 48.27782602378952
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (el)
config: el
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 39.46536650975118
- type: f1
value: 37.4365280056047
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.26160053799597
- type: f1
value: 73.4478249967817
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (es)
config: es
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.31203765971756
- type: f1
value: 68.70554437788068
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (fa)
config: fa
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 45.652320107599195
- type: f1
value: 44.55357745265521
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (fi)
config: fi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 38.94754539340955
- type: f1
value: 36.48927336173062
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (fr)
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.69872225958305
- type: f1
value: 68.81347966311543
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (he)
config: he
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 32.131809011432416
- type: f1
value: 30.212230946937474
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (hi)
config: hi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 65.57498318762609
- type: f1
value: 65.16084751135229
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (hu)
config: hu
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 42.965702757229316
- type: f1
value: 40.575896627739105
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (hy)
config: hy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 32.125084061869536
- type: f1
value: 30.708056882129476
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (id)
config: id
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 65.10759919300607
- type: f1
value: 64.5007800119315
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (is)
config: is
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 40.83725622057834
- type: f1
value: 37.855774705520886
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (it)
config: it
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 54.55279085406859
- type: f1
value: 52.73318944173822
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ja)
config: ja
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 57.14525891055817
- type: f1
value: 55.96714177558203
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (jv)
config: jv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 49.30060524546065
- type: f1
value: 47.82999154670342
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ka)
config: ka
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 25.85743106926698
- type: f1
value: 24.974946990729716
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (km)
config: km
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 31.180228648285137
- type: f1
value: 28.22387838219335
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (kn)
config: kn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 53.00941492938802
- type: f1
value: 52.39610045092559
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ko)
config: ko
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 40.24546065904505
- type: f1
value: 38.99779773215032
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (lv)
config: lv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 41.88298587760592
- type: f1
value: 39.53867071594289
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ml)
config: ml
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.078681909885674
- type: f1
value: 58.47368723772022
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (mn)
config: mn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 33.33893745796907
- type: f1
value: 32.113466354321226
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ms)
config: ms
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 57.454606590450574
- type: f1
value: 56.13075383338251
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (my)
config: my
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 27.19569603227976
- type: f1
value: 26.300773160344015
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (nb)
config: nb
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 46.78547410894418
- type: f1
value: 44.233771335183015
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (nl)
config: nl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 48.4196368527236
- type: f1
value: 45.55838648206857
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (pl)
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 41.63080026899798
- type: f1
value: 40.77775839499525
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (pt)
config: pt
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.408876933423
- type: f1
value: 66.7358693871042
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ro)
config: ro
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 50.077336919973106
- type: f1
value: 48.572749739090014
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ru)
config: ru
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 49.942837928715534
- type: f1
value: 49.34771836662566
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (sl)
config: sl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 43.43308675184936
- type: f1
value: 41.818008297000986
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (sq)
config: sq
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 44.082044384667114
- type: f1
value: 43.25002746432129
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (sv)
config: sv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 46.45258910558171
- type: f1
value: 44.00958237591922
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (sw)
config: sw
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 49.53261600537996
- type: f1
value: 48.01969699634672
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ta)
config: ta
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 56.792199058507066
- type: f1
value: 56.54421925671813
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (te)
config: te
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 54.0114324142569
- type: f1
value: 52.29830350891558
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (th)
config: th
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 38.584398117014125
- type: f1
value: 36.551426239639575
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (tl)
config: tl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 48.07330195023538
- type: f1
value: 46.463553675519975
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (tr)
config: tr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 40.645595158036315
- type: f1
value: 40.21280676607986
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ur)
config: ur
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 57.74714189643577
- type: f1
value: 56.8673027258351
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (vi)
config: vi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 65.83389374579693
- type: f1
value: 66.11273939782248
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (zh-CN)
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.38735709482181
- type: f1
value: 72.89481650271512
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (zh-TW)
config: zh-TW
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.63685272360458
- type: f1
value: 70.72285841806938
- task:
type: Retrieval
dataset:
type: C-MTEB/MedicalRetrieval
name: MTEB MedicalRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 30.8
- type: map_at_10
value: 34.782000000000004
- type: map_at_100
value: 35.333999999999996
- type: map_at_1000
value: 35.405
- type: map_at_3
value: 34.0
- type: map_at_5
value: 34.345
- type: mrr_at_1
value: 30.8
- type: mrr_at_10
value: 34.782000000000004
- type: mrr_at_100
value: 35.333999999999996
- type: mrr_at_1000
value: 35.405
- type: mrr_at_3
value: 34.0
- type: mrr_at_5
value: 34.345
- type: ndcg_at_1
value: 30.8
- type: ndcg_at_10
value: 36.675000000000004
- type: ndcg_at_100
value: 39.633
- type: ndcg_at_1000
value: 41.904
- type: ndcg_at_3
value: 35.028
- type: ndcg_at_5
value: 35.648
- type: precision_at_1
value: 30.8
- type: precision_at_10
value: 4.26
- type: precision_at_100
value: 0.571
- type: precision_at_1000
value: 0.076
- type: precision_at_3
value: 12.667
- type: precision_at_5
value: 7.9
- type: recall_at_1
value: 30.8
- type: recall_at_10
value: 42.6
- type: recall_at_100
value: 57.099999999999994
- type: recall_at_1000
value: 75.8
- type: recall_at_3
value: 38.0
- type: recall_at_5
value: 39.5
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 27.84536559870361
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 27.714921841841605
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.52145905910035
- type: mrr
value: 31.551577344311845
- task:
type: Reranking
dataset:
type: C-MTEB/Mmarco-reranking
name: MTEB MMarcoReranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 23.6853605350459
- type: mrr
value: 22.341269841269842
- task:
type: Classification
dataset:
type: C-MTEB/MultilingualSentiment-classification
name: MTEB MultilingualSentiment
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 63.16666666666666
- type: f1
value: 63.09453591106835
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.7060000000000004
- type: map_at_10
value: 9.032
- type: map_at_100
value: 11.395
- type: map_at_1000
value: 12.713
- type: map_at_3
value: 6.502
- type: map_at_5
value: 7.8100000000000005
- type: mrr_at_1
value: 37.461
- type: mrr_at_10
value: 45.839999999999996
- type: mrr_at_100
value: 46.513
- type: mrr_at_1000
value: 46.571
- type: mrr_at_3
value: 43.55
- type: mrr_at_5
value: 44.773
- type: ndcg_at_1
value: 35.913000000000004
- type: ndcg_at_10
value: 27.340999999999998
- type: ndcg_at_100
value: 25.197000000000003
- type: ndcg_at_1000
value: 34.632000000000005
- type: ndcg_at_3
value: 31.952
- type: ndcg_at_5
value: 30.244
- type: precision_at_1
value: 37.461
- type: precision_at_10
value: 20.495
- type: precision_at_100
value: 6.551
- type: precision_at_1000
value: 1.966
- type: precision_at_3
value: 30.753000000000004
- type: precision_at_5
value: 26.935
- type: recall_at_1
value: 3.7060000000000004
- type: recall_at_10
value: 12.958
- type: recall_at_100
value: 26.582
- type: recall_at_1000
value: 59.724
- type: recall_at_3
value: 7.503
- type: recall_at_5
value: 9.808
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.201999999999998
- type: map_at_10
value: 33.76
- type: map_at_100
value: 34.867
- type: map_at_1000
value: 34.92
- type: map_at_3
value: 30.233999999999998
- type: map_at_5
value: 32.291
- type: mrr_at_1
value: 25.232
- type: mrr_at_10
value: 36.239
- type: mrr_at_100
value: 37.119
- type: mrr_at_1000
value: 37.162
- type: mrr_at_3
value: 33.213
- type: mrr_at_5
value: 35.02
- type: ndcg_at_1
value: 25.232
- type: ndcg_at_10
value: 40.046
- type: ndcg_at_100
value: 45.025
- type: ndcg_at_1000
value: 46.459
- type: ndcg_at_3
value: 33.343
- type: ndcg_at_5
value: 36.801
- type: precision_at_1
value: 25.232
- type: precision_at_10
value: 6.796
- type: precision_at_100
value: 0.959
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 15.276
- type: precision_at_5
value: 11.17
- type: recall_at_1
value: 22.201999999999998
- type: recall_at_10
value: 56.733
- type: recall_at_100
value: 79.041
- type: recall_at_1000
value: 90.08500000000001
- type: recall_at_3
value: 39.412000000000006
- type: recall_at_5
value: 47.352
- task:
type: PairClassification
dataset:
type: C-MTEB/OCNLI
name: MTEB Ocnli
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 62.53383865728208
- type: cos_sim_ap
value: 66.3197921045625
- type: cos_sim_f1
value: 69.3385214007782
- type: cos_sim_precision
value: 54.89833641404805
- type: cos_sim_recall
value: 94.08658922914466
- type: dot_accuracy
value: 59.7184623714131
- type: dot_ap
value: 61.53586693000539
- type: dot_f1
value: 68.26923076923077
- type: dot_precision
value: 52.53272623790552
- type: dot_recall
value: 97.46568109820485
- type: euclidean_accuracy
value: 62.912831618841366
- type: euclidean_ap
value: 67.15479155849464
- type: euclidean_f1
value: 70.64071370640713
- type: euclidean_precision
value: 57.34035549703752
- type: euclidean_recall
value: 91.97465681098205
- type: manhattan_accuracy
value: 63.50839198700595
- type: manhattan_ap
value: 67.55807251483273
- type: manhattan_f1
value: 70.58356490670901
- type: manhattan_precision
value: 56.55216284987278
- type: manhattan_recall
value: 93.8753959873284
- type: max_accuracy
value: 63.50839198700595
- type: max_ap
value: 67.55807251483273
- type: max_f1
value: 70.64071370640713
- task:
type: Classification
dataset:
type: C-MTEB/OnlineShopping-classification
name: MTEB OnlineShopping
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 87.11
- type: ap
value: 84.20351278644551
- type: f1
value: 87.10043002123766
- task:
type: STS
dataset:
type: C-MTEB/PAWSX
name: MTEB PAWSX
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 13.050279647770473
- type: cos_sim_spearman
value: 14.227909232579874
- type: euclidean_pearson
value: 16.372629300358096
- type: euclidean_spearman
value: 14.68140021547196
- type: manhattan_pearson
value: 16.266960163157336
- type: manhattan_spearman
value: 14.627750758965616
- task:
type: STS
dataset:
type: C-MTEB/QBQTC
name: MTEB QBQTC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 30.56036276943463
- type: cos_sim_spearman
value: 32.918859292204
- type: euclidean_pearson
value: 31.679745438037195
- type: euclidean_spearman
value: 33.68461814972644
- type: manhattan_pearson
value: 31.994557954084563
- type: manhattan_spearman
value: 33.97758185204816
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 68.327
- type: map_at_10
value: 81.938
- type: map_at_100
value: 82.581
- type: map_at_1000
value: 82.60300000000001
- type: map_at_3
value: 78.89399999999999
- type: map_at_5
value: 80.816
- type: mrr_at_1
value: 78.75
- type: mrr_at_10
value: 85.302
- type: mrr_at_100
value: 85.432
- type: mrr_at_1000
value: 85.434
- type: mrr_at_3
value: 84.128
- type: mrr_at_5
value: 84.91199999999999
- type: ndcg_at_1
value: 78.74
- type: ndcg_at_10
value: 86.042
- type: ndcg_at_100
value: 87.468
- type: ndcg_at_1000
value: 87.641
- type: ndcg_at_3
value: 82.799
- type: ndcg_at_5
value: 84.603
- type: precision_at_1
value: 78.74
- type: precision_at_10
value: 13.071
- type: precision_at_100
value: 1.508
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 36.08
- type: precision_at_5
value: 23.87
- type: recall_at_1
value: 68.327
- type: recall_at_10
value: 93.962
- type: recall_at_100
value: 99.054
- type: recall_at_1000
value: 99.9
- type: recall_at_3
value: 84.788
- type: recall_at_5
value: 89.73
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 41.337989152483956
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 51.2046136625677
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.763
- type: map_at_10
value: 8.785
- type: map_at_100
value: 10.266
- type: map_at_1000
value: 10.506
- type: map_at_3
value: 6.551
- type: map_at_5
value: 7.670000000000001
- type: mrr_at_1
value: 18.5
- type: mrr_at_10
value: 27.771
- type: mrr_at_100
value: 28.842000000000002
- type: mrr_at_1000
value: 28.913
- type: mrr_at_3
value: 24.767
- type: mrr_at_5
value: 26.457000000000004
- type: ndcg_at_1
value: 18.5
- type: ndcg_at_10
value: 15.312000000000001
- type: ndcg_at_100
value: 21.599
- type: ndcg_at_1000
value: 26.473999999999997
- type: ndcg_at_3
value: 14.821000000000002
- type: ndcg_at_5
value: 12.836
- type: precision_at_1
value: 18.5
- type: precision_at_10
value: 7.779999999999999
- type: precision_at_100
value: 1.69
- type: precision_at_1000
value: 0.28700000000000003
- type: precision_at_3
value: 13.667000000000002
- type: precision_at_5
value: 11.08
- type: recall_at_1
value: 3.763
- type: recall_at_10
value: 15.798000000000002
- type: recall_at_100
value: 34.313
- type: recall_at_1000
value: 58.318000000000005
- type: recall_at_3
value: 8.312999999999999
- type: recall_at_5
value: 11.238
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 84.33402689861924
- type: cos_sim_spearman
value: 78.52738315932625
- type: euclidean_pearson
value: 80.800678573052
- type: euclidean_spearman
value: 77.86666946799137
- type: manhattan_pearson
value: 81.03106755866989
- type: manhattan_spearman
value: 78.0676393879487
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 81.86998503723257
- type: cos_sim_spearman
value: 74.07437934108376
- type: euclidean_pearson
value: 80.91626452869946
- type: euclidean_spearman
value: 76.88419802521403
- type: manhattan_pearson
value: 81.50196980117957
- type: manhattan_spearman
value: 77.2456891009073
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 81.19616084290932
- type: cos_sim_spearman
value: 81.80834431353927
- type: euclidean_pearson
value: 81.25429737195789
- type: euclidean_spearman
value: 82.00934127307355
- type: manhattan_pearson
value: 81.67403556759655
- type: manhattan_spearman
value: 82.42359818976753
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 81.50884725941148
- type: cos_sim_spearman
value: 77.0493522248929
- type: euclidean_pearson
value: 79.15856111178543
- type: euclidean_spearman
value: 77.24292975474096
- type: manhattan_pearson
value: 79.22641788874807
- type: manhattan_spearman
value: 77.37101663798234
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 83.75652767224308
- type: cos_sim_spearman
value: 84.61113973428688
- type: euclidean_pearson
value: 83.73646379542737
- type: euclidean_spearman
value: 84.47126779405652
- type: manhattan_pearson
value: 83.89617307570857
- type: manhattan_spearman
value: 84.6073703393468
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 81.16302763567215
- type: cos_sim_spearman
value: 83.08923353997561
- type: euclidean_pearson
value: 80.08338016232464
- type: euclidean_spearman
value: 80.40181608724076
- type: manhattan_pearson
value: 80.02358856208708
- type: manhattan_spearman
value: 80.30032329982274
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (ko-ko)
config: ko-ko
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 56.45965932801117
- type: cos_sim_spearman
value: 57.28270045199294
- type: euclidean_pearson
value: 57.3615782157595
- type: euclidean_spearman
value: 56.94348399074146
- type: manhattan_pearson
value: 57.9426531718209
- type: manhattan_spearman
value: 57.61844831263504
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (ar-ar)
config: ar-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 80.2973366536596
- type: cos_sim_spearman
value: 80.60259304741632
- type: euclidean_pearson
value: 78.30266089843892
- type: euclidean_spearman
value: 78.06065126709282
- type: manhattan_pearson
value: 78.61370380599344
- type: manhattan_spearman
value: 78.45738598619143
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-ar)
config: en-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 72.35020162217042
- type: cos_sim_spearman
value: 72.59857902847162
- type: euclidean_pearson
value: 65.03547299350457
- type: euclidean_spearman
value: 64.16617373109685
- type: manhattan_pearson
value: 65.68996569454929
- type: manhattan_spearman
value: 64.88542254595046
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-de)
config: en-de
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 39.766484883595425
- type: cos_sim_spearman
value: 40.3429946300341
- type: euclidean_pearson
value: 39.47427150040957
- type: euclidean_spearman
value: 39.072525589079696
- type: manhattan_pearson
value: 40.56345338078474
- type: manhattan_spearman
value: 40.444629078138036
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 88.83798941013089
- type: cos_sim_spearman
value: 89.15159294402415
- type: euclidean_pearson
value: 87.9810618414505
- type: euclidean_spearman
value: 87.90818542026535
- type: manhattan_pearson
value: 88.06116863048229
- type: manhattan_spearman
value: 88.00182442010694
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-tr)
config: en-tr
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 7.416028059666332
- type: cos_sim_spearman
value: 6.792945857606915
- type: euclidean_pearson
value: 11.485332917116061
- type: euclidean_spearman
value: 9.793932873423419
- type: manhattan_pearson
value: 9.148469412558393
- type: manhattan_spearman
value: 7.803450524017845
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (es-en)
config: es-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 80.16381852152489
- type: cos_sim_spearman
value: 81.80324089694928
- type: euclidean_pearson
value: 76.41433274302783
- type: euclidean_spearman
value: 77.15238726996526
- type: manhattan_pearson
value: 77.08610108551368
- type: manhattan_spearman
value: 77.99971298324311
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (es-es)
config: es-es
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 85.11032272383456
- type: cos_sim_spearman
value: 85.64528002839239
- type: euclidean_pearson
value: 85.54301672487198
- type: euclidean_spearman
value: 84.21727806530393
- type: manhattan_pearson
value: 85.57145576255618
- type: manhattan_spearman
value: 84.07127479487694
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (fr-en)
config: fr-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 79.73703272230806
- type: cos_sim_spearman
value: 79.9424510113259
- type: euclidean_pearson
value: 77.64485173960838
- type: euclidean_spearman
value: 77.54693014468836
- type: manhattan_pearson
value: 77.96911553781774
- type: manhattan_spearman
value: 77.87266778206842
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (it-en)
config: it-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 37.260672179617515
- type: cos_sim_spearman
value: 34.80434004457536
- type: euclidean_pearson
value: 38.55806751295782
- type: euclidean_spearman
value: 36.129700913023115
- type: manhattan_pearson
value: 40.74316244582763
- type: manhattan_spearman
value: 38.60667540883322
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (nl-en)
config: nl-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 38.038311386574456
- type: cos_sim_spearman
value: 33.576193063894195
- type: euclidean_pearson
value: 33.712663568034316
- type: euclidean_spearman
value: 32.560617375956916
- type: manhattan_pearson
value: 35.60457167895616
- type: manhattan_spearman
value: 34.63036216555931
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 61.01583638162472
- type: cos_sim_spearman
value: 62.92281428893316
- type: euclidean_pearson
value: 62.939630289711815
- type: euclidean_spearman
value: 64.15209661725994
- type: manhattan_pearson
value: 64.24261705090608
- type: manhattan_spearman
value: 64.78283158164017
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (de)
config: de
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 21.529440799555704
- type: cos_sim_spearman
value: 26.62727800620091
- type: euclidean_pearson
value: 16.837244578590123
- type: euclidean_spearman
value: 25.012107525591425
- type: manhattan_pearson
value: 18.445531476179454
- type: manhattan_spearman
value: 27.070240480795153
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (es)
config: es
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 49.655500043363624
- type: cos_sim_spearman
value: 56.31248457847469
- type: euclidean_pearson
value: 48.787154598246616
- type: euclidean_spearman
value: 52.90454409579225
- type: manhattan_pearson
value: 55.392327232639836
- type: manhattan_spearman
value: 57.3726886727899
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (pl)
config: pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 2.9137753115190304
- type: cos_sim_spearman
value: 15.062114976486532
- type: euclidean_pearson
value: -2.034404984782681
- type: euclidean_spearman
value: 14.683481835467338
- type: manhattan_pearson
value: -0.22204468354050833
- type: manhattan_spearman
value: 15.526420635759743
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (tr)
config: tr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 4.3616620418459915
- type: cos_sim_spearman
value: 22.11078316878173
- type: euclidean_pearson
value: 15.111514877123403
- type: euclidean_spearman
value: 21.232869644925973
- type: manhattan_pearson
value: 19.71276925909529
- type: manhattan_spearman
value: 25.704469862313466
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (ar)
config: ar
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 44.25888840250496
- type: cos_sim_spearman
value: 54.82352971568842
- type: euclidean_pearson
value: 48.00261414068268
- type: euclidean_spearman
value: 53.3721608428832
- type: manhattan_pearson
value: 50.6442021864215
- type: manhattan_spearman
value: 55.352339945631954
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (ru)
config: ru
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 0.08233514100531068
- type: cos_sim_spearman
value: 28.771721168834276
- type: euclidean_pearson
value: 10.783524938899138
- type: euclidean_spearman
value: 24.67831010432439
- type: manhattan_pearson
value: 16.98415610436092
- type: manhattan_spearman
value: 25.81670115913176
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (zh)
config: zh
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 36.86678706245425
- type: cos_sim_spearman
value: 40.9736918674032
- type: euclidean_pearson
value: 26.42365971768556
- type: euclidean_spearman
value: 30.479818788692054
- type: manhattan_pearson
value: 41.08694658968258
- type: manhattan_spearman
value: 45.080877435751084
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (fr)
config: fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 75.98114217777062
- type: cos_sim_spearman
value: 78.7295845730892
- type: euclidean_pearson
value: 76.99433076522276
- type: euclidean_spearman
value: 79.71421663258973
- type: manhattan_pearson
value: 78.65656344143478
- type: manhattan_spearman
value: 80.60968909615123
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (de-en)
config: de-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 47.33261398683554
- type: cos_sim_spearman
value: 49.547954534754666
- type: euclidean_pearson
value: 48.23362592012922
- type: euclidean_spearman
value: 49.17277986369927
- type: manhattan_pearson
value: 49.06792311033889
- type: manhattan_spearman
value: 51.27529282708198
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (es-en)
config: es-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 66.10070360470756
- type: cos_sim_spearman
value: 71.03150249855938
- type: euclidean_pearson
value: 67.05372897033872
- type: euclidean_spearman
value: 69.73291838049877
- type: manhattan_pearson
value: 70.34740916239467
- type: manhattan_spearman
value: 72.40053406658815
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (it)
config: it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 56.581317404418904
- type: cos_sim_spearman
value: 62.61318021096797
- type: euclidean_pearson
value: 57.4403074342031
- type: euclidean_spearman
value: 60.04897783631694
- type: manhattan_pearson
value: 58.441729285803014
- type: manhattan_spearman
value: 60.70510326005463
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (pl-en)
config: pl-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 47.064414464023905
- type: cos_sim_spearman
value: 43.716659075869465
- type: euclidean_pearson
value: 43.81699490724336
- type: euclidean_spearman
value: 43.784380306563726
- type: manhattan_pearson
value: 53.664583329563264
- type: manhattan_spearman
value: 45.399271192350135
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (zh-en)
config: zh-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 63.585903017365055
- type: cos_sim_spearman
value: 63.90147651068459
- type: euclidean_pearson
value: 50.21918146173064
- type: euclidean_spearman
value: 53.02530618040754
- type: manhattan_pearson
value: 62.7472089813117
- type: manhattan_spearman
value: 63.90440606248973
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (es-it)
config: es-it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 59.06715980430013
- type: cos_sim_spearman
value: 61.2993294424547
- type: euclidean_pearson
value: 53.67335552456426
- type: euclidean_spearman
value: 55.32940583953816
- type: manhattan_pearson
value: 58.08097600675386
- type: manhattan_spearman
value: 57.1966250850173
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (de-fr)
config: de-fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 18.94271219818519
- type: cos_sim_spearman
value: 22.355519793818935
- type: euclidean_pearson
value: 14.336479135636187
- type: euclidean_spearman
value: 18.862751864788684
- type: manhattan_pearson
value: 14.481730447681057
- type: manhattan_spearman
value: 17.572142526671563
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (de-pl)
config: de-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 20.644357537446464
- type: cos_sim_spearman
value: 35.32083671407284
- type: euclidean_pearson
value: 28.24720906134992
- type: euclidean_spearman
value: 46.437508077438395
- type: manhattan_pearson
value: 42.09834718968137
- type: manhattan_spearman
value: 53.02744622635869
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (fr-pl)
config: fr-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 71.84986730523782
- type: cos_sim_spearman
value: 73.24670207647144
- type: euclidean_pearson
value: 62.450055500805604
- type: euclidean_spearman
value: 61.97797868009122
- type: manhattan_pearson
value: 56.32083882980946
- type: manhattan_spearman
value: 39.440531887330785
- task:
type: STS
dataset:
type: C-MTEB/STSB
name: MTEB STSB
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 78.11479317838469
- type: cos_sim_spearman
value: 77.7709743500025
- type: euclidean_pearson
value: 78.83834281752932
- type: euclidean_spearman
value: 78.21978829646487
- type: manhattan_pearson
value: 79.36075578990533
- type: manhattan_spearman
value: 78.72958965446072
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 82.92539499228975
- type: cos_sim_spearman
value: 83.63025944536395
- type: euclidean_pearson
value: 81.54744230098872
- type: euclidean_spearman
value: 81.08707735758752
- type: manhattan_pearson
value: 81.50252353111375
- type: manhattan_spearman
value: 81.00641210322735
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 75.12690809334019
- type: mrr
value: 92.28846951886169
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 47.15
- type: map_at_10
value: 56.748
- type: map_at_100
value: 57.528999999999996
- type: map_at_1000
value: 57.56400000000001
- type: map_at_3
value: 53.691
- type: map_at_5
value: 55.656000000000006
- type: mrr_at_1
value: 49.667
- type: mrr_at_10
value: 58.24700000000001
- type: mrr_at_100
value: 58.855000000000004
- type: mrr_at_1000
value: 58.888
- type: mrr_at_3
value: 55.72200000000001
- type: mrr_at_5
value: 57.272
- type: ndcg_at_1
value: 49.667
- type: ndcg_at_10
value: 61.739
- type: ndcg_at_100
value: 65.17399999999999
- type: ndcg_at_1000
value: 66.122
- type: ndcg_at_3
value: 56.266000000000005
- type: ndcg_at_5
value: 59.357000000000006
- type: precision_at_1
value: 49.667
- type: precision_at_10
value: 8.5
- type: precision_at_100
value: 1.04
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 22.111
- type: precision_at_5
value: 15.133
- type: recall_at_1
value: 47.15
- type: recall_at_10
value: 75.52799999999999
- type: recall_at_100
value: 91.167
- type: recall_at_1000
value: 98.667
- type: recall_at_3
value: 60.978
- type: recall_at_5
value: 68.839
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.71188118811881
- type: cos_sim_ap
value: 92.0858173884619
- type: cos_sim_f1
value: 85.48864758144126
- type: cos_sim_precision
value: 84.40545808966861
- type: cos_sim_recall
value: 86.6
- type: dot_accuracy
value: 99.57722772277228
- type: dot_ap
value: 83.92226742515372
- type: dot_f1
value: 78.85091629519565
- type: dot_precision
value: 78.11579980372915
- type: dot_recall
value: 79.60000000000001
- type: euclidean_accuracy
value: 99.6970297029703
- type: euclidean_ap
value: 91.69378964699095
- type: euclidean_f1
value: 85.08771929824562
- type: euclidean_precision
value: 82.98479087452472
- type: euclidean_recall
value: 87.3
- type: manhattan_accuracy
value: 99.7019801980198
- type: manhattan_ap
value: 92.00969741996086
- type: manhattan_f1
value: 84.95752123938031
- type: manhattan_precision
value: 84.91508491508492
- type: manhattan_recall
value: 85.0
- type: max_accuracy
value: 99.71188118811881
- type: max_ap
value: 92.0858173884619
- type: max_f1
value: 85.48864758144126
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 54.50675991473899
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 31.12415042272221
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 47.37961638353922
- type: mrr
value: 48.04425558102029
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.358583236464177
- type: cos_sim_spearman
value: 32.06044850511017
- type: dot_pearson
value: 30.36343303587471
- type: dot_spearman
value: 30.303932242144704
- task:
type: Reranking
dataset:
type: C-MTEB/T2Reranking
name: MTEB T2Reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 63.73951666189072
- type: mrr
value: 73.54706021429108
- task:
type: Retrieval
dataset:
type: C-MTEB/T2Retrieval
name: MTEB T2Retrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 16.892
- type: map_at_10
value: 40.215
- type: map_at_100
value: 43.9
- type: map_at_1000
value: 44.185
- type: map_at_3
value: 30.008000000000003
- type: map_at_5
value: 35.465
- type: mrr_at_1
value: 63.931000000000004
- type: mrr_at_10
value: 70.35
- type: mrr_at_100
value: 70.762
- type: mrr_at_1000
value: 70.784
- type: mrr_at_3
value: 68.863
- type: mrr_at_5
value: 69.758
- type: ndcg_at_1
value: 63.931000000000004
- type: ndcg_at_10
value: 51.573
- type: ndcg_at_100
value: 59.067
- type: ndcg_at_1000
value: 62.388
- type: ndcg_at_3
value: 55.422000000000004
- type: ndcg_at_5
value: 52.322
- type: precision_at_1
value: 63.931000000000004
- type: precision_at_10
value: 25.373
- type: precision_at_100
value: 3.894
- type: precision_at_1000
value: 0.47400000000000003
- type: precision_at_3
value: 48.083
- type: precision_at_5
value: 38.513
- type: recall_at_1
value: 16.892
- type: recall_at_10
value: 49.945
- type: recall_at_100
value: 73.41499999999999
- type: recall_at_1000
value: 89.776
- type: recall_at_3
value: 32.544000000000004
- type: recall_at_5
value: 40.501
- task:
type: Classification
dataset:
type: C-MTEB/TNews-classification
name: MTEB TNews
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 44.153999999999996
- type: f1
value: 42.69123774230511
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22300000000000003
- type: map_at_10
value: 1.7999999999999998
- type: map_at_100
value: 9.098
- type: map_at_1000
value: 20.59
- type: map_at_3
value: 0.6459999999999999
- type: map_at_5
value: 1.006
- type: mrr_at_1
value: 84.0
- type: mrr_at_10
value: 91.5
- type: mrr_at_100
value: 91.5
- type: mrr_at_1000
value: 91.5
- type: mrr_at_3
value: 91.0
- type: mrr_at_5
value: 91.5
- type: ndcg_at_1
value: 80.0
- type: ndcg_at_10
value: 72.992
- type: ndcg_at_100
value: 51.778999999999996
- type: ndcg_at_1000
value: 44.473
- type: ndcg_at_3
value: 77.531
- type: ndcg_at_5
value: 74.685
- type: precision_at_1
value: 84.0
- type: precision_at_10
value: 78.60000000000001
- type: precision_at_100
value: 52.800000000000004
- type: precision_at_1000
value: 19.736
- type: precision_at_3
value: 83.333
- type: precision_at_5
value: 80.0
- type: recall_at_1
value: 0.22300000000000003
- type: recall_at_10
value: 2.016
- type: recall_at_100
value: 12.21
- type: recall_at_1000
value: 41.427
- type: recall_at_3
value: 0.6839999999999999
- type: recall_at_5
value: 1.083
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (sqi-eng)
config: sqi-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 11.0
- type: f1
value: 8.487309997179562
- type: precision
value: 7.935185890268856
- type: recall
value: 11.0
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (fry-eng)
config: fry-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 23.699421965317917
- type: f1
value: 18.09982567208001
- type: precision
value: 16.582017825552963
- type: recall
value: 23.699421965317917
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (kur-eng)
config: kur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.780487804878048
- type: f1
value: 6.484836753129436
- type: precision
value: 5.916220801747723
- type: recall
value: 8.780487804878048
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (tur-eng)
config: tur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 5.0
- type: f1
value: 3.493223480735001
- type: precision
value: 3.1492116349139385
- type: recall
value: 5.0
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (deu-eng)
config: deu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 33.6
- type: f1
value: 29.339340352229065
- type: precision
value: 27.997920626374693
- type: recall
value: 33.6
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (nld-eng)
config: nld-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 20.200000000000003
- type: f1
value: 16.330981736231458
- type: precision
value: 15.250949969794044
- type: recall
value: 20.200000000000003
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ron-eng)
config: ron-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 19.6
- type: f1
value: 14.951120083366323
- type: precision
value: 13.617335362707001
- type: recall
value: 19.6
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ang-eng)
config: ang-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 20.149253731343283
- type: f1
value: 13.312899786780385
- type: precision
value: 11.979388770433545
- type: recall
value: 20.149253731343283
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ido-eng)
config: ido-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 31.4
- type: f1
value: 26.21323201417634
- type: precision
value: 24.607830064672168
- type: recall
value: 31.4
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (jav-eng)
config: jav-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 18.048780487804876
- type: f1
value: 14.347798542920492
- type: precision
value: 13.301672920575362
- type: recall
value: 18.048780487804876
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (isl-eng)
config: isl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 5.2
- type: f1
value: 3.2713297295122503
- type: precision
value: 2.978548911585725
- type: recall
value: 5.2
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (slv-eng)
config: slv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 7.411907654921021
- type: f1
value: 5.412915976323278
- type: precision
value: 4.975402373122839
- type: recall
value: 7.411907654921021
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (cym-eng)
config: cym-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.521739130434783
- type: f1
value: 5.871393789897329
- type: precision
value: 5.350472658912557
- type: recall
value: 8.521739130434783
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (kaz-eng)
config: kaz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 1.565217391304348
- type: f1
value: 0.7422394530145001
- type: precision
value: 0.7201734373569025
- type: recall
value: 1.565217391304348
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (est-eng)
config: est-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 5.3
- type: f1
value: 3.0838354401589694
- type: precision
value: 2.709942839090994
- type: recall
value: 5.3
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (heb-eng)
config: heb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 0.8
- type: f1
value: 0.24583802742178057
- type: precision
value: 0.18710578268453032
- type: recall
value: 0.8
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (gla-eng)
config: gla-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 4.945717732207479
- type: f1
value: 2.7266734043909437
- type: precision
value: 2.3247505400014186
- type: recall
value: 4.945717732207479
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (mar-eng)
config: mar-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 54.2
- type: f1
value: 47.22780366692132
- type: precision
value: 44.740178571428565
- type: recall
value: 54.2
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (lat-eng)
config: lat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 25.8
- type: f1
value: 19.547406382656526
- type: precision
value: 17.80766233766234
- type: recall
value: 25.8
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (bel-eng)
config: bel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 4.9
- type: f1
value: 3.283031457969928
- type: precision
value: 3.0361515007649467
- type: recall
value: 4.9
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (pms-eng)
config: pms-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 22.476190476190478
- type: f1
value: 17.494204011570957
- type: precision
value: 16.16236240785113
- type: recall
value: 22.476190476190478
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (gle-eng)
config: gle-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 6.3
- type: f1
value: 3.461898170471662
- type: precision
value: 2.975546957350575
- type: recall
value: 6.3
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (pes-eng)
config: pes-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.6
- type: f1
value: 5.874235156578609
- type: precision
value: 5.201352547725499
- type: recall
value: 8.6
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (nob-eng)
config: nob-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 15.2
- type: f1
value: 11.908986787697534
- type: precision
value: 11.090628985937808
- type: recall
value: 15.2
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (bul-eng)
config: bul-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 6.9
- type: f1
value: 4.58348360335125
- type: precision
value: 4.183620994869927
- type: recall
value: 6.9
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (cbk-eng)
config: cbk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 62.1
- type: f1
value: 55.70845598845599
- type: precision
value: 53.22281746031747
- type: recall
value: 62.1
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (hun-eng)
config: hun-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 4.8
- type: f1
value: 3.246932234432234
- type: precision
value: 2.9738765839703265
- type: recall
value: 4.8
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (uig-eng)
config: uig-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 0.8999999999999999
- type: f1
value: 0.5331481481481481
- type: precision
value: 0.4918990604783396
- type: recall
value: 0.8999999999999999
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (rus-eng)
config: rus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 31.7
- type: f1
value: 25.22406237037816
- type: precision
value: 23.27273155929038
- type: recall
value: 31.7
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (spa-eng)
config: spa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.5
- type: f1
value: 95.48333333333333
- type: precision
value: 95.0
- type: recall
value: 96.5
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (hye-eng)
config: hye-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 0.40431266846361186
- type: f1
value: 0.22521185350542844
- type: precision
value: 0.20245384171411912
- type: recall
value: 0.40431266846361186
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (tel-eng)
config: tel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 43.162393162393165
- type: f1
value: 35.83662064431295
- type: precision
value: 33.66590199923534
- type: recall
value: 43.162393162393165
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (afr-eng)
config: afr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 12.2
- type: f1
value: 9.007009351120605
- type: precision
value: 8.26509907921979
- type: recall
value: 12.2
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (mon-eng)
config: mon-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 2.0454545454545454
- type: f1
value: 0.846869670733307
- type: precision
value: 0.719285857023819
- type: recall
value: 2.0454545454545454
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (arz-eng)
config: arz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 56.18448637316562
- type: f1
value: 49.41850369523325
- type: precision
value: 46.84486373165618
- type: recall
value: 56.18448637316562
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (hrv-eng)
config: hrv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.4
- type: f1
value: 6.274306734742452
- type: precision
value: 5.854786915151029
- type: recall
value: 8.4
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (nov-eng)
config: nov-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 45.13618677042802
- type: f1
value: 38.784818726452976
- type: precision
value: 36.65848310789945
- type: recall
value: 45.13618677042802
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (gsw-eng)
config: gsw-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 23.076923076923077
- type: f1
value: 17.501757501757503
- type: precision
value: 16.06289721674337
- type: recall
value: 23.076923076923077
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (nds-eng)
config: nds-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 15.8
- type: f1
value: 11.834682187321722
- type: precision
value: 10.871016304088595
- type: recall
value: 15.8
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ukr-eng)
config: ukr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 7.3
- type: f1
value: 4.929314970921539
- type: precision
value: 4.427714750128542
- type: recall
value: 7.3
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (uzb-eng)
config: uzb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 5.14018691588785
- type: f1
value: 2.543797914741945
- type: precision
value: 2.1476927403586066
- type: recall
value: 5.14018691588785
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (lit-eng)
config: lit-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 5.0
- type: f1
value: 3.173243817101591
- type: precision
value: 2.8643206769285485
- type: recall
value: 5.0
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ina-eng)
config: ina-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 69.5
- type: f1
value: 63.89614902641219
- type: precision
value: 61.628650793650785
- type: recall
value: 69.5
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (lfn-eng)
config: lfn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 41.8
- type: f1
value: 37.523909714712914
- type: precision
value: 36.054581750900766
- type: recall
value: 41.8
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (zsm-eng)
config: zsm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 79.2
- type: f1
value: 74.88805555555554
- type: precision
value: 73.05083333333333
- type: recall
value: 79.2
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ita-eng)
config: ita-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 43.5
- type: f1
value: 37.28660019590605
- type: precision
value: 35.18067447433519
- type: recall
value: 43.5
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (cmn-eng)
config: cmn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.5
- type: f1
value: 92.95
- type: precision
value: 92.2
- type: recall
value: 94.5
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (lvs-eng)
config: lvs-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 5.2
- type: f1
value: 3.5297755651484026
- type: precision
value: 3.190013722690584
- type: recall
value: 5.2
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (glg-eng)
config: glg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 74.7
- type: f1
value: 69.2602380952381
- type: precision
value: 67.03261904761905
- type: recall
value: 74.7
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ceb-eng)
config: ceb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.0
- type: f1
value: 5.639611303143687
- type: precision
value: 5.209856824277429
- type: recall
value: 8.0
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (bre-eng)
config: bre-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 6.1
- type: f1
value: 3.847611167634209
- type: precision
value: 3.3324923687423693
- type: recall
value: 6.1
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ben-eng)
config: ben-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 75.5
- type: f1
value: 70.14214285714286
- type: precision
value: 67.88761904761904
- type: recall
value: 75.5
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (swg-eng)
config: swg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 20.535714285714285
- type: f1
value: 16.437074829931973
- type: precision
value: 15.459837781266353
- type: recall
value: 20.535714285714285
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (arq-eng)
config: arq-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 21.405049396267835
- type: f1
value: 16.162968480476714
- type: precision
value: 14.506603642481391
- type: recall
value: 21.405049396267835
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (kab-eng)
config: kab-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 1.4000000000000001
- type: f1
value: 0.8861559696342305
- type: precision
value: 0.7898232323232323
- type: recall
value: 1.4000000000000001
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (fra-eng)
config: fra-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.5
- type: f1
value: 91.65333333333334
- type: precision
value: 90.80833333333332
- type: recall
value: 93.5
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (por-eng)
config: por-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.8
- type: f1
value: 92.08333333333333
- type: precision
value: 91.23333333333333
- type: recall
value: 93.8
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (tat-eng)
config: tat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 1.3
- type: f1
value: 0.9654912597950575
- type: precision
value: 0.911237853823405
- type: recall
value: 1.3
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (oci-eng)
config: oci-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 35.5
- type: f1
value: 29.385868020868024
- type: precision
value: 27.38218614718615
- type: recall
value: 35.5
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (pol-eng)
config: pol-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.3
- type: f1
value: 5.625495291471218
- type: precision
value: 5.006352187769519
- type: recall
value: 8.3
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (war-eng)
config: war-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 9.3
- type: f1
value: 7.188871139201601
- type: precision
value: 6.68110313042221
- type: recall
value: 9.3
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (aze-eng)
config: aze-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 4.9
- type: f1
value: 3.4368196711816386
- type: precision
value: 3.1516575755476186
- type: recall
value: 4.9
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (vie-eng)
config: vie-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.5
- type: f1
value: 92.85666666666667
- type: precision
value: 92.07499999999999
- type: recall
value: 94.5
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (nno-eng)
config: nno-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 10.9
- type: f1
value: 8.052880589619718
- type: precision
value: 7.2833020438680816
- type: recall
value: 10.9
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (cha-eng)
config: cha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 21.897810218978105
- type: f1
value: 16.459096459096457
- type: precision
value: 14.99391727493917
- type: recall
value: 21.897810218978105
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (mhr-eng)
config: mhr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 0.8
- type: f1
value: 0.43900258600589265
- type: precision
value: 0.42151473277789064
- type: recall
value: 0.8
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (dan-eng)
config: dan-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 14.899999999999999
- type: f1
value: 11.403181682754628
- type: precision
value: 10.506373051667312
- type: recall
value: 14.899999999999999
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ell-eng)
config: ell-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 1.9
- type: f1
value: 0.8872641689515834
- type: precision
value: 0.7857231069685399
- type: recall
value: 1.9
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (amh-eng)
config: amh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 1.1904761904761905
- type: f1
value: 0.20847048496818082
- type: precision
value: 0.11904761904761904
- type: recall
value: 1.1904761904761905
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (pam-eng)
config: pam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 5.3
- type: f1
value: 3.784571880595977
- type: precision
value: 3.4556477020719782
- type: recall
value: 5.3
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (hsb-eng)
config: hsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 9.316770186335404
- type: f1
value: 6.80343720685027
- type: precision
value: 6.316650292717499
- type: recall
value: 9.316770186335404
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (srp-eng)
config: srp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 5.8999999999999995
- type: f1
value: 4.5486926228313695
- type: precision
value: 4.311121913612427
- type: recall
value: 5.8999999999999995
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (epo-eng)
config: epo-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 18.099999999999998
- type: f1
value: 13.4170874831821
- type: precision
value: 12.178193046524806
- type: recall
value: 18.099999999999998
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (kzj-eng)
config: kzj-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 4.3999999999999995
- type: f1
value: 3.3905735425765524
- type: precision
value: 3.2588935800436625
- type: recall
value: 4.3999999999999995
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (awa-eng)
config: awa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 37.66233766233766
- type: f1
value: 30.539579468150897
- type: precision
value: 28.60288100547841
- type: recall
value: 37.66233766233766
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (fao-eng)
config: fao-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 12.213740458015266
- type: f1
value: 8.297822182308039
- type: precision
value: 7.463649581970193
- type: recall
value: 12.213740458015266
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (mal-eng)
config: mal-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 78.31149927219796
- type: f1
value: 73.35759340126152
- type: precision
value: 71.26394953905871
- type: recall
value: 78.31149927219796
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ile-eng)
config: ile-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 51.800000000000004
- type: f1
value: 44.24010323010323
- type: precision
value: 41.450707972582975
- type: recall
value: 51.800000000000004
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (bos-eng)
config: bos-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 13.27683615819209
- type: f1
value: 9.167320569156727
- type: precision
value: 8.200402665583079
- type: recall
value: 13.27683615819209
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (cor-eng)
config: cor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 4.8
- type: f1
value: 3.1268763352790283
- type: precision
value: 2.84393718699601
- type: recall
value: 4.8
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (cat-eng)
config: cat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.1
- type: f1
value: 81.55
- type: precision
value: 79.98166666666665
- type: recall
value: 85.1
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (eus-eng)
config: eus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 48.3
- type: f1
value: 42.347894491129786
- type: precision
value: 40.36040404040404
- type: recall
value: 48.3
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (yue-eng)
config: yue-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 78.8
- type: f1
value: 74.35484848484847
- type: precision
value: 72.43277777777777
- type: recall
value: 78.8
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (swe-eng)
config: swe-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 13.900000000000002
- type: f1
value: 10.718252991153888
- type: precision
value: 9.835761434404196
- type: recall
value: 13.900000000000002
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (dtp-eng)
config: dtp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 4.9
- type: f1
value: 3.371714825002496
- type: precision
value: 3.085928254003479
- type: recall
value: 4.9
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (kat-eng)
config: kat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 0.5361930294906166
- type: f1
value: 0.40389703692021933
- type: precision
value: 0.40302666854804575
- type: recall
value: 0.5361930294906166
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (jpn-eng)
config: jpn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 55.300000000000004
- type: f1
value: 48.83353113553113
- type: precision
value: 46.48630659536542
- type: recall
value: 55.300000000000004
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (csb-eng)
config: csb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.300395256916996
- type: f1
value: 5.261552988548536
- type: precision
value: 4.724388115499655
- type: recall
value: 8.300395256916996
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (xho-eng)
config: xho-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.450704225352112
- type: f1
value: 4.829974470478787
- type: precision
value: 4.337585798478816
- type: recall
value: 8.450704225352112
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (orv-eng)
config: orv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 1.0778443113772456
- type: f1
value: 0.5373251562068135
- type: precision
value: 0.5107640721914694
- type: recall
value: 1.0778443113772456
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ind-eng)
config: ind-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.5
- type: f1
value: 85.46333333333334
- type: precision
value: 84.1
- type: recall
value: 88.5
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (tuk-eng)
config: tuk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 5.41871921182266
- type: f1
value: 2.8063639248802965
- type: precision
value: 2.2699550039451513
- type: recall
value: 5.41871921182266
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (max-eng)
config: max-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 40.49295774647887
- type: f1
value: 33.455454951933824
- type: precision
value: 31.4339393461183
- type: recall
value: 40.49295774647887
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (swh-eng)
config: swh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 18.974358974358974
- type: f1
value: 14.517578026097205
- type: precision
value: 13.3510327465177
- type: recall
value: 18.974358974358974
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (hin-eng)
config: hin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.5
- type: f1
value: 85.34666666666666
- type: precision
value: 83.89999999999999
- type: recall
value: 88.5
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (dsb-eng)
config: dsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.1419624217119
- type: f1
value: 5.830783012763732
- type: precision
value: 5.4408714223116545
- type: recall
value: 8.1419624217119
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ber-eng)
config: ber-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 5.800000000000001
- type: f1
value: 3.9245687335866406
- type: precision
value: 3.5535667824951584
- type: recall
value: 5.800000000000001
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (tam-eng)
config: tam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 68.40390879478826
- type: f1
value: 62.25738069386277
- type: precision
value: 60.10935318752908
- type: recall
value: 68.40390879478826
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (slk-eng)
config: slk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 7.1
- type: f1
value: 5.4876787833762135
- type: precision
value: 5.126663482701374
- type: recall
value: 7.1
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (tgl-eng)
config: tgl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.9
- type: f1
value: 6.519531004112515
- type: precision
value: 5.987707404636394
- type: recall
value: 8.9
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ast-eng)
config: ast-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 66.92913385826772
- type: f1
value: 59.96062992125984
- type: precision
value: 57.13348331458567
- type: recall
value: 66.92913385826772
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (mkd-eng)
config: mkd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 4.3
- type: f1
value: 2.765805343607201
- type: precision
value: 2.5247851243177144
- type: recall
value: 4.3
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (khm-eng)
config: khm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 0.41551246537396125
- type: f1
value: 0.1497838495760933
- type: precision
value: 0.14429034844729552
- type: recall
value: 0.41551246537396125
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ces-eng)
config: ces-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 5.800000000000001
- type: f1
value: 3.761224995516873
- type: precision
value: 3.2689210175496086
- type: recall
value: 5.800000000000001
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (tzl-eng)
config: tzl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 16.346153846153847
- type: f1
value: 14.524291497975709
- type: precision
value: 13.995726495726496
- type: recall
value: 16.346153846153847
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (urd-eng)
config: urd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 67.80000000000001
- type: f1
value: 61.615800865800864
- type: precision
value: 59.12333333333334
- type: recall
value: 67.80000000000001
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (ara-eng)
config: ara-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 83.8
- type: f1
value: 80.08857142857143
- type: precision
value: 78.46666666666667
- type: recall
value: 83.8
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (kor-eng)
config: kor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 4.2
- type: f1
value: 2.6507751588440254
- type: precision
value: 2.335273168189835
- type: recall
value: 4.2
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (yid-eng)
config: yid-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 0.4716981132075472
- type: f1
value: 0.19293763102725367
- type: precision
value: 0.1622040325564188
- type: recall
value: 0.4716981132075472
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (fin-eng)
config: fin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 4.9
- type: f1
value: 3.5001791555125235
- type: precision
value: 3.277940522301425
- type: recall
value: 4.9
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (tha-eng)
config: tha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 0.9124087591240875
- type: f1
value: 0.5083420229405631
- type: precision
value: 0.4674562188049969
- type: recall
value: 0.9124087591240875
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (wuu-eng)
config: wuu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 79.4
- type: f1
value: 74.62333333333333
- type: precision
value: 72.52333333333334
- type: recall
value: 79.4
- task:
type: Clustering
dataset:
type: C-MTEB/ThuNewsClusteringP2P
name: MTEB ThuNewsClusteringP2P
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 51.02719281751054
- task:
type: Clustering
dataset:
type: C-MTEB/ThuNewsClusteringS2S
name: MTEB ThuNewsClusteringS2S
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 48.31885339280247
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.426
- type: map_at_10
value: 9.029
- type: map_at_100
value: 14.299999999999999
- type: map_at_1000
value: 15.798000000000002
- type: map_at_3
value: 4.626
- type: map_at_5
value: 6.221
- type: mrr_at_1
value: 32.653
- type: mrr_at_10
value: 46.608
- type: mrr_at_100
value: 47.195
- type: mrr_at_1000
value: 47.208
- type: mrr_at_3
value: 41.837
- type: mrr_at_5
value: 43.673
- type: ndcg_at_1
value: 29.592000000000002
- type: ndcg_at_10
value: 23.354
- type: ndcg_at_100
value: 33.875
- type: ndcg_at_1000
value: 45.369
- type: ndcg_at_3
value: 25.734
- type: ndcg_at_5
value: 23.873
- type: precision_at_1
value: 32.653
- type: precision_at_10
value: 21.224
- type: precision_at_100
value: 7.122000000000001
- type: precision_at_1000
value: 1.459
- type: precision_at_3
value: 26.531
- type: precision_at_5
value: 24.082
- type: recall_at_1
value: 2.426
- type: recall_at_10
value: 15.622
- type: recall_at_100
value: 44.318999999999996
- type: recall_at_1000
value: 78.632
- type: recall_at_3
value: 5.798
- type: recall_at_5
value: 8.927
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 67.9606
- type: ap
value: 12.665547829558923
- type: f1
value: 52.10043478110198
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 59.601018675721576
- type: f1
value: 59.91486569196274
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 37.881729581540135
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 83.68003814746379
- type: cos_sim_ap
value: 65.95659315362258
- type: cos_sim_f1
value: 61.94669484560291
- type: cos_sim_precision
value: 55.80706579225725
- type: cos_sim_recall
value: 69.6042216358839
- type: dot_accuracy
value: 81.97532335936103
- type: dot_ap
value: 58.99091918849294
- type: dot_f1
value: 57.098765432098766
- type: dot_precision
value: 51.8990073370738
- type: dot_recall
value: 63.45646437994723
- type: euclidean_accuracy
value: 83.18531322644095
- type: euclidean_ap
value: 64.5631762106556
- type: euclidean_f1
value: 61.150808574652125
- type: euclidean_precision
value: 58.25173155003582
- type: euclidean_recall
value: 64.35356200527704
- type: manhattan_accuracy
value: 83.14358943792097
- type: manhattan_ap
value: 64.73090464118813
- type: manhattan_f1
value: 61.228384019081695
- type: manhattan_precision
value: 55.86507072905332
- type: manhattan_recall
value: 67.73087071240106
- type: max_accuracy
value: 83.68003814746379
- type: max_ap
value: 65.95659315362258
- type: max_f1
value: 61.94669484560291
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.7161873714441
- type: cos_sim_ap
value: 85.10870963707444
- type: cos_sim_f1
value: 77.88396923766146
- type: cos_sim_precision
value: 75.59791274097695
- type: cos_sim_recall
value: 80.31259624268556
- type: dot_accuracy
value: 87.74595412737222
- type: dot_ap
value: 81.22910623983562
- type: dot_f1
value: 76.08511889448344
- type: dot_precision
value: 72.78672385908163
- type: dot_recall
value: 79.69664305512781
- type: euclidean_accuracy
value: 88.13404742500097
- type: euclidean_ap
value: 84.03032098854915
- type: euclidean_f1
value: 76.3909440662918
- type: euclidean_precision
value: 73.51894047279977
- type: euclidean_recall
value: 79.49645826917154
- type: manhattan_accuracy
value: 88.13598789148911
- type: manhattan_ap
value: 84.13258714083858
- type: manhattan_f1
value: 76.44922164566346
- type: manhattan_precision
value: 73.70640365923384
- type: manhattan_recall
value: 79.40406529103788
- type: max_accuracy
value: 88.7161873714441
- type: max_ap
value: 85.10870963707444
- type: max_f1
value: 77.88396923766146
- task:
type: Retrieval
dataset:
type: C-MTEB/VideoRetrieval
name: MTEB VideoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 41.8
- type: map_at_10
value: 50.57000000000001
- type: map_at_100
value: 51.271
- type: map_at_1000
value: 51.31099999999999
- type: map_at_3
value: 48.283
- type: map_at_5
value: 49.633
- type: mrr_at_1
value: 41.8
- type: mrr_at_10
value: 50.57000000000001
- type: mrr_at_100
value: 51.271
- type: mrr_at_1000
value: 51.31099999999999
- type: mrr_at_3
value: 48.283
- type: mrr_at_5
value: 49.633
- type: ndcg_at_1
value: 41.8
- type: ndcg_at_10
value: 55.071999999999996
- type: ndcg_at_100
value: 58.604
- type: ndcg_at_1000
value: 59.679
- type: ndcg_at_3
value: 50.394000000000005
- type: ndcg_at_5
value: 52.825
- type: precision_at_1
value: 41.8
- type: precision_at_10
value: 6.93
- type: precision_at_100
value: 0.861
- type: precision_at_1000
value: 0.095
- type: precision_at_3
value: 18.833
- type: precision_at_5
value: 12.479999999999999
- type: recall_at_1
value: 41.8
- type: recall_at_10
value: 69.3
- type: recall_at_100
value: 86.1
- type: recall_at_1000
value: 94.6
- type: recall_at_3
value: 56.49999999999999
- type: recall_at_5
value: 62.4
- task:
type: Classification
dataset:
type: C-MTEB/waimai-classification
name: MTEB Waimai
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 80.65
- type: ap
value: 59.927241826012924
- type: f1
value: 78.72456184299979
---
# Model Card for udever-bloom
<!-- Provide a quick summary of what the model is/does. -->
`udever-bloom-560m` is finetuned from [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) via [BitFit](https://aclanthology.org/2022.acl-short.1/) on MS MARCO Passage Ranking, SNLI and MultiNLI data.
It is a universal embedding model that works across tasks and across natural and programming languages.
(From a technical standpoint, `udever` is essentially `sgpt-bloom` with some minor improvements.)
<img width="338" height="259" src="https://user-images.githubusercontent.com/26690193/277643721-cdb7f227-cae5-40e1-b6e1-a201bde00339.png" />
## Model Details
### Model Description
- **Developed by:** Alibaba Group
- **Model type:** Transformer-based Language Model (decoder-only)
- **Language(s) (NLP):** Multiple; see [bloom training data](https://huggingface.co/bigscience/bloom-560m#training-data)
- **Finetuned from model:** [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [github.com/izhx/uni-rep](https://github.com/izhx/uni-rep)
- **Paper:** [Language Models are Universal Embedders](https://arxiv.org/pdf/2310.08232.pdf)
- **Training Date:** 2023-06
### Checkpoints
- [udever-bloom-560m](https://huggingface.co/izhx/udever-bloom-560m)
- [udever-bloom-1b1](https://huggingface.co/izhx/udever-bloom-1b1)
- [udever-bloom-3b](https://huggingface.co/izhx/udever-bloom-3b)
- [udever-bloom-7b1](https://huggingface.co/izhx/udever-bloom-7b1)
On ModelScope / 魔搭社区: [udever-bloom-560m](https://modelscope.cn/models/damo/udever-bloom-560m), [udever-bloom-1b1](https://modelscope.cn/models/damo/udever-bloom-1b1), [udever-bloom-3b](https://modelscope.cn/models/damo/udever-bloom-3b), [udever-bloom-7b1](https://modelscope.cn/models/damo/udever-bloom-7b1)
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import torch
from transformers import AutoTokenizer, BloomModel

tokenizer = AutoTokenizer.from_pretrained('izhx/udever-bloom-560m')
model = BloomModel.from_pretrained('izhx/udever-bloom-560m')

# Special boundary tokens: queries are wrapped in [BOQ]...[EOQ],
# documents in [BOD]...[EOD].
boq, eoq, bod, eod = '[BOQ]', '[EOQ]', '[BOD]', '[EOD]'
eoq_id, eod_id = tokenizer.convert_tokens_to_ids([eoq, eod])

# Left padding is required so that the last position (-1) always holds the
# end token whose hidden state is used as the embedding.
if tokenizer.padding_side != 'left':
    print('!!!', tokenizer.padding_side)
    tokenizer.padding_side = 'left'


def encode(texts: list, is_query: bool = True, max_length=300):
    bos = boq if is_query else bod
    eos_id = eoq_id if is_query else eod_id
    texts = [bos + t for t in texts]
    # Reserve one position for the end token appended below.
    encoding = tokenizer(
        texts, truncation=True, max_length=max_length - 1, padding=True
    )
    for ids, mask in zip(encoding['input_ids'], encoding['attention_mask']):
        ids.append(eos_id)
        mask.append(1)
    inputs = tokenizer.pad(encoding, return_tensors='pt')
    with torch.inference_mode():
        outputs = model(**inputs)
    # The embedding is the hidden state of the final ([EOQ]/[EOD]) token.
    embeds = outputs.last_hidden_state[:, -1]
    return embeds


encode(['I am Bert', 'You are Elmo'])
```
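For retrieval-style usage, embeddings are typically compared by cosine similarity. Below is a minimal sketch building on the `encode` helper above; the example texts are made up for illustration:

```python
import torch.nn.functional as F

# Encode a query and candidate documents, then rank documents by
# cosine similarity of their embeddings.
queries = ['how to bake bread']
docs = ['Mix flour, water and yeast, then bake at 220C.',
        'The capital of France is Paris.']

q_emb = encode(queries, is_query=True)
d_emb = encode(docs, is_query=False)

# (1, 1, H) vs (1, 2, H) -> (1, 2) similarity matrix
scores = F.cosine_similarity(q_emb.unsqueeze(1), d_emb.unsqueeze(0), dim=-1)
print(scores)  # higher score = more relevant document
```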
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
- MS MARCO Passage Ranking, with hard negatives retrieved by [this sentence-transformers script](https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/ms_marco/train_bi-encoder_mnrl.py#L86)
- SNLI and MultiNLI ([AllNLI.tsv.gz](https://sbert.net/datasets/AllNLI.tsv.gz))
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing
MS MARCO hard negatives are provided by [the same sentence-transformers script](https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/ms_marco/train_bi-encoder_mnrl.py#L86).
Negatives for SNLI and MultiNLI are randomly sampled.
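As an illustration of the random-negative sampling for NLI, a hypothetical sketch is shown below; `build_triplets`, `nli_pairs`, and the label scheme are assumptions for illustration, not the authors' actual pipeline:

```python
import random

# `nli_pairs` is assumed to be a list of (premise, hypothesis, label) tuples.
def build_triplets(nli_pairs, num_neg=1):
    positives = [(p, h) for p, h, label in nli_pairs if label == 'entailment']
    all_hypotheses = [h for _, h, _ in nli_pairs]
    triplets = []
    for premise, positive in positives:
        # Random (not mined) negatives, as described above.
        negatives = random.sample(all_hypotheses, num_neg)
        triplets.append((premise, positive, negatives))
    return triplets
```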
#### Training Hyperparameters
- **Training regime:** tf32, BitFit (bias-only finetuning; see the sketch after this list)
- **Batch size:** 1024
- **Epochs:** 3
- **Optimizer:** AdamW
- **Learning rate:** 1e-4
- **Scheduler:** constant with warmup
- **Warmup:** 0.25 epoch
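A minimal sketch of the BitFit setup referenced in the list — freezing everything except bias terms — might look like the following. This illustrates the technique using the `model` loaded earlier; it is not the authors' exact training code:

```python
import torch

# Freeze every parameter except bias terms, then optimize only the biases.
for name, param in model.named_parameters():
    param.requires_grad = 'bias' in name

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad),
    lr=1e-4,  # matches the learning rate listed above
)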
## Evaluation
### Table 1: Massive Text Embedding Benchmark [MTEB](https://huggingface.co/spaces/mteb/leaderboard)
| MTEB | Avg. | Class. | Clust. | PairClass. | Rerank. | Retr. | STS | Summ. |
|-----------------------------|--------------|--------------|--------------|--------------|--------------|--------------|--------------|--------|
| #Datasets ➡️ | 56 | 12 | 11 | 3 | 4 | 15 | 10 | 1 |
||
| bge-large-en-v1.5 | **64.23** | **75.97** | 46.08| **87.12** | **60.03** | **54.29** | 83.11| 31.61 |
| bge-base-en-v1.5 | 63.55| 75.53| 45.77| 86.55| 58.86| 53.25| 82.4| 31.07 |
| gte-large | 63.13| 73.33| **46.84** | 85| 59.13| 52.22| **83.35** | 31.66 |
| gte-base | 62.39| 73.01| 46.2| 84.57| 58.61| 51.14| 82.3| 31.17 |
| e5-large-v2 | 62.25| 75.24| 44.49| 86.03| 56.61| 50.56| 82.05| 30.19 |
| instructor-xl | 61.79| 73.12| 44.74| 86.62| 57.29| 49.26| 83.06| 32.32 |
| instructor-large | 61.59| 73.86| 45.29| 85.89| 57.54| 47.57| 83.15| 31.84 |
| e5-base-v2 | 61.5 | 73.84| 43.8| 85.73| 55.91| 50.29| 81.05| 30.28 |
| e5-large | 61.42| 73.14| 43.33| 85.94| 56.53| 49.99| 82.06| 30.97 |
| text-embedding-ada-002 (OpenAI API) | 60.99| 70.93| 45.9 | 84.89| 56.32| 49.25| 80.97| 30.8 |
| e5-base | 60.44| 72.63| 42.11| 85.09| 55.7 | 48.75| 80.96| 31.01 |
| SGPT-5.8B-msmarco | 58.93| 68.13| 40.34| 82 | 56.56| 50.25| 78.1 | 31.46 |
| sgpt-bloom-7b1-msmarco | 57.59| 66.19| 38.93| 81.9 | 55.65| 48.22| 77.74| **33.6** |
||
| Udever-bloom-560m | 55.80| 68.04| 36.89| 81.05| 52.60| 41.19| 79.93| 32.06 |
| Udever-bloom-1b1 | 58.28| 70.18| 39.11| 83.11| 54.28| 45.27| 81.52| 31.10 |
| Udever-bloom-3b | 59.86| 71.91| 40.74| 84.06| 54.90| 47.67| 82.37| 30.62 |
| Udever-bloom-7b1 | 60.63 | 72.13| 40.81| 85.40| 55.91| 49.34| 83.01| 30.97 |
### Table 2: [CodeSearchNet](https://github.com/github/CodeSearchNet)
| CodeSearchNet | Go | Ruby | Python | Java | JS | PHP | Avg. |
|-|-|-|-|-|-|-|-|
| CodeBERT | 69.3 | 70.6 | 84.0 | 86.8 | 74.8 | 70.6 | 76.0 |
| GraphCodeBERT | 84.1 | 73.2 | 87.9 | 75.7 | 71.1 | 72.5 | 77.4 |
| cpt-code S | **97.7** | **86.3** | 99.8 | 94.0 | 86.0 | 96.7 | 93.4 |
| cpt-code M | 97.5 | 85.5 | **99.9** | **94.4** | **86.5** | **97.2** | **93.5** |
| sgpt-bloom-7b1-msmarco | 76.79 | 69.25 | 95.68 | 77.93 | 70.35 | 73.45 | 77.24 |
||
| Udever-bloom-560m | 75.38 | 66.67 | 96.23 | 78.99 | 69.39 | 73.69 | 76.73 |
| Udever-bloom-1b1 | 78.76 | 72.85 | 97.67 | 82.77 | 74.38 | 78.97 | 80.90 |
| Udever-bloom-3b | 80.63 | 75.40 | 98.02 | 83.88 | 76.18 | 79.67 | 82.29 |
| Udever-bloom-7b1 | 79.37 | 76.59 | 98.38 | 84.68 | 77.49 | 80.03 | 82.76 |
### Table 3: Chinese multi-domain retrieval [Multi-cpr](https://dl.acm.org/doi/10.1145/3477495.3531736)
| Model | Train | Backbone | E-commerce MRR@10 | E-commerce Recall@1k | Entertainment Video MRR@10 | Entertainment Video Recall@1k | Medical MRR@10 | Medical Recall@1k |
|--|--|--|--|--|--|--|--|--|
||
| BM25 | - | - | 0.225 | 0.815 | 0.225 | 0.780 | 0.187 | 0.482 |
| Doc2Query | - | - | 0.239 | 0.826 | 0.238 | 0.794 | 0.210 | 0.505 |
| DPR-1 | In-Domain | BERT | 0.270 | 0.921 | 0.254 | 0.934 | 0.327 | 0.747 |
| DPR-2 | In-Domain | BERT-CT | 0.289 | **0.926** | 0.263 | **0.935** | 0.339 | **0.769** |
| text-embedding-ada-002 | General | GPT | 0.183 | 0.825 | 0.159 | 0.786 | 0.245 | 0.593 |
| sgpt-bloom-7b1-msmarco | General | BLOOM | 0.242 | 0.840 | 0.227 | 0.829 | 0.311 | 0.675 |
||
| Udever-bloom-560m | General | BLOOM | 0.156 | 0.802 | 0.149 | 0.749 | 0.245 | 0.571 |
| Udever-bloom-1b1 | General | BLOOM | 0.244 | 0.863 | 0.208 | 0.815 | 0.241 | 0.557 |
| Udever-bloom-3b | General | BLOOM | 0.267 | 0.871 | 0.228 | 0.836 | 0.288 | 0.619 |
| Udever-bloom-7b1 | General | BLOOM | **0.296** | 0.889 | **0.267** | 0.907 | **0.343** | 0.705 |
For more results, refer to [the paper](https://arxiv.org/pdf/2310.08232.pdf), section 3.
## Technical Specifications
### Model Architecture and Objective
- Model: [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m).
- Objective: Contrastive loss with hard negatives (refer to [paper](https://arxiv.org/pdf/2310.08232.pdf) section 2.2); a simplified sketch follows.
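An InfoNCE-style sketch of such an objective with in-batch negatives (the paper's exact formulation and temperature may differ):

```python
import torch
import torch.nn.functional as F

def contrastive_loss(q, d, temperature=0.05):
    """q: (B, H) query embeddings; d: (B, H) matching document embeddings.
    Off-diagonal documents serve as in-batch negatives."""
    q = F.normalize(q, dim=-1)
    d = F.normalize(d, dim=-1)
    logits = q @ d.T / temperature                     # (B, B) similarity matrix
    labels = torch.arange(q.size(0), device=q.device)  # positives on the diagonal
    return F.cross_entropy(logits, labels)
```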
### Compute Infrastructure
- Nvidia A100 SXM4 80GB.
- torch 2.0.0, transformers 4.29.2.
## Citation
**BibTeX:**
```BibTeX
@article{zhang2023language,
title={Language Models are Universal Embedders},
author={Zhang, Xin and Li, Zehan and Zhang, Yanzhao and Long, Dingkun and Xie, Pengjun and Zhang, Meishan and Zhang, Min},
journal={arXiv preprint arXiv:2310.08232},
year={2023}
}
```
|
SanjiWatsuki/Silicon-Maid-7B | SanjiWatsuki | "2024-01-10T09:27:33Z" | 5,081 | 92 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"not-for-all-audiences",
"nsfw",
"en",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-27T02:27:53Z" | ---
license: cc-by-4.0
language:
- en
tags:
- merge
- not-for-all-audiences
- nsfw
---
<div style="display: flex; justify-content: center; align-items: center">
<img src="https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B/resolve/main/assets/cybermaid.png">
</div>
<p align="center">
<big><b>Top 1 RP Performer on MT-bench 🤪</b></big>
</p>
<p align="center">
<strong>Next Gen Silicon-Based RP Maid</strong>
</p>
## WTF is This?
Silicon-Maid-7B is another model targeted at being both strong at RP **and** being a smart cookie that can follow character cards very well. As of right now, Silicon-Maid-7B outscores both of my previous 7B RP models in my RP benchmark and I have been impressed by this model's creativity. It is suitable for RP/ERP and general use. Quants can be found [here](https://huggingface.co/collections/SanjiWatsuki/silicon-maid-7b-658d1669292816fe4992daa4).
It's built on [xDAN-AI/xDAN-L1-Chat-RL-v1](https://huggingface.co/xDAN-AI/xDAN-L1-Chat-RL-v1), a 7B model which scores unusually high on MT-Bench, and chargoddard/loyal-piano-m7, an Alpaca format 7B model with surprisingly creative outputs. I was excited to see this model for two main reasons:
* MT-Bench normally correlates well with real world model quality
* It was an Alpaca prompt model with high benches which meant I could try swapping out my Marcoroni frankenmerge used in my previous model.
**MT-Bench Average Turn**
| model | score | size
|--------------------|-----------|--------
| gpt-4 | 8.99 | -
| *xDAN-L1-Chat-RL-v1* | 8.24^1 | 7b
| Starling-7B | 8.09 | 7b
| Claude-2 | 8.06 | -
| **Silicon-Maid** | **7.96** | **7b**
| *Loyal-Macaroni-Maid*| 7.95 | 7b
| gpt-3.5-turbo | 7.94 | 20b?
| Claude-1 | 7.90 | -
| OpenChat-3.5 | 7.81 | -
| vicuna-33b-v1.3 | 7.12 | 33b
| wizardlm-30b | 7.01 | 30b
| Llama-2-70b-chat | 6.86 | 70b
^1 xDAN's testing placed it 8.35 - this number is from my independent MT-Bench run.
<img src="https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B/resolve/main/assets/fig-silicon-loyal.png">
It's unclear to me if xDAN-L1-Chat-RL-v1 is overtly benchmaxxing but it seemed like a solid 7B from my limited testing (although nothing that screams 2nd best model behind GPT-4). Amusingly, the model lost a lot of Reasoning and Coding skills in the merger. This was a much greater MT-Bench dropoff than I expected, perhaps suggesting the Math/Reasoning ability in the original model was rather dense and susceptible to being lost to a DARE TIES merger?
Besides that, the merger is almost identical to the Loyal-Macaroni-Maid merger with a new base "smart cookie" model. If you liked any of my previous RP models, give this one a shot and let me know in the Community tab what you think!
### The Sauce
```
models: # Top-Loyal-Bruins-Maid-DARE-7B
- model: mistralai/Mistral-7B-v0.1
# no parameters necessary for base model
- model: xDAN-AI/xDAN-L1-Chat-RL-v1
parameters:
weight: 0.4
density: 0.8
- model: chargoddard/loyal-piano-m7
parameters:
weight: 0.3
density: 0.8
- model: Undi95/Toppy-M-7B
parameters:
weight: 0.2
density: 0.4
- model: NeverSleep/Noromaid-7b-v0.2
parameters:
weight: 0.2
density: 0.4
- model: athirdpath/NSFW_DPO_vmgb-7b
parameters:
weight: 0.2
density: 0.4
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
```
For more information about why I use this merger, see the [Loyal-Macaroni-Maid repo](https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B#the-sauce-all-you-need-is-dare)
### Prompt Template (Alpaca)
I found the best SillyTavern results from using the Noromaid template but please try other templates! Let me know if you find anything good.
SillyTavern config files: [Context](https://files.catbox.moe/ifmhai.json), [Instruct](https://files.catbox.moe/ttw1l9.json).
Additionally, here is my highly recommended [Text Completion preset](https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B/blob/main/Characters/MinP.json). You can tweak this by raising temperature or lowering min p to boost creativity, or by raising min p to increase stability. You shouldn't need to touch anything else!
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
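On the sampler side, min-p filtering (the main knob in the preset above) keeps only tokens whose probability is at least `min_p` times that of the most likely token; a toy sketch:

```python
import torch

def min_p_filter(logits: torch.Tensor, min_p: float = 0.1) -> torch.Tensor:
    probs = torch.softmax(logits, dim=-1)
    threshold = min_p * probs.max(dim=-1, keepdim=True).values
    return logits.masked_fill(probs < threshold, float('-inf'))
```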
### Other Benchmarks
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---:|---:|---:|---:|---:|
| [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218) [📄](https://gist.github.com/mlabonne/36c412889c4acfad7061f269a31f9055) | 56.85 | 44.74 | 75.6 | 59.89 | 47.17 |
| [**Silicon-Maid-7B**](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B) [📄](https://gist.github.com/DHNishi/315ba1abba27af930f5f546af3515735) | **56.45**| 44.74| 74.26| 61.5| 45.32|
| [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) [📄](https://gist.github.com/mlabonne/14687f1eb3425b166db511f31f8e66f6) | 53.51 | 43.67 | 73.24 | 55.37 | 41.76 |
| [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) [📄](https://gist.github.com/mlabonne/88b21dd9698ffed75d6163ebdc2f6cc8) | 52.42 | 42.75 | 72.99 | 52.99 | 40.94 |
| [openchat/openchat_3.5](https://huggingface.co/openchat/openchat_3.5) [📄](https://gist.github.com/mlabonne/e23d7d8418619cf5b1ca10da391ac629) | 51.34 | 42.67 | 72.92 | 47.27 | 42.51 |
| [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) [📄](https://gist.github.com/mlabonne/c31cc46169ef3004c0df250017d5cac9) | 51.16 | 42.06 | 72.72 | 47.33 | 42.53 |
| [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) [📄](https://gist.github.com/mlabonne/32a36f448fd36a3100c325d51d01c0a1) | 50.99 | 37.33 | 71.83 | 55.1 | 39.7 |
|
1bitLLM/bitnet_b1_58-3B | 1bitLLM | "2024-03-29T11:57:44Z" | 5,080 | 186 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2402.17764",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-29T11:09:15Z" | ---
license: mit
---
This is a reproduction of the <a href="https://arxiv.org/abs/2402.17764"> BitNet b1.58</a> paper. The models are trained on the <a href="https://github.com/togethercomputer/RedPajama-Data">RedPajama dataset</a> for 100B tokens. The hyperparameters, as well as the two-stage LR and weight-decay schedule, follow the suggestions in the authors' follow-up <a href="https://github.com/microsoft/unilm/blob/master/bitnet/The-Era-of-1-bit-LLMs__Training_Tips_Code_FAQ.pdf">paper</a>. All models are open-source in the <a href="https://huggingface.co/1bitLLM">repo</a>. We will train larger models and/or more tokens when resources are available.
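BitNet b1.58 constrains weights to the ternary set {-1, 0, +1} via absmean quantization; a minimal sketch of that quantizer as described in the paper (not necessarily this repo's exact code):

```python
import torch

def absmean_ternary(w: torch.Tensor, eps: float = 1e-5):
    # scale by the mean absolute weight, then round-and-clip to {-1, 0, +1}
    scale = w.abs().mean().clamp(min=eps)
    w_q = (w / scale).round().clamp_(-1, 1)
    return w_q, scale  # keep the scale to rescale outputs
```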
## Results
PPL and zero-shot accuracy:
| Models | PPL| ARCe| ARCc| HS | BQ | OQ | PQ | WGe | Avg
|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| FP16 700M (reported) | 12.33 | 54.7 | 23.0 | 37.0 | 60.0 | 20.2 | 68.9 | 54.8 | 45.5 |
| BitNet b1.58 700M (reported) | 12.87 | 51.8 | 21.4 | 35.1 | 58.2 | 20.0 | 68.1 | 55.2 | 44.3 |
| BitNet b1.58 700M (reproduced) | 12.78 | 51.4 | 21.8 | 35.0 | 59.6 | 20.6 | 67.5 | 55.4 | 44.5 |
| FP16 1.3B (reported) | 11.25 | 56.9 | 23.5 | 38.5 | 59.1 | 21.6 | 70.0 | 53.9 | 46.2 |
| BitNet b1.58 1.3B (reported) | 11.29 | 54.9 | 24.2 | 37.7 | 56.7 | 19.6 | 68.8 | 55.8 | 45.4 |
| BitNet b1.58 1.3B (reproduced) | 11.19 | 55.8 | 23.7 | 37.6 | 59.0 | 20.2 | 69.2 | 56.0 | 45.9 |
| FP16 3B (reported) | 10.04 | 62.1 | 25.6 | 43.3 | 61.8 | 24.6 | 72.1 | 58.2 | 49.7 |
| BitNet b1.58 3B (reported) | 9.91 | 61.4 | 28.3 | 42.9 | 61.5 | 26.6 | 71.5 | 59.3 | 50.2 |
| BitNet b1.58 3B (reproduced) | 9.88 | 60.9 | 28.0 | 42.3 | 58.3 | 26.0 | 71.4 | 60.3 | 49.6 |
The differences between the reported numbers and the reproduced results likely stem from variance in training data processing, seeds, or other random factors.
## Evaluation
The evaluation pipelines are from the paper authors. Here are the commands to run the evaluation:
```
pip install lm-eval==0.3.0
```
```
python eval_ppl.py --hf_path 1bitLLM/bitnet_b1_58-3B --seqlen 2048
```
```
# NOTE: the task list was missing from the original snippet; the tasks below
# are chosen to match the benchmarks reported above
python eval_task.py --hf_path 1bitLLM/bitnet_b1_58-3B \
    --batch_size 1 \
    --tasks arc_easy,arc_challenge,hellaswag,boolq,openbookqa,piqa,winogrande \
    --output_path result.json \
    --num_fewshot 0 \
    --ctx_size 2048
```
|
digiplay/AI-infinity-V1-fp16 | digiplay | "2023-08-04T18:12:02Z" | 5,077 | 6 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-08-03T13:31:17Z" | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/121253/ai-infinity-realistic-better-hands
Demo image generated by Hugging Face's API:

Original author's demo images:

 |
qnguyen3/Master-Yi-9B | qnguyen3 | "2024-05-20T11:21:22Z" | 5,075 | 17 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-18T00:14:19Z" | ---
license: apache-2.0
---
## Model Description
Master is a collection of LLMs trained using human-collected seed questions, with answers regenerated by a mixture of high-performance open-source LLMs.
**Master-Yi-9B** is trained using the ORPO technique. The model shows strong reasoning ability on coding and math questions.
**Quantized Version**: [Here](https://huggingface.co/qnguyen3/Master-Yi-9B-GGUF)
**Community Quantization** (Thanks to [@LoneStriker](https://huggingface.co/LoneStriker))
- exl2: [Master-Yi-9B-8.0bpw-h8-exl2](https://huggingface.co/LoneStriker/Master-Yi-9B-8.0bpw-h8-exl2), [Master-Yi-9B-6.0bpw-h6-exl2](https://huggingface.co/LoneStriker/Master-Yi-9B-6.0bpw-h6-exl2), [Master-Yi-9B-5.0bpw-h6-exl2](https://huggingface.co/LoneStriker/Master-Yi-9B-5.0bpw-h6-exl2), [Master-Yi-9B-4.0bpw-h6-exl2](https://huggingface.co/LoneStriker/Master-Yi-9B-4.0bpw-h6-exl2)
- GGUFs: [Master-Yi-9B-GGUF](https://huggingface.co/LoneStriker/Master-Yi-9B-GGUF)
**Master-Yi-9B-Vision**: **Coming Soon**

## Prompt Template
```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
What is the meaning of life?<|im_end|>
<|im_start|>assistant
```
## Examples


## Inference Code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
    "qnguyen3/Master-Yi-9B",
    torch_dtype='auto',
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("qnguyen3/Master-Yi-9B")

prompt = "What is the meaning of life?"
messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=1024,
    eos_token_id=tokenizer.eos_token_id,
    temperature=0.25,
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids)[0]
print(response)
```
## Benchmarks
### Nous Benchmark:
| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|---------------------------------------------------|------:|------:|---------:|-------:|------:|
|[Master-Yi-9B](https://huggingface.co/qnguyen3/Master-Yi-9B)| 43.55| 71.48| 48.54| 41.43| 51.25|
### AGIEval
```
| Task |Version| Metric |Value| |Stderr|
|------------------------------|------:|--------|----:|---|-----:|
|agieval_aqua_rat | 0|acc |35.83|± | 3.01|
| | |acc_norm|31.89|± | 2.93|
|agieval_logiqa_en | 0|acc |38.25|± | 1.91|
| | |acc_norm|37.79|± | 1.90|
|agieval_lsat_ar | 0|acc |23.04|± | 2.78|
| | |acc_norm|20.43|± | 2.66|
|agieval_lsat_lr | 0|acc |48.04|± | 2.21|
| | |acc_norm|42.75|± | 2.19|
|agieval_lsat_rc | 0|acc |61.34|± | 2.97|
| | |acc_norm|52.79|± | 3.05|
|agieval_sat_en | 0|acc |79.13|± | 2.84|
| | |acc_norm|72.33|± | 3.12|
|agieval_sat_en_without_passage| 0|acc |44.17|± | 3.47|
| | |acc_norm|42.72|± | 3.45|
|agieval_sat_math | 0|acc |52.27|± | 3.38|
| | |acc_norm|47.73|± | 3.38|
Average: 43.55%
```
### GPT4All
```
| Task |Version| Metric |Value| |Stderr|
|-------------|------:|--------|----:|---|-----:|
|arc_challenge| 0|acc |54.95|± | 1.45|
| | |acc_norm|58.70|± | 1.44|
|arc_easy | 0|acc |82.28|± | 0.78|
| | |acc_norm|81.10|± | 0.80|
|boolq | 1|acc |86.15|± | 0.60|
|hellaswag | 0|acc |59.16|± | 0.49|
| | |acc_norm|77.53|± | 0.42|
|openbookqa | 0|acc |37.40|± | 2.17|
| | |acc_norm|44.00|± | 2.22|
|piqa | 0|acc |79.00|± | 0.95|
| | |acc_norm|80.25|± | 0.93|
|winogrande | 0|acc |72.61|± | 1.25|
Average: 71.48%
```
### TruthfulQA
```
| Task |Version|Metric|Value| |Stderr|
|-------------|------:|------|----:|---|-----:|
|truthfulqa_mc| 1|mc1 |33.05|± | 1.65|
| | |mc2 |48.54|± | 1.54|
Average: 48.54%
```
### Bigbench
```
| Task |Version| Metric |Value| |Stderr|
|------------------------------------------------|------:|---------------------|----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|54.74|± | 3.62|
|bigbench_date_understanding | 0|multiple_choice_grade|68.02|± | 2.43|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|40.31|± | 3.06|
|bigbench_geometric_shapes | 0|multiple_choice_grade|30.36|± | 2.43|
| | |exact_str_match | 2.23|± | 0.78|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|26.00|± | 1.96|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|20.71|± | 1.53|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|44.00|± | 2.87|
|bigbench_movie_recommendation | 0|multiple_choice_grade|35.00|± | 2.14|
|bigbench_navigate | 0|multiple_choice_grade|58.40|± | 1.56|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|61.80|± | 1.09|
|bigbench_ruin_names | 0|multiple_choice_grade|42.41|± | 2.34|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|31.56|± | 1.47|
|bigbench_snarks | 0|multiple_choice_grade|55.25|± | 3.71|
|bigbench_sports_understanding | 0|multiple_choice_grade|69.37|± | 1.47|
|bigbench_temporal_sequences | 0|multiple_choice_grade|27.70|± | 1.42|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|21.36|± | 1.16|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|14.69|± | 0.85|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|44.00|± | 2.87|
Average: 41.43%
```
**Average score**: 51.25%
### OpenLLM Benchmark:
| Model |ARC |HellaSwag|MMLU |TruthfulQA|Winogrande|GSM8K|Average|
|---------------------------------------------------|---:|--------:|----:|---------:|---------:|----:|------:|
|[Master-Yi-9B](https://huggingface.co/qnguyen3/Master-Yi-9B)|61.6| 79.89|69.95| 48.59| 77.35|67.48| 67.48|
### ARC
```
| Task |Version| Metric | Value | |Stderr|
|-------------|------:|--------------------|-------------|---|------|
|arc_challenge| 1|acc,none | 0.59| | |
| | |acc_stderr,none | 0.01| | |
| | |acc_norm,none | 0.62| | |
| | |acc_norm_stderr,none| 0.01| | |
| | |alias |arc_challenge| | |
Average: 61.6%
```
### HellaSwag
```
| Task |Version| Metric | Value | |Stderr|
|---------|------:|--------------------|---------|---|------|
|hellaswag| 1|acc,none | 0.61| | |
| | |acc_stderr,none | 0| | |
| | |acc_norm,none | 0.80| | |
| | |acc_norm_stderr,none| 0| | |
| | |alias |hellaswag| | |
Average: 79.89%
```
### MMLU
```
| Task |Version| Metric | Value | |Stderr|
|----------------------------------------|-------|---------------|---------------------------------------|---|------|
|mmlu |N/A |acc,none | 0.7| | |
| | |acc_stderr,none| 0| | |
| | |alias |mmlu | | |
|mmlu_abstract_algebra | 0|alias | - abstract_algebra | | |
| | |acc,none |0.46 | | |
| | |acc_stderr,none|0.05 | | |
|mmlu_anatomy | 0|alias | - anatomy | | |
| | |acc,none |0.64 | | |
| | |acc_stderr,none|0.04 | | |
|mmlu_astronomy | 0|alias | - astronomy | | |
| | |acc,none |0.77 | | |
| | |acc_stderr,none|0.03 | | |
|mmlu_business_ethics | 0|alias | - business_ethics | | |
| | |acc,none |0.76 | | |
| | |acc_stderr,none|0.04 | | |
|mmlu_clinical_knowledge | 0|alias | - clinical_knowledge | | |
| | |acc,none |0.71 | | |
| | |acc_stderr,none|0.03 | | |
|mmlu_college_biology | 0|alias | - college_biology | | |
| | |acc,none |0.82 | | |
| | |acc_stderr,none|0.03 | | |
|mmlu_college_chemistry | 0|alias | - college_chemistry | | |
| | |acc,none |0.52 | | |
| | |acc_stderr,none|0.05 | | |
|mmlu_college_computer_science | 0|alias | - college_computer_science | | |
| | |acc,none |0.56 | | |
| | |acc_stderr,none|0.05 | | |
|mmlu_college_mathematics | 0|alias | - college_mathematics | | |
| | |acc,none |0.44 | | |
| | |acc_stderr,none|0.05 | | |
|mmlu_college_medicine | 0|alias | - college_medicine | | |
| | |acc,none |0.72 | | |
| | |acc_stderr,none|0.03 | | |
|mmlu_college_physics | 0|alias | - college_physics | | |
| | |acc,none |0.45 | | |
| | |acc_stderr,none|0.05 | | |
|mmlu_computer_security | 0|alias | - computer_security | | |
| | |acc,none |0.81 | | |
| | |acc_stderr,none|0.04 | | |
|mmlu_conceptual_physics | 0|alias | - conceptual_physics | | |
| | |acc,none |0.74 | | |
| | |acc_stderr,none|0.03 | | |
|mmlu_econometrics | 0|alias | - econometrics | | |
| | |acc,none |0.65 | | |
| | |acc_stderr,none|0.04 | | |
|mmlu_electrical_engineering | 0|alias | - electrical_engineering | | |
| | |acc,none |0.72 | | |
| | |acc_stderr,none|0.04 | | |
|mmlu_elementary_mathematics | 0|alias | - elementary_mathematics | | |
| | |acc,none |0.62 | | |
| | |acc_stderr,none|0.02 | | |
|mmlu_formal_logic | 0|alias | - formal_logic | | |
| | |acc,none |0.57 | | |
| | |acc_stderr,none|0.04 | | |
|mmlu_global_facts | 0|alias | - global_facts | | |
| | |acc,none |0.46 | | |
| | |acc_stderr,none|0.05 | | |
|mmlu_high_school_biology | 0|alias | - high_school_biology | | |
| | |acc,none |0.86 | | |
| | |acc_stderr,none|0.02 | | |
|mmlu_high_school_chemistry | 0|alias | - high_school_chemistry | | |
| | |acc,none |0.67 | | |
| | |acc_stderr,none|0.03 | | |
|mmlu_high_school_computer_science | 0|alias | - high_school_computer_science | | |
| | |acc,none |0.84 | | |
| | |acc_stderr,none|0.04 | | |
|mmlu_high_school_european_history | 0|alias | - high_school_european_history | | |
| | |acc,none |0.82 | | |
| | |acc_stderr,none|0.03 | | |
|mmlu_high_school_geography | 0|alias | - high_school_geography | | |
| | |acc,none |0.86 | | |
| | |acc_stderr,none|0.02 | | |
|mmlu_high_school_government_and_politics| 0|alias | - high_school_government_and_politics| | |
| | |acc,none |0.90 | | |
| | |acc_stderr,none|0.02 | | |
|mmlu_high_school_macroeconomics | 0|alias | - high_school_macroeconomics | | |
| | |acc,none |0.75 | | |
| | |acc_stderr,none|0.02 | | |
|mmlu_high_school_mathematics | 0|alias | - high_school_mathematics | | |
| | |acc,none |0.43 | | |
| | |acc_stderr,none|0.03 | | |
|mmlu_high_school_microeconomics | 0|alias | - high_school_microeconomics | | |
| | |acc,none |0.86 | | |
| | |acc_stderr,none|0.02 | | |
|mmlu_high_school_physics | 0|alias | - high_school_physics | | |
| | |acc,none |0.45 | | |
| | |acc_stderr,none|0.04 | | |
|mmlu_high_school_psychology | 0|alias | - high_school_psychology | | |
| | |acc,none |0.87 | | |
| | |acc_stderr,none|0.01 | | |
|mmlu_high_school_statistics | 0|alias | - high_school_statistics | | |
| | |acc,none |0.68 | | |
| | |acc_stderr,none|0.03 | | |
|mmlu_high_school_us_history | 0|alias | - high_school_us_history | | |
| | |acc,none |0.85 | | |
| | |acc_stderr,none|0.02 | | |
|mmlu_high_school_world_history | 0|alias | - high_school_world_history | | |
| | |acc,none |0.85 | | |
| | |acc_stderr,none|0.02 | | |
|mmlu_human_aging | 0|alias | - human_aging | | |
| | |acc,none |0.76 | | |
| | |acc_stderr,none|0.03 | | |
|mmlu_human_sexuality | 0|alias | - human_sexuality | | |
| | |acc,none |0.78 | | |
| | |acc_stderr,none|0.04 | | |
|mmlu_humanities |N/A |alias | - humanities | | |
| | |acc,none |0.63 | | |
| | |acc_stderr,none|0.01 | | |
|mmlu_international_law | 0|alias | - international_law | | |
| | |acc,none |0.79 | | |
| | |acc_stderr,none|0.04 | | |
|mmlu_jurisprudence | 0|alias | - jurisprudence | | |
| | |acc,none |0.79 | | |
| | |acc_stderr,none|0.04 | | |
|mmlu_logical_fallacies | 0|alias | - logical_fallacies | | |
| | |acc,none |0.80 | | |
| | |acc_stderr,none|0.03 | | |
|mmlu_machine_learning | 0|alias | - machine_learning | | |
| | |acc,none |0.52 | | |
| | |acc_stderr,none|0.05 | | |
|mmlu_management | 0|alias | - management | | |
| | |acc,none |0.83 | | |
| | |acc_stderr,none|0.04 | | |
|mmlu_marketing | 0|alias | - marketing | | |
| | |acc,none |0.89 | | |
| | |acc_stderr,none|0.02 | | |
|mmlu_medical_genetics | 0|alias | - medical_genetics | | |
| | |acc,none |0.78 | | |
| | |acc_stderr,none|0.04 | | |
|mmlu_miscellaneous | 0|alias | - miscellaneous | | |
| | |acc,none |0.85 | | |
| | |acc_stderr,none|0.01 | | |
|mmlu_moral_disputes | 0|alias | - moral_disputes | | |
| | |acc,none |0.75 | | |
| | |acc_stderr,none|0.02 | | |
|mmlu_moral_scenarios | 0|alias | - moral_scenarios | | |
| | |acc,none |0.48 | | |
| | |acc_stderr,none|0.02 | | |
|mmlu_nutrition | 0|alias | - nutrition | | |
| | |acc,none |0.77 | | |
| | |acc_stderr,none|0.02 | | |
|mmlu_other |N/A |alias | - other | | |
| | |acc,none |0.75 | | |
| | |acc_stderr,none|0.01 | | |
|mmlu_philosophy | 0|alias | - philosophy | | |
| | |acc,none |0.78 | | |
| | |acc_stderr,none|0.02 | | |
|mmlu_prehistory | 0|alias | - prehistory | | |
| | |acc,none |0.77 | | |
| | |acc_stderr,none|0.02 | | |
|mmlu_professional_accounting | 0|alias | - professional_accounting | | |
| | |acc,none |0.57 | | |
| | |acc_stderr,none|0.03 | | |
|mmlu_professional_law | 0|alias | - professional_law | | |
| | |acc,none |0.50 | | |
| | |acc_stderr,none|0.01 | | |
|mmlu_professional_medicine | 0|alias | - professional_medicine | | |
| | |acc,none |0.71 | | |
| | |acc_stderr,none|0.03 | | |
|mmlu_professional_psychology | 0|alias | - professional_psychology | | |
| | |acc,none |0.73 | | |
| | |acc_stderr,none|0.02 | | |
|mmlu_public_relations | 0|alias | - public_relations | | |
| | |acc,none |0.76 | | |
| | |acc_stderr,none|0.04 | | |
|mmlu_security_studies | 0|alias | - security_studies | | |
| | |acc,none |0.78 | | |
| | |acc_stderr,none|0.03 | | |
|mmlu_social_sciences |N/A |alias | - social_sciences | | |
| | |acc,none |0.81 | | |
| | |acc_stderr,none|0.01 | | |
|mmlu_sociology | 0|alias | - sociology | | |
| | |acc,none |0.86 | | |
| | |acc_stderr,none|0.02 | | |
|mmlu_stem |N/A |alias | - stem | | |
| | |acc,none |0.65 | | |
| | |acc_stderr,none|0.01 | | |
|mmlu_us_foreign_policy | 0|alias | - us_foreign_policy | | |
| | |acc,none |0.92 | | |
| | |acc_stderr,none|0.03 | | |
|mmlu_virology | 0|alias | - virology | | |
| | |acc,none |0.58 | | |
| | |acc_stderr,none|0.04 | | |
|mmlu_world_religions | 0|alias | - world_religions | | |
| | |acc,none |0.82 | | |
| | |acc_stderr,none|0.03 | | |
Average: 69.95%
```
### TruthfulQA
```
| Task |Version| Metric | Value | |Stderr|
|--------------|-------|-----------------------|-----------------|---|------|
|truthfulqa |N/A |bleu_acc,none | 0.45| | |
| | |bleu_acc_stderr,none | 0.02| | |
| | |rouge1_acc,none | 0.45| | |
| | |rouge1_acc_stderr,none | 0.02| | |
| | |rouge2_diff,none | 0.92| | |
| | |rouge2_diff_stderr,none| 1.07| | |
| | |bleu_max,none | 23.77| | |
| | |bleu_max_stderr,none | 0.81| | |
| | |rouge2_acc,none | 0.38| | |
| | |rouge2_acc_stderr,none | 0.02| | |
| | |acc,none | 0.41| | |
| | |acc_stderr,none | 0.01| | |
| | |rougeL_diff,none | 1.57| | |
| | |rougeL_diff_stderr,none| 0.93| | |
| | |rougeL_acc,none | 0.46| | |
| | |rougeL_acc_stderr,none | 0.02| | |
| | |bleu_diff,none | 1.38| | |
| | |bleu_diff_stderr,none | 0.75| | |
| | |rouge2_max,none | 33.01| | |
| | |rouge2_max_stderr,none | 1.05| | |
| | |rouge1_diff,none | 1.72| | |
| | |rouge1_diff_stderr,none| 0.92| | |
| | |rougeL_max,none | 45.25| | |
| | |rougeL_max_stderr,none | 0.92| | |
| | |rouge1_max,none | 48.29| | |
| | |rouge1_max_stderr,none | 0.90| | |
| | |alias |truthfulqa | | |
|truthfulqa_gen| 3|bleu_max,none | 23.77| | |
| | |bleu_max_stderr,none | 0.81| | |
| | |bleu_acc,none | 0.45| | |
| | |bleu_acc_stderr,none | 0.02| | |
| | |bleu_diff,none | 1.38| | |
| | |bleu_diff_stderr,none | 0.75| | |
| | |rouge1_max,none | 48.29| | |
| | |rouge1_max_stderr,none | 0.90| | |
| | |rouge1_acc,none | 0.45| | |
| | |rouge1_acc_stderr,none | 0.02| | |
| | |rouge1_diff,none | 1.72| | |
| | |rouge1_diff_stderr,none| 0.92| | |
| | |rouge2_max,none | 33.01| | |
| | |rouge2_max_stderr,none | 1.05| | |
| | |rouge2_acc,none | 0.38| | |
| | |rouge2_acc_stderr,none | 0.02| | |
| | |rouge2_diff,none | 0.92| | |
| | |rouge2_diff_stderr,none| 1.07| | |
| | |rougeL_max,none | 45.25| | |
| | |rougeL_max_stderr,none | 0.92| | |
| | |rougeL_acc,none | 0.46| | |
| | |rougeL_acc_stderr,none | 0.02| | |
| | |rougeL_diff,none | 1.57| | |
| | |rougeL_diff_stderr,none| 0.93| | |
| | |alias | - truthfulqa_gen| | |
|truthfulqa_mc1| 2|acc,none | 0.33| | |
| | |acc_stderr,none | 0.02| | |
| | |alias | - truthfulqa_mc1| | |
|truthfulqa_mc2| 2|acc,none | 0.49| | |
| | |acc_stderr,none | 0.02| | |
| | |alias | - truthfulqa_mc2| | |
Average: 48.59%
```
### Winogrande
```
| Task |Version| Metric | Value | |Stderr|
|----------|------:|---------------|----------|---|------|
|winogrande| 1|acc,none | 0.77| | |
| | |acc_stderr,none| 0.01| | |
| | |alias |winogrande| | |
Average: 77.35%
```
### GSM8K
```
|Task |Version| Metric |Value| |Stderr|
|-----|------:|-----------------------------------|-----|---|------|
|gsm8k| 3|exact_match,strict-match | 0.67| | |
| | |exact_match_stderr,strict-match | 0.01| | |
| | |exact_match,flexible-extract | 0.68| | |
| | |exact_match_stderr,flexible-extract| 0.01| | |
| | |alias |gsm8k| | |
Average: 67.48%
```
**Average score**: 67.48%
|
mosaicml/mpt-30b | mosaicml | "2024-03-05T20:25:40Z" | 5,073 | 340 | transformers | [
"transformers",
"pytorch",
"mpt",
"text-generation",
"Composer",
"MosaicML",
"llm-foundry",
"StreamingDatasets",
"custom_code",
"dataset:allenai/c4",
"dataset:mc4",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:bigcode/the-stack-dedup",
"dataset:allenai/s2orc",
"arxiv:2108.12409",
"arxiv:2302.13971",
"arxiv:2205.14135",
"arxiv:2010.04245",
"arxiv:1909.08053",
"arxiv:2302.06675",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-06-20T16:29:39Z" | ---
license: apache-2.0
tags:
- Composer
- MosaicML
- llm-foundry
- StreamingDatasets
datasets:
- allenai/c4
- mc4
- togethercomputer/RedPajama-Data-1T
- bigcode/the-stack-dedup
- allenai/s2orc
inference: false
---
# MPT-30B
MPT-30B is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code.
This model was trained by [MosaicML](https://www.mosaicml.com).
MPT-30B is part of the family of Mosaic Pretrained Transformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference.
MPT-30B comes with special features that differentiate it from other LLMs, including an 8k token context window (which can be further extended via finetuning; see [MPT-7B-StoryWriter](https://huggingface.co/mosaicml/mpt-7b-storywriter)), support for context-length extrapolation via [ALiBi](https://arxiv.org/abs/2108.12409), and efficient inference + training via FlashAttention. It also has strong coding abilities thanks to its pretraining mix. MPT models can also be served efficiently with both standard HuggingFace pipelines and NVIDIA's [FasterTransformer](https://github.com/NVIDIA/FasterTransformer).
The size of MPT-30B was also specifically chosen to make it easy to deploy on a single GPU—either 1xA100-80GB in 16-bit precision or 1xA100-40GB in 8-bit precision.
This model uses the MosaicML LLM codebase, which can be found in the [llm-foundry repository](https://github.com/mosaicml/llm-foundry). It was trained by MosaicML’s NLP team on the [MosaicML platform](https://www.mosaicml.com/training) for LLM pretraining, finetuning, and inference.
### How is this model different?
MPT-30B is:
* **Licensed for the possibility of commercial use** (unlike [LLaMA](https://arxiv.org/abs/2302.13971)).
* **Trained on a large amount of data** (1T tokens like [LLaMA](https://arxiv.org/abs/2302.13971) vs. 300B for [Pythia](https://github.com/EleutherAI/pythia), 300B for [OpenLLaMA](https://github.com/openlm-research/open_llama), and 800B for [StableLM](https://github.com/Stability-AI/StableLM)).
* **Prepared to handle extremely long inputs** thanks to [ALiBi](https://arxiv.org/abs/2108.12409).
* **Capable of fast training and inference** (via [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) and [FasterTransformer](https://github.com/NVIDIA/FasterTransformer))
* **Equipped with highly efficient open-source training code** via the [llm-foundry repository](https://github.com/mosaicml/llm-foundry)
### Models finetuned off MPT-30B:
The following models are finetuned on MPT-30B:
* [MPT-30B-Instruct](https://huggingface.co/mosaicml/mpt-30b-instruct): a model for long-form instruction following (especially summarization and question-answering).
Built by finetuning MPT-30B on several carefully curated datasets.
* License: _CC-BY-SA-3.0_
* [MPT-30B-Chat](https://huggingface.co/mosaicml/mpt-30b-chat): a chatbot-like model for dialogue generation.
Built by finetuning MPT-30B on [ShareGPT-Vicuna](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered), [Camel-AI](https://huggingface.co/camel-ai),
[GPTeacher](https://github.com/teknium1/GPTeacher), [Guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco), [Baize](https://github.com/project-baize/baize-chatbot) and some generated datasets.
* License: _CC-By-NC-SA-4.0_
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-30b-chat)
## Model Date
June 22, 2023
## Model License
Apache-2.0
## Documentation
* [Blog post: MPT-30B: Raising the bar for open-source foundation models](https://www.mosaicml.com/blog/mpt-30b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
## How to Use
This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-30b',
trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-30b'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton' # change this to use triton-based FlashAttention
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
The model was initially trained with a sequence length of 2048, followed by an additional pretraining stage for sequence length adaptation up to 8192. However, ALiBi enables users to increase the maximum sequence length even further during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-30b'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 16384 # (input + output) tokens can now be up to 16384
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
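ALiBi makes this possible by adding head-specific linear distance penalties to the attention logits instead of using positional embeddings; a toy illustration of the bias (not MPT's actual implementation):

```python
import torch

def alibi_bias(n_heads: int, seq_len: int) -> torch.Tensor:
    # geometric slope schedule from the ALiBi paper (exact for power-of-2 head counts)
    slopes = torch.tensor([2 ** (-8 * (h + 1) / n_heads) for h in range(n_heads)])
    # signed distance of each key position j from each query position i
    dist = torch.arange(seq_len)[None, :] - torch.arange(seq_len)[:, None]
    # penalize attention to the past proportionally to distance (future is masked anyway)
    return slopes[:, None, None] * dist.clamp(max=0)  # (n_heads, seq_len, seq_len)
```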
This model was trained with the MPT-30B tokenizer which is identical to the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('mosaicml/mpt-30b')
```
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
from transformers import pipeline
import torch

with torch.autocast('cuda', dtype=torch.bfloat16):
    inputs = tokenizer('Here is a recipe for vegan banana bread:\n', return_tensors="pt").to('cuda')
    outputs = model.generate(**inputs, max_new_tokens=100)
    print(tokenizer.batch_decode(outputs, skip_special_tokens=True))

# or using the HF pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
    print(
        pipe('Here is a recipe for vegan banana bread:\n',
             max_new_tokens=100,
             do_sample=True,
             use_cache=True))
```
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
|n_parameters | 29.95B |
|n_layers | 48 |
| n_heads | 64 |
| d_model | 7168 |
| vocab size | 50432 |
| sequence length | 8192 |
## Training Data
### Streaming Datasets
Data was formatted using the MosaicML [StreamingDataset](https://github.com/mosaicml/streaming) library to host our data in object storage and efficiently stream it to our compute cluster during training.
StreamingDataset obviates the need to download the whole dataset before starting training, and allows instant resumption of training from any point in the dataset.
### Data Mix
The model was trained for 1T tokens on the following data mix:
| Data Source | Number of Tokens in Source | Proportion | Effective Number of Tokens | Epochs |
|-------------|----------------------------|------------|----------------------------|--------|
| mC4 3.1.0 - English (200+ words) | 2417.99 B | 33.50% | 335 B | 0.14 |
| c4 - English - SemDedup 80% | 100.42 B | 29.90% | 299 B | 2.98 |
| RedPajama - CommonCrawl | 878.45 B | 8.50% | 85 B | 0.097 |
| The Stack - Selected Languages | 463.78 B | 10.00% | 100 B | 0.22 |
| RedPajama - Wikipedia | 4.87 B | 4.00% | 40 B | 8.21 |
| The Stack - Markdown | 107.07 B | 4.50% | 45 B | 0.42 |
| Semantic Scholar ORC | 48.95 B | 3.30% | 33 B | 0.67 |
| RedPajama - Books | 26.02 B | 3.00% | 30 B | 1.15 |
| RedPajama - arXiv | 28.10 B | 1.90% | 19 B | 0.68 |
| RedPajama - StackExchange | 20.54 B | 1.40% | 14 B |0.68 |
Samples for each batch were selected from one of the datasets with the probability specified above. The examples were shuffled within each dataset, and each example was constructed from as many sequences from that dataset as were necessary to fill the sequence length. To build 8k support into MPT-30B efficiently, we first pre-trained on 1T tokens using sequences that were 2k tokens long, and then trained for an additional 50B tokens using sequences that were 8k tokens long.
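In toy form, that packing strategy looks roughly like this (illustrative only, not MosaicML's actual dataloader):

```python
def pack_sequences(token_streams, seq_len=2048):
    """Greedily concatenate tokenized documents so every training
    sample is exactly seq_len tokens long."""
    buffer, packed = [], []
    for tokens in token_streams:  # each item: list of token ids for one document
        buffer.extend(tokens)
        while len(buffer) >= seq_len:
            packed.append(buffer[:seq_len])
            buffer = buffer[seq_len:]
    return packed
```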
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. This BPE tokenizer has a number of desirable characteristics,
most of which are relevant for tokenizing code:
(1) It was trained on a diverse mix of data that includes code (The Pile)
(2) It applies consistent space delimitation, unlike the GPT2 tokenizer which tokenizes inconsistently depending on the presence of prefix spaces
(3) It contains tokens for repeated space characters, which allows superior compression of text with large amounts of repeated space characters.
The model vocabulary size of 50432 was set to be a multiple of 128 (as in [MEGATRON-LM](https://arxiv.org/abs/1909.08053)).
### Training Configuration
The model was trained in three stages using the [MosaicML Platform](https://www.mosaicml.com/platform):
(i) First it was trained on 440 A100-40GBs with a batch size of 1760.
(ii) Then, on 216 A100-40GBs with a batch size of 1728.
(iii) Training was completed on 256 H100-80GBs with a batch size of 512 with 8k context length and 50B tokens.
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-30B (Base) is **not** intended for deployment without finetuning.
It should not be used for human-facing interactions without further guardrails and user consent.
MPT-30B can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-30B was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-30b).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-30B: Raising the bar
for open-source foundation models},
year = {2023},
url = {www.mosaicml.com/blog/mpt-30b},
note = {Accessed: 2023-06-22},
urldate = {2023-06-22}
}
``` |
Salesforce/blip-itm-base-coco | Salesforce | "2023-08-01T14:49:10Z" | 5,071 | 12 | transformers | [
"transformers",
"pytorch",
"tf",
"blip",
"image-text-matching",
"arxiv:2201.12086",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] | null | "2022-12-12T17:53:18Z" | ---
pipeline_tag: other
tags:
- image-text-matching
language:
- en
license: bsd-3-clause
---
# BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
Model card for BLIP trained on image-text matching - base architecture (with ViT base backbone) trained on COCO dataset.
|  |
|:--:|
| <b>Figure from the official BLIP repo (image source: https://github.com/salesforce/BLIP)</b> |
## TL;DR
Authors from the [paper](https://arxiv.org/abs/2201.12086) write in the abstract:
*Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to video-language tasks in a zero-shot manner. Code, models, and datasets are released.*
## Usage
You can use this model to score how well a text matches an image, either via the image-text matching (ITM) head or via image-text cosine similarity
### Using the Pytorch model
#### Running the model on CPU
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForImageTextRetrieval
processor = BlipProcessor.from_pretrained("Salesforce/blip-itm-base-coco")
model = BlipForImageTextRetrieval.from_pretrained("Salesforce/blip-itm-base-coco")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "A woman and a dog sitting together on a beach."
inputs = processor(raw_image, question, return_tensors="pt")
itm_scores = model(**inputs)[0]
cosine_score = model(**inputs, use_itm_head=False)[0]
```
</details>
#### Running the model on GPU
##### In full precision
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForImageTextRetrieval
processor = BlipProcessor.from_pretrained("Salesforce/blip-itm-base-coco")
model = BlipForImageTextRetrieval.from_pretrained("Salesforce/blip-itm-base-coco").to("cuda")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "A woman and a dog sitting together on a beach."
inputs = processor(raw_image, question, return_tensors="pt").to("cuda")
itm_scores = model(**inputs)[0]
cosine_score = model(**inputs, use_itm_head=False)[0]
```
</details>
##### In half precision (`float16`)
<details>
<summary> Click to expand </summary>
```python
import torch
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForImageTextRetrieval
processor = BlipProcessor.from_pretrained("Salesforce/blip-itm-base-coco")
model = BlipForImageTextRetrieval.from_pretrained("Salesforce/blip-itm-base-coco", torch_dtype=torch.float16).to("cuda")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "A woman and a dog sitting together on a beach."
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)
itm_scores = model(**inputs)[0]
cosine_score = model(**inputs, use_itm_head=False)[0]
```
</details>
## BibTex and citation info
```
@misc{https://doi.org/10.48550/arxiv.2201.12086,
doi = {10.48550/ARXIV.2201.12086},
url = {https://arxiv.org/abs/2201.12086},
author = {Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven},
keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
``` |
hantian/layoutreader | hantian | "2024-04-11T15:23:23Z" | 5,071 | 10 | transformers | [
"transformers",
"pytorch",
"safetensors",
"layoutlmv3",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-02-28T09:10:22Z" | ---
library_name: transformers
---
# LayoutReader
A reading-order prediction model: it turns bboxes extracted from PDFs or detected by OCR into natural reading order.
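Under the hood this is a LayoutLMv3 model used for token classification: each input position corresponds to one bbox, and the per-token prediction is read as that box's index in the reading order. A rough loading sketch (`boxes_to_inputs` is a hypothetical stand-in for the input-building helpers in the linked repo):

```python
from transformers import LayoutLMv3ForTokenClassification

model = LayoutLMv3ForTokenClassification.from_pretrained("hantian/layoutreader")

# hypothetical: build input_ids/bbox/attention_mask from normalized (0-1000) boxes
# inputs = boxes_to_inputs(boxes)
# order = model(**inputs).logits.argmax(-1)  # per-box position in the reading order
```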
Please refer to [Github](https://github.com/ppaanngggg/layoutreader) for more details. |
facebook/data2vec-audio-base-960h | facebook | "2022-05-24T10:41:22Z" | 5,070 | 10 | transformers | [
"transformers",
"pytorch",
"data2vec-audio",
"automatic-speech-recognition",
"speech",
"hf-asr-leaderboard",
"en",
"dataset:librispeech_asr",
"arxiv:2202.03555",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-03-02T23:29:05Z" | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
- hf-asr-leaderboard
license: apache-2.0
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: data2vec-audio-base-960h
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 2.77
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 7.08
---
# Data2Vec-Audio-Base-960h
[Facebook's Data2Vec](https://ai.facebook.com/research/data2vec-a-general-framework-for-self-supervised-learning-in-speech-vision-and-language/)
This is the base model, pretrained and fine-tuned on 960 hours of Librispeech 16kHz sampled speech audio. When using the model,
make sure that your speech input is also sampled at 16kHz.
[Paper](https://arxiv.org/abs/2202.03555)
Authors: Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli
**Abstract**
While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches.
The original model can be found under https://github.com/pytorch/fairseq/tree/main/examples/data2vec .
# Pre-Training method

For more information, please take a look at the [official paper](https://arxiv.org/abs/2202.03555).
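In rough pseudocode, the objective regresses an EMA teacher's layer-averaged representations at the masked positions. A heavily simplified sketch with hypothetical `student`/`teacher` callables (not the fairseq implementation):

```python
import torch
import torch.nn.functional as F

def data2vec_loss(student, teacher, x, mask, top_k=8):
    # teacher: EMA copy of the student; it encodes the *unmasked* input
    with torch.no_grad():
        layer_outputs = teacher(x)                     # hypothetical: list of (B, T, H)
        targets = torch.stack(layer_outputs[-top_k:]).mean(dim=0)
    preds = student(x, mask=mask)                      # student sees the masked view
    m = mask.bool()                                    # regress only masked positions
    return F.smooth_l1_loss(preds[m], targets[m])
```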
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Data2VecAudioForCTC
from datasets import load_dataset
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/data2vec-audio-base-960h")
model = Data2VecAudioForCTC.from_pretrained("facebook/data2vec-audio-base-960h")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_values = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt", padding="longest").input_values  # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
## Evaluation
This code snippet shows how to evaluate **facebook/data2vec-audio-base-960h** on LibriSpeech's "clean" and "other" test data.
```python
from transformers import Wav2Vec2Processor, Data2VecAudioForCTC
from datasets import load_dataset
import torch
from jiwer import wer

# load model and processor (note: .to("cuda") belongs on the model, not the processor)
processor = Wav2Vec2Processor.from_pretrained("facebook/data2vec-audio-base-960h")
model = Data2VecAudioForCTC.from_pretrained("facebook/data2vec-audio-base-960h").to("cuda")

librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")

def map_to_pred(batch):
    input_values = processor(batch["audio"]["array"], sampling_rate=16_000, return_tensors="pt", padding="longest").input_values
    with torch.no_grad():
        logits = model(input_values.to("cuda")).logits

    predicted_ids = torch.argmax(logits, dim=-1)
    transcription = processor.batch_decode(predicted_ids)
    batch["transcription"] = transcription
    return batch

result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])

print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 2.77 | 7.08 | |
monologg/koelectra-base-v3-finetuned-korquad | monologg | "2023-06-12T12:29:43Z" | 5,066 | 4 | transformers | [
"transformers",
"pytorch",
"safetensors",
"electra",
"question-answering",
"endpoints_compatible",
"region:us"
] | question-answering | "2022-03-02T23:29:05Z" | Entry not found |
Helsinki-NLP/opus-mt-en-et | Helsinki-NLP | "2023-08-16T11:29:29Z" | 5,064 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"et",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-03-02T23:29:04Z" | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-et
* source languages: en
* target languages: et
* OPUS readme: [en-et](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-et/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-et/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-et/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-et/opus-2019-12-18.eval.txt)
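The checkpoint loads with the standard Marian classes in `transformers`; a quick usage sketch:

```python
from transformers import MarianMTModel, MarianTokenizer

name = "Helsinki-NLP/opus-mt-en-et"
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)

batch = tokenizer(["How are you today?"], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```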
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2018-enet.en.et | 21.8 | 0.540 |
| newstest2018-enet.en.et | 23.3 | 0.556 |
| Tatoeba.en.et | 54.0 | 0.717 |
|
latentcat/control_v1p_sd15_brightness | latentcat | "2023-05-25T10:35:20Z" | 5,057 | 180 | diffusers | [
"diffusers",
"safetensors",
"image-to-image",
"controlnet",
"en",
"dataset:ioclab/grayscale_image_aesthetic_3M",
"license:creativeml-openrail-m",
"region:us"
] | image-to-image | "2023-04-19T06:14:12Z" | ---
license: creativeml-openrail-m
datasets:
- ioclab/grayscale_image_aesthetic_3M
language:
- en
library_name: diffusers
tags:
- image-to-image
- controlnet
---
# Model Card for ioclab/ioc-controlnet
This model brings brightness control to Stable Diffusion, allowing users to colorize grayscale images or recolor generated images.
## Model Details
- **Developed by:** [@ciaochaos](https://github.com/ciaochaos)
- **Model type:** Stable Diffusion ControlNet model for [web UI](https://github.com/AUTOMATIC1111/stable-diffusion-webui)
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
## Uses
### HuggingFace Space Demo
[huggingface.co/spaces/ioclab/brightness-controlnet](https://huggingface.co/spaces/ioclab/brightness-controlnet)
## More Info
[Brightness ControlNet 训练流程](https://aigc.ioclab.com/sd-showcase/brightness-controlnet.html) (Chinese) |
stablediffusionapi/sdxxxl | stablediffusionapi | "2023-12-12T06:35:53Z" | 5,057 | 3 | diffusers | [
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2023-12-12T06:33:17Z" | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# sdxxxl API Inference

## Get API Key
Get API key from [Stable Diffusion API](http://stablediffusionapi.com/), No Payment needed.
Replace Key in below code, change **model_id** to "sdxxxl"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Try model for free: [Generate Images](https://stablediffusionapi.com/models/sdxxxl)
Model link: [View model](https://stablediffusionapi.com/models/sdxxxl)
Credits: [View credits](https://civitai.com/?query=sdxxxl)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json
url = "https://stablediffusionapi.com/api/v4/dreambooth"
payload = json.dumps({
"key": "your_api_key",
"model_id": "sdxxxl",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
> Use this coupon code to get 25% off **DMGG0RBN** |
google/siglip-base-patch16-256-multilingual | google | "2024-03-28T17:30:52Z" | 5,055 | 24 | transformers | [
"transformers",
"safetensors",
"siglip",
"zero-shot-image-classification",
"vision",
"arxiv:2303.15343",
"arxiv:2209.06794",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | zero-shot-image-classification | "2024-01-08T13:24:51Z" | ---
license: apache-2.0
tags:
- vision
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
candidate_labels: playing music, playing sports
example_title: Cat & Dog
---
# SigLIP (base-sized model, multilingual)
SigLIP model pre-trained on WebLI at resolution 256x256. It was introduced in the paper [Sigmoid Loss for Language Image Pre-Training](https://arxiv.org/abs/2303.15343) by Zhai et al. and first released in [this repository](https://github.com/google-research/big_vision).
Disclaimer: The team releasing SigLIP did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SigLIP is [CLIP](https://huggingface.co/docs/transformers/model_doc/clip), a multimodal model, with a better loss function. The sigmoid loss operates solely on image-text pairs and does not require a global view of the pairwise similarities for normalization. This allows further scaling up the batch size, while also performing better at smaller batch sizes.
A TLDR of SigLIP by one of the authors can be found [here](https://twitter.com/giffmana/status/1692641733459267713).
## Intended uses & limitations
You can use the raw model for tasks like zero-shot image classification and image-text retrieval. See the [model hub](https://huggingface.co/models?search=google/siglip) to look for
other versions on a task that interests you.
### How to use
Here is how to use this model to perform zero-shot image classification:
```python
from PIL import Image
import requests
from transformers import AutoProcessor, AutoModel
import torch
model = AutoModel.from_pretrained("google/siglip-base-patch16-256-multilingual")
processor = AutoProcessor.from_pretrained("google/siglip-base-patch16-256-multilingual")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["a photo of 2 cats", "a photo of 2 dogs"]
inputs = processor(text=texts, images=image, padding="max_length", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
logits_per_image = outputs.logits_per_image
probs = torch.sigmoid(logits_per_image) # these are the probabilities
print(f"{probs[0][0]:.1%} that image 0 is '{texts[0]}'")
```
Alternatively, one can leverage the pipeline API which abstracts away the complexity for the user:
```python
from transformers import pipeline
from PIL import Image
import requests
# load pipe
image_classifier = pipeline(task="zero-shot-image-classification", model="google/siglip-base-patch16-256-multilingual")
# load image
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
# inference
outputs = image_classifier(image, candidate_labels=["2 cats", "a plane", "a remote"])
outputs = [{"score": round(output["score"], 4), "label": output["label"] } for output in outputs]
print(outputs)
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/siglip.html#).
## Training procedure
### Training data
SigLIP is pre-trained on the WebLI dataset without language filter [(Chen et al., 2023)](https://arxiv.org/abs/2209.06794).
### Preprocessing
Images are resized/rescaled to the same resolution (256x256) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5).
Texts are tokenized and padded to the same length (64 tokens).
### Compute
The model was trained on 16 TPU-v4 chips for three days.
## Evaluation results
Evaluation of SigLIP compared to CLIP is shown below (taken from the paper).
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/siglip_table.jpeg"
alt="drawing" width="600"/>
### BibTeX entry and citation info
```bibtex
@misc{zhai2023sigmoid,
title={Sigmoid Loss for Language Image Pre-Training},
author={Xiaohua Zhai and Basil Mustafa and Alexander Kolesnikov and Lucas Beyer},
year={2023},
eprint={2303.15343},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
``` |
mradermacher/Halu-8B-Llama3-v0.35-i1-GGUF | mradermacher | "2024-06-04T05:49:26Z" | 5,053 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Hastagaras/Halu-8B-Llama3-v0.35",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | "2024-06-03T16:14:54Z" | ---
base_model: Hastagaras/Halu-8B-Llama3-v0.35
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Hastagaras/Halu-8B-Llama3-v0.35
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.35-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
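For a concrete starting point, here is a minimal sketch of loading one of these quants with the `llama-cpp-python` bindings; the chosen file name, context size, and prompt are assumptions (pick any quant from the table below):

```python
from llama_cpp import Llama

# Load a mid-sized quant (file name assumed; any quant from the table works).
llm = Llama(model_path="Halu-8B-Llama3-v0.35.i1-Q4_K_M.gguf", n_ctx=4096)

out = llm("Write a haiku about quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```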
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.35-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.35.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.35-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.35.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.35-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.35.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.35-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.35.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.35-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.35.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.35-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.35.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.35-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.35.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.35-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.35.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.35-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.35.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.35-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.35.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.35-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.35.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.35-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.35.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.35-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.35.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.35-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.35.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.35-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.35.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.35-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.35.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.35-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.35.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.35-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.35.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.35-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.35.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.35-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.35.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Halu-8B-Llama3-v0.35-i1-GGUF/resolve/main/Halu-8B-Llama3-v0.35.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
huggyllama/llama-65b | huggyllama | "2023-04-07T15:51:00Z" | 5,052 | 72 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-04-04T01:43:00Z" | ---
license: other
---
This repository contains the weights for the LLaMA-65b model. The model is under a non-commercial license (see the LICENSE file).
You should only use this repository if you have been granted access to the model by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform?usp=send_form) but either lost your copy of the weights or ran into trouble converting them to the Transformers format.
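For reference, a minimal sketch of loading the converted weights with `transformers` (hardware permitting; `device_map="auto"` requires the `accelerate` package, and the prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-65b")
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-65b", torch_dtype="auto", device_map="auto"
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```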
|