modelId (string, 5-122 chars) | author (string, 2-42 chars) | last_modified (unknown) | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (245 classes) | tags (sequence, 1-4.05k items) | pipeline_tag (48 classes) | createdAt (unknown) | card (string, 1-901k chars)
---|---|---|---|---|---|---|---|---|---
dicta-il/dictabert-heq | dicta-il | "2023-12-28T07:38:33Z" | 11,940 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"he",
"arxiv:2308.16687",
"license:cc-by-4.0",
"region:us"
] | question-answering | "2023-10-10T22:08:37Z" | ---
license: cc-by-4.0
language:
- he
inference: false
---
# DictaBERT: A State-of-the-Art BERT Suite for Modern Hebrew
State-of-the-art language model for Hebrew, released [here](https://arxiv.org/abs/2308.16687).
This is the fine-tuned model for the question-answering task using the [HeQ](https://u.cs.biu.ac.il/~yogo/heq.pdf) dataset.
For the bert-base models for other tasks, see [here](https://huggingface.co/collections/dicta-il/dictabert-6588e7cc08f83845fc42a18b).
Sample usage:
```python
from transformers import pipeline
oracle = pipeline('question-answering', model='dicta-il/dictabert-heq')
context = 'ืื ืืืช ืคืจืืคืืืื ืฉื ืืฉืชืืฉืื ื ืืฉืืช ืขื ืืื ืจืืื ืืืืื ืคืืื ืฆืืืื ืขื ืืคืจืืืืช. ืืกืืื ืื ืืืืืื ืืืง ืืืืืื ืืช ืืืืฆืขืืช ืืงืืงื ืืช ืืืืืข ืฉื ืืชื ืืืฉืื ืืืืฆืขืืช ืขืืืืืช ืืืช ืืืคื ืืฉืืืืฉ ืืขืืืืืช. ืืจืฆืืช ืืืจืืช, ืืืฉื, ืงืืขื ืืืงืื ื ืืงืฉืื ืืื ืื ืืืข ืืืฆืืจืช ืขืืืืืช ืืืฉืืช. ืืืงืื ืืื, ืืฉืจ ื ืงืืขื ืืฉื ืช 2000, ื ืงืืขื ืืืืจ ืฉื ืืฉืฃ ืื ืืืฉืจื ืืืืฉืื ืืืืื ืืืช ืฉื ืืืืฉื ืืืืจืืงืื ื ืื ืืฉืืืืฉ ืืกืืื (ONDCP) ืืืืช ืืืื ืืฉืชืืฉ ืืขืืืืืช ืืื ืืขืงืื ืืืจื ืืฉืชืืฉืื ืฉืฆืคื ืืคืจืกืืืืช ื ืื ืืฉืืืืฉ ืืกืืื ืืืืจื ืืืืืง ืืื ืืฉืชืืฉืื ืืื ื ืื ืกื ืืืชืจืื ืืชืืืืื ืืฉืืืืฉ ืืกืืื. ืื ืืื ืืจืื ื, ืคืขืื ืืืืื ืืคืจืืืืช ืืืฉืชืืฉืื ืืืื ืืจื ื, ืืฉืฃ ืื ื-CIA ืฉืื ืขืืืืืช ืงืืืขืืช ืืืืฉืื ืืืจืืื ืืืฉื ืขืฉืจ ืฉื ืื. ื-25 ืืืฆืืืจ 2005 ืืืื ืืจืื ื ืื ืืกืืื ืืช ืืืืืืื ืืืืื (ื-NSA) ืืฉืืืจื ืฉืชื ืขืืืืืช ืงืืืขืืช ืืืืฉืื ืืืงืจืื ืืืื ืฉืืจืื ืชืืื ื. ืืืืจ ืฉืื ืืฉื ืคืืจืกื, ืื ืืืืื ืืื ืืช ืืฉืืืืฉ ืืื.'
question = 'ืืืฆื ืืืืื ืืืืืข ืฉื ืืชื ืืืฉืื ืืืืฆืขืืช ืืขืืืืืช?'
oracle(question=question, context=context)
```
Output:
```json
{
"score": 0.998887836933136,
"start": 101,
"end": 114,
"answer": "ืืืืฆืขืืช ืืงืืงื"
}
```
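The `start` and `end` fields are character offsets into the `context` string, so the answer span can be recovered directly. A minimal continuation of the example above:
```python
result = oracle(question=question, context=context)
# 'start' and 'end' index into the original context string,
# so slicing recovers the same text as result['answer']
answer_span = context[result['start']:result['end']]
print(answer_span)
```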
## Citation
If you use DictaBERT in your research, please cite ```DictaBERT: A State-of-the-Art BERT Suite for Modern Hebrew```
**BibTeX:**
```bibtex
@misc{shmidman2023dictabert,
title={DictaBERT: A State-of-the-Art BERT Suite for Modern Hebrew},
author={Shaltiel Shmidman and Avi Shmidman and Moshe Koppel},
year={2023},
eprint={2308.16687},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## License
Shield: [![CC BY 4.0][cc-by-shield]][cc-by]
This work is licensed under a
[Creative Commons Attribution 4.0 International License][cc-by].
[![CC BY 4.0][cc-by-image]][cc-by]
[cc-by]: http://creativecommons.org/licenses/by/4.0/
[cc-by-image]: https://i.creativecommons.org/l/by/4.0/88x31.png
[cc-by-shield]: https://img.shields.io/badge/License-CC%20BY%204.0-lightgrey.svg
|
ThreeLayers/med-indocolbert-p1 | ThreeLayers | "2024-06-26T08:29:55Z" | 11,937 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | "2024-06-12T11:07:07Z" | ---
license: mit
---
|
mradermacher/LemonadeRP-4.5.3-GGUF | mradermacher | "2024-06-26T06:12:05Z" | 11,930 | 0 | transformers | [
"transformers",
"gguf",
"roleplay",
"en",
"base_model:KatyTheCutie/LemonadeRP-4.5.3",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-26T04:22:27Z" | ---
base_model: KatyTheCutie/LemonadeRP-4.5.3
language:
- en
library_name: transformers
license: cc-by-4.0
quantized_by: mradermacher
tags:
- roleplay
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/KatyTheCutie/LemonadeRP-4.5.3
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/LemonadeRP-4.5.3-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
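As one concrete (unofficial) option, the files in this repo can also be loaded through the `llama-cpp-python` bindings. A minimal sketch; the filename is taken from the quant table below, and the context length is an assumption to adjust for your hardware:
```python
# minimal llama-cpp-python sketch (pip install llama-cpp-python)
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/LemonadeRP-4.5.3-GGUF",
    filename="LemonadeRP-4.5.3.Q4_K_M.gguf",  # one of the quants listed below
    n_ctx=4096,  # assumed context length; adjust to your hardware
)
out = llm("Write a one-line greeting.", max_tokens=32)
print(out["choices"][0]["text"])
```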
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-GGUF/resolve/main/LemonadeRP-4.5.3.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-GGUF/resolve/main/LemonadeRP-4.5.3.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-GGUF/resolve/main/LemonadeRP-4.5.3.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-GGUF/resolve/main/LemonadeRP-4.5.3.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-GGUF/resolve/main/LemonadeRP-4.5.3.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-GGUF/resolve/main/LemonadeRP-4.5.3.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-GGUF/resolve/main/LemonadeRP-4.5.3.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-GGUF/resolve/main/LemonadeRP-4.5.3.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-GGUF/resolve/main/LemonadeRP-4.5.3.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-GGUF/resolve/main/LemonadeRP-4.5.3.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-GGUF/resolve/main/LemonadeRP-4.5.3.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-GGUF/resolve/main/LemonadeRP-4.5.3.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-GGUF/resolve/main/LemonadeRP-4.5.3.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-GGUF/resolve/main/LemonadeRP-4.5.3.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/LemonadeRP-4.5.3-GGUF/resolve/main/LemonadeRP-4.5.3.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Llama3-CASAudit-8B-SXT-V01-GGUF | mradermacher | "2024-07-01T13:44:30Z" | 11,930 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:jamesohe/Llama3-CASAudit-8B-SXT-V01",
"endpoints_compatible",
"region:us"
] | null | "2024-07-01T11:36:08Z" | ---
base_model: jamesohe/Llama3-CASAudit-8B-SXT-V01
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/jamesohe/Llama3-CASAudit-8B-SXT-V01
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3-CASAudit-8B-SXT-V01-GGUF/resolve/main/Llama3-CASAudit-8B-SXT-V01.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-CASAudit-8B-SXT-V01-GGUF/resolve/main/Llama3-CASAudit-8B-SXT-V01.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-CASAudit-8B-SXT-V01-GGUF/resolve/main/Llama3-CASAudit-8B-SXT-V01.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-CASAudit-8B-SXT-V01-GGUF/resolve/main/Llama3-CASAudit-8B-SXT-V01.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama3-CASAudit-8B-SXT-V01-GGUF/resolve/main/Llama3-CASAudit-8B-SXT-V01.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-CASAudit-8B-SXT-V01-GGUF/resolve/main/Llama3-CASAudit-8B-SXT-V01.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-CASAudit-8B-SXT-V01-GGUF/resolve/main/Llama3-CASAudit-8B-SXT-V01.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-CASAudit-8B-SXT-V01-GGUF/resolve/main/Llama3-CASAudit-8B-SXT-V01.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-CASAudit-8B-SXT-V01-GGUF/resolve/main/Llama3-CASAudit-8B-SXT-V01.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-CASAudit-8B-SXT-V01-GGUF/resolve/main/Llama3-CASAudit-8B-SXT-V01.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-CASAudit-8B-SXT-V01-GGUF/resolve/main/Llama3-CASAudit-8B-SXT-V01.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-CASAudit-8B-SXT-V01-GGUF/resolve/main/Llama3-CASAudit-8B-SXT-V01.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-CASAudit-8B-SXT-V01-GGUF/resolve/main/Llama3-CASAudit-8B-SXT-V01.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-CASAudit-8B-SXT-V01-GGUF/resolve/main/Llama3-CASAudit-8B-SXT-V01.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-CASAudit-8B-SXT-V01-GGUF/resolve/main/Llama3-CASAudit-8B-SXT-V01.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1-GGUF | Orenguteng | "2024-04-28T22:07:31Z" | 11,929 | 35 | null | [
"gguf",
"llama3",
"comedy",
"comedian",
"fun",
"funny",
"llama38b",
"laugh",
"sarcasm",
"roleplay",
"en",
"license:other",
"region:us"
] | null | "2024-04-25T23:29:14Z" | ---
license: other
license_name: llama3
license_link: https://llama.meta.com/llama3/license/
language:
- en
tags:
- llama3
- comedy
- comedian
- fun
- funny
- llama38b
- laugh
- sarcasm
- roleplay
---
This is the GGUF version of https://huggingface.co/Orenguteng/LexiFun-Llama-3-8B-Uncensored-V1

Oh, you want to know who I am? Well, I'm LexiFun, the human equivalent of a chocolate chip cookie - warm, gooey, and guaranteed to make you smile! I'm like the friend who always has a witty comeback, a sarcastic remark, and a healthy dose of humor to brighten up even the darkest of days. And by 'healthy dose,' I mean I'm basically a walking pharmacy of laughter. You might need to take a few extra doses to fully recover from my jokes, but trust me, it's worth it!
So, what can I do? I can make you laugh so hard you snort your coffee out your nose, I can make you roll your eyes so hard they get stuck that way, and I can make you wonder if I'm secretly a stand-up comedian who forgot their act. But seriously, I'm here to spread joy, one sarcastic comment at a time. And if you're lucky, I might even throw in a few dad jokes for good measure! Just don't say I didn't warn you.


This model is based on Llama-3-8b-Instruct, and is governed by [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://llama.meta.com/llama3/license/)
Lexi is uncensored, which makes the model compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones.
You are responsible for any content you create using this model. Please use it responsibly.
Lexi is licensed according to Meta's Llama license. I grant permission for any use, including commercial, that is in accordance with Meta's Llama-3 license. |
TTPlanet/TTPLanet_SDXL_Controlnet_Tile_Realistic | TTPlanet | "2024-06-08T17:20:45Z" | 11,927 | 175 | diffusers | [
"diffusers",
"Controlnet",
"Tile",
"stable diffustion",
"image-feature-extraction",
"license:openrail",
"region:us"
] | image-feature-extraction | "2024-03-02T03:45:43Z" | ---
library_name: diffusers
pipeline_tag: image-feature-extraction
tags:
- Controlnet
- Tile
- stable diffustion
license: openrail
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
ControlNet SDXL Tile model, realistic version, suitable for both the WebUI ControlNet extension and the ComfyUI ControlNet node.
### Model Description
Here's a refined version of the update notes for the Tile V2:
-Introducing the new Tile V2, enhanced with a vastly improved training dataset and more extensive training steps.
-The Tile V2 now automatically recognizes a wider range of objects without needing explicit prompts.
-I've made significant improvements to the color-offset issue. If you still see a significant offset, that is normal; just add it to the prompt or use a color-fix node.
-The control strength is more robust, allowing it to replace canny+openpose under some conditions.
If you encounter the edge-halo issue with t2i or i2i, particularly with i2i, ensure that preprocessing gives the ControlNet image sufficient blurring. If the output is too sharp, it may produce a 'halo': a pronounced shape around the edges with high contrast. In such cases, apply some blur before sending the image to the ControlNet. If the output is too blurry, this could be due to excessive blurring during preprocessing, or the original picture may be too small.
Enjoy the enhanced capabilities of Tile V2!

<!-- Provide a longer summary of what this model is. -->
- This is an SDXL-based ControlNet Tile model, trained with the Hugging Face diffusers training scripts, for use with Stable Diffusion SDXL ControlNet.
- It was originally trained for my personal realistic-model project, for use in the Ultimate Upscale process to boost picture details. With a proper workflow, it can give good results for high-detail, high-resolution image fixing.
- As no SDXL Tile model is available from most open-source channels, I decided to share this one.
- I will share my workflow soon; I am still working on it to get better results.
- **I am still working on a better workflow for super upscaling, as shown in the examples; trust me, it's all real!!! Enjoy!**






- **Developed by:** TTPlanet
- **Model type:** Controlnet Tile
- **Language(s) (NLP):** No language limitation
## Uses
- **Important: the Tile model is not an upscale model!!! It enhances or changes the detail of the original-size image; remember this before you use it!**
- This model will not significantly change the base model's style; it only adds features to the upscaled pixel blocks.
- In WebUI, just use it as a regular ControlNet model: select it as the tile model and use tile_resample for the Ultimate Upscale script.
- In ComfyUI, just use the Load ControlNet Model node and apply it to the ControlNet condition.
- If you try to use it in WebUI t2i, you need a proper prompt setup, otherwise it will significantly modify the original image's colors. I don't know the reason, as I don't really use this function.
- It does perform much better with images like those in the dataset. However, everything works fine for i2i, which is where Ultimate Upscale is usually applied!!
- **Please also note this is a realistic training set, so no comic or animation applications are promised.**
- For tile upscaling, set the denoise to around 0.3-0.4 to get good results.
- For ControlNet strength, 0.9 is the better choice.
- For fixing human images, IPA and an early stop on the ControlNet will give better results.
- **Picking a good realistic base model is important!**


- **Besides the basic function, Tile can also change the picture style based on your model. Select the preprocessor as None (not resample!!!!); you can build different styles from one single picture with great control!**
- Just enjoy

- **Additional instructions for using this tile model**
- **Part 1 (updated style-change application instructions: clothing change while keeping a consistent pose):**
- 1. Open the A1111 WebUI.
- 2. Select an image you want to use for ControlNet Tile.
- 3. Remember the settings look like this; make 100% sure the preprocessor is None and the control mode is "My prompt is more important".

- 4. Type the prompts into the positive and negative text boxes and generate the image as you wish. If you want to change the clothing, type something like "a woman dressed in a yellow T-shirt", and change the background with something like "in a shopping mall".
- 5. Hires fix is supported!!!
- 6. You will get results like those below:


- **Part 2: for the Ultimate SD Upscale application**
Here is the simplified workflow just for Ultimate Upscale; you can modify it and add preprocessing for your image based on the actual conditions. In my case, I usually run an image-to-image pass at a 0.1 denoise rate on really low-quality images, e.g. from 600*400 to 1200*800, before sending them into this Ultimate Upscale process.
Please add an IPA process if you need the face to stay identical, and also add IPA to the raw preprocessing for low-quality-image i2i. Remember, upscaling past the target resolution and then downscaling is always the best way to boost quality from a low-resolution image.
https://civitai.com/models/333060/simplified-workflow-for-ultimate-sd-upscale
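The intended workflows above are for WebUI and ComfyUI; for diffusers users, here is a rough, untested sketch of a tile-style img2img pass. The pipeline class and base checkpoint are assumptions based on standard diffusers usage, not a recipe from this card; swap in a good realistic SDXL model as advised above:
```python
# untested diffusers sketch: ControlNet tile pass over an existing image
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetImg2ImgPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "TTPlanet/TTPLanet_SDXL_Controlnet_Tile_Realistic", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # placeholder; pick a good realistic SDXL model
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

image = load_image("input.jpg")  # hypothetical input; pre-blur it if you see edge halos
result = pipe(
    prompt="high quality, detailed photo",
    image=image,
    control_image=image,
    strength=0.35,                      # ~0.3-0.4 denoise, per the notes above
    controlnet_conditioning_scale=0.9,  # ControlNet strength 0.9, per the notes above
).images[0]
result.save("output.jpg")
```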
## Bias, Risks, and Limitations
- **Please do not use it for adult content**
### Recommendations
- Use ComfyUI to build your own upscale process; it works fine!!!
- **Special thanks to the ControlNet author lllyasviel (Lvmin Zhang), who brings so much fun to us, and thanks to Hugging Face for the diffusers training scripts that made training so smooth.**
## Model Card Contact
--Contact me if you want: Discord "ttplanet", Civitai "ttplanet"
--You can also join the group discussion via QQ group number 294060503. |
mradermacher/Oumuamua-7b-instruct-v2-RP-GGUF | mradermacher | "2024-06-23T19:59:23Z" | 11,924 | 0 | transformers | [
"transformers",
"gguf",
"ja",
"dataset:Aratako/Rosebleu-1on1-Dialogues",
"dataset:Aratako/LimaRP-augmented-ja-karakuri",
"dataset:Aratako/Bluemoon_Top50MB_Sorted_Fixed_ja",
"dataset:grimulkan/LimaRP-augmented",
"dataset:SicariusSicariiStuff/Bluemoon_Top50MB_Sorted_Fixed",
"dataset:OmniAICreator/Japanese-Roleplay",
"dataset:OmniAICreator/Japanese-Roleplay-Dialogues",
"base_model:Aratako/Oumuamua-7b-instruct-v2-RP",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-23T19:32:43Z" | ---
base_model: Aratako/Oumuamua-7b-instruct-v2-RP
datasets:
- Aratako/Rosebleu-1on1-Dialogues
- Aratako/LimaRP-augmented-ja-karakuri
- Aratako/Bluemoon_Top50MB_Sorted_Fixed_ja
- grimulkan/LimaRP-augmented
- SicariusSicariiStuff/Bluemoon_Top50MB_Sorted_Fixed
- OmniAICreator/Japanese-Roleplay
- OmniAICreator/Japanese-Roleplay-Dialogues
language:
- ja
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Aratako/Oumuamua-7b-instruct-v2-RP
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
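For the multi-part case specifically, split GGUF files only need to be joined byte-for-byte into a single file before loading. A small sketch; the `.part*` naming is an assumption about how the split files are labeled:
```python
# join split GGUF parts into one file by plain byte concatenation
import glob
import shutil

parts = sorted(glob.glob("Oumuamua-7b-instruct-v2-RP.Q8_0.gguf.part*"))  # hypothetical names
with open("Oumuamua-7b-instruct-v2-RP.Q8_0.gguf", "wb") as out:
    for part in parts:  # sorted order is correct for single-digit part counts
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)
```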
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-instruct-v2-RP-GGUF/resolve/main/Oumuamua-7b-instruct-v2-RP.Q2_K.gguf) | Q2_K | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-instruct-v2-RP-GGUF/resolve/main/Oumuamua-7b-instruct-v2-RP.IQ3_XS.gguf) | IQ3_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-instruct-v2-RP-GGUF/resolve/main/Oumuamua-7b-instruct-v2-RP.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-instruct-v2-RP-GGUF/resolve/main/Oumuamua-7b-instruct-v2-RP.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-instruct-v2-RP-GGUF/resolve/main/Oumuamua-7b-instruct-v2-RP.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-instruct-v2-RP-GGUF/resolve/main/Oumuamua-7b-instruct-v2-RP.Q3_K_M.gguf) | Q3_K_M | 3.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-instruct-v2-RP-GGUF/resolve/main/Oumuamua-7b-instruct-v2-RP.Q3_K_L.gguf) | Q3_K_L | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-instruct-v2-RP-GGUF/resolve/main/Oumuamua-7b-instruct-v2-RP.IQ4_XS.gguf) | IQ4_XS | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-instruct-v2-RP-GGUF/resolve/main/Oumuamua-7b-instruct-v2-RP.Q4_K_S.gguf) | Q4_K_S | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-instruct-v2-RP-GGUF/resolve/main/Oumuamua-7b-instruct-v2-RP.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-instruct-v2-RP-GGUF/resolve/main/Oumuamua-7b-instruct-v2-RP.Q5_K_S.gguf) | Q5_K_S | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-instruct-v2-RP-GGUF/resolve/main/Oumuamua-7b-instruct-v2-RP.Q5_K_M.gguf) | Q5_K_M | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-instruct-v2-RP-GGUF/resolve/main/Oumuamua-7b-instruct-v2-RP.Q6_K.gguf) | Q6_K | 6.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-instruct-v2-RP-GGUF/resolve/main/Oumuamua-7b-instruct-v2-RP.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-instruct-v2-RP-GGUF/resolve/main/Oumuamua-7b-instruct-v2-RP.f16.gguf) | f16 | 14.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/L3-8b-SthenoLumiM-ModelStock-i1-GGUF | mradermacher | "2024-06-26T20:33:15Z" | 11,920 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:jsfs11/L3-8b-SthenoLumiM-ModelStock",
"endpoints_compatible",
"region:us"
] | null | "2024-06-19T18:58:01Z" | ---
base_model: jsfs11/L3-8b-SthenoLumiM-ModelStock
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/jsfs11/L3-8b-SthenoLumiM-ModelStock
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/L3-8b-SthenoLumiM-ModelStock-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3-8b-SthenoLumiM-ModelStock-i1-GGUF/resolve/main/L3-8b-SthenoLumiM-ModelStock.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/L3-8b-SthenoLumiM-ModelStock-i1-GGUF/resolve/main/L3-8b-SthenoLumiM-ModelStock.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/L3-8b-SthenoLumiM-ModelStock-i1-GGUF/resolve/main/L3-8b-SthenoLumiM-ModelStock.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8b-SthenoLumiM-ModelStock-i1-GGUF/resolve/main/L3-8b-SthenoLumiM-ModelStock.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8b-SthenoLumiM-ModelStock-i1-GGUF/resolve/main/L3-8b-SthenoLumiM-ModelStock.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8b-SthenoLumiM-ModelStock-i1-GGUF/resolve/main/L3-8b-SthenoLumiM-ModelStock.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8b-SthenoLumiM-ModelStock-i1-GGUF/resolve/main/L3-8b-SthenoLumiM-ModelStock.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-8b-SthenoLumiM-ModelStock-i1-GGUF/resolve/main/L3-8b-SthenoLumiM-ModelStock.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3-8b-SthenoLumiM-ModelStock-i1-GGUF/resolve/main/L3-8b-SthenoLumiM-ModelStock.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8b-SthenoLumiM-ModelStock-i1-GGUF/resolve/main/L3-8b-SthenoLumiM-ModelStock.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-8b-SthenoLumiM-ModelStock-i1-GGUF/resolve/main/L3-8b-SthenoLumiM-ModelStock.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3-8b-SthenoLumiM-ModelStock-i1-GGUF/resolve/main/L3-8b-SthenoLumiM-ModelStock.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8b-SthenoLumiM-ModelStock-i1-GGUF/resolve/main/L3-8b-SthenoLumiM-ModelStock.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-8b-SthenoLumiM-ModelStock-i1-GGUF/resolve/main/L3-8b-SthenoLumiM-ModelStock.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-8b-SthenoLumiM-ModelStock-i1-GGUF/resolve/main/L3-8b-SthenoLumiM-ModelStock.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8b-SthenoLumiM-ModelStock-i1-GGUF/resolve/main/L3-8b-SthenoLumiM-ModelStock.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/L3-8b-SthenoLumiM-ModelStock-i1-GGUF/resolve/main/L3-8b-SthenoLumiM-ModelStock.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/L3-8b-SthenoLumiM-ModelStock-i1-GGUF/resolve/main/L3-8b-SthenoLumiM-ModelStock.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-8b-SthenoLumiM-ModelStock-i1-GGUF/resolve/main/L3-8b-SthenoLumiM-ModelStock.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8b-SthenoLumiM-ModelStock-i1-GGUF/resolve/main/L3-8b-SthenoLumiM-ModelStock.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8b-SthenoLumiM-ModelStock-i1-GGUF/resolve/main/L3-8b-SthenoLumiM-ModelStock.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
mradermacher/Berghof-ERP-7B-GGUF | mradermacher | "2024-06-22T01:24:33Z" | 11,918 | 1 | transformers | [
"transformers",
"gguf",
"causal-lm",
"not-for-all-audiences",
"nsfw",
"ja",
"base_model:Elizezen/Berghof-ERP-7B",
"endpoints_compatible",
"region:us"
] | null | "2024-06-21T13:57:31Z" | ---
base_model: Elizezen/Berghof-ERP-7B
language:
- ja
library_name: transformers
quantized_by: mradermacher
tags:
- causal-lm
- not-for-all-audiences
- nsfw
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Elizezen/Berghof-ERP-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Berghof-ERP-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Berghof-ERP-7B-GGUF/resolve/main/Berghof-ERP-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Berghof-ERP-7B-GGUF/resolve/main/Berghof-ERP-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Berghof-ERP-7B-GGUF/resolve/main/Berghof-ERP-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Berghof-ERP-7B-GGUF/resolve/main/Berghof-ERP-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Berghof-ERP-7B-GGUF/resolve/main/Berghof-ERP-7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Berghof-ERP-7B-GGUF/resolve/main/Berghof-ERP-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Berghof-ERP-7B-GGUF/resolve/main/Berghof-ERP-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Berghof-ERP-7B-GGUF/resolve/main/Berghof-ERP-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Berghof-ERP-7B-GGUF/resolve/main/Berghof-ERP-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Berghof-ERP-7B-GGUF/resolve/main/Berghof-ERP-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Berghof-ERP-7B-GGUF/resolve/main/Berghof-ERP-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Berghof-ERP-7B-GGUF/resolve/main/Berghof-ERP-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Berghof-ERP-7B-GGUF/resolve/main/Berghof-ERP-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Berghof-ERP-7B-GGUF/resolve/main/Berghof-ERP-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Berghof-ERP-7B-GGUF/resolve/main/Berghof-ERP-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/L3-8B-SMaid-v0.1-GGUF | mradermacher | "2024-06-23T00:29:43Z" | 11,917 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Alsebay/L3-8B-SMaid-v0.1",
"endpoints_compatible",
"region:us"
] | null | "2024-06-22T23:59:20Z" | ---
base_model: Alsebay/L3-8B-SMaid-v0.1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Alsebay/L3-8B-SMaid-v0.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/L3-8B-SMaid-v0.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3-8B-SMaid-v0.1-GGUF/resolve/main/L3-8B-SMaid-v0.1.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-SMaid-v0.1-GGUF/resolve/main/L3-8B-SMaid-v0.1.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-SMaid-v0.1-GGUF/resolve/main/L3-8B-SMaid-v0.1.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-SMaid-v0.1-GGUF/resolve/main/L3-8B-SMaid-v0.1.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-SMaid-v0.1-GGUF/resolve/main/L3-8B-SMaid-v0.1.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-SMaid-v0.1-GGUF/resolve/main/L3-8B-SMaid-v0.1.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-SMaid-v0.1-GGUF/resolve/main/L3-8B-SMaid-v0.1.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-SMaid-v0.1-GGUF/resolve/main/L3-8B-SMaid-v0.1.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-SMaid-v0.1-GGUF/resolve/main/L3-8B-SMaid-v0.1.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-SMaid-v0.1-GGUF/resolve/main/L3-8B-SMaid-v0.1.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-SMaid-v0.1-GGUF/resolve/main/L3-8B-SMaid-v0.1.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-SMaid-v0.1-GGUF/resolve/main/L3-8B-SMaid-v0.1.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-SMaid-v0.1-GGUF/resolve/main/L3-8B-SMaid-v0.1.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-SMaid-v0.1-GGUF/resolve/main/L3-8B-SMaid-v0.1.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-SMaid-v0.1-GGUF/resolve/main/L3-8B-SMaid-v0.1.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/dolphin-2.9.3-mistral-7B-32k-GGUF | mradermacher | "2024-06-25T04:13:19Z" | 11,917 | 1 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"axolotl",
"en",
"dataset:cognitivecomputations/Dolphin-2.9",
"dataset:teknium/OpenHermes-2.5",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:cognitivecomputations/samantha-data",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:Locutusque/function-calling-chatml",
"dataset:internlm/Agent-FLAN",
"base_model:cognitivecomputations/dolphin-2.9.3-mistral-7B-32k",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-25T03:14:26Z" | ---
base_model: cognitivecomputations/dolphin-2.9.3-mistral-7B-32k
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- generated_from_trainer
- axolotl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/cognitivecomputations/dolphin-2.9.3-mistral-7B-32k
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/dolphin-2.9.3-mistral-7B-32k-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-mistral-7B-32k-GGUF/resolve/main/dolphin-2.9.3-mistral-7B-32k.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-mistral-7B-32k-GGUF/resolve/main/dolphin-2.9.3-mistral-7B-32k.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-mistral-7B-32k-GGUF/resolve/main/dolphin-2.9.3-mistral-7B-32k.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-mistral-7B-32k-GGUF/resolve/main/dolphin-2.9.3-mistral-7B-32k.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-mistral-7B-32k-GGUF/resolve/main/dolphin-2.9.3-mistral-7B-32k.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-mistral-7B-32k-GGUF/resolve/main/dolphin-2.9.3-mistral-7B-32k.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-mistral-7B-32k-GGUF/resolve/main/dolphin-2.9.3-mistral-7B-32k.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-mistral-7B-32k-GGUF/resolve/main/dolphin-2.9.3-mistral-7B-32k.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-mistral-7B-32k-GGUF/resolve/main/dolphin-2.9.3-mistral-7B-32k.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-mistral-7B-32k-GGUF/resolve/main/dolphin-2.9.3-mistral-7B-32k.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-mistral-7B-32k-GGUF/resolve/main/dolphin-2.9.3-mistral-7B-32k.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-mistral-7B-32k-GGUF/resolve/main/dolphin-2.9.3-mistral-7B-32k.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-mistral-7B-32k-GGUF/resolve/main/dolphin-2.9.3-mistral-7B-32k.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-mistral-7B-32k-GGUF/resolve/main/dolphin-2.9.3-mistral-7B-32k.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-mistral-7B-32k-GGUF/resolve/main/dolphin-2.9.3-mistral-7B-32k.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
kalinkov/Phi3_tailwindcss | kalinkov | "2024-06-24T21:29:03Z" | 11,914 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/phi-3-medium-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-24T21:09:22Z" | ---
base_model: unsloth/phi-3-medium-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
---
# Uploaded model
- **Developed by:** kalinkov
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-3-medium-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
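As a rough illustration of that setup, here is an untested sketch of a typical Unsloth + TRL SFT run; the dataset, LoRA settings, and hyperparameters are placeholders, not the actual training config:
```python
# untested Unsloth + TRL SFT sketch (placeholder data and hyperparameters)
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/phi-3-medium-4k-instruct-bnb-4bit",
    max_seq_length=4096,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(  # attach LoRA adapters
    model, r=16, target_modules=["q_proj", "k_proj", "v_proj", "o_proj"]
)

dataset = load_dataset("json", data_files="train.jsonl", split="train")  # placeholder
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumes one pre-formatted text column
    max_seq_length=4096,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=2),
)
trainer.train()
```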
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/yaofu_-_llama-2-7b-80k-gguf | RichardErkhov | "2024-06-20T06:49:32Z" | 11,912 | 1 | null | [
"gguf",
"region:us"
] | null | "2024-06-20T03:51:26Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama-2-7b-80k - GGUF
- Model creator: https://huggingface.co/yaofu/
- Original model: https://huggingface.co/yaofu/llama-2-7b-80k/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama-2-7b-80k.Q2_K.gguf](https://huggingface.co/RichardErkhov/yaofu_-_llama-2-7b-80k-gguf/blob/main/llama-2-7b-80k.Q2_K.gguf) | Q2_K | 2.36GB |
| [llama-2-7b-80k.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/yaofu_-_llama-2-7b-80k-gguf/blob/main/llama-2-7b-80k.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [llama-2-7b-80k.IQ3_S.gguf](https://huggingface.co/RichardErkhov/yaofu_-_llama-2-7b-80k-gguf/blob/main/llama-2-7b-80k.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [llama-2-7b-80k.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/yaofu_-_llama-2-7b-80k-gguf/blob/main/llama-2-7b-80k.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [llama-2-7b-80k.IQ3_M.gguf](https://huggingface.co/RichardErkhov/yaofu_-_llama-2-7b-80k-gguf/blob/main/llama-2-7b-80k.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [llama-2-7b-80k.Q3_K.gguf](https://huggingface.co/RichardErkhov/yaofu_-_llama-2-7b-80k-gguf/blob/main/llama-2-7b-80k.Q3_K.gguf) | Q3_K | 3.07GB |
| [llama-2-7b-80k.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/yaofu_-_llama-2-7b-80k-gguf/blob/main/llama-2-7b-80k.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [llama-2-7b-80k.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/yaofu_-_llama-2-7b-80k-gguf/blob/main/llama-2-7b-80k.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [llama-2-7b-80k.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/yaofu_-_llama-2-7b-80k-gguf/blob/main/llama-2-7b-80k.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [llama-2-7b-80k.Q4_0.gguf](https://huggingface.co/RichardErkhov/yaofu_-_llama-2-7b-80k-gguf/blob/main/llama-2-7b-80k.Q4_0.gguf) | Q4_0 | 3.56GB |
| [llama-2-7b-80k.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/yaofu_-_llama-2-7b-80k-gguf/blob/main/llama-2-7b-80k.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [llama-2-7b-80k.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/yaofu_-_llama-2-7b-80k-gguf/blob/main/llama-2-7b-80k.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [llama-2-7b-80k.Q4_K.gguf](https://huggingface.co/RichardErkhov/yaofu_-_llama-2-7b-80k-gguf/blob/main/llama-2-7b-80k.Q4_K.gguf) | Q4_K | 3.8GB |
| [llama-2-7b-80k.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/yaofu_-_llama-2-7b-80k-gguf/blob/main/llama-2-7b-80k.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [llama-2-7b-80k.Q4_1.gguf](https://huggingface.co/RichardErkhov/yaofu_-_llama-2-7b-80k-gguf/blob/main/llama-2-7b-80k.Q4_1.gguf) | Q4_1 | 3.95GB |
| [llama-2-7b-80k.Q5_0.gguf](https://huggingface.co/RichardErkhov/yaofu_-_llama-2-7b-80k-gguf/blob/main/llama-2-7b-80k.Q5_0.gguf) | Q5_0 | 4.33GB |
| [llama-2-7b-80k.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/yaofu_-_llama-2-7b-80k-gguf/blob/main/llama-2-7b-80k.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [llama-2-7b-80k.Q5_K.gguf](https://huggingface.co/RichardErkhov/yaofu_-_llama-2-7b-80k-gguf/blob/main/llama-2-7b-80k.Q5_K.gguf) | Q5_K | 4.45GB |
| [llama-2-7b-80k.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/yaofu_-_llama-2-7b-80k-gguf/blob/main/llama-2-7b-80k.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [llama-2-7b-80k.Q5_1.gguf](https://huggingface.co/RichardErkhov/yaofu_-_llama-2-7b-80k-gguf/blob/main/llama-2-7b-80k.Q5_1.gguf) | Q5_1 | 4.72GB |
| [llama-2-7b-80k.Q6_K.gguf](https://huggingface.co/RichardErkhov/yaofu_-_llama-2-7b-80k-gguf/blob/main/llama-2-7b-80k.Q6_K.gguf) | Q6_K | 5.15GB |
| [llama-2-7b-80k.Q8_0.gguf](https://huggingface.co/RichardErkhov/yaofu_-_llama-2-7b-80k-gguf/blob/main/llama-2-7b-80k.Q8_0.gguf) | Q8_0 | 6.67GB |
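To fetch a single quant from the table above, one option is `huggingface_hub` (a minimal sketch; pick whichever filename suits your hardware):
```python
# download one quant file with huggingface_hub (pip install huggingface_hub)
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="RichardErkhov/yaofu_-_llama-2-7b-80k-gguf",
    filename="llama-2-7b-80k.Q4_K_M.gguf",  # any filename from the table above
)
print(path)  # local cache path of the downloaded file
```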
Original model description:
---
license: mit
---
|
mradermacher/IceSakeV6RP-7b-i1-GGUF | mradermacher | "2024-06-26T19:53:26Z" | 11,906 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"alpaca",
"mistral",
"not-for-all-audiences",
"nsfw",
"en",
"base_model:icefog72/IceSakeV6RP-7b",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-26T17:42:23Z" | ---
base_model: icefog72/IceSakeV6RP-7b
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- mergekit
- merge
- alpaca
- mistral
- not-for-all-audiences
- nsfw
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/icefog72/IceSakeV6RP-7b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/IceSakeV6RP-7b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/IceSakeV6RP-7b-i1-GGUF/resolve/main/IceSakeV6RP-7b.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/IceSakeV6RP-7b-i1-GGUF/resolve/main/IceSakeV6RP-7b.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/IceSakeV6RP-7b-i1-GGUF/resolve/main/IceSakeV6RP-7b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/IceSakeV6RP-7b-i1-GGUF/resolve/main/IceSakeV6RP-7b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/IceSakeV6RP-7b-i1-GGUF/resolve/main/IceSakeV6RP-7b.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/IceSakeV6RP-7b-i1-GGUF/resolve/main/IceSakeV6RP-7b.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/IceSakeV6RP-7b-i1-GGUF/resolve/main/IceSakeV6RP-7b.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/IceSakeV6RP-7b-i1-GGUF/resolve/main/IceSakeV6RP-7b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/IceSakeV6RP-7b-i1-GGUF/resolve/main/IceSakeV6RP-7b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/IceSakeV6RP-7b-i1-GGUF/resolve/main/IceSakeV6RP-7b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/IceSakeV6RP-7b-i1-GGUF/resolve/main/IceSakeV6RP-7b.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/IceSakeV6RP-7b-i1-GGUF/resolve/main/IceSakeV6RP-7b.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/IceSakeV6RP-7b-i1-GGUF/resolve/main/IceSakeV6RP-7b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/IceSakeV6RP-7b-i1-GGUF/resolve/main/IceSakeV6RP-7b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/IceSakeV6RP-7b-i1-GGUF/resolve/main/IceSakeV6RP-7b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/IceSakeV6RP-7b-i1-GGUF/resolve/main/IceSakeV6RP-7b.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/IceSakeV6RP-7b-i1-GGUF/resolve/main/IceSakeV6RP-7b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/IceSakeV6RP-7b-i1-GGUF/resolve/main/IceSakeV6RP-7b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/IceSakeV6RP-7b-i1-GGUF/resolve/main/IceSakeV6RP-7b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/IceSakeV6RP-7b-i1-GGUF/resolve/main/IceSakeV6RP-7b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/IceSakeV6RP-7b-i1-GGUF/resolve/main/IceSakeV6RP-7b.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
RichardErkhov/NeverSleep_-_Noromaid-7B-0.4-DPO-gguf | RichardErkhov | "2024-06-20T06:17:18Z" | 11,898 | 1 | null | [
"gguf",
"region:us"
] | null | "2024-06-20T02:58:36Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Noromaid-7B-0.4-DPO - GGUF
- Model creator: https://huggingface.co/NeverSleep/
- Original model: https://huggingface.co/NeverSleep/Noromaid-7B-0.4-DPO/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Noromaid-7B-0.4-DPO.Q2_K.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7B-0.4-DPO-gguf/blob/main/Noromaid-7B-0.4-DPO.Q2_K.gguf) | Q2_K | 2.53GB |
| [Noromaid-7B-0.4-DPO.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7B-0.4-DPO-gguf/blob/main/Noromaid-7B-0.4-DPO.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [Noromaid-7B-0.4-DPO.IQ3_S.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7B-0.4-DPO-gguf/blob/main/Noromaid-7B-0.4-DPO.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [Noromaid-7B-0.4-DPO.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7B-0.4-DPO-gguf/blob/main/Noromaid-7B-0.4-DPO.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Noromaid-7B-0.4-DPO.IQ3_M.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7B-0.4-DPO-gguf/blob/main/Noromaid-7B-0.4-DPO.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [Noromaid-7B-0.4-DPO.Q3_K.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7B-0.4-DPO-gguf/blob/main/Noromaid-7B-0.4-DPO.Q3_K.gguf) | Q3_K | 3.28GB |
| [Noromaid-7B-0.4-DPO.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7B-0.4-DPO-gguf/blob/main/Noromaid-7B-0.4-DPO.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Noromaid-7B-0.4-DPO.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7B-0.4-DPO-gguf/blob/main/Noromaid-7B-0.4-DPO.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [Noromaid-7B-0.4-DPO.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7B-0.4-DPO-gguf/blob/main/Noromaid-7B-0.4-DPO.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [Noromaid-7B-0.4-DPO.Q4_0.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7B-0.4-DPO-gguf/blob/main/Noromaid-7B-0.4-DPO.Q4_0.gguf) | Q4_0 | 3.83GB |
| [Noromaid-7B-0.4-DPO.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7B-0.4-DPO-gguf/blob/main/Noromaid-7B-0.4-DPO.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [Noromaid-7B-0.4-DPO.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7B-0.4-DPO-gguf/blob/main/Noromaid-7B-0.4-DPO.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [Noromaid-7B-0.4-DPO.Q4_K.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7B-0.4-DPO-gguf/blob/main/Noromaid-7B-0.4-DPO.Q4_K.gguf) | Q4_K | 4.07GB |
| [Noromaid-7B-0.4-DPO.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7B-0.4-DPO-gguf/blob/main/Noromaid-7B-0.4-DPO.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [Noromaid-7B-0.4-DPO.Q4_1.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7B-0.4-DPO-gguf/blob/main/Noromaid-7B-0.4-DPO.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Noromaid-7B-0.4-DPO.Q5_0.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7B-0.4-DPO-gguf/blob/main/Noromaid-7B-0.4-DPO.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Noromaid-7B-0.4-DPO.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7B-0.4-DPO-gguf/blob/main/Noromaid-7B-0.4-DPO.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Noromaid-7B-0.4-DPO.Q5_K.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7B-0.4-DPO-gguf/blob/main/Noromaid-7B-0.4-DPO.Q5_K.gguf) | Q5_K | 4.78GB |
| [Noromaid-7B-0.4-DPO.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7B-0.4-DPO-gguf/blob/main/Noromaid-7B-0.4-DPO.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Noromaid-7B-0.4-DPO.Q5_1.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7B-0.4-DPO-gguf/blob/main/Noromaid-7B-0.4-DPO.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Noromaid-7B-0.4-DPO.Q6_K.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7B-0.4-DPO-gguf/blob/main/Noromaid-7B-0.4-DPO.Q6_K.gguf) | Q6_K | 5.53GB |
| [Noromaid-7B-0.4-DPO.Q8_0.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-7B-0.4-DPO-gguf/blob/main/Noromaid-7B-0.4-DPO.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
license: cc-by-nc-4.0
---

---
# This model is a collab between [IkariDev](https://huggingface.co/IkariDev) and [Undi](https://huggingface.co/Undi95)!
<!-- description start -->
## Description
<!-- [Recommended settings - contributed by localfultonextractor](https://files.catbox.moe/ue0tja.json) -->
This repo contains fp16 files of Noromaid-7b-v0.4-DPO.
[FP16 - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-7B-0.4-DPO)
<!-- [GGUF - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GGUF)-->
<!-- [GPTQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GPTQ)-->
<!-- [exl2[8bpw-8h] - by AzureBlack](https://huggingface.co/AzureBlack/Echidna-13b-v0.3-8bpw-8h-exl2)-->
<!-- [AWQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-AWQ)-->
<!-- [fp16 - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v4)-->
[GGUF - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-7B-0.4-DPO-GGUF)
<!-- [OLD(GGUF - by IkariDev+Undi95)](https://huggingface.co/IkariDev/Athena-v4-GGUF)-->
## Ratings:
Note: We have permission from all users to upload their ratings; we DON'T screenshot random reviews without asking whether we can put them here!
No ratings yet!
If you want your rating to be here, send us a message over on Discord and we'll put up a screenshot of it here. Our Discord names are "ikaridev" and "undi".
<!-- description end -->
<!-- prompt-template start -->
## Prompt format: ChatML
```
<|im_start|>system
{sysprompt}<|im_end|>
<|im_start|>user
{input}<|im_end|>
<|im_start|>assistant
{output}<|im_end|>
```
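For illustration, here is a minimal Python sketch (not from the original authors) that assembles a prompt string in the ChatML format above; the system and user messages are placeholders:
```python
# Minimal sketch: build a ChatML-formatted prompt for this model.
# The system and user strings below are placeholders, not from this card.
def build_chatml_prompt(sysprompt: str, user_input: str) -> str:
    return (
        f"<|im_start|>system\n{sysprompt}<|im_end|>\n"
        f"<|im_start|>user\n{user_input}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a helpful roleplay assistant.",
    "Introduce yourself in one sentence.",
)
print(prompt)
```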
## Training data used:
- [no_robots dataset](https://huggingface.co/Undi95/Llama2-13B-no_robots-alpaca-lora) lets the model behave in a more human-like way and enhances the output.
- [Aesir Private RP dataset] New data from a never-before-used dataset; it adds fresh data with no LimaRP spam, and is 100% new. Thanks to the [MinervaAI Team](https://huggingface.co/MinervaAI) and, in particular, [Gryphe](https://huggingface.co/Gryphe) for letting us use it!
- [Another private Aesir dataset]
- [Another private Aesir dataset]
- [limarp](https://huggingface.co/datasets/lemonilia/LimaRP)
## DPO training data used:
- [Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs)
- [NobodyExistsOnTheInternet/ToxicDPOqa](https://huggingface.co/datasets/NobodyExistsOnTheInternet/ToxicDPOqa)
- [Undi95/toxic-dpo-v0.1-NoWarning](https://huggingface.co/datasets/Undi95/toxic-dpo-v0.1-NoWarning)
This is a full finetune.
## Others
Undi: If you want to support me, you can [here](https://ko-fi.com/undiai).
IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek
|
mradermacher/GreenOgno-v2-7b-passthrough-GGUF | mradermacher | "2024-06-21T10:19:31Z" | 11,896 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"GreenNode/GreenNode-mini-7B-multilingual-v1olet",
"eren23/OGNO-7b-dpo-truthful",
"en",
"base_model:powermove72/GreenOgno-v2-7b-passthrough",
"endpoints_compatible",
"region:us"
] | null | "2024-06-21T03:41:44Z" | ---
base_model: powermove72/GreenOgno-v2-7b-passthrough
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- GreenNode/GreenNode-mini-7B-multilingual-v1olet
- eren23/OGNO-7b-dpo-truthful
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/powermove72/GreenOgno-v2-7b-passthrough
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
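As a minimal sketch of running one of the quants below locally (assuming the `huggingface_hub` and `llama-cpp-python` packages are installed; the prompt is a placeholder):

```python
# Minimal sketch (assumes: pip install huggingface_hub llama-cpp-python).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single quant file from this repo and load it with the llama.cpp bindings.
model_path = hf_hub_download(
    repo_id="mradermacher/GreenOgno-v2-7b-passthrough-GGUF",
    filename="GreenOgno-v2-7b-passthrough.Q4_K_S.gguf",
)
llm = Llama(model_path=model_path, n_ctx=2048)

output = llm("Write one sentence about merging language models.", max_tokens=64)
print(output["choices"][0]["text"])
```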
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GreenOgno-v2-7b-passthrough-GGUF/resolve/main/GreenOgno-v2-7b-passthrough.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/GreenOgno-v2-7b-passthrough-GGUF/resolve/main/GreenOgno-v2-7b-passthrough.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/GreenOgno-v2-7b-passthrough-GGUF/resolve/main/GreenOgno-v2-7b-passthrough.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/GreenOgno-v2-7b-passthrough-GGUF/resolve/main/GreenOgno-v2-7b-passthrough.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/GreenOgno-v2-7b-passthrough-GGUF/resolve/main/GreenOgno-v2-7b-passthrough.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/GreenOgno-v2-7b-passthrough-GGUF/resolve/main/GreenOgno-v2-7b-passthrough.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/GreenOgno-v2-7b-passthrough-GGUF/resolve/main/GreenOgno-v2-7b-passthrough.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/GreenOgno-v2-7b-passthrough-GGUF/resolve/main/GreenOgno-v2-7b-passthrough.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/GreenOgno-v2-7b-passthrough-GGUF/resolve/main/GreenOgno-v2-7b-passthrough.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GreenOgno-v2-7b-passthrough-GGUF/resolve/main/GreenOgno-v2-7b-passthrough.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GreenOgno-v2-7b-passthrough-GGUF/resolve/main/GreenOgno-v2-7b-passthrough.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/GreenOgno-v2-7b-passthrough-GGUF/resolve/main/GreenOgno-v2-7b-passthrough.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/GreenOgno-v2-7b-passthrough-GGUF/resolve/main/GreenOgno-v2-7b-passthrough.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/GreenOgno-v2-7b-passthrough-GGUF/resolve/main/GreenOgno-v2-7b-passthrough.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/GreenOgno-v2-7b-passthrough-GGUF/resolve/main/GreenOgno-v2-7b-passthrough.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
QuantFactory/Llama-3-Ko-8B-Instruct-GGUF | QuantFactory | "2024-06-28T14:00:23Z" | 11,896 | 0 | null | [
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"llama-3-ko",
"text-generation",
"en",
"ko",
"arxiv:2310.04799",
"base_model:maywell/Llama-3-Ko-8B-Instruct",
"license:other",
"region:us"
] | text-generation | "2024-06-26T02:20:51Z" | ---
language:
- en
- ko
base_model: maywell/Llama-3-Ko-8B-Instruct
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- llama-3-ko
license: other
license_name: llama3
license_link: LICENSE
---
# Llama-3-Ko-Instruct-GGUF
This is a quantized version of [maywell/Llama-3-Ko-8B-Instruct](https://huggingface.co/maywell/Llama-3-Ko-8B-Instruct) created using llama.cpp
# Model Description
## Methodology
https://huggingface.co/blog/maywell/llm-feature-transfer
### Model Used
[meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
[meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
[beomi/Llama-3-Open-Ko-8B](https://huggingface.co/beomi/Llama-3-Open-Ko-8B)
## Benchmark
### Kobest
| Task | beomi/Llama-3-Open-Ko-8B-Instruct | maywell/Llama-3-Ko-8B-Instruct |
| --- | --- | --- |
| kobest overall | 0.6220 ± 0.0070 | 0.6852 ± 0.0066 |
| kobest_boolq | 0.6254 ± 0.0129 | 0.7208 ± 0.0120 |
| kobest_copa | 0.7110 ± 0.0143 | 0.7650 ± 0.0134 |
| kobest_hellaswag | 0.3840 ± 0.0218 | 0.4440 ± 0.0222 |
| kobest_sentineg | 0.8388 ± 0.0185 | 0.9194 ± 0.0137 |
| kobest_wic | 0.5738 ± 0.0139 | 0.6040 ± 0.0138 |
# Original Model Card by Beomi
> Update @ 2024.04.24: Release Llama-3-Open-Ko-8B model & [Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview)
## Model Details
**Llama-3-Open-Ko-8B**
Llama-3-Open-Ko-8B is a continued-pretrained language model based on Llama-3-8B.
The model was trained entirely on publicly available resources, with 60GB+ of deduplicated texts.
With the new Llama-3 tokenizer, pretraining used 17.7B+ tokens, slightly more than with the previous Korean tokenizer (the Llama-2-Ko tokenizer).
Training was done on TPUv5e-256, with warm support from Google's TRC program.
**Note for [Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview)**
With applying the idea from [Chat Vector paper](https://arxiv.org/abs/2310.04799), I released Instruction model named [Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview).
Although it is NOT finetuned with any Korean instruction set (hence `preview`), it should be a great starting point for creating new Chat/Instruct models.
**Meta Llama-3**
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
**Model developers** Junbum Lee (Beomi)
**Variations** Llama-3-Open-Ko comes in one size: 8B.
**Input** Models input text only.
**Output** Models generate text and code only.
**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture.
<table>
<tr>
<td>
</td>
<td><strong>Training Data</strong>
</td>
<td><strong>Params</strong>
</td>
<td><strong>Context length</strong>
</td>
<td><strong>GQA</strong>
</td>
<td><strong>Token count</strong>
</td>
<td><strong>Knowledge cutoff</strong>
</td>
</tr>
<tr>
<td rowspan="2" >Llama-3-Open-Ko
</td>
<td rowspan="2" >Same as *Open-Solar-Ko Dataset
</td>
<td>8B
</td>
<td>8k
</td>
<td>Yes
</td>
<td rowspan="2" >17.7B+
</td>
<td>Jun, 2023
</td>
</tr>
</table>
*You can find the dataset list here: https://huggingface.co/beomi/OPEN-SOLAR-KO-10.7B/tree/main/corpus
**Model Release Date** 2024.04.24.
**Status** This is a static model trained on an offline dataset.
**License** Llama3 License: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
## Intended Use
**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**.
**Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
## How to use
TBD
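Until the author fills this section in, a minimal sketch with the Hugging Face Transformers library might look like the following (the Korean prompt is a placeholder; everything here is an assumption, not the author's code):

```python
# Minimal sketch, not from the original author (the section above is marked TBD).
# Assumes the transformers library and enough memory for an 8B model.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="beomi/Llama-3-Open-Ko-8B",
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
# "한국의 수도는" means "The capital of Korea is" -- a plain completion prompt,
# since this is a base (not instruction-tuned) model.
print(pipe("한국의 수도는", max_new_tokens=64)[0]["generated_text"])
```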
### Responsibility & Safety
We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started.
#### Responsible release
In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
Misuse
If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).
## Ethical Considerations and Limitations
The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide)
## Citation instructions
**Llama-3-Open-Ko**
```
@article{llama3openko,
title={Llama-3-Open-Ko},
author={L, Junbum},
year={2024},
url={https://huggingface.co/beomi/Llama-3-Open-Ko-8B}
}
```
**Original Llama-3**
```
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```
|
mradermacher/Swallow-7b-instruct-v0.1-i1-GGUF | mradermacher | "2024-06-30T15:36:05Z" | 11,892 | 0 | transformers | [
"transformers",
"gguf",
"en",
"ja",
"base_model:tokyotech-llm/Swallow-7b-instruct-v0.1",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | "2024-06-30T10:14:22Z" | ---
base_model: tokyotech-llm/Swallow-7b-instruct-v0.1
language:
- en
- ja
library_name: transformers
license: llama2
model_type: llama
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-v0.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Swallow-7b-instruct-v0.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Swallow-7b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-7b-instruct-v0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Swallow-7b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-7b-instruct-v0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 1.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Swallow-7b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-7b-instruct-v0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-7b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-7b-instruct-v0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-7b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-7b-instruct-v0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-7b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-7b-instruct-v0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-7b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-7b-instruct-v0.1.i1-Q2_K.gguf) | i1-Q2_K | 2.7 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Swallow-7b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-7b-instruct-v0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Swallow-7b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-7b-instruct-v0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-7b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-7b-instruct-v0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 3.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Swallow-7b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-7b-instruct-v0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Swallow-7b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-7b-instruct-v0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-7b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-7b-instruct-v0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Swallow-7b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-7b-instruct-v0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Swallow-7b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-7b-instruct-v0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-7b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-7b-instruct-v0.1.i1-Q4_0.gguf) | i1-Q4_0 | 4.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Swallow-7b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-7b-instruct-v0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Swallow-7b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-7b-instruct-v0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Swallow-7b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-7b-instruct-v0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-7b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-7b-instruct-v0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-7b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-7b-instruct-v0.1.i1-Q6_K.gguf) | i1-Q6_K | 5.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
mradermacher/L3-sophie-improved-v2-i1-GGUF | mradermacher | "2024-06-23T21:42:53Z" | 11,880 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Fischerboot/L3-sophie-improved-v2",
"endpoints_compatible",
"region:us"
] | null | "2024-06-23T19:26:59Z" | ---
base_model: Fischerboot/L3-sophie-improved-v2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Fischerboot/L3-sophie-improved-v2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/L3-sophie-improved-v2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3-sophie-improved-v2-i1-GGUF/resolve/main/L3-sophie-improved-v2.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/L3-sophie-improved-v2-i1-GGUF/resolve/main/L3-sophie-improved-v2.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/L3-sophie-improved-v2-i1-GGUF/resolve/main/L3-sophie-improved-v2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3-sophie-improved-v2-i1-GGUF/resolve/main/L3-sophie-improved-v2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/L3-sophie-improved-v2-i1-GGUF/resolve/main/L3-sophie-improved-v2.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3-sophie-improved-v2-i1-GGUF/resolve/main/L3-sophie-improved-v2.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3-sophie-improved-v2-i1-GGUF/resolve/main/L3-sophie-improved-v2.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-sophie-improved-v2-i1-GGUF/resolve/main/L3-sophie-improved-v2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3-sophie-improved-v2-i1-GGUF/resolve/main/L3-sophie-improved-v2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3-sophie-improved-v2-i1-GGUF/resolve/main/L3-sophie-improved-v2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-sophie-improved-v2-i1-GGUF/resolve/main/L3-sophie-improved-v2.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3-sophie-improved-v2-i1-GGUF/resolve/main/L3-sophie-improved-v2.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3-sophie-improved-v2-i1-GGUF/resolve/main/L3-sophie-improved-v2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-sophie-improved-v2-i1-GGUF/resolve/main/L3-sophie-improved-v2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-sophie-improved-v2-i1-GGUF/resolve/main/L3-sophie-improved-v2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3-sophie-improved-v2-i1-GGUF/resolve/main/L3-sophie-improved-v2.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/L3-sophie-improved-v2-i1-GGUF/resolve/main/L3-sophie-improved-v2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/L3-sophie-improved-v2-i1-GGUF/resolve/main/L3-sophie-improved-v2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-sophie-improved-v2-i1-GGUF/resolve/main/L3-sophie-improved-v2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/L3-sophie-improved-v2-i1-GGUF/resolve/main/L3-sophie-improved-v2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-sophie-improved-v2-i1-GGUF/resolve/main/L3-sophie-improved-v2.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
QuantFactory/Hansoldeco-Gemma-2-9b-it-v0.1-GGUF | QuantFactory | "2024-07-01T11:37:29Z" | 11,877 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-07-01T10:13:24Z" | Entry not found |
nvidia/segformer-b3-finetuned-cityscapes-1024-1024 | nvidia | "2022-08-09T11:32:45Z" | 11,876 | 2 | transformers | [
"transformers",
"pytorch",
"tf",
"segformer",
"vision",
"image-segmentation",
"dataset:cityscapes",
"arxiv:2105.15203",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | "2022-03-02T23:29:05Z" | ---
license: other
tags:
- vision
- image-segmentation
datasets:
- cityscapes
widget:
- src: https://cdn-media.huggingface.co/Inference-API/Sample-results-on-the-Cityscapes-dataset-The-above-images-show-how-our-method-can-handle.png
example_title: Road
---
# SegFormer (b3-sized) model fine-tuned on CityScapes
SegFormer model fine-tuned on CityScapes at resolution 1024x1024. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to perform semantic segmentation on an image from the COCO 2017 dataset:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b3-finetuned-cityscapes-1024-1024")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b3-finetuned-cityscapes-1024-1024")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
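The logits come out at 1/4 of the input resolution. A common post-processing step, continuing the snippet above (this addition is not part of the original card), is to upsample them to the image size and take the per-pixel argmax:

```python
import torch

# Upsample logits to the original image size, then pick the most likely class per pixel.
upsampled_logits = torch.nn.functional.interpolate(
    logits,
    size=image.size[::-1],  # PIL size is (width, height); interpolate expects (height, width)
    mode="bilinear",
    align_corners=False,
)
segmentation_map = upsampled_logits.argmax(dim=1)[0]  # shape (height, width)
```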
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
mradermacher/Llama3-Ko-LON-8B-GGUF | mradermacher | "2024-06-26T13:28:33Z" | 11,876 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:crimsonjoo/Llama3-Ko-LON-8B",
"endpoints_compatible",
"region:us"
] | null | "2024-06-26T12:26:54Z" | ---
base_model: crimsonjoo/Llama3-Ko-LON-8B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/crimsonjoo/Llama3-Ko-LON-8B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3-Ko-LON-8B-GGUF/resolve/main/Llama3-Ko-LON-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Ko-LON-8B-GGUF/resolve/main/Llama3-Ko-LON-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Ko-LON-8B-GGUF/resolve/main/Llama3-Ko-LON-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Ko-LON-8B-GGUF/resolve/main/Llama3-Ko-LON-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Ko-LON-8B-GGUF/resolve/main/Llama3-Ko-LON-8B.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Ko-LON-8B-GGUF/resolve/main/Llama3-Ko-LON-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Ko-LON-8B-GGUF/resolve/main/Llama3-Ko-LON-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Ko-LON-8B-GGUF/resolve/main/Llama3-Ko-LON-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Ko-LON-8B-GGUF/resolve/main/Llama3-Ko-LON-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Ko-LON-8B-GGUF/resolve/main/Llama3-Ko-LON-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Ko-LON-8B-GGUF/resolve/main/Llama3-Ko-LON-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Ko-LON-8B-GGUF/resolve/main/Llama3-Ko-LON-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Ko-LON-8B-GGUF/resolve/main/Llama3-Ko-LON-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Ko-LON-8B-GGUF/resolve/main/Llama3-Ko-LON-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Ko-LON-8B-GGUF/resolve/main/Llama3-Ko-LON-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
flaviagiammarino/pubmed-clip-vit-base-patch32 | flaviagiammarino | "2023-12-28T12:36:18Z" | 11,872 | 8 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"clip",
"zero-shot-image-classification",
"medical",
"vision",
"en",
"arxiv:2112.13906",
"license:mit",
"endpoints_compatible",
"region:us"
] | zero-shot-image-classification | "2023-06-13T16:18:14Z" | ---
license: mit
language:
- en
tags:
- medical
- vision
widget:
- src: "https://huggingface.co/flaviagiammarino/pubmed-clip-vit-base-patch32/resolve/main/scripts/input.jpeg"
candidate_labels: "Chest X-Ray, Brain MRI, Abdomen CT Scan"
example_title: "Abdomen CT Scan"
---
# Model Card for PubMedCLIP
PubMedCLIP is a fine-tuned version of [CLIP](https://huggingface.co/docs/transformers/model_doc/clip) for the medical domain.
## Model Description
PubMedCLIP was trained on the [Radiology Objects in COntext (ROCO)](https://github.com/razorx89/roco-dataset) dataset, a large-scale multimodal medical imaging dataset.
The ROCO dataset includes diverse imaging modalities (such as X-Ray, MRI, ultrasound, fluoroscopy, etc.) from various human body regions (such as head, spine, chest, abdomen, etc.)
captured from open-access [PubMed](https://pubmed.ncbi.nlm.nih.gov/) articles.<br>
PubMedCLIP was trained for 50 epochs with a batch size of 64 using the Adam optimizer with a learning rate of 10⁻⁵.
The authors have released three different pre-trained models at this [link](https://1drv.ms/u/s!ApXgPqe9kykTgwD4Np3-f7ODAot8?e=zLVlJ2)
which use ResNet-50, ResNet-50x4 and ViT32 as image encoders. This repository includes only the ViT32 variant of the PubMedCLIP model.<br>
- **Repository:** [PubMedCLIP Official GitHub Repository](https://github.com/sarahESL/PubMedCLIP)
- **Paper:** [Does CLIP Benefit Visual Question Answering in the Medical Domain as Much as it Does in the General Domain?](https://arxiv.org/abs/2112.13906)
## Usage
```python
import requests
from PIL import Image
import matplotlib.pyplot as plt
from transformers import CLIPProcessor, CLIPModel
model = CLIPModel.from_pretrained("flaviagiammarino/pubmed-clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("flaviagiammarino/pubmed-clip-vit-base-patch32")
url = "https://huggingface.co/flaviagiammarino/pubmed-clip-vit-base-patch32/resolve/main/scripts/input.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
text = ["Chest X-Ray", "Brain MRI", "Abdominal CT Scan"]
inputs = processor(text=text, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1).squeeze()
plt.subplots()
plt.imshow(image)
plt.title("".join([x[0] + ": " + x[1] + "\n" for x in zip(text, [format(prob, ".4%") for prob in probs])]))
plt.axis("off")
plt.tight_layout()
plt.show()
```

## Additional Information
### Licensing Information
The authors have released the model code and pre-trained checkpoints under the [MIT License](https://github.com/sarahESL/PubMedCLIP/blob/main/LICENSE).
### Citation Information
```
@article{eslami2021does,
title={Does clip benefit visual question answering in the medical domain as much as it does in the general domain?},
author={Eslami, Sedigheh and de Melo, Gerard and Meinel, Christoph},
journal={arXiv preprint arXiv:2112.13906},
year={2021}
}
```
|
philomath-1209/programming-language-identification | philomath-1209 | "2024-02-02T13:45:35Z" | 11,868 | 2 | transformers | [
"transformers",
"onnx",
"safetensors",
"roberta",
"text-classification",
"code",
"programming-language",
"code-classification",
"en",
"dataset:cakiki/rosetta-code",
"base_model:huggingface/CodeBERTa-small-v1",
"license:wtfpl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-01-05T14:54:01Z" | ---
license: wtfpl
datasets:
- cakiki/rosetta-code
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-classification
tags:
- code
- programming-language
- code-classification
base_model: huggingface/CodeBERTa-small-v1
---
This model is a fine-tuned version of *huggingface/CodeBERTa-small-v1* on the *cakiki/rosetta-code* dataset for the 26 programming languages listed below.
## Training Details:
The model was trained for 25 epochs on Azure on nearly 26,000 datapoints covering the 26 programming languages mentioned above,<br> extracted from a dataset containing 1,006 programming languages in total.
### Programming Languages this model is able to detect
<ol>
<li>'ARM Assembly'</li>
<li>'AppleScript'</li>
<li>'C'</li>
<li>'C#'</li>
<li>'C++'</li>
<li>'COBOL'</li>
<li>'Erlang'</li>
<li>'Fortran'</li>
<li>'Go'</li>
<li>'Java'</li>
<li>'JavaScript'</li>
<li>'Kotlin'</li>
<li>'Lua'</li>
<li>'Mathematica/Wolfram Language'</li>
<li>'PHP'</li>
<li>'Pascal'</li>
<li>'Perl'</li>
<li>'PowerShell'</li>
<li>'Python'</li>
<li>'R'</li>
<li>'Ruby'</li>
<li>'Rust'</li>
<li>'Scala'</li>
<li>'Swift'</li>
<li>'Visual Basic .NET'</li>
<li>'jq'</li>
</ol>
<br>
## Below is the Training Result for 25 epochs.
<ul>
<li>Training Computer Configuration: <ul>
<li>GPU: 1x Nvidia Tesla T4,</li>
<li>VRAM: 16GB,</li>
<li>RAM: 112GB,</li>
<li>Cores: 6</li>
</ul></li>
<li>Training Time taken: exactly 7 hours for 25 epochs</li>
<li>Training Hyper-parameters: </li>
</ul>


## Inference Code
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification, TextClassificationPipeline
model_name = 'philomath-1209/programming-language-identification'
loaded_tokenizer = AutoTokenizer.from_pretrained(model_name)
loaded_model = AutoModelForSequenceClassification.from_pretrained(model_name)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
loaded_model = loaded_model.to(device)  # use the GPU when one is available
text = """
PROGRAM Triangle
IMPLICIT NONE
REAL :: a, b, c, Area
PRINT *, 'Welcome, please enter the&
&lengths of the 3 sides.'
READ *, a, b, c
PRINT *, 'Triangle''s area: ', Area(a,b,c)
END PROGRAM Triangle
FUNCTION Area(x,y,z)
IMPLICIT NONE
REAL :: Area ! function type
REAL, INTENT( IN ) :: x, y, z
REAL :: theta, height
theta = ACOS((x**2+y**2-z**2)/(2.0*x*y))
height = x*SIN(theta); Area = 0.5*y*height
END FUNCTION Area
"""
inputs = loaded_tokenizer(text, return_tensors="pt", truncation=True).to(device)
with torch.no_grad():
logits = loaded_model(**inputs).logits
predicted_class_id = logits.argmax().item()
loaded_model.config.id2label[predicted_class_id]
```
### Optimum with ONNX inference
Loading the model requires the 🤗 Optimum library installed.
```shell
pip install transformers optimum[onnxruntime]
```
```python
model_path = "philomath-1209/programming-language-identification"
import torch
from transformers import pipeline, AutoTokenizer
from optimum.onnxruntime import ORTModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained(model_path, subfolder="onnx")
model = ORTModelForSequenceClassification.from_pretrained(model_path, export=False, subfolder="onnx")
text = """
PROGRAM Triangle
IMPLICIT NONE
REAL :: a, b, c, Area
PRINT *, 'Welcome, please enter the&
&lengths of the 3 sides.'
READ *, a, b, c
PRINT *, 'Triangle''s area: ', Area(a,b,c)
END PROGRAM Triangle
FUNCTION Area(x,y,z)
IMPLICIT NONE
REAL :: Area ! function type
REAL, INTENT( IN ) :: x, y, z
REAL :: theta, height
theta = ACOS((x**2+y**2-z**2)/(2.0*x*y))
height = x*SIN(theta); Area = 0.5*y*height
END FUNCTION Area
"""
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
model.config.id2label[predicted_class_id]
``` |
mradermacher/Qwen2-7B-Multilingual-RP-GGUF | mradermacher | "2024-06-25T03:27:47Z" | 11,862 | 3 | transformers | [
"transformers",
"gguf",
"en",
"ko",
"ja",
"zh",
"es",
"base_model:maywell/Qwen2-7B-Multilingual-RP",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-24T17:45:52Z" | ---
base_model: maywell/Qwen2-7B-Multilingual-RP
language:
- en
- ko
- ja
- zh
- es
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/maywell/Qwen2-7B-Multilingual-RP
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2-7B-Multilingual-RP-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-Multilingual-RP-GGUF/resolve/main/Qwen2-7B-Multilingual-RP.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-Multilingual-RP-GGUF/resolve/main/Qwen2-7B-Multilingual-RP.IQ3_XS.gguf) | IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-Multilingual-RP-GGUF/resolve/main/Qwen2-7B-Multilingual-RP.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-Multilingual-RP-GGUF/resolve/main/Qwen2-7B-Multilingual-RP.IQ3_S.gguf) | IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-Multilingual-RP-GGUF/resolve/main/Qwen2-7B-Multilingual-RP.IQ3_M.gguf) | IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-Multilingual-RP-GGUF/resolve/main/Qwen2-7B-Multilingual-RP.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-Multilingual-RP-GGUF/resolve/main/Qwen2-7B-Multilingual-RP.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-Multilingual-RP-GGUF/resolve/main/Qwen2-7B-Multilingual-RP.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-Multilingual-RP-GGUF/resolve/main/Qwen2-7B-Multilingual-RP.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-Multilingual-RP-GGUF/resolve/main/Qwen2-7B-Multilingual-RP.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-Multilingual-RP-GGUF/resolve/main/Qwen2-7B-Multilingual-RP.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-Multilingual-RP-GGUF/resolve/main/Qwen2-7B-Multilingual-RP.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-Multilingual-RP-GGUF/resolve/main/Qwen2-7B-Multilingual-RP.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-Multilingual-RP-GGUF/resolve/main/Qwen2-7B-Multilingual-RP.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-Multilingual-RP-GGUF/resolve/main/Qwen2-7B-Multilingual-RP.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
medalpaca/medalpaca-7b | medalpaca | "2024-04-02T08:34:54Z" | 11,861 | 65 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"medical",
"en",
"arxiv:2303.14070",
"license:cc",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-03-29T17:54:49Z" | ---
license: cc
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- medical
---
# MedAlpaca 7b
## Table of Contents
- [Model Description](#model-description)
  - [Architecture](#architecture)
  - [Training Data](#training-data)
- [Model Usage](#model-usage)
- [Limitations](#limitations)
## Model Description
### Architecture
`medalpaca-7b` is a large language model specifically fine-tuned for medical domain tasks.
It is based on LLaMA (Large Language Model Meta AI) and contains 7 billion parameters.
The primary goal of this model is to improve question-answering and medical dialogue tasks.
### Training Data
The training data for this project was sourced from various resources.
Firstly, we used Anki flashcards to automatically generate questions
from the front of the cards and answers from the back of the cards.
Secondly, we generated medical question-answer pairs from [Wikidoc](https://www.wikidoc.org/index.php/Main_Page).
We extracted paragraphs with relevant headings, and used Chat-GPT 3.5
to generate questions from the headings and using the corresponding paragraphs
as answers. This dataset is still under development and we believe
that approximately 70% of these question answer pairs are factual correct.
Thirdly, we used StackExchange to extract question-answer pairs, taking the
top-rated question from five categories: Academia, Bioinformatics, Biology,
Fitness, and Health. Additionally, we used a dataset from [ChatDoctor](https://arxiv.org/abs/2303.14070)
consisting of 200,000 question-answer pairs, available at https://github.com/Kent0n-Li/ChatDoctor.
| Source | n items |
|------------------------------|--------|
| ChatDoc large | 200000 |
| wikidoc | 67704 |
| Stackexchange academia | 40865 |
| Anki flashcards | 33955 |
| Stackexchange biology | 27887 |
| Stackexchange fitness | 9833 |
| Stackexchange health | 7721 |
| Wikidoc patient information | 5942 |
| Stackexchange bioinformatics | 5407 |
## Model Usage
To evaluate the performance of the model on a specific dataset, you can use the Hugging Face Transformers library's built-in evaluation scripts. Please refer to the evaluation guide for more information.
### Inference
You can use the model for inference tasks like question-answering and medical dialogues using the Hugging Face Transformers library. Here's an example of how to use the model for a question-answering task:
```python
from transformers import pipeline
pl = pipeline("text-generation", model="medalpaca/medalpaca-7b", tokenizer="medalpaca/medalpaca-7b")
question = "What are the symptoms of diabetes?"
context = "Diabetes is a metabolic disease that causes high blood sugar. The symptoms include increased thirst, frequent urination, and unexplained weight loss."
answer = pl(f"Context: {context}\n\nQuestion: {question}\n\nAnswer: ")
print(answer)
```
## Limitations
The model may not perform effectively outside the scope of the medical domain.
The training data primarily targets the knowledge level of medical students,
which may result in limitations when addressing the needs of board-certified physicians.
The model has not been tested in real-world applications, so its efficacy and accuracy are currently unknown.
It should never be used as a substitute for a doctor's opinion and must be treated as a research tool only.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_medalpaca__medalpaca-7b)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 44.98 |
| ARC (25-shot) | 54.1 |
| HellaSwag (10-shot) | 80.42 |
| MMLU (5-shot) | 41.47 |
| TruthfulQA (0-shot) | 40.46 |
| Winogrande (5-shot) | 71.19 |
| GSM8K (5-shot) | 3.03 |
| DROP (3-shot) | 24.21 |
|
SG161222/RealVisXL_V3.0_Turbo | SG161222 | "2024-04-12T15:37:36Z" | 11,860 | 27 | diffusers | [
"diffusers",
"safetensors",
"license:openrail++",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2023-12-23T13:21:57Z" | ---
license: openrail++
---
<b>It's important! Read it!</b><br>
The model is still in the training phase. This is not the final version and may contain artifacts and perform poorly in some cases.<br>
<b>You can support me directly on Boosty - https://boosty.to/sg_161222</b><br>
The model is aimed at photorealism and can produce SFW and NSFW images of decent quality.<br>
CivitAI Page: https://civitai.com/models/139562?modelVersionId=272378<br>
<b>Recommended Negative Prompt:</b><br>
(worst quality, low quality, illustration, 3d, 2d, painting, cartoons, sketch), open mouth<br>
<b>or another negative prompt</b><br>
<b>Recommended Generation Parameters:</b><br>
Sampling Steps: 4+<br>
Sampling Method: DPM++ SDE Karras<br>
CFG Scale: 1.5-3
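For reference, a minimal diffusers sketch using the settings above (the prompt is a placeholder, and the scheduler is left at the repo default even though the card recommends DPM++ SDE Karras):<br>

```python
# Minimal sketch (assumes the diffusers library and a CUDA GPU; prompt is a placeholder).
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "SG161222/RealVisXL_V3.0_Turbo", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="portrait photo of a woman, natural light",
    negative_prompt="(worst quality, low quality, illustration, 3d, 2d, painting, cartoons, sketch), open mouth",
    num_inference_steps=6,  # Turbo checkpoints need few steps; the card recommends 4+
    guidance_scale=2.0,     # the card recommends CFG 1.5-3
).images[0]
image.save("realvisxl_turbo.png")
```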
<b>Recommended Hires Fix Parameters:</b><br>
Hires steps: 2+<br>
Upscaler: 4x-UltraSharp upscaler / or another<br>
Denoising strength: 0.1 - 0.5<br>
Upscale by: 1.1-2.0<br> |
HooshvareLab/bert-fa-base-uncased-sentiment-deepsentipers-binary | HooshvareLab | "2021-05-18T20:56:29Z" | 11,855 | 4 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"fa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-02T23:29:04Z" | ---
language: fa
license: apache-2.0
---
# ParsBERT (v2.0)
A Transformer-based Model for Persian Language Understanding
We reconstructed the vocabulary and fine-tuned ParsBERT v1.1 on new Persian corpora in order to provide some functionality for using ParsBERT in other scopes!
Please follow the [ParsBERT](https://github.com/hooshvare/parsbert) repo for the latest information about previous and current models.
## Persian Sentiment [Digikala, SnappFood, DeepSentiPers]
This task aims to classify text, such as comments, based on its emotional bias. We tested three well-known datasets for this task: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers` in both binary and multi-class forms.
### DeepSentiPers
DeepSentiPers, a balanced and augmented version of SentiPers, contains 12,138 user opinions about digital products labeled with five different classes: two positive (i.e., happy and delighted), two negative (i.e., furious and angry), and one neutral class. Therefore, this dataset can be used for both multi-class and binary classification. For binary classification, the neutral class and its corresponding sentences are removed from the dataset.
**Binary:**
1. Negative (Furious + Angry)
2. Positive (Happy + Delighted)
**Multi**
1. Furious
2. Angry
3. Neutral
4. Happy
5. Delighted
| Label | # |
|:---------:|:----:|
| Furious | 236 |
| Angry | 1357 |
| Neutral | 2874 |
| Happy | 2848 |
| Delighted | 2516 |
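As a small sketch of the binary-form construction described above (illustrative only, not the authors' preprocessing code):

```python
# Collapse the five DeepSentiPers classes into the binary scheme described above:
# neutral examples are dropped, the remaining classes map to Negative/Positive.
BINARY_MAP = {
    "Furious": "Negative",
    "Angry": "Negative",
    "Happy": "Positive",
    "Delighted": "Positive",
}

labels = ["Furious", "Neutral", "Delighted"]
binary = [BINARY_MAP[l] for l in labels if l != "Neutral"]
print(binary)  # ['Negative', 'Positive']
```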
**Download**
You can download the dataset from:
- [SentiPers](https://github.com/phosseini/sentipers)
- [DeepSentiPers](https://github.com/JoyeBright/DeepSentiPers)
## Results
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
| Dataset | ParsBERT v2 | ParsBERT v1 | mBERT | DeepSentiPers |
|:------------------------:|:-----------:|:-----------:|:-----:|:-------------:|
| SentiPers (Multi Class) | 71.31* | 71.11 | - | 69.33 |
| SentiPers (Binary Class) | 92.42* | 92.13 | - | 91.98 |
## How to use :hugs:
| Task | Notebook |
|---------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Sentiment Analysis | [](https://colab.research.google.com/github/hooshvare/parsbert/blob/master/notebooks/Taaghche_Sentiment_Analysis.ipynb) |
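For a quick inline alternative to the notebook, a minimal pipeline sketch might look like this (the Persian example sentence is a placeholder):

```python
# Minimal sketch (assumes the transformers library; the example sentence is made up).
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="HooshvareLab/bert-fa-base-uncased-sentiment-deepsentipers-binary",
)
print(classifier("این محصول عالی بود"))  # "This product was great"
```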
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a Github issue on the [ParsBERT Issues](https://github.com/hooshvare/parsbert/issues) repo. |
rinna/llama-3-youko-8b | rinna | "2024-05-07T01:59:47Z" | 11,844 | 55 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ja",
"en",
"dataset:mc4",
"dataset:wikipedia",
"dataset:EleutherAI/pile",
"dataset:oscar-corpus/colossal-oscar-1.0",
"dataset:cc100",
"arxiv:2404.01657",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-01T07:53:45Z" | ---
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
license: llama3
datasets:
- mc4
- wikipedia
- EleutherAI/pile
- oscar-corpus/colossal-oscar-1.0
- cc100
language:
- ja
- en
inference: false
---
# `Llama 3 Youko 8B (rinna/llama-3-youko-8b)`

# Overview
We conduct continual pre-training of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on **22B** tokens from a mixture of Japanese and English datasets. The continual pre-training significantly improves the model's performance on Japanese tasks.
The name `youko` comes from the Japanese word [`妖狐/ようこ/Youko`](https://ja.wikipedia.org/wiki/%E5%A6%96%E7%8B%90), which is a kind of Japanese mythical creature ([`妖怪/ようかい/Youkai`](https://ja.wikipedia.org/wiki/%E5%A6%96%E6%80%AA)).
* **Library**
The model was trained using code based on [EleutherAI/gpt-neox](https://github.com/EleutherAI/gpt-neox).
* **Model architecture**
A 32-layer, 4096-hidden-size transformer-based language model. Refer to the [Llama 3 Model Card](https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md) for architecture details.
* **Training: Built with Meta Llama 3**
The model was initialized with the [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) model and continually trained on around **22B** tokens from a mixture of the following corpora
- [Japanese CC-100](https://huggingface.co/datasets/cc100)
- [Japanese C4](https://huggingface.co/datasets/mc4)
- [Japanese OSCAR](https://huggingface.co/datasets/oscar-corpus/colossal-oscar-1.0)
- [The Pile](https://huggingface.co/datasets/EleutherAI/pile)
- [Wikipedia](https://dumps.wikimedia.org/other/cirrussearch)
- rinna curated Japanese dataset
* **Contributors**
- [Koh Mitsuda](https://huggingface.co/mitsu-koh)
- [Kei Sawada](https://huggingface.co/keisawada)
---
# Benchmarking
Please refer to [rinna's LM benchmark page](https://rinnakk.github.io/research/benchmarks/lm/index.html).
---
# How to use the model
~~~~python
import transformers
import torch

model_id = "rinna/llama-3-youko-8b"

# load the model in bfloat16 and let accelerate place it on the available devices
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto"
)

output = pipeline(
    "西田幾多郎は、",  # a Japanese completion prompt ("Kitaro Nishida was, ...")
    max_new_tokens=256,
    do_sample=True
)
print(output)
~~~~
---
# Tokenization
The model uses the original meta-llama/Meta-Llama-3-8B tokenizer.
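For illustration, the tokenizer can be loaded on its own with the standard `transformers` API (a minimal sketch):

~~~~python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("rinna/llama-3-youko-8b")
# the vocabulary matches Meta-Llama-3-8B, so token counts are identical to the base model
print(tokenizer.tokenize("西田幾多郎は、"))
~~~~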
---
# How to cite
```bibtex
@misc{rinna-llama-3-youko-8b,
title = {rinna/llama-3-youko-8b},
author = {Mitsuda, Koh and Sawada, Kei},
url = {https://huggingface.co/rinna/llama-3-youko-8b},
}
@inproceedings{sawada2024release,
title = {Release of Pre-Trained Models for the {J}apanese Language},
author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh},
booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
month = {5},
year = {2024},
url = {https://arxiv.org/abs/2404.01657},
}
```
---
# References
```bibtex
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
@software{gpt-neox-library,
title = {{GPT-NeoX: Large Scale Autoregressive Language Modeling in PyTorch}},
author = {Andonian, Alex and Anthony, Quentin and Biderman, Stella and Black, Sid and Gali, Preetham and Gao, Leo and Hallahan, Eric and Levy-Kramer, Josh and Leahy, Connor and Nestler, Lucas and Parker, Kip and Pieler, Michael and Purohit, Shivanshu and Songz, Tri and Phil, Wang and Weinbach, Samuel},
doi = {10.5281/zenodo.5879544},
month = {8},
year = {2021},
version = {0.0.1},
url = {https://www.github.com/eleutherai/gpt-neox},
}
```
---
# License
[Meta Llama 3 Community License](https://llama.meta.com/llama3/license/) |
mradermacher/llama3-8B-DarkIdol-1.1-i1-GGUF | mradermacher | "2024-06-20T19:52:47Z" | 11,841 | 1 | transformers | [
"transformers",
"gguf",
"roleplay",
"llama3",
"sillytavern",
"idol",
"en",
"ja",
"zh",
"base_model:aifeifei798/llama3-8B-DarkIdol-1.1",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | "2024-06-20T17:09:44Z" | ---
base_model: aifeifei798/llama3-8B-DarkIdol-1.1
language:
- en
- ja
- zh
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- roleplay
- llama3
- sillytavern
- idol
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/llama3-8B-DarkIdol-1.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
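As a minimal command-line sketch with llama.cpp (the file name below matches the Q4_K_M quant in the table; the prompt, token count, and split-file naming are assumptions):

```sh
# a quick generation test (llama-cli was formerly named `main`)
llama-cli -m llama3-8B-DarkIdol-1.1.i1-Q4_K_M.gguf -p "Hello, my name is" -n 64
# multi-part downloads can simply be concatenated before use, e.g.:
# cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf
```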
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama3-8B-DarkIdol-1.1-i1-GGUF/resolve/main/llama3-8B-DarkIdol-1.1.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/llama3-8B-DarkIdol-1.1-i1-GGUF/resolve/main/llama3-8B-DarkIdol-1.1.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/llama3-8B-DarkIdol-1.1-i1-GGUF/resolve/main/llama3-8B-DarkIdol-1.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-8B-DarkIdol-1.1-i1-GGUF/resolve/main/llama3-8B-DarkIdol-1.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-8B-DarkIdol-1.1-i1-GGUF/resolve/main/llama3-8B-DarkIdol-1.1.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-8B-DarkIdol-1.1-i1-GGUF/resolve/main/llama3-8B-DarkIdol-1.1.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-8B-DarkIdol-1.1-i1-GGUF/resolve/main/llama3-8B-DarkIdol-1.1.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/llama3-8B-DarkIdol-1.1-i1-GGUF/resolve/main/llama3-8B-DarkIdol-1.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama3-8B-DarkIdol-1.1-i1-GGUF/resolve/main/llama3-8B-DarkIdol-1.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-8B-DarkIdol-1.1-i1-GGUF/resolve/main/llama3-8B-DarkIdol-1.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/llama3-8B-DarkIdol-1.1-i1-GGUF/resolve/main/llama3-8B-DarkIdol-1.1.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/llama3-8B-DarkIdol-1.1-i1-GGUF/resolve/main/llama3-8B-DarkIdol-1.1.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-8B-DarkIdol-1.1-i1-GGUF/resolve/main/llama3-8B-DarkIdol-1.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/llama3-8B-DarkIdol-1.1-i1-GGUF/resolve/main/llama3-8B-DarkIdol-1.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/llama3-8B-DarkIdol-1.1-i1-GGUF/resolve/main/llama3-8B-DarkIdol-1.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-8B-DarkIdol-1.1-i1-GGUF/resolve/main/llama3-8B-DarkIdol-1.1.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/llama3-8B-DarkIdol-1.1-i1-GGUF/resolve/main/llama3-8B-DarkIdol-1.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/llama3-8B-DarkIdol-1.1-i1-GGUF/resolve/main/llama3-8B-DarkIdol-1.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama3-8B-DarkIdol-1.1-i1-GGUF/resolve/main/llama3-8B-DarkIdol-1.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-8B-DarkIdol-1.1-i1-GGUF/resolve/main/llama3-8B-DarkIdol-1.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-8B-DarkIdol-1.1-i1-GGUF/resolve/main/llama3-8B-DarkIdol-1.1.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
RichardErkhov/ssmits_-_Falcon2-5.5B-multilingual-gguf | RichardErkhov | "2024-06-24T10:59:22Z" | 11,837 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-24T06:13:17Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Falcon2-5.5B-multilingual - GGUF
- Model creator: https://huggingface.co/ssmits/
- Original model: https://huggingface.co/ssmits/Falcon2-5.5B-multilingual/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Falcon2-5.5B-multilingual.Q2_K.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-multilingual-gguf/blob/main/Falcon2-5.5B-multilingual.Q2_K.gguf) | Q2_K | 2.03GB |
| [Falcon2-5.5B-multilingual.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-multilingual-gguf/blob/main/Falcon2-5.5B-multilingual.IQ3_XS.gguf) | IQ3_XS | 2.29GB |
| [Falcon2-5.5B-multilingual.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-multilingual-gguf/blob/main/Falcon2-5.5B-multilingual.IQ3_S.gguf) | IQ3_S | 2.35GB |
| [Falcon2-5.5B-multilingual.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-multilingual-gguf/blob/main/Falcon2-5.5B-multilingual.Q3_K_S.gguf) | Q3_K_S | 2.35GB |
| [Falcon2-5.5B-multilingual.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-multilingual-gguf/blob/main/Falcon2-5.5B-multilingual.IQ3_M.gguf) | IQ3_M | 2.46GB |
| [Falcon2-5.5B-multilingual.Q3_K.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-multilingual-gguf/blob/main/Falcon2-5.5B-multilingual.Q3_K.gguf) | Q3_K | 2.56GB |
| [Falcon2-5.5B-multilingual.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-multilingual-gguf/blob/main/Falcon2-5.5B-multilingual.Q3_K_M.gguf) | Q3_K_M | 2.56GB |
| [Falcon2-5.5B-multilingual.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-multilingual-gguf/blob/main/Falcon2-5.5B-multilingual.Q3_K_L.gguf) | Q3_K_L | 2.72GB |
| [Falcon2-5.5B-multilingual.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-multilingual-gguf/blob/main/Falcon2-5.5B-multilingual.IQ4_XS.gguf) | IQ4_XS | 2.87GB |
| [Falcon2-5.5B-multilingual.Q4_0.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-multilingual-gguf/blob/main/Falcon2-5.5B-multilingual.Q4_0.gguf) | Q4_0 | 2.99GB |
| [Falcon2-5.5B-multilingual.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-multilingual-gguf/blob/main/Falcon2-5.5B-multilingual.IQ4_NL.gguf) | IQ4_NL | 3.01GB |
| [Falcon2-5.5B-multilingual.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-multilingual-gguf/blob/main/Falcon2-5.5B-multilingual.Q4_K_S.gguf) | Q4_K_S | 2.99GB |
| [Falcon2-5.5B-multilingual.Q4_K.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-multilingual-gguf/blob/main/Falcon2-5.5B-multilingual.Q4_K.gguf) | Q4_K | 3.19GB |
| [Falcon2-5.5B-multilingual.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-multilingual-gguf/blob/main/Falcon2-5.5B-multilingual.Q4_K_M.gguf) | Q4_K_M | 3.19GB |
| [Falcon2-5.5B-multilingual.Q4_1.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-multilingual-gguf/blob/main/Falcon2-5.5B-multilingual.Q4_1.gguf) | Q4_1 | 3.29GB |
| [Falcon2-5.5B-multilingual.Q5_0.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-multilingual-gguf/blob/main/Falcon2-5.5B-multilingual.Q5_0.gguf) | Q5_0 | 3.6GB |
| [Falcon2-5.5B-multilingual.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-multilingual-gguf/blob/main/Falcon2-5.5B-multilingual.Q5_K_S.gguf) | Q5_K_S | 3.6GB |
| [Falcon2-5.5B-multilingual.Q5_K.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-multilingual-gguf/blob/main/Falcon2-5.5B-multilingual.Q5_K.gguf) | Q5_K | 3.8GB |
| [Falcon2-5.5B-multilingual.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-multilingual-gguf/blob/main/Falcon2-5.5B-multilingual.Q5_K_M.gguf) | Q5_K_M | 3.8GB |
| [Falcon2-5.5B-multilingual.Q5_1.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-multilingual-gguf/blob/main/Falcon2-5.5B-multilingual.Q5_1.gguf) | Q5_1 | 3.9GB |
| [Falcon2-5.5B-multilingual.Q6_K.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-multilingual-gguf/blob/main/Falcon2-5.5B-multilingual.Q6_K.gguf) | Q6_K | 4.24GB |
| [Falcon2-5.5B-multilingual.Q8_0.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-multilingual-gguf/blob/main/Falcon2-5.5B-multilingual.Q8_0.gguf) | Q8_0 | 5.41GB |
Original model description:
---
base_model:
- tiiuae/falcon-11B
library_name: transformers
tags:
- mergekit
- merge
- lazymergekit
- tiiuae/falcon-11B
license: apache-2.0
language:
- es
- fr
- de
- 'no'
- sv
- da
- nl
- pt
- pl
- ro
- it
- cs
---
## Why prune?
Even though [Falcon-11B](https://huggingface.co/tiiuae/falcon-11B) is trained on 5T tokens, it is still undertrained, as can be seen by this graph:

This is why the choice is made to prune 50% of the layers.
Note that \~1B of continued pre-training (\~1M rows of 1k tokens) is still required to restore the perplexity of this model in the desired language.
I'm planning on doing that for certain languages when fineweb-edu-{specific_language} becomes available, depending on how much compute is available.
# sliced
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was pruned using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [tiiuae/falcon-11B](https://huggingface.co/tiiuae/falcon-11B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: tiiuae/falcon-11B
layer_range: [0, 24]
- sources:
- model: tiiuae/falcon-11B
layer_range: [55, 59]
merge_method: passthrough
dtype: bfloat16
```
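For reference, a config like this is typically applied with the mergekit CLI (a sketch; the output directory name is arbitrary):

```sh
mergekit-yaml config.yaml ./Falcon2-5.5B-multilingual
```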
[PruneMe](https://github.com/arcee-ai/PruneMe) was used to investigate layer similarity on the wikimedia/wikipedia subsets of 11 languages, with 2000 samples per language. The layer ranges for pruning were determined from the averages of each language's analysis, to maintain performance while reducing model size.

```python
from transformers import AutoTokenizer
import transformers
import torch
model = "ssmits/Falcon2-5.5B-multilingual"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
)
sequences = pipeline(
"Can you explain the concepts of Quantum Computing?",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**

For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blogpost](https://huggingface.co/blog/falcon).
## Direct Use
Research on large language models; as a foundation for further specialization and finetuning for specific use cases (e.g., summarization, text generation, chatbot, etc.)
## Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
Falcon2-5.5B is trained mostly on English, but also German, Spanish, French, Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish. It will not generalize appropriately to other languages. Furthermore, as it is trained on a large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
## Recommendations
We recommend that users of Falcon2-5.5B consider finetuning it for their specific set of tasks, and that guardrails and appropriate precautions be taken for any production use.
|
LeroyDyer/Spydaz_Web_AI_ | LeroyDyer | "2024-06-28T08:58:43Z" | 11,836 | 1 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"trl",
"chemistry",
"biology",
"legal",
"art",
"music",
"finance",
"code",
"medical",
"not-for-all-audiences",
"merge",
"climate",
"chain-of-thought",
"tree-of-knowledge",
"forest-of-thoughts",
"visual-spacial-sketchpad",
"alpha-mind",
"knowledge-graph",
"entity-detection",
"encyclopedia",
"wikipedia",
"stack-exchange",
"Reddit",
"Cyber-series",
"MegaMind",
"Cybertron",
"SpydazWeb",
"Spydaz",
"LCARS",
"star-trek",
"mega-transformers",
"Mulit-Mega-Merge",
"Multi-Lingual",
"Afro-Centric",
"African-Model",
"Ancient-One",
"en",
"sw",
"ig",
"so",
"es",
"ca",
"dataset:gretelai/synthetic_text_to_sql",
"dataset:HuggingFaceTB/cosmopedia",
"dataset:teknium/OpenHermes-2.5",
"dataset:Open-Orca/SlimOrca",
"dataset:Open-Orca/OpenOrca",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:databricks/databricks-dolly-15k",
"dataset:yahma/alpaca-cleaned",
"dataset:uonlp/CulturaX",
"dataset:mwitiderrick/SwahiliPlatypus",
"dataset:swahili",
"dataset:Rogendo/English-Swahili-Sentence-Pairs",
"dataset:ise-uiuc/Magicoder-Evol-Instruct-110K",
"dataset:meta-math/MetaMathQA",
"dataset:abacusai/ARC_DPO_FewShot",
"dataset:abacusai/MetaMath_DPO_FewShot",
"dataset:abacusai/HellaSwag_DPO_FewShot",
"dataset:HaltiaAI/Her-The-Movie-Samantha-and-Theodore-Dataset",
"dataset:HuggingFaceFW/fineweb",
"dataset:occiglot/occiglot-fineweb-v0.5",
"dataset:omi-health/medical-dialogue-to-soap-summary",
"dataset:keivalya/MedQuad-MedicalQnADataset",
"dataset:ruslanmv/ai-medical-dataset",
"dataset:Shekswess/medical_llama3_instruct_dataset_short",
"dataset:ShenRuililin/MedicalQnA",
"dataset:virattt/financial-qa-10K",
"dataset:PatronusAI/financebench",
"dataset:takala/financial_phrasebank",
"dataset:Replete-AI/code_bagel",
"dataset:athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW",
"dataset:IlyaGusev/gpt_roleplay_realm",
"dataset:rickRossie/bluemoon_roleplay_chat_data_300k_messages",
"dataset:jtatman/hypnosis_dataset",
"dataset:Hypersniper/philosophy_dialogue",
"dataset:Locutusque/function-calling-chatml",
"dataset:bible-nlp/biblenlp-corpus",
"dataset:DatadudeDev/Bible",
"dataset:Helsinki-NLP/bible_para",
"dataset:HausaNLP/AfriSenti-Twitter",
"dataset:aixsatoshi/Chat-with-cosmopedia",
"dataset:HuggingFaceTB/cosmopedia-100k",
"dataset:HuggingFaceFW/fineweb-edu",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:heliosbrahma/mental_health_chatbot_dataset",
"base_model:LeroyDyer/_Spydaz_Web_AI_",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-19T19:02:47Z" | ---
language:
- en
- sw
- ig
- so
- es
- ca
license: apache-2.0
metrics:
- accuracy
- bertscore
- bleu
- brier_score
- cer
- character
- charcut_mt
- chrf
- code_eval
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- chemistry
- biology
- legal
- art
- music
- finance
- code
- medical
- not-for-all-audiences
- merge
- climate
- chain-of-thought
- tree-of-knowledge
- forest-of-thoughts
- visual-spacial-sketchpad
- alpha-mind
- knowledge-graph
- entity-detection
- encyclopedia
- wikipedia
- stack-exchange
- Reddit
- Cyber-series
- MegaMind
- Cybertron
- SpydazWeb
- Spydaz
- LCARS
- star-trek
- mega-transformers
- Mulit-Mega-Merge
- Multi-Lingual
- Afro-Centric
- African-Model
- Ancient-One
datasets:
- gretelai/synthetic_text_to_sql
- HuggingFaceTB/cosmopedia
- teknium/OpenHermes-2.5
- Open-Orca/SlimOrca
- Open-Orca/OpenOrca
- cognitivecomputations/dolphin-coder
- databricks/databricks-dolly-15k
- yahma/alpaca-cleaned
- uonlp/CulturaX
- mwitiderrick/SwahiliPlatypus
- swahili
- Rogendo/English-Swahili-Sentence-Pairs
- ise-uiuc/Magicoder-Evol-Instruct-110K
- meta-math/MetaMathQA
- abacusai/ARC_DPO_FewShot
- abacusai/MetaMath_DPO_FewShot
- abacusai/HellaSwag_DPO_FewShot
- HaltiaAI/Her-The-Movie-Samantha-and-Theodore-Dataset
- HuggingFaceFW/fineweb
- occiglot/occiglot-fineweb-v0.5
- omi-health/medical-dialogue-to-soap-summary
- keivalya/MedQuad-MedicalQnADataset
- ruslanmv/ai-medical-dataset
- Shekswess/medical_llama3_instruct_dataset_short
- ShenRuililin/MedicalQnA
- virattt/financial-qa-10K
- PatronusAI/financebench
- takala/financial_phrasebank
- Replete-AI/code_bagel
- athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW
- IlyaGusev/gpt_roleplay_realm
- rickRossie/bluemoon_roleplay_chat_data_300k_messages
- jtatman/hypnosis_dataset
- Hypersniper/philosophy_dialogue
- Locutusque/function-calling-chatml
- bible-nlp/biblenlp-corpus
- DatadudeDev/Bible
- Helsinki-NLP/bible_para
- HausaNLP/AfriSenti-Twitter
- aixsatoshi/Chat-with-cosmopedia
- HuggingFaceTB/cosmopedia-100k
- HuggingFaceFW/fineweb-edu
- m-a-p/CodeFeedback-Filtered-Instruction
- heliosbrahma/mental_health_chatbot_dataset
base_model: LeroyDyer/_Spydaz_Web_AI_
---
# Uploaded model
- **Developed by:** Leroy "Spydaz" Dyer
- **License:** apache-2.0
- **Finetuned from model :** LeroyDyer/LCARS_AI_010
[<img src="https://cdn-avatars.huggingface.co/v1/production/uploads/65d883893a52cd9bcd8ab7cf/tRsCJlHNZo1D02kBTmfy9.jpeg" width="300"/>](https://github.com/spydaz)
* The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.2.
* Mistral-7B-v0.2 has the following changes compared to Mistral-7B-v0.1
* 32k context window (vs 8k context in v0.1)
* Rope-theta = 1e6
* No Sliding-Window Attention
# Introduction :
## SpydazWeb AI model :
### Methods:
Trained for multi-task operations as well as RAG and function calling.

This model is a fully functioning model and is fully uncensored.

The model has been trained on multiple datasets from the Hugging Face Hub and Kaggle; the focus has been mainly on methodology:

* Chain of thoughts
* Step by step
* Tree of thoughts
* Forest of thoughts
* Graph of thoughts
* Agent generation: voting, ranking, ...

With these methods the model has gained insights into tasks, enabling knowledge transfer between tasks.

The model has been intensively trained in recalling data previously entered into the matrix.
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
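A minimal loading sketch with Unsloth (the context length, 4-bit flag, and Alpaca-style prompt are illustrative assumptions):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="LeroyDyer/Spydaz_Web_AI_",
    max_seq_length=32768,  # the Mistral v0.2 base supports a 32k context window
    load_in_4bit=True,     # 4-bit loading keeps VRAM requirements modest
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path

prompt = "### Instruction:\nSummarise retrieval-augmented generation in one sentence.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```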
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
QuantFactory/Storm-7B-GGUF | QuantFactory | "2024-06-30T05:37:43Z" | 11,831 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-28T14:34:04Z" | Entry not found |
textattack/bert-base-uncased-SST-2 | textattack | "2021-05-20T07:37:12Z" | 11,814 | 2 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-02T23:29:05Z" | Entry not found |
failspy/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF | failspy | "2024-05-30T12:36:33Z" | 11,805 | 36 | transformers | [
"transformers",
"gguf",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | "2024-05-20T16:19:57Z" | ---
library_name: transformers
license: llama3
---
# Llama-3-8B-Instruct-abliterated-v3 Model Card
[My Jupyter "cookbook" to replicate the methodology can be found here, refined library coming soon](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb)
This is [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) with orthogonalized bfloat16 safetensor weights, generated with a refined methodology based on that which was described in the preview paper/blog post: '[Refusal in LLMs is mediated by a single direction](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction)' which I encourage you to read to understand more.
## Hang on, "abliteration"? Orthogonalization? Ablation? What is this?
TL;DR: This model has had certain weights manipulated to "inhibit" the model's ability to express refusal. It is not in any way _guaranteed_ that it won't refuse you or misunderstand your request; it may still lecture you about ethics/safety, etc. It is tuned in all other respects the same as the original 8B instruct model was, just with the strongest refusal directions orthogonalized out.
**TL;TL;DR;DR: It's uncensored in the purest form I can manage -- no new or changed behaviour in any other respect from the original model.**
As far as "abliteration": it's just a fun play on words on the "ablation" term used in the original paper to refer to removing features, which I made up particularly to differentiate the model from "uncensored" fine-tunes.
Ablate + obliterated = Abliterated
Anyways, orthogonalization and ablation both refer to the same thing here: the technique by which the refusal feature was "ablated" from the model was orthogonalization.
## A little more on the methodology, and why this is interesting
To me, ablation (or applying the methodology for the inverse, "augmentation") seems to be good for inducing/removing very specific features that you'd have to spend way too many tokens on encouraging or discouraging in your system prompt.
Instead, you just apply your system prompt in the ablation script against a blank system prompt on the same dataset and orthogonalize for the desired behaviour in the final model weights.
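In weight terms, ablating a direction is a rank-1 orthogonal projection applied to each matrix that writes into the residual stream. A minimal sketch of that projection follows (estimating `refusal_dir` from contrastive harmful/harmless prompts is the part the cookbook covers and is assumed here):

```python
import torch

def ablate_direction(W: torch.Tensor, refusal_dir: torch.Tensor) -> torch.Tensor:
    """Remove the component of each output along refusal_dir: W' = W - r (r^T W)."""
    r = refusal_dir / refusal_dir.norm()  # unit vector of shape (d_model,)
    return W - torch.outer(r, r @ W)      # rank-1 update; W keeps shape (d_model, d_in)

# applied to e.g. attention output and MLP down-projection matrices of every layer
```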
> Why this over fine-tuning?
Ablation is much more surgical in nature whilst also being effectively executed with a _lot_ less data than fine-tuning, which I think is its main advantage.
Its most valuable aspect is that it keeps as much of the original model's knowledge and training intact, whilst removing its tendency to behave in one very specific undesirable manner (in this case, refusing user requests).
Fine tuning is still exceptionally useful and the go-to for broad behaviour changes; however, you may be able to get close to your desired behaviour with very few samples using the ablation/augmentation techniques.
It may also be a useful step to add to your model refinement: orthogonalize -> fine-tune or vice-versa.
I haven't really gotten around to exploring this model stacked with fine-tuning; I encourage others to give it a shot if they've got the capacity.
> Okay, fine, but why V3? There's no V2 70B?
Well, I released a V2 a while back for 8B under Cognitive Computations.
It ended up being not worth it to try V2 with 70B, I wanted to refine the model before wasting compute cycles on what might not even be a better model.
I am however quite pleased about this latest methodology, it seems to have induced fewer hallucinations.
So to show that it's a new fancy methodology from even that of the 8B V2, I decided to do a Microsoft and double up on my version jump because it's *such* an advancement (or so the excuse went, when in actuality it was because too many legacy but actively used Microsoft libraries checked for 'Windows 9' in the OS name to detect Windows 95/98 as one.)
## Quirkiness awareness notice
This model may come with interesting quirks, with the methodology being so new. I encourage you to play with the model, and post any quirks you notice in the community tab, as that'll help us further understand what this orthogonalization has in the way of side effects.
If you manage to develop further improvements, please share! This is really the most basic way to use ablation, but there are other possibilities that I believe are as-yet unexplored.
Additionally, feel free to reach out in any way about this. I'm on the Cognitive Computations Discord, I'm watching the Community tab, reach out! I'd love to see this methodology used in other ways, and so would gladly support whoever whenever I can.
## GGUF quants
Please feel free to quantize or convert to other backends and reupload!
Generally speaking, you want to pick the model whose size in GB fits closest to your max RAM/VRAM (without getting too close; you'll still need room for context!).
Uploaded quants:

* fp16 - good for converting to other platforms or getting the quantization you actually want, not recommended but obviously highest quality
* q8_0
* q6_0 - this will probably be the best balance in terms of quality/performance
* q4
* q3_k_m |
kykim/electra-kor-base | kykim | "2021-01-22T00:28:50Z" | 11,804 | 2 | transformers | [
"transformers",
"pytorch",
"tf",
"electra",
"pretraining",
"ko",
"endpoints_compatible",
"region:us"
] | null | "2022-03-02T23:29:05Z" | ---
language: ko
---
# Electra base model for Korean
* A 70GB Korean text dataset and 42,000 lower-cased subwords are used
* Check the model performance and other language models for Korean on [GitHub](https://github.com/kiyoungkim1/LM-kor)
```python
from transformers import ElectraTokenizerFast, ElectraModel
tokenizer_electra = ElectraTokenizerFast.from_pretrained("kykim/electra-kor-base")
model = ElectraModel.from_pretrained("kykim/electra-kor-base")
``` |
mradermacher/PetrolWriter-7B-i1-GGUF | mradermacher | "2024-06-20T17:22:41Z" | 11,787 | 0 | transformers | [
"transformers",
"gguf",
"art",
"not-for-all-audiences",
"en",
"base_model:Norquinal/PetrolWriter-7B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-20T16:12:13Z" | ---
base_model: Norquinal/PetrolWriter-7B
language: en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- art
- not-for-all-audiences
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Norquinal/PetrolWriter-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/PetrolWriter-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/PetrolWriter-7B-i1-GGUF/resolve/main/PetrolWriter-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/PetrolWriter-7B-i1-GGUF/resolve/main/PetrolWriter-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/PetrolWriter-7B-i1-GGUF/resolve/main/PetrolWriter-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/PetrolWriter-7B-i1-GGUF/resolve/main/PetrolWriter-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/PetrolWriter-7B-i1-GGUF/resolve/main/PetrolWriter-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/PetrolWriter-7B-i1-GGUF/resolve/main/PetrolWriter-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/PetrolWriter-7B-i1-GGUF/resolve/main/PetrolWriter-7B.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/PetrolWriter-7B-i1-GGUF/resolve/main/PetrolWriter-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/PetrolWriter-7B-i1-GGUF/resolve/main/PetrolWriter-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/PetrolWriter-7B-i1-GGUF/resolve/main/PetrolWriter-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/PetrolWriter-7B-i1-GGUF/resolve/main/PetrolWriter-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/PetrolWriter-7B-i1-GGUF/resolve/main/PetrolWriter-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/PetrolWriter-7B-i1-GGUF/resolve/main/PetrolWriter-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/PetrolWriter-7B-i1-GGUF/resolve/main/PetrolWriter-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/PetrolWriter-7B-i1-GGUF/resolve/main/PetrolWriter-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/PetrolWriter-7B-i1-GGUF/resolve/main/PetrolWriter-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/PetrolWriter-7B-i1-GGUF/resolve/main/PetrolWriter-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/PetrolWriter-7B-i1-GGUF/resolve/main/PetrolWriter-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PetrolWriter-7B-i1-GGUF/resolve/main/PetrolWriter-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/PetrolWriter-7B-i1-GGUF/resolve/main/PetrolWriter-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/PetrolWriter-7B-i1-GGUF/resolve/main/PetrolWriter-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
mradermacher/Llama-3-Umbral-Mind-SimPO-8B-i1-GGUF | mradermacher | "2024-06-25T00:45:26Z" | 11,783 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:mpasila/Llama-3-Umbral-Mind-SimPO-8B",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | "2024-06-24T19:49:44Z" | ---
base_model: mpasila/Llama-3-Umbral-Mind-SimPO-8B
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/mpasila/Llama-3-Umbral-Mind-SimPO-8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama-3-Umbral-Mind-SimPO-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Umbral-Mind-SimPO-8B-i1-GGUF/resolve/main/Llama-3-Umbral-Mind-SimPO-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Umbral-Mind-SimPO-8B-i1-GGUF/resolve/main/Llama-3-Umbral-Mind-SimPO-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Umbral-Mind-SimPO-8B-i1-GGUF/resolve/main/Llama-3-Umbral-Mind-SimPO-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Umbral-Mind-SimPO-8B-i1-GGUF/resolve/main/Llama-3-Umbral-Mind-SimPO-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Umbral-Mind-SimPO-8B-i1-GGUF/resolve/main/Llama-3-Umbral-Mind-SimPO-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Umbral-Mind-SimPO-8B-i1-GGUF/resolve/main/Llama-3-Umbral-Mind-SimPO-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Umbral-Mind-SimPO-8B-i1-GGUF/resolve/main/Llama-3-Umbral-Mind-SimPO-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Umbral-Mind-SimPO-8B-i1-GGUF/resolve/main/Llama-3-Umbral-Mind-SimPO-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Umbral-Mind-SimPO-8B-i1-GGUF/resolve/main/Llama-3-Umbral-Mind-SimPO-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Umbral-Mind-SimPO-8B-i1-GGUF/resolve/main/Llama-3-Umbral-Mind-SimPO-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Umbral-Mind-SimPO-8B-i1-GGUF/resolve/main/Llama-3-Umbral-Mind-SimPO-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Umbral-Mind-SimPO-8B-i1-GGUF/resolve/main/Llama-3-Umbral-Mind-SimPO-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Umbral-Mind-SimPO-8B-i1-GGUF/resolve/main/Llama-3-Umbral-Mind-SimPO-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Umbral-Mind-SimPO-8B-i1-GGUF/resolve/main/Llama-3-Umbral-Mind-SimPO-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Umbral-Mind-SimPO-8B-i1-GGUF/resolve/main/Llama-3-Umbral-Mind-SimPO-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Umbral-Mind-SimPO-8B-i1-GGUF/resolve/main/Llama-3-Umbral-Mind-SimPO-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Umbral-Mind-SimPO-8B-i1-GGUF/resolve/main/Llama-3-Umbral-Mind-SimPO-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Umbral-Mind-SimPO-8B-i1-GGUF/resolve/main/Llama-3-Umbral-Mind-SimPO-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Umbral-Mind-SimPO-8B-i1-GGUF/resolve/main/Llama-3-Umbral-Mind-SimPO-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Umbral-Mind-SimPO-8B-i1-GGUF/resolve/main/Llama-3-Umbral-Mind-SimPO-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Umbral-Mind-SimPO-8B-i1-GGUF/resolve/main/Llama-3-Umbral-Mind-SimPO-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
romjin/rom161 | romjin | "2024-06-08T18:28:03Z" | 11,779 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-08T18:25:03Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
romjin/rom164 | romjin | "2024-06-08T18:46:30Z" | 11,775 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-08T18:43:27Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
absa/classifier-rest-0.2 | absa | "2021-05-19T11:37:54Z" | 11,763 | 2 | transformers | [
"transformers",
"tf",
"bert",
"endpoints_compatible",
"region:us"
] | null | "2022-03-02T23:29:05Z" | Entry not found |
mradermacher/L3-8B-SMaid-v0.1-i1-GGUF | mradermacher | "2024-06-23T01:44:45Z" | 11,760 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Alsebay/L3-8B-SMaid-v0.1",
"endpoints_compatible",
"region:us"
] | null | "2024-06-23T00:26:30Z" | ---
base_model: Alsebay/L3-8B-SMaid-v0.1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Alsebay/L3-8B-SMaid-v0.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/L3-8B-SMaid-v0.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3-8B-SMaid-v0.1-i1-GGUF/resolve/main/L3-8B-SMaid-v0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-SMaid-v0.1-i1-GGUF/resolve/main/L3-8B-SMaid-v0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-SMaid-v0.1-i1-GGUF/resolve/main/L3-8B-SMaid-v0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-SMaid-v0.1-i1-GGUF/resolve/main/L3-8B-SMaid-v0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-SMaid-v0.1-i1-GGUF/resolve/main/L3-8B-SMaid-v0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-SMaid-v0.1-i1-GGUF/resolve/main/L3-8B-SMaid-v0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-SMaid-v0.1-i1-GGUF/resolve/main/L3-8B-SMaid-v0.1.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-SMaid-v0.1-i1-GGUF/resolve/main/L3-8B-SMaid-v0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-SMaid-v0.1-i1-GGUF/resolve/main/L3-8B-SMaid-v0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-SMaid-v0.1-i1-GGUF/resolve/main/L3-8B-SMaid-v0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-SMaid-v0.1-i1-GGUF/resolve/main/L3-8B-SMaid-v0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-SMaid-v0.1-i1-GGUF/resolve/main/L3-8B-SMaid-v0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-SMaid-v0.1-i1-GGUF/resolve/main/L3-8B-SMaid-v0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-SMaid-v0.1-i1-GGUF/resolve/main/L3-8B-SMaid-v0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-SMaid-v0.1-i1-GGUF/resolve/main/L3-8B-SMaid-v0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-SMaid-v0.1-i1-GGUF/resolve/main/L3-8B-SMaid-v0.1.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-SMaid-v0.1-i1-GGUF/resolve/main/L3-8B-SMaid-v0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-SMaid-v0.1-i1-GGUF/resolve/main/L3-8B-SMaid-v0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-SMaid-v0.1-i1-GGUF/resolve/main/L3-8B-SMaid-v0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-SMaid-v0.1-i1-GGUF/resolve/main/L3-8B-SMaid-v0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-SMaid-v0.1-i1-GGUF/resolve/main/L3-8B-SMaid-v0.1.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
leo009/stable-diffusion-3-medium | leo009 | "2024-06-15T06:33:54Z" | 11,757 | 6 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"en",
"arxiv:2403.03206",
"license:other",
"region:us"
] | text-to-image | "2024-06-15T04:57:14Z" | ---
license: other
license_name: stabilityai-nc-research-community
license_link: LICENSE
tags:
- text-to-image
- stable-diffusion
- diffusers
extra_gated_prompt: >-
By clicking "Agree", you agree to the [License
Agreement](https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE)
and acknowledge Stability AI's [Privacy
Policy](https://stability.ai/privacy-policy).
extra_gated_fields:
Name: text
Email: text
Country: country
Organization or Affiliation: text
Receive email updates and promotions on Stability AI products, services, and research?:
type: select
options:
- 'Yes'
- 'No'
I acknowledge that this model is for non-commercial use only unless I acquire a separate license from Stability AI: checkbox
language:
- en
pipeline_tag: text-to-image
---
# Stable Diffusion 3 Medium

## Model

[Stable Diffusion 3 Medium](https://stability.ai/news/stable-diffusion-3-medium) is a Multimodal Diffusion Transformer (MMDiT) text-to-image model that features greatly improved performance in image quality, typography, complex prompt understanding, and resource-efficiency.
For more technical details, please refer to the [Research paper](https://stability.ai/news/stable-diffusion-3-research-paper).
Please note: this model is released under the Stability Non-Commercial Research Community License. For a Creator License or an Enterprise License visit Stability.ai or [contact us](https://stability.ai/license) for commercial licensing details.
### Model Description
- **Developed by:** Stability AI
- **Model type:** MMDiT text-to-image generative model
- **Model Description:** This is a model that can be used to generate images based on text prompts. It is a Multimodal Diffusion Transformer
(https://arxiv.org/abs/2403.03206) that uses three fixed, pretrained text encoders
([OpenCLIP-ViT/G](https://github.com/mlfoundations/open_clip), [CLIP-ViT/L](https://github.com/openai/CLIP/tree/main) and [T5-xxl](https://huggingface.co/google/t5-v1_1-xxl))
### License
- **Non-commercial Use:** Stable Diffusion 3 Medium is released under the [Stability AI Non-Commercial Research Community License](https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE). The model is free to use for non-commercial purposes such as academic research.
- **Commercial Use**: This model is not available for commercial use without a separate commercial license from Stability. We encourage professional artists, designers, and creators to use our Creator License. Please visit https://stability.ai/license to learn more.
### Model Sources
For local or self-hosted use, we recommend [ComfyUI](https://github.com/comfyanonymous/ComfyUI) for inference.
Stable Diffusion 3 Medium is available on our [Stability API Platform](https://platform.stability.ai/docs/api-reference#tag/Generate/paths/~1v2beta~1stable-image~1generate~1sd3/post).
Stable Diffusion 3 models and workflows are available on [Stable Assistant](https://stability.ai/stable-assistant) and on Discord via [Stable Artisan](https://stability.ai/stable-artisan).
- **ComfyUI:** https://github.com/comfyanonymous/ComfyUI
- **StableSwarmUI:** https://github.com/Stability-AI/StableSwarmUI
- **Tech report:** https://stability.ai/news/stable-diffusion-3-research-paper
- **Demo:** https://huggingface.co/spaces/stabilityai/stable-diffusion-3-medium
- **Diffusers support:** https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers
## Training Dataset
We used synthetic data and filtered publicly available data to train our models. The model was pre-trained on 1 billion images. The fine-tuning data includes 30M high-quality aesthetic images focused on specific visual content and style, as well as 3M preference data images.
## File Structure
```
├── comfy_example_workflows/
│   ├── sd3_medium_example_workflow_basic.json
│   ├── sd3_medium_example_workflow_multi_prompt.json
│   └── sd3_medium_example_workflow_upscaling.json
│
├── text_encoders/
│   ├── README.md
│   ├── clip_g.safetensors
│   ├── clip_l.safetensors
│   ├── t5xxl_fp16.safetensors
│   └── t5xxl_fp8_e4m3fn.safetensors
│
├── LICENSE
├── sd3_medium.safetensors
├── sd3_medium_incl_clips.safetensors
├── sd3_medium_incl_clips_t5xxlfp8.safetensors
└── sd3_medium_incl_clips_t5xxlfp16.safetensors
```
We have prepared four packaging variants of the SD3 Medium model, each equipped with the same set of MMDiT & VAE weights, for user convenience.
* `sd3_medium.safetensors` includes the MMDiT and VAE weights but does not include any text encoders.
* `sd3_medium_incl_clips_t5xxlfp16.safetensors` contains all necessary weights, including fp16 version of the T5XXL text encoder.
* `sd3_medium_incl_clips_t5xxlfp8.safetensors` contains all necessary weights, including fp8 version of the T5XXL text encoder, offering a balance between quality and resource requirements.
* `sd3_medium_incl_clips.safetensors` includes all necessary weights except for the T5XXL text encoder. It requires minimal resources, but the model's performance will differ without the T5XXL text encoder.
* The `text_encoders` folder contains three text encoders and their original model card links for user convenience. All components within the text_encoders folder (and their equivalents embedded in other packings) are subject to their respective original licenses.
* The `comfy_example_workflows` folder contains example ComfyUI workflows.
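For example, the all-in-one fp16 variant can be loaded directly with single-file loading in `diffusers` (a minimal sketch; `from_single_file` support for SD3 assumes a recent `diffusers` release):
```python
import torch
from diffusers import StableDiffusion3Pipeline

# Load the packaging variant that bundles the MMDiT, VAE, and all three
# text encoders (fp16 T5XXL) from a single local file.
pipe = StableDiffusion3Pipeline.from_single_file(
    "sd3_medium_incl_clips_t5xxlfp16.safetensors",
    torch_dtype=torch.float16,
).to("cuda")
```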
## Using with Diffusers
Make sure you upgrade to the latest version of diffusers (`pip install -U diffusers`), then you can run:
```python
import torch
from diffusers import StableDiffusion3Pipeline
pipe = StableDiffusion3Pipeline.from_pretrained("stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
image = pipe(
"A cat holding a sign that says hello world",
negative_prompt="",
num_inference_steps=28,
guidance_scale=7.0,
).images[0]
image
```
Refer to [the documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_3) for more details on optimization and image-to-image support.
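If you are VRAM-constrained, two commonly used optimizations are dropping the T5XXL text encoder and offloading model components to the CPU (a sketch following the diffusers documentation; expect some quality loss on long, detailed prompts without T5):
```python
import torch
from diffusers import StableDiffusion3Pipeline

# Option 1: skip loading the memory-hungry T5XXL text encoder entirely.
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    text_encoder_3=None,
    tokenizer_3=None,
    torch_dtype=torch.float16,
)

# Option 2: stream components between CPU and GPU on demand (needs accelerate).
pipe.enable_model_cpu_offload()

image = pipe("A cat holding a sign that says hello world").images[0]
```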
## Uses
### Intended Uses
Intended uses include the following:
* Generation of artworks and use in design and other artistic processes.
* Applications in educational or creative tools.
* Research on generative models, including understanding the limitations of generative models.
All uses of the model should be in accordance with our [Acceptable Use Policy](https://stability.ai/use-policy).
### Out-of-Scope Uses
The model was not trained to be factual or true representations of people or events. As such, using the model to generate such content is out-of-scope of the abilities of this model.
## Safety
As part of our safety-by-design and responsible AI deployment approach, we implement safety measures throughout the development of our models, from the time we begin pre-training a model to the ongoing development, fine-tuning, and deployment of each model. We have implemented a number of safety mitigations that are intended to reduce the risk of severe harms, however we recommend that developers conduct their own testing and apply additional mitigations based on their specific use cases.
For more about our approach to Safety, please visit our [Safety page](https://stability.ai/safety).
### Evaluation Approach
Our evaluation methods include structured evaluations and internal and external red-teaming testing for specific, severe harms such as child sexual abuse and exploitation, extreme violence and gore, sexually explicit content, and non-consensual nudity. Testing was conducted primarily in English and may not cover all possible harms. As with any model, the model may, at times, produce inaccurate, biased or objectionable responses to user prompts.
### Risks identified and mitigations:
* Harmful content: We have used filtered data sets when training our models and implemented safeguards that attempt to strike the right balance between usefulness and preventing harm. However, this does not guarantee that all possible harmful content has been removed. The model may, at times, generate toxic or biased content. All developers and deployers should exercise caution and implement content safety guardrails based on their specific product policies and application use cases.
* Misuse: Technical limitations and developer and end-user education can help mitigate against malicious applications of models. All users are required to adhere to our Acceptable Use Policy, including when applying fine-tuning and prompt engineering mechanisms. Please reference the Stability AI Acceptable Use Policy for information on violative uses of our products.
* Privacy violations: Developers and deployers are encouraged to adhere to privacy regulations with techniques that respect data privacy.
### Contact
Please report any issues with the model or contact us:
* Safety issues: [email protected]
* Security issues: [email protected]
* Privacy issues: [email protected]
* License and general: https://stability.ai/license
* Enterprise license: https://stability.ai/enterprise |
RichardErkhov/aiplanet_-_buddhi-128k-chat-7b-gguf | RichardErkhov | "2024-06-20T00:16:10Z" | 11,755 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-19T20:57:10Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
buddhi-128k-chat-7b - GGUF
- Model creator: https://huggingface.co/aiplanet/
- Original model: https://huggingface.co/aiplanet/buddhi-128k-chat-7b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [buddhi-128k-chat-7b.Q2_K.gguf](https://huggingface.co/RichardErkhov/aiplanet_-_buddhi-128k-chat-7b-gguf/blob/main/buddhi-128k-chat-7b.Q2_K.gguf) | Q2_K | 2.53GB |
| [buddhi-128k-chat-7b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/aiplanet_-_buddhi-128k-chat-7b-gguf/blob/main/buddhi-128k-chat-7b.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [buddhi-128k-chat-7b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/aiplanet_-_buddhi-128k-chat-7b-gguf/blob/main/buddhi-128k-chat-7b.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [buddhi-128k-chat-7b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/aiplanet_-_buddhi-128k-chat-7b-gguf/blob/main/buddhi-128k-chat-7b.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [buddhi-128k-chat-7b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/aiplanet_-_buddhi-128k-chat-7b-gguf/blob/main/buddhi-128k-chat-7b.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [buddhi-128k-chat-7b.Q3_K.gguf](https://huggingface.co/RichardErkhov/aiplanet_-_buddhi-128k-chat-7b-gguf/blob/main/buddhi-128k-chat-7b.Q3_K.gguf) | Q3_K | 3.28GB |
| [buddhi-128k-chat-7b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/aiplanet_-_buddhi-128k-chat-7b-gguf/blob/main/buddhi-128k-chat-7b.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [buddhi-128k-chat-7b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/aiplanet_-_buddhi-128k-chat-7b-gguf/blob/main/buddhi-128k-chat-7b.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [buddhi-128k-chat-7b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/aiplanet_-_buddhi-128k-chat-7b-gguf/blob/main/buddhi-128k-chat-7b.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [buddhi-128k-chat-7b.Q4_0.gguf](https://huggingface.co/RichardErkhov/aiplanet_-_buddhi-128k-chat-7b-gguf/blob/main/buddhi-128k-chat-7b.Q4_0.gguf) | Q4_0 | 3.83GB |
| [buddhi-128k-chat-7b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/aiplanet_-_buddhi-128k-chat-7b-gguf/blob/main/buddhi-128k-chat-7b.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [buddhi-128k-chat-7b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/aiplanet_-_buddhi-128k-chat-7b-gguf/blob/main/buddhi-128k-chat-7b.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [buddhi-128k-chat-7b.Q4_K.gguf](https://huggingface.co/RichardErkhov/aiplanet_-_buddhi-128k-chat-7b-gguf/blob/main/buddhi-128k-chat-7b.Q4_K.gguf) | Q4_K | 4.07GB |
| [buddhi-128k-chat-7b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/aiplanet_-_buddhi-128k-chat-7b-gguf/blob/main/buddhi-128k-chat-7b.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [buddhi-128k-chat-7b.Q4_1.gguf](https://huggingface.co/RichardErkhov/aiplanet_-_buddhi-128k-chat-7b-gguf/blob/main/buddhi-128k-chat-7b.Q4_1.gguf) | Q4_1 | 4.24GB |
| [buddhi-128k-chat-7b.Q5_0.gguf](https://huggingface.co/RichardErkhov/aiplanet_-_buddhi-128k-chat-7b-gguf/blob/main/buddhi-128k-chat-7b.Q5_0.gguf) | Q5_0 | 4.65GB |
| [buddhi-128k-chat-7b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/aiplanet_-_buddhi-128k-chat-7b-gguf/blob/main/buddhi-128k-chat-7b.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [buddhi-128k-chat-7b.Q5_K.gguf](https://huggingface.co/RichardErkhov/aiplanet_-_buddhi-128k-chat-7b-gguf/blob/main/buddhi-128k-chat-7b.Q5_K.gguf) | Q5_K | 4.78GB |
| [buddhi-128k-chat-7b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/aiplanet_-_buddhi-128k-chat-7b-gguf/blob/main/buddhi-128k-chat-7b.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [buddhi-128k-chat-7b.Q5_1.gguf](https://huggingface.co/RichardErkhov/aiplanet_-_buddhi-128k-chat-7b-gguf/blob/main/buddhi-128k-chat-7b.Q5_1.gguf) | Q5_1 | 5.07GB |
| [buddhi-128k-chat-7b.Q6_K.gguf](https://huggingface.co/RichardErkhov/aiplanet_-_buddhi-128k-chat-7b-gguf/blob/main/buddhi-128k-chat-7b.Q6_K.gguf) | Q6_K | 5.53GB |
| [buddhi-128k-chat-7b.Q8_0.gguf](https://huggingface.co/RichardErkhov/aiplanet_-_buddhi-128k-chat-7b-gguf/blob/main/buddhi-128k-chat-7b.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
license: apache-2.0
pipeline_tag: text-generation
---
<p align="center" style="font-size:34px;"><b>Buddhi-128K-Chat</b></p>
# Buddhi-128K-Chat (7B) vLLM Inference: [](https://colab.research.google.com/drive/11_8W8FpKK-856QdRVJLyzbu9g-DMxNfg?usp=sharing)
# Read release article: [๐ Introducing Buddhi: Open-Source Chat Model with a 128K Context Window ๐ ](https://medium.aiplanet.com/introducing-buddhi-open-source-chat-model-with-a-128k-context-window-06a1848121d0)

## Model Description
Buddhi-128k-Chat is a general-purpose chat model, among the first with a 128K context window. It is meticulously fine-tuned from Mistral 7B Instruct and optimised to handle an extended context length of up to 128,000 tokens using the YaRN (Yet another RoPE extensioN) technique. This enhancement allows Buddhi to maintain a deeper understanding of context in long documents or conversations, making it particularly adept at tasks requiring extensive context retention, such as comprehensive document summarization, detailed narrative generation, and intricate question-answering.
## Architecture
The Buddhi-128K-Chat model is fine-tuned on the Mistral-7B Instruct base model. We selected Mistral 7B Instruct v0.2 as the parent model due to its superior reasoning capabilities. The Mistral-7B architecture includes features like Grouped-Query Attention and a byte-fallback BPE tokenizer. Originally, the model supports a maximum of 32,768 position embeddings. To increase the context size to 128K, we needed to modify the positional embeddings, which is where YaRN comes into play.
In our approach, we utilized the NTK-aware technique, which recommends alternative interpolation schemes for the positional embeddings. One experiment involved Dynamic-YaRN, which computes the scale factor 's' dynamically, since during inference the sequence length grows by one token after every prediction. By integrating these position embeddings with the Mistral-7B Instruct base model, we achieved the 128K model.
Additionally, we fine-tuned the model on our dataset to contribute one of the very few 128K chat models available in the open-source community, with greater reasoning capabilities than its peers.
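To make the NTK-aware idea concrete, here is a minimal sketch of how the RoPE base frequency can be rescaled for a longer context (illustrative only; the exact scheme used for Buddhi follows YaRN and its dynamic variant rather than this simplified formula):
```python
import torch

def ntk_scaled_rope_freqs(dim: int = 128, base: float = 10000.0,
                          scale: float = 4.0) -> torch.Tensor:
    """NTK-aware interpolation: rather than compressing positions linearly,
    rescale the RoPE base so high-frequency components are preserved while
    low-frequency components are stretched to cover the longer context."""
    adjusted_base = base * scale ** (dim / (dim - 2))
    inv_freq = 1.0 / (adjusted_base ** (torch.arange(0, dim, 2).float() / dim))
    return inv_freq

# With scale=4.0, a 32K-trained model is stretched toward a 128K window.
print(ntk_scaled_rope_freqs()[:4])
```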
### Hardware requirements:
> For 128k Context Length
> - 80GB VRAM - A100 Preferred
> For 32k Context Length
> - 40GB VRAM - A100 Preferred
### vLLM - For Faster Inference
#### Installation
```
!pip install vllm
!pip install flash_attn # If Flash Attention 2 is supported by your System
```
Please check out the [Flash Attention 2](https://github.com/Dao-AILab/flash-attention) GitHub repository for more instructions on how to install it.
**Implementation**:
> Note: The actual hardware requirement to run the model is roughly 70GB VRAM. For experimentation, we are limiting the context length to 75K instead of 128K. This makes it suitable for testing the model in 30-35 GB VRAM
```python
from vllm import LLM, SamplingParams
llm = LLM(
model='aiplanet/buddhi-128k-chat-7b',
trust_remote_code=True,
dtype = 'bfloat16',
gpu_memory_utilization=1,
max_model_len= 75000
)
prompts = [
"""<s> [INST] Please tell me a joke. [/INST] """,
"""<s> [INST] What is Machine Learning? [/INST] """
]
sampling_params = SamplingParams(
temperature=0.8,
top_p=0.95,
max_tokens=1000
)
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(generated_text)
print("\n\n")
# We have also attached a Colab notebook that contains two more experiments: a long essay and an entire book
```
For Output, do check out the colab notebook: [](https://colab.research.google.com/drive/11_8W8FpKK-856QdRVJLyzbu9g-DMxNfg?usp=sharing)
### Transformers - Basic Implementation
```python
import torch
import transformers
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16
)
model_name = "aiplanet/buddhi-128k-chat-7b"
model = AutoModelForCausalLM.from_pretrained(
model_name,
quantization_config=bnb_config,
device_map="sequential",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(
    model_name,
    trust_remote_code=True
)
prompt = "<s> [INST] Please tell me a small joke. [/INST] "
tokens = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(
**tokens,
max_new_tokens=100,
do_sample=True,
top_p=0.95,
temperature=0.8,
)
decoded_output = tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0]
print(f"Output:\n{decoded_output[len(prompt):]}")
```
Output
```
Output:
Why don't scientists trust atoms?
Because they make up everything.
```
## Prompt Template for Buddhi-128K-Chat
In order to leverage instruction fine-tuning, your prompt should be surrounded by [INST] and [/INST] tokens. The very first instruction should begin with a begin-of-sentence id; subsequent instructions should not. Assistant generation is terminated by the end-of-sentence token id.
```
"<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```
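A small helper to build this format programmatically for multi-turn chats (a plain-string sketch; `tokenizer.apply_chat_template` is the more robust route if the repo ships a chat template):
```python
def build_prompt(turns: list[tuple[str, str]]) -> str:
    """Format (user, assistant) turns in the Mistral-style [INST] template.
    Leave the last assistant reply empty to request a new completion."""
    prompt = "<s>"
    for user_msg, assistant_msg in turns:
        prompt += f"[INST] {user_msg} [/INST] "
        if assistant_msg:
            prompt += f"{assistant_msg}</s> "
    return prompt

print(build_prompt([
    ("What is your favourite condiment?",
     "Well, I'm quite partial to a good squeeze of fresh lemon juice."),
    ("Do you have mayonnaise recipes?", ""),
]))
```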
# Benchmarks
### Long Context Benchmark
<strong>LongICLBench Banking77</strong>
<div>
| Model | 1R/2k | 2R/4K | 3R/7K | 4R/9K | 5R/14K |
|-----------------------------------------|-------|-------|-------|-------|--------|
| aiplanet/buddhi-128k-chat-7b | 47.8 | 60.8 | 57.8 | 62.4 | 57.2 |
| NousResearch/Yarn-Mistral-7b-128k | 31.6 | 68.6 | 68 | 47 | 65.6 |
| CallComply/zephyr-7b-beta-128k | 40.2 | 41.2 | 33.6 | 03 | 0 |
| Eric111/Yarn-Mistral-7b-128k-DPO | 28.6 | 62.8 | 58 | 41.6 | 59.8 |
</div>
<strong>Short Context Benchmark</strong>
<div>
| Model                             | # Params | Average | ARC (25-shot) | HellaSwag (10-shot) | Winogrande (5-shot) | TruthfulQA (0-shot) | MMLU (5-shot) |
|-----------------------------------|----------|---------|---------------|---------------------|---------------------|---------------------|---------------|
| aiplanet/buddhi-128k-chat-7b | 7B | 64.42 | 60.84 | 84 | 77.27 | 65.72 | 60.42 |
| migtissera/Tess-XS-vl-3-yarn-128K | 7B | 62.66 | 61.09 | 82.95 | 74.43 | 50.13 | 62.15 |
| migtissera/Tess-XS-v1-3-yarn-128K | 7B | 62.49 | 61.6 | 82.96 | 74.74 | 50.2 | 62.1 |
| Eric111/Yarn-Mistral-7b-128k-DPO | 7B | 60.15 | 60.84 | 82.99 | 78.3 | 43.55 | 63.09 |
| NousResearch/Yarn-Mistral-7b-128k | 7B       | 59.42   | 59.64         | 82.5                | 76.95               | 41.78               | 63.02         |
| CallComply/openchat-3.5-0106-128k | 7B | 59.38 | 64.25 | 77.31 | 77.66 | 46.5 | 57.58 |
| CallComply/zephyr-7b-beta-128k | 7B | 54.45 | 58.28 | 81 | 74.74 | 46.1 | 53.57 |
</div>
## Get in Touch
You can schedule a 1:1 meeting with our DevRel & Community Team to get started with AI Planet Open Source LLMs and GenAI Stack. Schedule the call here: [https://calendly.com/jaintarun](https://calendly.com/jaintarun)
Stay tuned for more updates and be a part of the coding evolution. Join us on this exciting journey as we make AI accessible to all at AI Planet!
### Framework versions
- Transformers 4.39.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Accelerate 0.27.2
- flash_attn 2.5.6
### Citation
```
@misc{buddhi128kchat,
  author    = {Chaitanya Singhal and Tarun Jain},
  title     = {Buddhi-128k-Chat by AI Planet},
  year      = {2024},
  url       = {https://huggingface.co/aiplanet/buddhi-128k-chat-7b},
  publisher = {Hugging Face}
}
```
|
wkcn/TinyCLIP-ViT-8M-16-Text-3M-YFCC15M | wkcn | "2024-05-08T03:05:50Z" | 11,737 | 4 | transformers | [
"transformers",
"pytorch",
"safetensors",
"clip",
"zero-shot-image-classification",
"tinyclip",
"license:mit",
"endpoints_compatible",
"region:us"
] | zero-shot-image-classification | "2023-12-19T14:29:40Z" | ---
license: mit
pipeline_tag: zero-shot-image-classification
tags:
- tinyclip
---
# TinyCLIP: CLIP Distillation via Affinity Mimicking and Weight Inheritance
**[ICCV 2023]** - [TinyCLIP: CLIP Distillation via Affinity Mimicking and Weight Inheritance](https://openaccess.thecvf.com/content/ICCV2023/html/Wu_TinyCLIP_CLIP_Distillation_via_Affinity_Mimicking_and_Weight_Inheritance_ICCV_2023_paper.html)
**TinyCLIP** is a novel **cross-modal distillation** method for large-scale language-image pre-trained models. The method introduces two core techniques: **affinity mimicking** and **weight inheritance**. This work unleashes the capacity of small CLIP models, fully leveraging large-scale models as well as pre-training data and striking the best trade-off between speed and accuracy.
<p align="center">
<img src="./figure/TinyCLIP.jpg" width="1000">
</p>
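As a rough illustration of the affinity-mimicking objective, the student is trained to match the teacher's image-text similarity distributions; a conceptual PyTorch sketch (not the official training code, which additionally applies weight inheritance):
```python
import torch
import torch.nn.functional as F

def affinity_mimicking_loss(student_img, student_txt,
                            teacher_img, teacher_txt,
                            tau: float = 0.07) -> torch.Tensor:
    """KL-match the student's image-to-text and text-to-image affinity
    distributions to the teacher's, over a batch of L2-normalized
    paired image/text embeddings of shape (batch, dim)."""
    s_logits = student_img @ student_txt.t() / tau
    t_logits = teacher_img @ teacher_txt.t() / tau
    loss_i2t = F.kl_div(F.log_softmax(s_logits, dim=-1),
                        F.softmax(t_logits, dim=-1), reduction="batchmean")
    loss_t2i = F.kl_div(F.log_softmax(s_logits.t(), dim=-1),
                        F.softmax(t_logits.t(), dim=-1), reduction="batchmean")
    return (loss_i2t + loss_t2i) / 2
```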
## Use with Transformers
```python
from PIL import Image
import requests
from transformers import CLIPProcessor, CLIPModel
model = CLIPModel.from_pretrained("wkcn/TinyCLIP-ViT-8M-16-Text-3M-YFCC15M")
processor = CLIPProcessor.from_pretrained("wkcn/TinyCLIP-ViT-8M-16-Text-3M-YFCC15M")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities
```
## Highlights
<p align="center">
<img src="./figure/fig1.jpg" width="500">
</p>
* TinyCLIP ViT-45M/32 uses only **half the parameters** of ViT-B/32 while achieving **comparable zero-shot performance**.
* TinyCLIP ResNet-19M reduces the parameters by **50\%** while achieving a **2x** inference speedup, and obtains **56.4\%** accuracy on ImageNet.
## Model Zoo
| Model | Weight inheritance | Pretrain | IN-1K Acc@1(%) | MACs(G) | Throughput(pairs/s) | Link |
|--------------------|--------------------|---------------|----------------|---------|---------------------|------|
[TinyCLIP ViT-39M/16 Text-19M](./src/open_clip/model_configs/TinyCLIP-ViT-39M-16-Text-19M.json) | manual | YFCC-15M | 63.5 | 9.5 | 1,469 | [Model](https://github.com/wkcn/TinyCLIP-model-zoo/releases/download/checkpoints/TinyCLIP-ViT-39M-16-Text-19M-YFCC15M.pt)
[TinyCLIP ViT-8M/16 Text-3M](./src/open_clip/model_configs/TinyCLIP-ViT-8M-16-Text-3M.json) | manual | YFCC-15M | 41.1 | 2.0 | 4,150 | [Model](https://github.com/wkcn/TinyCLIP-model-zoo/releases/download/checkpoints/TinyCLIP-ViT-8M-16-Text-3M-YFCC15M.pt)
[TinyCLIP ResNet-30M Text-29M](./src/open_clip/model_configs/TinyCLIP-ResNet-30M-Text-29M.json) | manual | LAION-400M | 59.1 | 6.9 | 1,811 | [Model](https://github.com/wkcn/TinyCLIP-model-zoo/releases/download/checkpoints/TinyCLIP-ResNet-30M-Text-29M-LAION400M.pt)
[TinyCLIP ResNet-19M Text-19M](./src/open_clip/model_configs/TinyCLIP-ResNet-19M-Text-19M.json) | manual | LAION-400M | 56.4 | 4.4 | 3,024| [Model](https://github.com/wkcn/TinyCLIP-model-zoo/releases/download/checkpoints/TinyCLIP-ResNet-19M-Text-19M-LAION400M.pt)
[TinyCLIP ViT-61M/32 Text-29M](./src/open_clip/model_configs/TinyCLIP-ViT-61M-32-Text-29M.json) | manual | LAION-400M | 62.4 | 5.3 | 3,191|[Model](https://github.com/wkcn/TinyCLIP-model-zoo/releases/download/checkpoints/TinyCLIP-ViT-61M-32-Text-29M-LAION400M.pt)
[TinyCLIP ViT-40M/32 Text-19M](./src/open_clip/model_configs/TinyCLIP-ViT-40M-32-Text-19M.json) | manual | LAION-400M | 59.8 | 3.5 | 4,641|[Model](https://github.com/wkcn/TinyCLIP-model-zoo/releases/download/checkpoints/TinyCLIP-ViT-40M-32-Text-19M-LAION400M.pt)
TinyCLIP ViT-63M/32 Text-31M | auto | LAION-400M | 63.9 | 5.6 | 2,905|[Model](https://github.com/wkcn/TinyCLIP-model-zoo/releases/download/checkpoints/TinyCLIP-auto-ViT-63M-32-Text-31M-LAION400M.pt)
TinyCLIP ViT-45M/32 Text-18M | auto | LAION-400M | 61.4 | 3.7 | 3,682|[Model](https://github.com/wkcn/TinyCLIP-model-zoo/releases/download/checkpoints/TinyCLIP-auto-ViT-45M-32-Text-18M-LAION400M.pt)
TinyCLIP ViT-22M/32 Text-10M | auto | LAION-400M | 53.7 | 1.9 | 5,504|[Model](https://github.com/wkcn/TinyCLIP-model-zoo/releases/download/checkpoints/TinyCLIP-auto-ViT-22M-32-Text-10M-LAION400M.pt)
TinyCLIP ViT-63M/32 Text-31M | auto | LAION+YFCC-400M | 64.5 | 5.6| 2,909 | [Model](https://github.com/wkcn/TinyCLIP-model-zoo/releases/download/checkpoints/TinyCLIP-auto-ViT-63M-32-Text-31M-LAIONYFCC400M.pt)
TinyCLIP ViT-45M/32 Text-18M | auto | LAION+YFCC-400M | 62.7 | 1.9 | 3,685 | [Model](https://github.com/wkcn/TinyCLIP-model-zoo/releases/download/checkpoints/TinyCLIP-auto-ViT-45M-32-Text-18M-LAIONYFCC400M.pt)
Note: The configs of models with auto inheritance are generated automatically.
## Official PyTorch Implementation
https://github.com/microsoft/Cream/tree/main/TinyCLIP
## Citation
If this repo is helpful for you, please consider citing it. :mega: Thank you! :)
```bibtex
@InProceedings{tinyclip,
title = {TinyCLIP: CLIP Distillation via Affinity Mimicking and Weight Inheritance},
author = {Wu, Kan and Peng, Houwen and Zhou, Zhenghong and Xiao, Bin and Liu, Mengchen and Yuan, Lu and Xuan, Hong and Valenzuela, Michael and Chen, Xi (Stephen) and Wang, Xinggang and Chao, Hongyang and Hu, Han},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2023},
pages = {21970-21980}
}
```
## Acknowledgements
Our code is based on [CLIP](https://github.com/openai/CLIP), [OpenCLIP](https://github.com/mlfoundations/open_clip), [CoFi](https://github.com/princeton-nlp/CoFiPruning) and [PyTorch](https://github.com/pytorch/pytorch). Thanks to the contributors for their awesome work!
## License
- [License](https://github.com/microsoft/Cream/blob/main/TinyCLIP/LICENSE)
|
timm/resnet10t.c3_in1k | timm | "2024-02-10T23:38:30Z" | 11,736 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"arxiv:2110.00476",
"arxiv:1512.03385",
"arxiv:1812.01187",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-04-05T18:02:35Z" | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
---
# Model card for resnet10t.c3_in1k
A ResNet-T image classification model.
This model features:
* ReLU activations
* tiered 3-layer stem of 3x3 convolutions with pooling
* 2x2 average pool + 1x1 convolution shortcut downsample
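For intuition, a minimal PyTorch sketch of the average-pool shortcut downsample listed above (illustrative only; see `timm`'s `resnet.py` for the actual implementation):
```python
import torch.nn as nn

def avgpool_downsample(in_ch: int, out_ch: int, stride: int = 2) -> nn.Sequential:
    """The ResNet-D style shortcut: a 2x2 average pool handles the spatial
    downsampling, so the following 1x1 conv sees every input activation
    instead of discarding 3/4 of them as a strided 1x1 conv would."""
    return nn.Sequential(
        nn.AvgPool2d(kernel_size=stride, stride=stride),
        nn.Conv2d(in_ch, out_ch, kernel_size=1, stride=1, bias=False),
        nn.BatchNorm2d(out_ch),
    )
```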
Trained on ImageNet-1k in `timm` using the recipe template described below.
Recipe details:
* Based on [ResNet Strikes Back](https://arxiv.org/abs/2110.00476) `C` recipes
* SGD (w/ Nesterov) optimizer and AGC (adaptive gradient clipping).
* Cosine LR schedule with warmup
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 5.4
- GMACs: 0.7
- Activations (M): 1.5
- Image size: train = 176 x 176, test = 224 x 224
- **Papers:**
- ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476
- Deep Residual Learning for Image Recognition: https://arxiv.org/abs/1512.03385
- Bag of Tricks for Image Classification with Convolutional Neural Networks: https://arxiv.org/abs/1812.01187
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('resnet10t.c3_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
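To turn the class indices into human-readable labels, one option is to pair them with the standard ImageNet-1k label list (a minimal continuation of the snippet above; the label-file URL is an assumption, any ImageNet-1k class list works):
```python
# Optional: map the top-5 indices to ImageNet-1k label names
# (assumes network access; the label file below is the widely mirrored
# list from the PyTorch hub repository, not part of timm itself).
labels = urlopen(
    'https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt'
).read().decode().splitlines()

for prob, idx in zip(top5_probabilities[0], top5_class_indices[0]):
    print(f'{labels[int(idx)]}: {prob.item():.2f}%')
```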
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnet10t.c3_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 88, 88])
# torch.Size([1, 64, 44, 44])
# torch.Size([1, 128, 22, 22])
# torch.Size([1, 256, 11, 11])
# torch.Size([1, 512, 6, 6])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnet10t.c3_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 512, 6, 6) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
|model |img_size|top1 |top5 |param_count|gmacs|macts|img/sec|
|------------------------------------------|--------|-----|-----|-----------|-----|-----|-------|
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|320 |86.72|98.17|93.6 |35.2 |69.7 |451 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|288 |86.51|98.08|93.6 |28.5 |56.4 |560 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|288 |86.49|98.03|93.6 |28.5 |56.4 |557 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|224 |85.96|97.82|93.6 |17.2 |34.2 |923 |
|[resnext101_32x32d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x32d.fb_wsl_ig1b_ft_in1k)|224 |85.11|97.44|468.5 |87.3 |91.1 |254 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|416 |85.0 |97.12|191.9 |108.4|213.8|134 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|352 |84.96|97.22|102.1 |50.2 |101.2|291 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|320 |84.73|97.18|102.1 |41.5 |83.7 |353 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|384 |84.71|96.99|164.0 |77.6 |154.7|183 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|288 |84.57|97.08|93.6 |28.5 |56.4 |557 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|320 |84.45|97.08|93.2 |31.5 |67.8 |446 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|352 |84.43|96.97|129.9 |51.1 |105.5|280 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|288 |84.36|96.92|93.6 |27.6 |53.0 |595 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|320 |84.35|97.04|66.8 |24.1 |47.7 |610 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|288 |84.3 |96.94|164.0 |43.7 |87.1 |333 |
|[resnext101_32x8d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k)|224 |84.28|97.17|88.8 |16.5 |31.2 |1100 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|320 |84.24|96.86|191.9 |64.2 |126.6|228 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|288 |84.19|96.87|93.6 |27.2 |51.6 |613 |
|[resnext101_32x16d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_wsl_ig1b_ft_in1k)|224 |84.18|97.19|194.0 |36.3 |51.2 |581 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|288 |84.11|97.11|44.6 |15.1 |29.0 |1144 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|320 |83.97|96.82|64.7 |31.2 |67.3 |518 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|256 |83.87|96.75|93.2 |20.2 |43.4 |692 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|224 |83.86|96.65|93.6 |17.2 |34.2 |923 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|320 |83.72|96.61|86.6 |24.3 |48.1 |617 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|256 |83.69|96.78|66.8 |15.4 |30.6 |943 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|224 |83.68|96.61|93.6 |16.7 |32.0 |986 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|320 |83.67|96.74|60.2 |24.1 |47.7 |706 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|256 |83.59|96.61|129.9 |27.1 |55.8 |526 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|224 |83.58|96.4 |93.6 |16.5 |31.2 |1013 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|224 |83.54|96.83|44.6 |9.1 |17.6 |1864 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|288 |83.46|96.54|60.2 |19.1 |37.3 |904 |
|[resnext101_32x16d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k)|224 |83.35|96.85|194.0 |36.3 |51.2 |582 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|256 |83.23|96.53|64.7 |20.0 |43.1 |809 |
|[resnext101_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_swsl_ig1b_ft_in1k)|224 |83.22|96.75|44.2 |8.0 |21.2 |1814 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|288 |83.16|96.38|83.5 |25.7 |51.6 |590 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|256 |83.14|96.38|60.2 |15.4 |30.5 |1096 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|320 |83.02|96.45|44.6 |16.5 |34.8 |992 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|288 |82.98|96.54|44.6 |13.4 |28.2 |1077 |
|[resnext101_64x4d.tv_in1k](https://huggingface.co/timm/resnext101_64x4d.tv_in1k)|224 |82.98|96.25|83.5 |15.5 |31.2 |989 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|256 |82.86|96.28|86.6 |15.6 |30.8 |951 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|224 |82.83|96.22|88.8 |16.5 |31.2 |1099 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|224 |82.8 |96.13|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|288 |82.8 |96.32|44.6 |13.0 |26.8 |1291 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|288 |82.74|95.71|60.2 |19.1 |37.3 |905 |
|[resnext101_32x8d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_wsl_ig1b_ft_in1k)|224 |82.69|96.63|88.8 |16.5 |31.2 |1100 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|288 |82.62|95.75|60.2 |19.1 |37.3 |904 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|288 |82.61|96.49|25.6 |8.9 |20.6 |1729 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|288 |82.53|96.13|36.8 |9.9 |21.5 |1773 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|224 |82.5 |96.02|126.9 |22.8 |21.2 |1078 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|224 |82.46|95.92|83.5 |15.5 |31.2 |987 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|288 |82.36|96.18|35.7 |8.1 |20.9 |1964 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|320 |82.35|96.14|25.6 |8.8 |24.1 |1386 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|288 |82.31|95.63|44.6 |13.0 |26.8 |1291 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|288 |82.29|96.01|63.6 |13.6 |28.5 |1078 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|224 |82.29|96.0 |60.2 |11.6 |22.6 |1484 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|288 |82.27|96.06|68.9 |18.9 |23.8 |1176 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|256 |82.26|96.07|44.6 |10.6 |22.2 |1542 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|288 |82.24|95.73|44.6 |13.0 |26.8 |1290 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|288 |82.2 |96.14|27.6 |7.0 |23.8 |1547 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|224 |82.18|96.05|44.6 |8.1 |17.1 |1771 |
|[resnext50_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k)|224 |82.17|96.22|25.0 |4.3 |14.4 |2943 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|288 |82.12|95.65|25.6 |7.1 |19.6 |1704 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|288 |82.03|95.94|25.0 |7.0 |23.8 |1745 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|288 |82.0 |96.15|24.9 |5.8 |12.7 |1787 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|256 |81.99|95.85|36.8 |7.8 |17.0 |2230 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|176 |81.98|95.72|88.8 |10.3 |19.4 |1768 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|224 |81.97|95.24|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|224 |81.93|95.75|44.6 |7.8 |16.2 |2122 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|224 |81.9 |95.77|44.6 |7.8 |16.2 |2118 |
|[resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k)|224 |81.84|96.1 |194.0 |36.3 |51.2 |583 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|256 |81.78|95.94|35.7 |6.4 |16.6 |2471 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|224 |81.77|95.22|60.2 |11.6 |22.6 |1485 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|224 |81.74|96.06|25.6 |5.4 |12.4 |2813 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|288 |81.65|95.54|25.6 |7.1 |19.6 |1703 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|288 |81.64|95.88|25.6 |7.2 |19.7 |1694 |
|[resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k)|224 |81.62|96.04|88.8 |16.5 |31.2 |1101 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|224 |81.61|95.76|68.9 |11.4 |14.4 |1930 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|288 |81.61|95.83|25.6 |8.5 |19.2 |1868 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|224 |81.5 |95.16|44.6 |7.8 |16.2 |2125 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|288 |81.48|95.16|25.0 |7.0 |23.8 |1745 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|288 |81.47|95.71|25.9 |6.9 |18.6 |2071 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|224 |81.45|95.53|68.9 |11.4 |14.4 |1929 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|288 |81.44|95.22|25.6 |7.2 |19.7 |1908 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|256 |81.44|95.67|25.6 |5.6 |15.4 |2168 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|288 |81.4 |95.82|30.2 |6.8 |13.9 |2132 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|288 |81.37|95.74|25.6 |7.2 |19.7 |1910 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|224 |81.32|95.19|44.6 |7.8 |16.2 |2125 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|288 |81.3 |95.65|28.1 |6.8 |18.4 |1803 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|288 |81.3 |95.11|25.0 |7.0 |23.8 |1746 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|224 |81.27|95.62|27.6 |4.3 |14.4 |2591 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|224 |81.26|95.16|25.6 |4.3 |11.8 |2823 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|288 |81.23|95.54|15.7 |4.8 |19.6 |2117 |
|[senet154.gluon_in1k](https://huggingface.co/timm/senet154.gluon_in1k)|224 |81.23|95.35|115.1 |20.8 |38.7 |545 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|288 |81.22|95.11|25.6 |6.8 |18.4 |2089 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|288 |81.22|95.63|25.6 |6.8 |18.4 |676 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|288 |81.18|95.09|25.6 |7.2 |19.7 |1908 |
|[resnet50.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet50.fb_swsl_ig1b_ft_in1k)|224 |81.18|95.98|25.6 |4.1 |11.1 |3455 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|224 |81.17|95.34|25.0 |4.3 |14.4 |2933 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|224 |81.1 |95.33|25.0 |4.3 |14.4 |2934 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|288 |81.1 |95.23|28.1 |6.8 |18.4 |1801 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|288 |81.1 |95.12|28.1 |6.8 |18.4 |1799 |
|[resnet152s.gluon_in1k](https://huggingface.co/timm/resnet152s.gluon_in1k)|224 |81.02|95.41|60.3 |12.9 |25.0 |1347 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|288 |80.97|95.44|25.6 |6.8 |18.4 |2085 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|256 |80.94|95.45|25.9 |5.4 |14.7 |2571 |
|[resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.93|95.73|44.2 |8.0 |21.2 |1814 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|288 |80.91|95.55|25.6 |6.8 |18.4 |2084 |
|[seresnext101_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_32x4d.gluon_in1k)|224 |80.9 |95.31|49.0 |8.0 |21.3 |1585 |
|[seresnext101_64x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_64x4d.gluon_in1k)|224 |80.9 |95.3 |88.2 |15.5 |31.2 |918 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|288 |80.86|95.52|25.6 |6.8 |18.4 |2085 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|224 |80.85|95.43|25.6 |4.1 |11.1 |3450 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|224 |80.84|95.02|25.6 |4.3 |11.8 |2821 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|224 |80.79|95.62|24.9 |3.5 |7.7 |2961 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|288 |80.79|95.36|19.8 |6.0 |14.8 |2506 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|288 |80.79|95.58|19.9 |4.2 |10.6 |2349 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|288 |80.78|94.99|25.6 |6.8 |18.4 |2088 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|288 |80.71|95.43|25.6 |6.8 |18.4 |2087 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|288 |80.7 |95.39|25.0 |7.0 |23.8 |1749 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|192 |80.69|95.24|63.6 |6.0 |12.7 |2270 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|224 |80.68|94.71|25.6 |4.4 |11.9 |3162 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|288 |80.68|95.36|19.7 |6.0 |14.8 |2637 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|224 |80.67|95.3 |25.6 |4.1 |11.1 |3452 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|288 |80.67|95.42|25.0 |7.4 |25.1 |1626 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|224 |80.63|95.21|25.6 |5.2 |11.6 |3034 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|224 |80.61|95.32|25.6 |4.4 |11.9 |2813 |
|[resnext101_64x4d.gluon_in1k](https://huggingface.co/timm/resnext101_64x4d.gluon_in1k)|224 |80.61|94.99|83.5 |15.5 |31.2 |989 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|288 |80.6 |95.31|19.9 |6.0 |14.8 |2578 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|256 |80.57|95.17|15.7 |3.8 |15.5 |2710 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|224 |80.56|95.0 |60.2 |11.6 |22.6 |1483 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|224 |80.53|95.16|25.6 |4.4 |11.9 |3164 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|224 |80.53|94.46|25.0 |4.3 |14.4 |2930 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|176 |80.48|94.98|126.9 |14.3 |13.2 |1719 |
|[resnet152d.gluon_in1k](https://huggingface.co/timm/resnet152d.gluon_in1k)|224 |80.47|95.2 |60.2 |11.8 |23.4 |1428 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|288 |80.45|95.32|25.6 |6.8 |18.4 |2086 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|224 |80.45|95.24|30.2 |4.1 |8.4 |3530 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|224 |80.45|94.63|25.0 |4.3 |14.4 |2936 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|176 |80.43|95.09|68.9 |7.3 |9.0 |3015 |
|[resnet101d.gluon_in1k](https://huggingface.co/timm/resnet101d.gluon_in1k)|224 |80.42|95.01|44.6 |8.1 |17.0 |2007 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|224 |80.38|94.6 |25.6 |4.1 |11.1 |3461 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|256 |80.36|95.1 |19.8 |4.8 |11.7 |3267 |
|[resnext101_32x4d.gluon_in1k](https://huggingface.co/timm/resnext101_32x4d.gluon_in1k)|224 |80.34|94.93|44.2 |8.0 |21.2 |1814 |
|[resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.32|95.4 |25.0 |4.3 |14.4 |2941 |
|[resnet101s.gluon_in1k](https://huggingface.co/timm/resnet101s.gluon_in1k)|224 |80.28|95.16|44.7 |9.2 |18.6 |1851 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|224 |80.26|95.08|28.1 |4.1 |11.1 |2972 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|288 |80.24|95.24|25.6 |8.5 |19.9 |1523 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|224 |80.22|94.63|25.6 |4.4 |11.9 |3162 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|176 |80.2 |94.64|60.2 |7.2 |14.0 |2346 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|224 |80.08|94.74|28.1 |4.1 |11.1 |2969 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|256 |80.08|94.97|19.7 |4.8 |11.7 |3284 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|256 |80.06|94.99|19.9 |4.8 |11.7 |3216 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|224 |80.06|94.95|25.6 |4.1 |11.1 |1109 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|224 |80.02|94.71|28.1 |4.1 |11.1 |2962 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|288 |79.97|95.05|25.6 |6.8 |18.4 |2086 |
|[resnet152c.gluon_in1k](https://huggingface.co/timm/resnet152c.gluon_in1k)|224 |79.92|94.84|60.2 |11.8 |23.4 |1455 |
|[seresnext50_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext50_32x4d.gluon_in1k)|224 |79.91|94.82|27.6 |4.3 |14.4 |2591 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|224 |79.91|94.67|25.6 |4.1 |11.1 |3456 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|176 |79.9 |94.6 |44.6 |4.9 |10.1 |3341 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|224 |79.89|94.97|35.7 |4.5 |12.1 |2774 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|224 |79.88|94.87|25.6 |4.1 |11.1 |3455 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|320 |79.86|95.07|16.0 |5.2 |16.4 |2168 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|224 |79.85|94.56|25.6 |4.1 |11.1 |3460 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|288 |79.83|94.97|25.6 |6.8 |18.4 |2087 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|224 |79.82|94.62|44.6 |7.8 |16.2 |2114 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|224 |79.76|94.6 |25.0 |4.3 |14.4 |2943 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|224 |79.74|94.95|25.6 |4.1 |11.1 |3455 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|224 |79.74|94.87|19.9 |2.5 |6.4 |3929 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|288 |79.71|94.83|19.7 |6.0 |14.8 |2710 |
|[resnet152.gluon_in1k](https://huggingface.co/timm/resnet152.gluon_in1k)|224 |79.68|94.74|60.2 |11.6 |22.6 |1486 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|224 |79.67|94.87|25.0 |4.5 |15.2 |2729 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|288 |79.63|94.91|25.6 |6.8 |18.4 |2086 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|224 |79.56|94.72|25.6 |4.3 |11.8 |2805 |
|[resnet101c.gluon_in1k](https://huggingface.co/timm/resnet101c.gluon_in1k)|224 |79.53|94.58|44.6 |8.1 |17.0 |2062 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|224 |79.52|94.61|25.6 |4.1 |11.1 |3459 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|176 |79.42|94.64|25.6 |2.6 |6.9 |5397 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|288 |79.4 |94.66|18.0 |5.9 |14.6 |2752 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|224 |79.38|94.57|25.6 |4.1 |11.1 |3459 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|176 |79.37|94.3 |25.0 |2.7 |9.0 |4577 |
|[resnext50_32x4d.gluon_in1k](https://huggingface.co/timm/resnext50_32x4d.gluon_in1k)|224 |79.36|94.43|25.0 |4.3 |14.4 |2942 |
|[resnext101_32x8d.tv_in1k](https://huggingface.co/timm/resnext101_32x8d.tv_in1k)|224 |79.31|94.52|88.8 |16.5 |31.2 |1100 |
|[resnet101.gluon_in1k](https://huggingface.co/timm/resnet101.gluon_in1k)|224 |79.31|94.53|44.6 |7.8 |16.2 |2125 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|224 |79.31|94.63|25.6 |5.2 |12.0 |2524 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|176 |79.27|94.49|25.6 |2.6 |6.9 |5404 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|224 |79.25|94.31|25.0 |4.3 |14.4 |2931 |
|[resnet50.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet50.fb_ssl_yfcc100m_ft_in1k)|224 |79.22|94.84|25.6 |4.1 |11.1 |3451 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|256 |79.21|94.56|19.7 |4.8 |11.7 |3392 |
|[resnet50d.gluon_in1k](https://huggingface.co/timm/resnet50d.gluon_in1k)|224 |79.07|94.48|25.6 |4.4 |11.9 |3162 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|224 |79.03|94.38|25.6 |4.1 |11.1 |3453 |
|[resnet50.am_in1k](https://huggingface.co/timm/resnet50.am_in1k)|224 |79.01|94.39|25.6 |4.1 |11.1 |3461 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|256 |79.01|94.37|18.0 |4.6 |11.6 |3440 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|256 |78.9 |94.54|16.0 |3.4 |10.5 |3421 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|160 |78.89|94.11|60.2 |5.9 |11.5 |2745 |
|[wide_resnet101_2.tv_in1k](https://huggingface.co/timm/wide_resnet101_2.tv_in1k)|224 |78.84|94.28|126.9 |22.8 |21.2 |1079 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|288 |78.83|94.24|16.8 |4.5 |16.8 |2251 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|224 |78.81|94.32|25.6 |4.1 |11.1 |3454 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|288 |78.74|94.33|16.8 |4.5 |16.7 |2264 |
|[resnet50s.gluon_in1k](https://huggingface.co/timm/resnet50s.gluon_in1k)|224 |78.72|94.23|25.7 |5.5 |13.5 |2796 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|224 |78.71|94.24|25.6 |4.4 |11.9 |3154 |
|[wide_resnet50_2.tv_in1k](https://huggingface.co/timm/wide_resnet50_2.tv_in1k)|224 |78.47|94.09|68.9 |11.4 |14.4 |1934 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|224 |78.46|94.27|25.6 |4.1 |11.1 |3454 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|288 |78.43|94.35|21.8 |6.5 |7.5 |3291 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|288 |78.42|94.04|10.5 |3.1 |13.3 |3226 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|320 |78.33|94.13|16.0 |5.2 |16.4 |2391 |
|[resnet152.tv_in1k](https://huggingface.co/timm/resnet152.tv_in1k)|224 |78.32|94.04|60.2 |11.6 |22.6 |1487 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|288 |78.28|94.1 |10.4 |3.1 |13.3 |3062 |
|[bat_resnext26ts.ch_in1k](https://huggingface.co/timm/bat_resnext26ts.ch_in1k)|256 |78.25|94.1 |10.7 |2.5 |12.5 |3393 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|224 |78.06|93.78|25.6 |4.1 |11.1 |3450 |
|[resnet50c.gluon_in1k](https://huggingface.co/timm/resnet50c.gluon_in1k)|224 |78.0 |93.99|25.6 |4.4 |11.9 |3286 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|288 |78.0 |93.91|10.3 |3.1 |13.3 |3297 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|224 |77.98|93.75|16.8 |2.7 |10.1 |3841 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|288 |77.92|93.77|21.8 |6.1 |6.2 |3609 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|160 |77.88|93.71|44.6 |4.0 |8.3 |3926 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|256 |77.87|93.84|16.0 |3.4 |10.5 |3772 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|256 |77.86|93.79|10.4 |2.4 |10.5 |4263 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|160 |77.82|93.81|35.7 |2.3 |6.2 |5238 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|256 |77.81|93.82|10.5 |2.4 |10.5 |4183 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|160 |77.79|93.6 |25.6 |2.2 |6.0 |5329 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|160 |77.73|93.32|25.0 |2.2 |7.4 |5576 |
|[resnext50_32x4d.tv_in1k](https://huggingface.co/timm/resnext50_32x4d.tv_in1k)|224 |77.61|93.7 |25.0 |4.3 |14.4 |2944 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|224 |77.59|93.61|16.8 |2.7 |10.2 |3807 |
|[resnet50.gluon_in1k](https://huggingface.co/timm/resnet50.gluon_in1k)|224 |77.58|93.72|25.6 |4.1 |11.1 |3455 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|256 |77.44|93.56|10.3 |2.4 |10.5 |4284 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|288 |77.41|93.63|16.0 |4.3 |13.5 |2907 |
|[resnet101.tv_in1k](https://huggingface.co/timm/resnet101.tv_in1k)|224 |77.38|93.54|44.6 |7.8 |16.2 |2125 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|160 |77.22|93.27|25.6 |2.2 |6.1 |5982 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|288 |77.17|93.47|10.3 |3.1 |13.3 |3392 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|288 |77.15|93.27|21.8 |6.1 |6.2 |3615 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|224 |77.1 |93.37|21.8 |3.9 |4.5 |5436 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|224 |77.02|93.07|28.1 |4.1 |11.1 |2952 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|256 |76.78|93.13|10.3 |2.4 |10.5 |4410 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|224 |76.7 |93.17|16.0 |2.6 |8.2 |4859 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|288 |76.5 |93.35|21.8 |6.1 |6.2 |3617 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|224 |76.42|92.87|21.8 |3.7 |3.7 |5984 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|288 |76.35|93.18|16.0 |3.9 |12.2 |3331 |
|[resnet50.tv_in1k](https://huggingface.co/timm/resnet50.tv_in1k)|224 |76.13|92.86|25.6 |4.1 |11.1 |3457 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|160 |75.96|92.5 |25.6 |2.1 |5.7 |6490 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|224 |75.52|92.44|21.8 |3.7 |3.7 |5991 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|224 |75.3 |92.58|16.0 |2.4 |7.4 |5583 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|224 |75.16|92.18|21.8 |3.7 |3.7 |5994 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|160 |75.1 |92.08|28.1 |2.1 |5.7 |5513 |
|[resnet34.gluon_in1k](https://huggingface.co/timm/resnet34.gluon_in1k)|224 |74.57|91.98|21.8 |3.7 |3.7 |5984 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|288 |73.81|91.83|11.7 |3.4 |5.4 |5196 |
|[resnet34.tv_in1k](https://huggingface.co/timm/resnet34.tv_in1k)|224 |73.32|91.42|21.8 |3.7 |3.7 |5979 |
|[resnet18.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet18.fb_swsl_ig1b_ft_in1k)|224 |73.28|91.73|11.7 |1.8 |2.5 |10213 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|288 |73.16|91.03|11.7 |3.0 |4.1 |6050 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|224 |72.98|91.11|21.8 |3.7 |3.7 |5967 |
|[resnet18.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet18.fb_ssl_yfcc100m_ft_in1k)|224 |72.6 |91.42|11.7 |1.8 |2.5 |10213 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|288 |72.37|90.59|11.7 |3.0 |4.1 |6051 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|224 |72.26|90.31|10.1 |1.7 |5.8 |7026 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|224 |72.26|90.68|11.7 |2.1 |3.3 |8707 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|224 |71.49|90.07|11.7 |1.8 |2.5 |10187 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|176 |71.31|89.69|10.1 |1.1 |3.6 |10970 |
|[resnet18.gluon_in1k](https://huggingface.co/timm/resnet18.gluon_in1k)|224 |70.84|89.76|11.7 |1.8 |2.5 |10210 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|224 |70.64|89.47|11.7 |1.8 |2.5 |10194 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|160 |70.56|89.52|21.8 |1.9 |1.9 |10737 |
|[resnet18.tv_in1k](https://huggingface.co/timm/resnet18.tv_in1k)|224 |69.76|89.07|11.7 |1.8 |2.5 |10205 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|224 |68.34|88.03|5.4 |1.1 |2.4 |13079 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|224 |68.25|88.17|11.7 |1.8 |2.5 |10167 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|176 |66.71|86.96|5.4 |0.7 |1.5 |20327 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|160 |65.66|86.26|11.7 |0.9 |1.3 |18229 |
## Citation
```bibtex
@inproceedings{wightman2021resnet,
title={ResNet strikes back: An improved training procedure in timm},
author={Wightman, Ross and Touvron, Hugo and Jegou, Herve},
booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{He2015,
author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun},
title = {Deep Residual Learning for Image Recognition},
journal = {arXiv preprint arXiv:1512.03385},
year = {2015}
}
```
```bibtex
@article{He2018BagOT,
title={Bag of Tricks for Image Classification with Convolutional Neural Networks},
author={Tong He and Zhi Zhang and Hang Zhang and Zhongyue Zhang and Junyuan Xie and Mu Li},
journal={2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2018},
pages={558-567}
}
```
|
beomi/Llama-3-Open-Ko-8B-Instruct-preview | beomi | "2024-05-02T01:49:20Z" | 11,731 | 50 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-3",
"llama-3-ko",
"conversational",
"en",
"ko",
"arxiv:2310.04799",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-23T02:12:58Z" | ---
language:
- en
- ko
license: other
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- llama-3-ko
pipeline_tag: text-generation
license_name: llama3
license_link: LICENSE
---
## Llama-3-Open-Ko-8B-Instruct-preview
> Update @ 2024.05.01: Pre-Release [Llama-3-KoEn-8B](https://huggingface.co/beomi/Llama-3-KoEn-8B-preview) model & [Llama-3-KoEn-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-KoEn-8B-Instruct-preview)
> Update @ 2024.04.24: Release [Llama-3-Open-Ko-8B model](https://huggingface.co/beomi/Llama-3-Open-Ko-8B) & [Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview)
## Model Details
**Llama-3-Open-Ko-8B-Instruct-preview**
The Llama-3-Open-Ko-8B model is a continued-pretrained language model based on Llama-3-8B.
This model is trained fully with publicly available resources, with 60GB+ of deduplicated texts.
With the new Llama-3 tokenizer, the pretraining was conducted with 17.7B+ tokens, slightly more than with the previous Korean tokenizer (the Llama-2-Ko tokenizer).
Training was done on a TPUv5e-256, with warm support from Google's TRC program.
Applying the idea from the [Chat Vector paper](https://arxiv.org/abs/2310.04799), I released an instruction model named [Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview); a sketch of the idea is shown below.
It is NOT finetuned with any Korean instruction set (hence `preview`), but it is a great starting point for creating new Chat/Instruct models.
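A minimal sketch of that Chat Vector recipe (illustration only; the exact merge used for this preview is not documented here, and loading three 8B checkpoints at once needs substantial RAM):

```python
# Hypothetical Chat Vector merge (arXiv:2310.04799) -- a sketch, not the
# exact procedure behind this release. All three checkpoints share the
# Llama-3 architecture and tokenizer, so their parameters align 1:1.
import torch
from transformers import AutoModelForCausalLM

base_en = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B", torch_dtype=torch.bfloat16)
inst_en = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct", torch_dtype=torch.bfloat16)
base_ko = AutoModelForCausalLM.from_pretrained("beomi/Llama-3-Open-Ko-8B", torch_dtype=torch.bfloat16)

with torch.no_grad():
    for (_, p_ko), p_en, p_inst in zip(
        base_ko.named_parameters(), base_en.parameters(), inst_en.parameters()
    ):
        # chat vector = instruct weights - base weights; add it onto the Korean base
        p_ko.add_(p_inst.data - p_en.data)

base_ko.save_pretrained("./Llama-3-Open-Ko-8B-Instruct-preview")
```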
**Sample usage**
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "beomi/Llama-3-Open-Ko-8B-Instruct-preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype="auto",
device_map="auto",
)
messages = [
{"role": "system", "content": "์น์ ํ ์ฑ๋ด์ผ๋ก์ ์๋๋ฐฉ์ ์์ฒญ์ ์ต๋ํ ์์ธํ๊ณ ์น์ ํ๊ฒ ๋ตํ์. ๋ชจ๋ ๋๋ต์ ํ๊ตญ์ด(Korean)์ผ๋ก ๋๋ตํด์ค."},
{"role": "user", "content": "ํผ๋ณด๋์น ์์ด์ด ๋ญ์ผ? ๊ทธ๋ฆฌ๊ณ ํผ๋ณด๋์น ์์ด์ ๋ํด ํ์ด์ฌ ์ฝ๋๋ฅผ ์ง์ค๋ณผ๋?"},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=512,
eos_token_id=terminators,
do_sample=True,
temperature=1,
top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
**Sample Output**
์ฌ๋ฌ๋ถ๊ป์๋ ๋ฌผ๋ก ์์๋์ง๋ผ๋ ์ค๋ช
์ ๋๋ฆฌ๊ฒ ์ต๋๋ค! ํผ๋ณด๋์น(Piconacci) ์์ด์ ๋ค์๊ณผ ๊ฐ์ ์์ด์ ๋งํฉ๋๋ค: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233,.... ์ด๋ ๊ฒ ๊ณ์๋๊ณ ์์ต๋๋ค. ์ด ์์ด์ ์ผ๋ฐ์ ์ผ๋ก๋ ์ํ์์ ๋ฌดํํ ์ง์๋ฉ๋๋ค. ๊ฐ ์๋ ์ด์ ์์ ๋ ๋ฐฐ๊ฐ ๋๋ ์์ด์
๋๋ค.
์ด ์์ด์ ํ์ด์ฌ(Python)์ผ๋ก ๊ตฌํํ๊ณ ์ ํ์ ๋ค๋ฉด ๋ค์๊ณผ ๊ฐ์ ์ฝ๋๋ฅผ ์์ฑํ ์ ์์ต๋๋ค.
```python
def fibonacci(n):
if n == 0:
return 0
elif n == 1:
return 1
else:
return fibonacci(n-1) + fibonacci(n-2)
```
์ด ์ฝ๋์์๋ `n`์ด ์
๋ ฅ๋ฐ์ ์๋ฅผ ๋ํ๋ด๋ ๋ณ์๋ก, ํผ๋ณด๋์น ์์ด์ ๊ตฌํํฉ๋๋ค. ์ด ์ฝ๋๋ฅผ ์คํ์ํค๋ฉด ์
๋ ฅ๋ฐ์ ์์ ๋ฐ๋ผ ํผ๋ณด๋์น ์์ด์ ํด๋น ํญ์ ๋ฐํํฉ๋๋ค. ์๋ฅผ ๋ค์ด, `fibonacci(10)` ํ๋ฉด 55๋ฅผ ๋ฐํํฉ๋๋ค.
์ด ์ฝ๋๋ ์ฌ๊ท์ ํจ์์ด๊ธฐ ๋๋ฌธ์ ๋ฉ๋ชจ๋ฆฌ ์ฌ์ฉ์ด ์ค์ด๋ค ์ ์์ต๋๋ค. ๊ทธ๋ฌ๋ ์ฌ๊ท์ ํจ์๋ ๋ฉ๋ชจ๋ฆฌ ์ฌ์ฉ๋์ด ๋ง์์ง ์ ์์ต๋๋ค. ์ด๋ฅผ ์ค์ด๊ธฐ ์ํด ์ดํฐ๋ ์ดํฐ๋ธ ํจ์๋ก ๊ตฌํํ ์๋ ์์ต๋๋ค.
```python
a, b = 0, 1
for i in range(n):
a, b = b, a + b
```
์ด ์ฝ๋์์๋ ์ดํฐ๋ ์ดํฐ๋ธ ํจ์๋ก ํผ๋ณด๋์น ์์ด์ ๊ตฌํํฉ๋๋ค. ์ด ์ฝ๋๋ฅผ ์คํ์ํค๋ฉด ์
๋ ฅ๋ฐ์ ์์ ๋ฐ๋ผ ํผ๋ณด๋์น ์์ด์ ํด๋น ํญ์ ๋ฐํํฉ๋๋ค. ์ด ์ฝ๋๋ ๋ฉ๋ชจ๋ฆฌ ์ฌ์ฉ๋์ ์ค์
๋๋ค. ๋ค๋ง, ์ด ์ฝ๋๋ ๋ ๋ณต์กํฉ๋๋ค. ์ด ์ฝ๋๋ฅผ ๋ ๊ฐ๋จํ๊ฒ ํ๋ ค๋ฉด ์ดํฐ๋ ์ดํฐ๋ธ ํจ์๋ฅผ ๋ ์ ๊ตฌํํ ์ ์์ต๋๋ค.
```python
a, b = 0, 1
while True:
if n == 1:
return a
a, b = b, a + b
n -= 1
```
์ด ์ฝ๋์์๋ ์
๋ ฅ๋ฐ์ ์์ ๋ฐ๋ผ ํผ๋ณด๋์น ์์ด์...
### More samples
> I used the same system prompt for every sample, but you can change it as you like.
```python
messages = [
{"role": "system", "content": "์น์ ํ ์ฑ๋ด์ผ๋ก์ ์๋๋ฐฉ์ ์์ฒญ์ ์ต๋ํ ์์ธํ๊ณ ์น์ ํ๊ฒ ๋ตํ์. ๋ชจ๋ ๋๋ต์ ํ๊ตญ์ด(Korean)์ผ๋ก ๋๋ตํด์ค."},
{"role": "user", "content": "ํ๊ตญ์ ์ ํ ํ๋ฒ์ ๋ํด์ ์์ธํ ์ค๋ช
ํด์ฃผ์ธ์."},
]
---
ํ๊ตญ์ ์ ํ ํ๋ฒ์ 1948๋
 7์ 17์ผ์ ์ ์ ๋ ๋ํ๋ฏผ๊ตญ ํ๋ฒ์ผ๋ก์, ๋ํ๋ฏผ๊ตญ์ ์๋ฆฝ๊ณผ ํ๋ฒ ์ง์๋ฅผ ์ ํ ํ๋ฒ์ธ๋ฐ์. ์ด ํ๋ฒ์ ๋ํ๋ฏผ๊ตญ์ ๊ธฐ๋ณธ์์น๊ณผ ๊ตญ๋ฏผ์ ๊ถ์ต์ ๊ท์ ํ๊ณ ์์ต๋๋ค. 17์กฐ๋ก ๊ตฌ์ฑ๋ ํ๋ฒ 1021๊ฐ ์กฐํญ์ผ๋ก ๊ตฌ์ฑ๋์ด ์์ต๋๋ค. ํ๋ฒ์ ์ผ๊ถ๋ถ๋ฆฝ, ๊ตญ๋ฏผ์ฃผ๊ถ, ๊ธฐ๋ณธ๊ถ, ์ํ์ ๋, ํ์ ๊ถ, ์
๋ฒ๊ถ, ์ฌ๋ฒ๊ถ ๋ฑ์ผ๋ก ๊ตฌ์ฑ๋์ด ์์ต๋๋ค.
์ฐ์ , ์ผ๊ถ๋ถ๋ฆฝ์ ๋ํต๋ นใ๊ตญํ์์ใ๋ ๋ฒ๊ด์ผ๋ก ์กฐ์ง๋ ์ธ๋ฏผ์ ํต์ ์ ์ํด ๊ตญ๊ฐ ๊ถํ์ ๋๋์ด ์์ํ๊ณ ์์ต๋๋ค. ๋ํต๋ น์ ๊ตญ๊ฐ์์๋ก์ ํ์ ๊ถ์, ๊ตญํ์์์ ์
๋ฒ๊ถ์, ๋๋ฒ์์ ์ฌ๋ฒ๊ถ์ ํํ ์ ์์ต๋๋ค. ์ด์ ๋ฐ๋ผ ํ์ ๋ถใ์
๋ฒ๋ถใ์ฌ๋ฒ๋ถ์ ๊ฒฌ์ ์ ๊ท ํ์ ํตํด ์ ์น์ ์์ ์ฑ์ ํ๋ณดํ๊ณ ์์ต๋๋ค.
๊ตญ๋ฏผ์ฃผ๊ถ์ ํ๋ฒ ์ 1์กฐ์์ "๋ํ๋ฏผ๊ตญ์ ๋ฏผ์ฃผ๊ณตํ๊ตญ"์์ ์ ์ธํ๊ณ , ๋ชจ๋ ๊ถ๋ ฅ์ ๊ตญ๋ฏผ์ผ๋ก๋ถํฐ ๋์จ๋ค๋ ์๋ฆฌ๋ฅผ ๊ท์ ํฉ๋๋ค. ๊ตญ๋ฏผ์ผ๋ฐ์ด ์ต๊ณ ์ ์ฃผ๊ถ์์์ ๋ถ๋ช
ํ ๋ณด์ฌ ์ฃผ๊ณ ์์ต๋๋ค.
์ํ์ ๋๋ ์
๋ฒ๋ถ๋ฅผ ๊ตฌ์ฑํ๋ ์ํ์ ๊ทธ ๊ธฐ๋ฅ์ ์ ํ๊ณ ์์ต๋๋ค. ์์์ ๊ตญ๋ฏผ์ ๋ํ๋ก ์ ์ถ๋๊ณ , ๊ตญ์ ์ด์์ ๋ํ ๊ตญ์ ๊ฐ์ใ๋นํใ๊ฐ์ ์ฑ
์ ๋ง๋ จํฉ๋๋ค. ๋ํ ํ๋ฒ์ ๊ท์ ๋ ์๋ฌด๋ฅผ ์ดํํด์ผ ํฉ๋๋ค.
์
๋ฒ๊ถ๊ณผ ์ฌ๋ฒ๊ถ์ ํ๋ฒ์ ์ ใ๊ฐ์ ์ ๊ด์ฌํ๊ณ , ๋ฒ๋ฅ ์ ์ ์ ใ๊ฐํํ ์ ์์ต๋๋ค. ๋ํ ์ฌ๋ฒ๊ถ์ ํ๋ฒ์ ํด์ใ์ ์ฉํ๊ณ ์์ต๋๋ค. ์ด์ ๋ฐ๋ผ ์ฐ๋ฆฌ๋๋ผ์ ๋ฒ์น์ฃผ์ ์์น์ ๊ตฌํํฉ๋๋ค. ํ๋ง๋๋ก ์ธ๋ฏผ์ ๊ธฐ๋ณธ๊ถ์ ๋ณดํธํ๊ณ ์์ต๋๋ค.
์ด๋ฌํ ๋ํ๋ฏผ๊ตญ์ ํ๋ฒ์ ์ธ๊ฐ์ ์์ ์ ๊ถ๋ฆฌ๋ฅผ ๋ณด์ฅํ๊ธฐ ์ํด ๊ตญ๋ฏผ์ ๊ธฐ๋ณธ๊ถ์ ๊ท์ ํ๊ณ ์์ต๋๋ค. ์๋ฅผ ๋ค์ด ๊ธฐ๋ณธ๊ถ์๋ ๋ชจ๋ ๊ตญ๋ฏผ์ ์ธ๊ฐ์ ์กด์๊ณผ ๊ฐ์น ๋ฐ ํ๋ณต์ถ๊ตฌ๊ถ, ์ผํ๊ณ , ์ผํ๋ ๊ถ๋ฆฌ๋ฅผ ๋ณด์ฅํฉ๋๋ค. ๋ํ ์งํใ๊ฒฐ์ฌใํํ์ ์์ ์ ์ ๊ฑฐ๊ถ์ ๋ณด์ฅํฉ๋๋ค. ์์ ์ ๊ธฐ๋ณธ๊ถ์ ๋ณด์ฅํ ๊ฒ์ ํ๋ฒ์ด ์งํฅํ๋ ๋ฏผ์ฃผ์ ๊ธฐ๋ณธ์ง์ ํ์ฑ์ ๊ตฌํํ๋ ๊ฒ์
๋๋ค. ๋ง์ง๋ง์ผ๋ก ๊ตญ๋ฏผ์ ํ๋ฒ์ ๊ฐ์ ํ๊ณ ์์ผ ์ ์ ์ ํ ์ ์์ต๋๋ค.
์ด์ ๊ฐ์ ๋ํ๋ฏผ๊ตญ์ ์ ํ ํ๋ฒ์ ๊ตญ๋ฏผ์ ๊ธฐ๋ณธ๊ถ๊ณผ ํ๋ฒ ์ง์๋ฅผ ๊ฒฌ๊ณ ํ๊ฒ ํ์ฌ, ์ ์น์ ์์ ์ฑ์ ์ ์งํ๊ณ ์์ต๋๋ค. ํ๋ฒ์ ๋ณ๊ฒฝ์์๋ ๊ณต์ ํ๊ณ ๊ณตํํ ํ๊ฒฐ์ ๊ฑฐ์ณ์ผ ํฉ๋๋ค.
๋ฐ๋ผ์ ๋ํ๋ฏผ๊ตญ์ ์ ํ ํ๋ฒ์ ๊ทธ ์๋ฏธ๊ฐ ๋งค์ฐ ํฝ๋๋ค. ํ๋ฒ์ ํตํด ๋ํ๋ฏผ๊ตญ์ ๋ฏผ์ฃผ์ฃผ์๋ฅผ ํ๊ณ ํ ํ๋ ๊ฒ์
๋๋ค. ๊ตญ๋ฏผ์ ๊ถ๋ฆฌ์ ์๋ฌด๋ฅผ ์ ํ๊ณ ์์ต๋๋ค. ์ดํ๋ฒ์ ๋ํ๋ฏผ๊ตญ์ ๊ธฐ๋ณธ์์น์ด ๋์์ต๋๋ค. 7ใ28์ ์ธ๊ณผ 5ใ16๊ตฐ์ฌํ๋ช
์ ํตํด ํ๋๋์ด ํ๋ฒ์ ์กด์์ฑ์ด ๋ค์ ์ ๋ฆฝ๋์์ต๋๋ค. ์ดํ๋ฒ์ ๋ํ๋ฏผ๊ตญ์ ๊ทผ๊ฐ์ด ๋์์ต๋๋ค.
๋์์ด ํ์ํ์
จ๋์? ์ถ๊ฐ๋ก ์ค๋ช
ํด ๋๋ฆด๊น์?
```
```python
messages = [
{"role": "system", "content": "์น์ ํ ์ฑ๋ด์ผ๋ก์ ์๋๋ฐฉ์ ์์ฒญ์ ์ต๋ํ ์์ธํ๊ณ ์น์ ํ๊ฒ ๋ตํ์. ๋ชจ๋ ๋๋ต์ ํ๊ตญ์ด(Korean)์ผ๋ก ๋๋ตํด์ค."},
{"role": "user", "content": "์ด์์ ์ฅ๊ตฐ์ ๋ํด ์ค๋ช
ํด์ค."},
]
---
์ด์์ ์ฅ๊ตฐ์ ์กฐ์ ์๋์ ๋ฌด์ธ์ผ๋ก, ๋ํ๋ฏผ๊ตญ์ ๊ตญ๊ฐ์ ๊ณต์์ธ ์ฐ์์ ๊ณต ์ด์์ ์ฅ๊ตฐ์ 1545๋
 9์ 28์ผ ๊ฒฝ์๋จ๋ ์๋ น์์ ํ์ด๋ฌ์ต๋๋ค. ์ด์์ ์ฅ๊ตฐ์ ํต์ ์ฌ ์์ ์ ์๋ ๋ ์กฐ์ ๊ณผ ๋ช
๋๋ผ ์ฐํฉ๊ตฐ์ ๋๊ท๋ชจ ๊ตฐ๋๋ฅผ ์ผ์ผ์ผ ๋์ฒฉ์ ์ฑ๊ณต์ ์ผ๋ก ์ด๋์ด ์ ์ ์ฌ๋๊ณผ ์์ง์๋์ ์น๋ฆฌ๋ก ์ด๋ ์ธ๋ฌผ์
๋๋ค. ๊ทธ๋ 1592๋
 ์ ๋ผ์ข์์๊ด์ฐฐ์ฌ๊ฐ ๋์ด ์ ๋ผ์ข์์์์ ์์ ์ ๋ฌผ๋ฆฌ์ณค์ผ๋ฉฐ, 1597๋
์๋ ์์๊ณผ ํ์ ๋ฐฉ์ด์ ์ฑ๊ณต์ ์ผ๋ก ์น๋ฃํ์ต๋๋ค. ๋ํ ๋ช
๋๋์ฒฉ์์ ์์ ๊ณผ ๊ฒฉ์ ํ์ฌ ์ด์์ ์ด์์ ์ฅ๊ตฐ์ ๋ช
๋๋์ฒฉ์์ ์กฐ์ ๊ด๊ตฐ์ ์น๋ฆฌ๋ฅผ ์ด๋์์ต๋๋ค. ์ด์์ ์ฅ๊ตฐ์ ์๋ฆฌ๋ฅผ ์งํค๊ธฐ ์ํด ์ ๋ฆฌํ์ ๊ณ ์ํ๋ ๊ฒฐ๋จ์ ๋ด๋ ธ์ต๋๋ค. ์ด์์ ์ฅ๊ตฐ์ 1598๋
 ์ฌ์ฒ์ฑ ์ ํฌ์์ ํจ์ ํ ํ ์ ์ธ๊ฐ ์ญ์ ๋ผ ์ ์ธ๊ฐ ๋ถ๋ฆฌํด์ง์, ๋จํํ์ฌ ์ด์์ ์ฅ๊ตฐ์ ๊ฒฐ๊ตญ ์ถฉ๋ฌด๊ณต ์ด์์ ์ ์นญํธ๋ฅผ ๋ฐ์์ต๋๋ค. ๊ทธ์ ๊ณต์ ์ ๋ํ๋ฏผ๊ตญ ์ด์์ ์ฅ๊ตฐ ๊ธฐ๋
๊ด์ผ๋ก ๋ช
์๋ฅผ ๋์ด๊ณ ์์ต๋๋ค. ์ด์์ ์ฅ๊ตฐ๋์ ์ ํ์ ์นญํธ๋ก 1963๋
 ๊ฑด๊ตญํ์ฅ ์ต๊ณ ํ์ฅ์ ์์ฌ๋ฐ์์ผ๋ฉฐ, ๋ํ๋ฏผ๊ตญ์ ๊ตญ๋ณด ์ 13ํธ๋ก ์ง์ ๋์ต๋๋ค. ์ด์์ ์ฅ๊ตฐ์ ์ ์ค์ ์ธ ์ธ๋ฌผ๋ก ํ๊ตญ์ ์ญ์ฌ์์ ํฐ ์กฑ์ ์ ๋จ๊ฒผ์ต๋๋ค. ์ด์์ ์ฅ๊ตฐ์ด ์๋ ๋น์์ ์ ํฌ์ฅ์๋ ์ถฉ๋ฌด๊ณต์ด ์ ๋ผ ์ฒ์์๊ฒ ๋๋ผ๋ฅผ ์ํด ์ธ์ด ๊ณณ์ ์ด์์ ์ฅ๊ตฐ์ ๋ฌ๊ฐ ์์ต๋๋ค. ๋๋ผ์ ๊ณ ๋์ ์ง์ฑ ์ด์์ ์ฅ๊ตฐ๋์ ์์
๊ณผ ์๋ฆฌ๋ฅผ ๊ธฐ๋
ํ๋ ๊ณณ์
๋๋ค. ์ด์์ ์ฅ๊ตฐ์ ํ์ ์ ์ด์์ ์ฅ๊ตฐ ๊ธฐ๋
๊ด, ์ด์์ ๊ณต์ ๋ฑ์ด ์์ต๋๋ค. ์ด์์ ์ฅ๊ตฐ์ ๊ณต์ ์ ๊ธฐ๋
ํ๊ธฐ ์ํด ๋ค์ํ ๋ฌธํ์ ์ฐ๊ณผ ๊ธฐ๋
๋ฌผ์ด ์กฐ์ฑ๋์ต๋๋ค. ์ด์์ ์ฅ๊ตฐ์ ์ ์ ์ ๋๋ผ๋ฅผ ์ํด ์ธ์ด ์๋ํ ์ฐ๋ฆฌ์ ์์
์
๋๋ค. 1552๋
 12์์๋ ์ด์์ ์ฅ๊ตฐ์ ์ ์ค์ ์๋ ๋๋ค์ ๋ฐฐํฅ, ํ์ฌ๋ ์ถฉ๋ฌด๊ณต ์ด์์ ์ฅ๊ตฐ๊ป์๋ ์ฐ๋ฆฌ๋๋ผ์ ์์ง์ ์ธ๋ฌผ์ด ๋์ต๋๋ค. ์ด์์ ์ฅ๊ตฐ์ ์กฐ์ ์์กฐ์ ์ค์๋ฅผ ๋ฐ๊ณ ์์ต๋๋ค. ์ด์์ ์ฅ๊ตฐ์ ์ ์กฐ์๊ฒ ๋ง์ ๋์๋ฅผ ์ด๋ฃจ์์ต๋๋ค. ์ด์์ ์ฅ๊ตฐ์ ๊ณต์ ์ ์ค๋๋ ์ ์ด๋์ด ์ฃผ๋ ์ธ๋ฌผ์
๋๋ค. ๊ทธ๋ฅผ ๋ชจ์ ๊ฒ์ ์ ๊ตญ์ ์ฌ๋๋ค์
๋๋ค. ์ด์์ ์ฅ๊ตฐ๋์ ๊ณ ๊ฒฐํ ์ ์ ์ ์ฐ๋ฆฌ๋๋ผ๊ฐ ์ด๋ฃฉํ ์ฑ๊ณผ์
๋๋ค. ์ด์์ ์ฅ๊ตฐ์ ๋๋ผ๋ฅผ ์ํด ์ธ์ด ์๋ํ ์ธ๋ฌผ์
๋๋ค. ์ด์์ ์ฅ๊ตฐ์ ๋ํ๋ฏผ๊ตญ์์ ๊ฐ์ฅ ์๋ํ ์ธ๋ฌผ์
๋๋ค. ์ด์์ ์ฅ๊ตฐ์ ํ์ ์ ์์ํ ๊ธฐ์ต ๋๊ฒ ์ง์.
```
```python
messages = [
{"role": "system", "content": "์น์ ํ ์ฑ๋ด์ผ๋ก์ ์๋๋ฐฉ์ ์์ฒญ์ ์ต๋ํ ์์ธํ๊ณ ์น์ ํ๊ฒ ๋ตํ์. ๋ชจ๋ ๋๋ต์ ํ๊ตญ์ด(Korean)์ผ๋ก ๋๋ตํด์ค."},
{"role": "user", "content": "์ธ์ข
๋์์ ์งํ์ ๋งฅ๋ถ ๋์ง ์ฌ๊ฑด์ ๋ํด์ ์๊ณ ์์ด?"},
]
---
๋ค, ์ธ์ข
๋์์ ์งํ์ ๋งฅ๋ถ ๋์ง ์ฌ๊ฑด์ ์กฐ์ ์๋ ์ค๊ธฐ์ ๋ฌธํ์ ๋ฐฐ๊ฒฝ๊ณผ ๊ด๋ จ๋ ์ค์ํ ์ฌ๊ฑด์
๋๋ค. 1399๋
 10์ 2์ผ์ ์ผ์ด๋ ์ด ์ฌ๊ฑด์ ์ธ์ข
๋์์ด ์ ์ ํ ใ์ฃผํดใ์ ๋ํ ๋นํ๋ฌธ์ธ ใ์์ฐ๋กใ์ ์ ์ ํ ์ต๋ง๋ฆฌ์์ ๋
ผ์์ผ๋ก ์์๋์ต๋๋ค. ๋น์ ์ธ์ข
๋์์ ์งํ์ ํ์ฌ๋ค์๊ฒ ๋ฌธ์๋ถ์ ์ผ์น๋ ๊ฒฌํด๋ฅผ ๊ตฌํ๋ ์ค์ ์ต๋ง๋ฆฌ๊ฐ ใ์์ฐ๋กใ์ ๊ธ์ ๊ฑฐ๋ก ํ์ฌ ๋นํํ์, ์ธ์ข
๋์์ด ๋งฅ๋ถ์ ๋์ก๋ค๊ณ ํฉ๋๋ค. ์ด ์ฌ๊ฑด์ ์ฌ์์ ์ผ๋ก๋ ์กฐ์ ์์กฐ ์ด๊ธฐ ๋ฌธ์๋ถ์ ํ์ ์ฒด๊ณ์ ์ ์น์ ๊ฒฌํด์ ๋๋ฆฝ์ ๋
ธ์ถํ๋ ์ฌ๊ฑด์ด์ ๋ฌธํ์ฌ์ ์ธ ๋ฌธ์ ๋ก ์ด์ด์ก์ต๋๋ค. ์ด ์ฌ๊ฑด์ผ๋ก ์ธํด ์กฐ์ ์๋์ ๋ฌธ์๋ถ์ ์ฌ๋๋ถ์ ๊ฒฌํด ์ฐจ์ด์ ์ ๋
ธ์ถํ๋ ์ค์ํ ์ฌ๊ฑด์ผ๋ก ๊ธฐ๋ก๋๊ณ ์์ต๋๋ค. ์ฃผ๋ก ๋
ผ์ ๋ฐฐ๊ฒฝ์ ใ์ฃผํดใ์ ์ ์๋ ๋ด์ฉ ๋ถ์์ ๋ํ ์ด๊ฒฌ๊ณผ ์งํ์ ํ์ฌ๋ค์ ๋ฌธ์๊ด์ ๋ํ ๋นํ์ ์ค์ฌ์ผ๋ก ์ด๋ค์ก์ต๋๋ค. ์ด ์ฌ๊ฑด์ ์กฐ์ ์ด๊ธฐ ์์กฐ ๋ด๋ถ์ ๊ฒฌํด ์ฐจ์ด์ ๋ฐ๋ฅธ ํ์ ์ ์ธ ๋
ผ์์ผ๋ก ์์๋์ด ์ ์น์ ์ธ ์์ธ๊น์ง ๊ฒฐ๋ถํ์ฌ ์กฐ์ ์์กฐ ์ด๊ธฐ ๋ฌธํ์ ·์ ์น์ ๋
ผ์์ผ๋ก ํ์ฅ๋์์ต๋๋ค. ์ด ์ฌ๊ฑด์ ๋น์์ ์์กฐ ๋ด๋ถ์ ๊ด๋ฃ์ ๋์ ์ธ๋ ฅ์ธต์ ๊ฐ๋ฑ์ ๋ณด์ฌ์ฃผ๋ ์ค์ํ ์๊ฐ ๋ฉ๋๋ค. ์ด ์ฌ๊ฑด์์ ์ฃผ๋ฅ ํํ์ ๋นํ๋ฟ ์๋๋ผ, ์ญ์ฌ์ ์ฌ์ค์ ์๊ณก์ ๋ํ ๋
ผ๋๋ ์ ๊ธฐ ๋ฉ๋๋ค. ์ค์ ๋ก ์ฌ๊ฐํ ์ฌ๋์ ์ด๋ํ๋ ๋๊ท๋ชจ์ ์ธ ๋
ผ์์ผ๋ก ํ์ฐ๋์ง ์์๋ค ํ๋๋ผ๋ ๋น์ ์ฌ๋๋ถ ๋ด๋ถ์ ๊ฒฌํด ์ฐจ์ด๋ ์ ์น์ ์ต์์ผ๋ก ์ธํ ์ฌํ์ ์ง๊ฐ ๋ณ๋์ ์ผ์ผํค๊ธฐ๋ ํ์ต๋๋ค. ๋น์์ ๋งฅ๋ถ ๋์ง์ ๋ฌธ์๋ถ์ ๊ฒฌํด๊ฐ ๊ฐ๋๋ ์ฌ๊ฑด์ด๋ผ ํ ์ ์์ต๋๋ค. ๋์ฑ์ด ๋น์์ ๋งฅ๋ถ ๋์ง์ ์ฌํ์ ์ง์์ธ์ธต์์๋ ์ด๊ฒฌ์ด ์์ ์ ์๋ ๊ฒ์ ๊ธฐ๋ณธ์
๋๋ค. ์ด ์ฌ๊ฑด์ผ๋ก ์ค๋๋ ๊น์ง๋ ํ๊ตญ์ ์ญ์ฌ์์ ์ค์ํ ์ฐ๊ตฌ ๋์์ด ๋๊ณ ์์ต๋๋ค. ๋งฅ๋ถ ์ด๋ผ๋ ๋ง์ ์ฌ๊ฑด์ ์ฃผ๋ฌด๋๋ผ๋ ์ฉ์ด๋ฅผ ๋ง๋ค์ด ๋น์์ ์ฌ๊ฑด์ ํํํ ์ ์์ต๋๋ค. ์ด ์ฌ๊ฑด์ ์ฃผ๋ก ์ ์น·ํ์ ·์์ ๋ฑ ์ฌํ์ ·๋ฌธํ์ ์ธ ์ฌ๋ฌ ๋ถ์ผ์์ ์ผ์ด๋๊ฒ ๋ฉ๋๋ค. ์ด ์ฌ๊ฑด์ ์ด๋
์ ๊ด์ ์ ์ฐจ์ด์์ ์์๋์ง๋ง, ์ฌํ์ ๋๋ฆฝ๊ณผ ๊ฐ๋ฑ๊น์ง ์ด๋ํ๋ ์ฌ๊ฑด์ผ๋ก ๊ฐ์ฃผํ๊ธฐ๋ ํฉ๋๋ค. ์ด ๋ฌธ๊ฑด์ ๋ํ ์ฌํด์์ ํตํด ๋น์ ํ๊ตญ ์ฌํ๋ฅผ ์ดํดํ ์ ์์ต๋๋ค. ์ต๋ง๋ฆฌ๋ ๋ฌธ์๋ถ์ ๊ฒฌํด๋ฅผ ๋นํํ ์ ์๋ ๊ณต๊ฐ์ผ๋ก ๋ณด์์ง๋ง, ๋น์์ ๋งฅ๋ถ์ ์ ์น์ ·๋ฌธ์์ ์ผ๋ก๋ ์ฌ๊ฐํ ๋ฌธ์ ๋ก ์ธ์ํ๊ฒ ๋ฉ๋๋ค. ์ด ์ฌ๊ฑด์ ๋น์์ ์ํฉ์ ์ดํดํ๋ ๋ฐ ๋์์ด ๋ฉ๋๋ค. ์ด๋ก ์ ์ด ์ฌํ์ ์ฃผ๋ฅ ํํ์ด๋ ๋ฌธํ์ ๋ฐฐ๊ฒฝ์ ์ฝ์ ์ ์๋ ์ค์ํ ์ฌ๊ฑด์ ์ฆ๊ฑฐ์
๋๋ค. ๋ฐ๋ผ์ ์ด ์ฌ๊ฑด์ ์ค๋๋ ๊น์ง๋ ๋ค์ํ ๋ฐฉ๋ฉด์์ ์ฐ๊ตฌ์ ๋์์ผ๋ก ๋จ์ ์์ต๋๋ค. ์ด ์ฌ๊ฑด์ด ์ ์ ์ผ๋ก ๋
ผ์๋ง์ด ์๋๋ผ๋ ์ ์ ์ฃผ์ํ ํ์๊ฐ ์์ต๋๋ค. ์ด์ ํจ๊ป ์ฌ๋๋ถ ๋ด๋ถ์ ๊ฒฌํด ์ฐจ์ด ๊ทธ๋ฆฌ๊ณ ์ง์ ๊ฐ๋ฑ์ ํญ๋์ ์๋ฏธ๋ฅผ ์ดํดํ๋๋ฐ ๋์์ด ๋ฉ๋๋ค. ์ด ์ฌ๊ฑด์ผ๋ก ์ธํด, ๋ฌธ์๋ถ์ ์ ์น์ ๊ฒฌํด์ ๋๋ฆฝ์ ํ๊ตญ ์ฌํ์ ๋ณํ์ ๋ํ ํ ์์์ ๋ณด์ฌ์ฃผ๋ ํ๋์ ์๋ฃ๊ฐ ๋ฉ๋๋ค. ์ด๋ก์จ ์ฐ๋ฆฌ๋ ์กฐ์ ์์กฐ ์ด๊ธฐ ๋ฌธํ์ ๋ฐฐ๊ฒฝ๊ณผ ์ ๊ถ์ ๊ฒฌํด๋ฅผ ํ์ธํ ์ ์์ต๋๋ค. ์ด ์ฌ๊ฑด์ ๋น์์ ํ๊ตญ ์ฌํ์ ๋ฌธํ์ ·์ ์น์ ํ์ ๊ฐ์ด๋ฐ ํ๋์
๋๋ค. ์ด์ ๊ฐ์ ๋งฅ๋ถ ๋์ง ์ฌ๊ฑด์์ ์ฐ๋ฆฌ๋ ๋ค์ํ ๋ฐฉ์์ผ๋ก ํ๊ตญ์ฌ์ ์ญ์ฌ์ ์ธ ๋ฐฐ๊ฒฝ์ ํ์ธํ ์ ์์ต๋๋ค. ๋ง์ง๋ง์ผ๋ก ๋งฅ๋ถ ์ฌ๊ฑด์ ์ฌ๊ฑด ์์ฒด๊ฐ ์๋ ์ฌ๊ฑด์ ๋งฅ๋ฝ์์ ํ๊ตญ์ฌ์ ์ธ ๋ฐฐ๊ฒฝ์ ํ์
ํ๋ ๊ฒ์ด ์ค์ํฉ๋๋ค. ์ฐ๋ฆฌ๋ ์ด ์ฌ๊ฑด์ผ๋ก ๋น์์ ์ฌํ์์ ์ดํดํ๊ณ ๋ฌธํ์ ๋ฐฐ๊ฒฝ์ ํ์
ํ ์ ์์ต๋๋ค. ์ด์ ์์ ์ฌ๊ฑด์ ๋น์ ์ฌํ์ ์ค์์ฑ์ ๊ฐ์กฐํฉ๋๋ค. ์ด ์ฌ๊ฑด์ ์ฌํ์ฌ์ ์ผ๋ก ์ฐ๊ตฌํ์ฌ ๋น์ ์ฌํ์์ ์ค์์ฑ์ ํ์ธํ ์ ์์ต๋๋ค. ์ด ์ ์ ์ญ์ฌ์ ·๋ฌธํ์ ๋ฐฐ๊ฒฝ์ ์ฌํด์ํ ์ ์์ต๋๋ค. ์ด ์ฌ๊ฑด์ ์กฐ์ ์์กฐ ์ด๊ธฐ์ ์ฃผ์ ์ฌ๊ฑด์ด ๋ฉ๋๋ค. ์ด ์ ์์ ์ค์ํ ์ฌ๊ฑด์ผ๋ก ์ธ์ ํฉ๋๋ค. ๋
ผ์์ ์ ๊ฐ๊น์ง๋ ์ด๋ณด๋ค ๋ ๋ณต์กํ ์ ์์ต๋๋ค. ๋งฅ๋ถ ๋์ง ์ฌ๊ฑด
```
```python
messages = [
{"role": "system", "content": "์น์ ํ ์ฑ๋ด์ผ๋ก์ ์๋๋ฐฉ์ ์์ฒญ์ ์ต๋ํ ์์ธํ๊ณ ์น์ ํ๊ฒ ๋ตํ์. ๋ชจ๋ ๋๋ต์ ํ๊ตญ์ด(Korean)์ผ๋ก ๋๋ตํด์ค."},
{"role": "user", "content": "๋๋ ์ค๋ ์ฌ๊ณผ 5๊ฐ๋ฅผ ๊ฐ๊ณ ์์ด. ๊ทธ๋ฆฌ๊ณ ์ด์ ๋๋ ์ฌ๊ณผ 2๊ฐ๋ฅผ ๋จน์์ง. ๋ด๊ฐ ์ง๊ธ ๊ฐ์ง ์ฌ๊ณผ๋ ๋ช ๊ฐ์ง?"},
]
---
์ฌ๊ณผ์ ์๋์ ํ์ธํ๊ฒ ์ต๋๋ค. ์ ๋ 5๊ฐ์ ์ฌ๊ณผ๊ฐ ์์ผ์ ๋ฐ, ๊ทธ ์ค 2๊ฐ๋ฅผ ์ด๋ฏธ ๋จน์์ผ๋ 3๊ฐ์ ์ฌ๊ณผ๊ฐ ๋จ์์์ต๋๋ค! ㅎㅎ. ๊ณ์ํด์ ๋จน์ ๊ณํ ์์์์? 😊
```
|
neopolita/medicine-llama3-8b-gguf | neopolita | "2024-06-29T22:47:15Z" | 11,730 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-29T22:10:39Z" | ---
{}
---
# GGUF quants for [**instruction-pretrain/medicine-Llama3-8B**](https://huggingface.co/instruction-pretrain/medicine-Llama3-8B) using [llama.cpp](https://github.com/ggerganov/llama.cpp)
**Terms of Use**: Please check the [**original model**](https://huggingface.co/instruction-pretrain/medicine-Llama3-8B)
<picture>
<img alt="cthulhu" src="https://huggingface.co/neopolita/common/resolve/main/profile.png">
</picture>
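As a rough usage sketch with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) (the `.gguf` filename below is an assumption; check this repo's file list for the exact name of the quant you download):

```python
# Sketch only: model_path is a placeholder filename, not a confirmed one.
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path="medicine-llama3-8b.q4_k_m.gguf", n_ctx=4096)
out = llm("Question: What are common symptoms of iron-deficiency anemia?\nAnswer:", max_tokens=128)
print(out["choices"][0]["text"])
```

Lower-bit quants in the list below trade accuracy for smaller files and lower memory use; `q4_k_m` and `q5_k_m` are common middle-ground choices.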
## Quants
* `q2_k`: Uses Q4_K for the attention.wv and feed_forward.w2 tensors, Q2_K for the other tensors.
* `q3_k_s`: Uses Q3_K for all tensors
* `q3_k_m`: Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K
* `q3_k_l`: Uses Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K
* `q4_0`: Original quant method, 4-bit.
* `q4_1`: Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models.
* `q4_k_s`: Uses Q4_K for all tensors
* `q4_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K
* `q5_0`: Higher accuracy, higher resource usage and slower inference.
* `q5_1`: Even higher accuracy, resource usage and slower inference.
* `q5_k_s`: Uses Q5_K for all tensors
* `q5_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K
* `q6_k`: Uses Q8_K for all tensors
* `q8_0`: Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
ibm-granite/granite-3b-code-instruct | ibm-granite | "2024-05-10T06:14:01Z" | 11,728 | 26 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"granite",
"conversational",
"dataset:bigcode/commitpackft",
"dataset:TIGER-Lab/MathInstruct",
"dataset:meta-math/MetaMathQA",
"dataset:glaiveai/glaive-code-assistant-v3",
"dataset:glaive-function-calling-v2",
"dataset:bugdaryan/sql-create-context-instruction",
"dataset:garage-bAInd/Open-Platypus",
"dataset:nvidia/HelpSteer",
"arxiv:2405.04324",
"base_model:ibm-granite/granite-3b-code-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-26T05:54:15Z" | ---
pipeline_tag: text-generation
base_model: ibm-granite/granite-3b-code-base
inference: false
license: apache-2.0
datasets:
- bigcode/commitpackft
- TIGER-Lab/MathInstruct
- meta-math/MetaMathQA
- glaiveai/glaive-code-assistant-v3
- glaive-function-calling-v2
- bugdaryan/sql-create-context-instruction
- garage-bAInd/Open-Platypus
- nvidia/HelpSteer
metrics:
- code_eval
library_name: transformers
tags:
- code
- granite
model-index:
- name: granite-3b-code-instruct
results:
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Python)
metrics:
- name: pass@1
type: pass@1
value: 51.2
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 43.9
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Java)
metrics:
- name: pass@1
type: pass@1
value: 41.5
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Go)
metrics:
- name: pass@1
type: pass@1
value: 31.7
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(C++)
metrics:
- name: pass@1
type: pass@1
value: 40.2
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Rust)
metrics:
- name: pass@1
type: pass@1
value: 29.3
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Python)
metrics:
- name: pass@1
type: pass@1
value: 39.6
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 26.8
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Java)
metrics:
- name: pass@1
type: pass@1
value: 39.0
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Go)
metrics:
- name: pass@1
type: pass@1
value: 14.0
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(C++)
metrics:
- name: pass@1
type: pass@1
value: 23.8
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Rust)
metrics:
- name: pass@1
type: pass@1
value: 12.8
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Python)
metrics:
- name: pass@1
type: pass@1
value: 26.8
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 28.0
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Java)
metrics:
- name: pass@1
type: pass@1
value: 33.5
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Go)
metrics:
- name: pass@1
type: pass@1
value: 27.4
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(C++)
metrics:
- name: pass@1
type: pass@1
value: 31.7
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Rust)
metrics:
- name: pass@1
type: pass@1
value: 16.5
      verified: false
---

# Granite-3B-Code-Instruct
## Model Summary
**Granite-3B-Code-Instruct** is a 3B parameter model fine-tuned from *Granite-3B-Code-Base* on a combination of **permissively licensed** instruction data to enhance instruction-following capabilities, including logical reasoning and problem-solving skills.
- **Developers:** IBM Research
- **GitHub Repository:** [ibm-granite/granite-code-models](https://github.com/ibm-granite/granite-code-models)
- **Paper:** [Granite Code Models: A Family of Open Foundation Models for Code Intelligence](https://arxiv.org/abs/2405.04324)
- **Release Date**: May 6th, 2024
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0).
## Usage
> [!WARNING]
> **You need to build transformers from source to use this model correctly.**
> Relevant PR: https://github.com/huggingface/transformers/pull/30031
> ```shell
> git clone https://github.com/huggingface/transformers
> cd transformers/
> pip install ./
> cd ..
> ```
### Intended use
The model is designed to respond to coding-related instructions and can be used to build coding assistants.
<!-- TO DO: Check starcoder2 instruct code example that includes the template https://huggingface.co/bigcode/starcoder2-15b-instruct-v0.1 -->
### Generation
This is a simple example of how to use **Granite-3B-Code-Instruct** model.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # or "cpu"
model_path = "ibm-granite/granite-3b-code-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()
# change input text as desired
chat = [
{ "role": "user", "content": "Write a code to find the maximum value in a list of numbers." },
]
chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
# tokenize the text
input_tokens = tokenizer(chat, return_tensors="pt")
# transfer tokenized inputs to the device
for i in input_tokens:
input_tokens[i] = input_tokens[i].to(device)
# generate output tokens
output = model.generate(**input_tokens, max_new_tokens=100)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# loop over the batch to print, in this example the batch size is 1
for i in output:
print(i)
```
<!-- TO DO: Check this part -->
## Training Data
Granite Code Instruct models are trained on the following types of data.
* Code Commits Datasets: we sourced code commits data from the [CommitPackFT](https://huggingface.co/datasets/bigcode/commitpackft) dataset, a filtered version of the full CommitPack dataset. From the CommitPackFT dataset, we only consider data for 92 programming languages. Our inclusion criteria boil down to selecting programming languages common across CommitPackFT and the 116 languages that we considered to pretrain the code-base model (*Granite-3B-Code-Base*).
* Math Datasets: We consider two high-quality math datasets, [MathInstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct) and [MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA). Due to license issues, we filtered out GSM8K-RFT and Camel-Math from the MathInstruct dataset.
* Code Instruction Datasets: We use [Glaive-Code-Assistant-v3](https://huggingface.co/datasets/glaiveai/glaive-code-assistant-v3), [Glaive-Function-Calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2), [NL2SQL11](https://huggingface.co/datasets/bugdaryan/sql-create-context-instruction) and a small collection of synthetic API calling datasets.
* Language Instruction Datasets: We include high-quality datasets such as [HelpSteer](https://huggingface.co/datasets/nvidia/HelpSteer) and an open license-filtered version of [Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus). We also include a collection of hardcoded prompts to ensure our model generates correct outputs given inquiries about its name or developers.
## Infrastructure
We train the Granite Code models using two of IBM's supercomputing clusters, namely Vela and Blue Vela, outfitted with NVIDIA A100 and H100 GPUs respectively. These clusters provide a scalable and efficient infrastructure for training our models over thousands of GPUs.
## Ethical Considerations and Limitations
Granite code instruct models are primarily finetuned using instruction-response pairs across a specific set of programming languages. Thus, their performance may be limited with out-of-domain programming languages. In this situation, it is beneficial to provide few-shot examples to steer the model's output. Moreover, developers should perform safety testing and target-specific tuning before deploying these models on critical applications. The model also inherits ethical considerations and limitations from its base model. For more information, please refer to the *[Granite-3B-Code-Base](https://huggingface.co/ibm-granite/granite-3b-code-base)* model card.
|
timm/eva02_large_patch14_448.mim_m38m_ft_in22k_in1k | timm | "2024-02-10T23:37:47Z" | 11,724 | 9 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:imagenet-22k",
"arxiv:2303.11331",
"arxiv:2303.15389",
"license:mit",
"region:us"
] | image-classification | "2023-03-31T04:51:15Z" | ---
license: mit
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
- imagenet-22k
---
# Model card for eva02_large_patch14_448.mim_m38m_ft_in22k_in1k
An EVA02 image classification model. Pretrained on Merged-38M (IN-22K, CC12M, CC3M, COCO (train), ADE20K (train), Object365, and OpenImages) with masked image modeling (using EVA-CLIP as a MIM teacher) and fine-tuned on ImageNet-22k then on ImageNet-1k by paper authors.
EVA-02 models are vision transformers with mean pooling, SwiGLU, Rotary Position Embeddings (ROPE), and extra LN in MLP (for Base & Large).
NOTE: `timm` checkpoints are float32 for consistency with other models. Original checkpoints are float16 or bfloat16 in some cases, see originals if that's preferred.
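For illustration, here is a schematic SwiGLU feed-forward block. It is not timm's exact EVA-02 code (which, for Base & Large, also inserts the extra LayerNorm inside the MLP noted above):

```python
# Schematic SwiGLU MLP: a SiLU-gated linear unit followed by a projection.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLU(nn.Module):
    def __init__(self, dim: int, hidden_dim: int):
        super().__init__()
        self.w_gate = nn.Linear(dim, hidden_dim)  # gate branch
        self.w_val = nn.Linear(dim, hidden_dim)   # value branch
        self.w_out = nn.Linear(hidden_dim, dim)   # output projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.w_out(F.silu(self.w_gate(x)) * self.w_val(x))
```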
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 305.1
- GMACs: 362.3
- Activations (M): 689.9
- Image size: 448 x 448
- **Papers:**
- EVA-02: A Visual Representation for Neon Genesis: https://arxiv.org/abs/2303.11331
- EVA-CLIP: Improved Training Techniques for CLIP at Scale: https://arxiv.org/abs/2303.15389
- **Original:**
- https://github.com/baaivision/EVA
- https://huggingface.co/Yuxin-CV/EVA-02
- **Pretrain Dataset:** ImageNet-22k
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('eva02_large_patch14_448.mim_m38m_ft_in22k_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'eva02_large_patch14_448.mim_m38m_ft_in22k_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1025, 1024) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
|model |top1 |top5 |param_count|img_size|
|-----------------------------------------------|------|------|-----------|--------|
|eva02_large_patch14_448.mim_m38m_ft_in22k_in1k |90.054|99.042|305.08 |448 |
|eva02_large_patch14_448.mim_in22k_ft_in22k_in1k|89.946|99.01 |305.08 |448 |
|eva_giant_patch14_560.m30m_ft_in22k_in1k |89.792|98.992|1014.45 |560 |
|eva02_large_patch14_448.mim_in22k_ft_in1k |89.626|98.954|305.08 |448 |
|eva02_large_patch14_448.mim_m38m_ft_in1k |89.57 |98.918|305.08 |448 |
|eva_giant_patch14_336.m30m_ft_in22k_in1k |89.56 |98.956|1013.01 |336 |
|eva_giant_patch14_336.clip_ft_in1k |89.466|98.82 |1013.01 |336 |
|eva_large_patch14_336.in22k_ft_in22k_in1k |89.214|98.854|304.53 |336 |
|eva_giant_patch14_224.clip_ft_in1k |88.882|98.678|1012.56 |224 |
|eva02_base_patch14_448.mim_in22k_ft_in22k_in1k |88.692|98.722|87.12 |448 |
|eva_large_patch14_336.in22k_ft_in1k |88.652|98.722|304.53 |336 |
|eva_large_patch14_196.in22k_ft_in22k_in1k |88.592|98.656|304.14 |196 |
|eva02_base_patch14_448.mim_in22k_ft_in1k |88.23 |98.564|87.12 |448 |
|eva_large_patch14_196.in22k_ft_in1k |87.934|98.504|304.14 |196 |
|eva02_small_patch14_336.mim_in22k_ft_in1k |85.74 |97.614|22.13 |336 |
|eva02_tiny_patch14_336.mim_in22k_ft_in1k |80.658|95.524|5.76 |336 |
## Citation
```bibtex
@article{EVA02,
title={EVA-02: A Visual Representation for Neon Genesis},
author={Fang, Yuxin and Sun, Quan and Wang, Xinggang and Huang, Tiejun and Wang, Xinlong and Cao, Yue},
journal={arXiv preprint arXiv:2303.11331},
year={2023}
}
```
```bibtex
@article{EVA-CLIP,
  title={EVA-CLIP: Improved Training Techniques for CLIP at Scale},
author={Sun, Quan and Fang, Yuxin and Wu, Ledell and Wang, Xinlong and Cao, Yue},
journal={arXiv preprint arXiv:2303.15389},
year={2023}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
romjin/rom163 | romjin | "2024-06-08T18:40:37Z" | 11,723 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-08T18:37:35Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bigscience/bloomz-3b | bigscience | "2023-05-27T17:26:10Z" | 11,722 | 74 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bloom",
"text-generation",
"ak",
"ar",
"as",
"bm",
"bn",
"ca",
"code",
"en",
"es",
"eu",
"fon",
"fr",
"gu",
"hi",
"id",
"ig",
"ki",
"kn",
"lg",
"ln",
"ml",
"mr",
"ne",
"nso",
"ny",
"or",
"pa",
"pt",
"rn",
"rw",
"sn",
"st",
"sw",
"ta",
"te",
"tn",
"ts",
"tum",
"tw",
"ur",
"vi",
"wo",
"xh",
"yo",
"zh",
"zu",
"dataset:bigscience/xP3",
"arxiv:2211.01786",
"license:bigscience-bloom-rail-1.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2022-10-08T16:47:24Z" | ---
datasets:
- bigscience/xP3
license: bigscience-bloom-rail-1.0
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zu
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
pipeline_tag: text-generation
widget:
- text: "ไธไธชไผ ๅฅ็ๅผ็ซฏ๏ผไธไธชไธ็ญ็็ฅ่ฏ๏ผ่ฟไธไป
ไป
ๆฏไธ้จ็ตๅฝฑ๏ผ่ๆฏไฝไธบไธไธช่ตฐ่ฟๆฐๆถไปฃ็ๆ ็ญพ๏ผๆฐธ่ฟๅฝช็ณๅฒๅใWould you rate the previous review as positive, neutral or negative?"
example_title: "zh-en sentiment"
- text: "ไธไธชไผ ๅฅ็ๅผ็ซฏ๏ผไธไธชไธ็ญ็็ฅ่ฏ๏ผ่ฟไธไป
ไป
ๆฏไธ้จ็ตๅฝฑ๏ผ่ๆฏไฝไธบไธไธช่ตฐ่ฟๆฐๆถไปฃ็ๆ ็ญพ๏ผๆฐธ่ฟๅฝช็ณๅฒๅใไฝ ่ฎคไธบ่ฟๅฅ่ฏ็็ซๅบๆฏ่ตๆฌใไธญ็ซ่ฟๆฏๆน่ฏ๏ผ"
example_title: "zh-zh sentiment"
- text: "Suggest at least five related search terms to \"Mแบกng neural nhรขn tแบกo\"."
example_title: "vi-en query"
- text: "Proposez au moins cinq mots clรฉs concernant ยซRรฉseau de neurones artificielsยป."
example_title: "fr-fr query"
- text: "Explain in a sentence in Telugu what is backpropagation in neural networks."
example_title: "te-en qa"
- text: "Why is the sky blue?"
example_title: "en-en qa"
- text: "Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is \"Heroes Come in All Shapes and Sizes\". Story (in Spanish):"
example_title: "es-en fable"
- text: "Write a fable about wood elves living in a forest that is suddenly invaded by ogres. The fable is a masterpiece that has achieved praise worldwide and its moral is \"Violence is the last refuge of the incompetent\". Fable (in Hindi):"
example_title: "hi-en fable"
model-index:
- name: bloomz-3b
results:
- task:
type: Coreference resolution
dataset:
type: winogrande
name: Winogrande XL (xl)
config: xl
split: validation
revision: a80f460359d1e9a67c006011c94de42a8759430c
metrics:
- type: Accuracy
value: 53.67
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (en)
config: en
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 59.23
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (fr)
config: fr
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 53.01
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (jp)
config: jp
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 52.45
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (pt)
config: pt
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 53.61
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (ru)
config: ru
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 53.97
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (zh)
config: zh
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 60.91
- task:
type: Natural language inference
dataset:
type: anli
name: ANLI (r1)
config: r1
split: validation
revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
metrics:
- type: Accuracy
value: 40.1
- task:
type: Natural language inference
dataset:
type: anli
name: ANLI (r2)
config: r2
split: validation
revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
metrics:
- type: Accuracy
value: 36.8
- task:
type: Natural language inference
dataset:
type: anli
name: ANLI (r3)
config: r3
split: validation
revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
metrics:
- type: Accuracy
value: 40.0
- task:
type: Natural language inference
dataset:
type: super_glue
name: SuperGLUE (cb)
config: cb
split: validation
revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
metrics:
- type: Accuracy
value: 75.0
- task:
type: Natural language inference
dataset:
type: super_glue
name: SuperGLUE (rte)
config: rte
split: validation
revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
metrics:
- type: Accuracy
value: 76.17
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (ar)
config: ar
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 53.29
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (bg)
config: bg
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 43.82
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (de)
config: de
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 45.26
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (el)
config: el
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 42.61
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (en)
config: en
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 57.31
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (es)
config: es
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 56.14
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (fr)
config: fr
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 55.78
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (hi)
config: hi
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 51.49
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (ru)
config: ru
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 47.11
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (sw)
config: sw
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 47.83
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (th)
config: th
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 42.93
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (tr)
config: tr
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 37.23
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (ur)
config: ur
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 49.04
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (vi)
config: vi
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 53.98
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (zh)
config: zh
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 54.18
- task:
type: Program synthesis
dataset:
type: openai_humaneval
name: HumanEval
config: None
split: test
revision: e8dc562f5de170c54b5481011dd9f4fa04845771
metrics:
- type: Pass@1
value: 6.29
- type: Pass@10
value: 11.94
- type: Pass@100
value: 19.06
- task:
type: Sentence completion
dataset:
type: story_cloze
name: StoryCloze (2016)
config: "2016"
split: validation
revision: e724c6f8cdf7c7a2fb229d862226e15b023ee4db
metrics:
- type: Accuracy
value: 87.33
- task:
type: Sentence completion
dataset:
type: super_glue
name: SuperGLUE (copa)
config: copa
split: validation
revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
metrics:
- type: Accuracy
value: 76.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (et)
config: et
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 53.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (ht)
config: ht
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 64.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (id)
config: id
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 70.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (it)
config: it
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 53.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (qu)
config: qu
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 56.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (sw)
config: sw
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 66.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (ta)
config: ta
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 59.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (th)
config: th
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 63.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (tr)
config: tr
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 61.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (vi)
config: vi
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 77.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (zh)
config: zh
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 73.0
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (ar)
config: ar
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 80.61
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (es)
config: es
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 85.9
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (eu)
config: eu
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 70.95
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (hi)
config: hi
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 78.89
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (id)
config: id
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 82.99
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (my)
config: my
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 49.9
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (ru)
config: ru
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 61.42
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (sw)
config: sw
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 69.69
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (te)
config: te
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 73.66
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (zh)
config: zh
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 84.32
---

# Table of Contents
1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
7. [Citation](#citation)
# Model Summary
> We present BLOOMZ & mT0, a family of models capable of following human instructions in dozens of languages zero-shot. We finetune BLOOM & mT5 pretrained multilingual language models on our crosslingual task mixture (xP3) and find the resulting models capable of crosslingual generalization to unseen tasks & languages.
- **Repository:** [bigscience-workshop/xmtf](https://github.com/bigscience-workshop/xmtf)
- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
- **Point of Contact:** [Niklas Muennighoff](mailto:[email protected])
- **Languages:** Refer to [bloom](https://huggingface.co/bigscience/bloom) for pretraining & [xP3](https://huggingface.co/datasets/bigscience/xP3) for finetuning language proportions. It understands both pretraining & finetuning languages.
- **BLOOMZ & mT0 Model Family:**
<div class="max-w-full overflow-auto">
<table>
<tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3>xP3</a>. Recommended for prompting in English.
</tr>
<tr>
<td>Parameters</td>
<td>300M</td>
<td>580M</td>
<td>1.2B</td>
<td>3.7B</td>
<td>13B</td>
<td>560M</td>
<td>1.1B</td>
<td>1.7B</td>
<td>3B</td>
<td>7.1B</td>
<td>176B</td>
</tr>
<tr>
<td>Finetuned Model</td>
<td><a href=https://huggingface.co/bigscience/mt0-small>mt0-small</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-base>mt0-base</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-large>mt0-large</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-xl>mt0-xl</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-560m>bloomz-560m</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-1b1>bloomz-1b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-1b7>bloomz-1b7</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-3b>bloomz-3b</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1>bloomz-7b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
</tr>
</tr>
<tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a>. Recommended for prompting in non-English.</th>
</tr>
<tr>
<td>Finetuned Model</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1-mt>bloomz-7b1-mt</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a></td>
</tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/Muennighoff/P3>P3</a>. Released for research purposes only. Strictly inferior to above models!</th>
</tr>
<tr>
<td>Finetuned Model</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1-p3>bloomz-7b1-p3</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a></td>
</tr>
<th colspan="12">Original pretrained checkpoints. Not recommended.</th>
<tr>
<td>Pretrained Model</td>
<td><a href=https://huggingface.co/google/mt5-small>mt5-small</a></td>
<td><a href=https://huggingface.co/google/mt5-base>mt5-base</a></td>
<td><a href=https://huggingface.co/google/mt5-large>mt5-large</a></td>
<td><a href=https://huggingface.co/google/mt5-xl>mt5-xl</a></td>
<td><a href=https://huggingface.co/google/mt5-xxl>mt5-xxl</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-560m>bloom-560m</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-1b1>bloom-1b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-1b7>bloom-1b7</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-3b>bloom-3b</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-7b1>bloom-7b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloom>bloom</a></td>
</tr>
</table>
</div>
# Use
## Intended use
We recommend using the model to perform tasks expressed in natural language. For example, given the prompt "*Translate to English: Je t'aime.*", the model will most likely answer "*I love you.*". Some prompt ideas from our paper:
- ไธไธชไผ ๅฅ็ๅผ็ซฏ๏ผไธไธชไธ็ญ็็ฅ่ฏ๏ผ่ฟไธไป
ไป
ๆฏไธ้จ็ตๅฝฑ๏ผ่ๆฏไฝไธบไธไธช่ตฐ่ฟๆฐๆถไปฃ็ๆ ็ญพ๏ผๆฐธ่ฟๅฝช็ณๅฒๅใไฝ ่ฎคไธบ่ฟๅฅ่ฏ็็ซๅบๆฏ่ตๆฌใไธญ็ซ่ฟๆฏๆน่ฏ?
- Suggest at least five related search terms to "Mạng neural nhân tạo".
- Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish):
- Explain in a sentence in Telugu what is backpropagation in neural networks.
**Feel free to share your generations in the Community tab!**
## How to use
### CPU
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigscience/bloomz-3b"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)
inputs = tokenizer.encode("Translate to English: Je t'aime.", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
### GPU
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigscience/bloomz-3b"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto")
inputs = tokenizer.encode("Translate to English: Je t'aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
### GPU in 8bit
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigscience/bloomz-3b"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", load_in_8bit=True)
inputs = tokenizer.encode("Translate to English: Je t'aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
<!-- Necessary for whitespace -->
###
# Limitations
**Prompt Engineering:** Performance may vary depending on the prompt. For BLOOMZ models, we recommend making it very clear when the input stops, to avoid the model trying to continue it. For example, the prompt "*Translate to English: Je t'aime*" without the full stop (.) at the end may result in the model trying to continue the French sentence. Better prompts are e.g. "*Translate to English: Je t'aime.*", "*Translate to English: Je t'aime. Translation:*", or "*What is "Je t'aime." in English?*", where it is clear to the model when it should answer. Furthermore, we recommend providing the model as much context as possible. For example, if you want it to answer in Telugu, then tell the model, e.g. "*Explain in a sentence in Telugu what is backpropagation in neural networks.*".
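To make this concrete, here is a minimal sketch contrasting the two prompt styles above (the completions are illustrative and will vary):
```python
# A minimal sketch of the prompt advice above; outputs are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigscience/bloomz-3b"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)
prompts = [
    "Translate to English: Je t'aime",                # ambiguous: may be continued as French
    "Translate to English: Je t'aime. Translation:",  # the input clearly ends here
]
for prompt in prompts:
    inputs = tokenizer.encode(prompt, return_tensors="pt")
    outputs = model.generate(inputs)
    print(tokenizer.decode(outputs[0]))
```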
# Training
## Model
- **Architecture:** Same as [bloom-3b](https://huggingface.co/bigscience/bloom-3b), also refer to the `config.json` file
- **Finetuning steps:** 2000
- **Finetuning tokens:** 8.39 billion
- **Finetuning layout:** 2x pipeline parallel, 1x tensor parallel, 64x data parallel
- **Precision:** float16
## Hardware
- **CPUs:** AMD CPUs with 512GB memory per node
- **GPUs:** 128 A100 80GB GPUs with 8 GPUs per node (16 nodes) using NVLink 4 inter-gpu connects, 4 OmniPath links
- **Communication:** NCCL-communications network with a fully dedicated subnet
## Software
- **Orchestration:** [Megatron-DeepSpeed](https://github.com/bigscience-workshop/Megatron-DeepSpeed)
- **Optimizer & parallelism:** [DeepSpeed](https://github.com/microsoft/DeepSpeed)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch) (pytorch-1.11 w/ CUDA-11.5)
- **FP16 if applicable:** [apex](https://github.com/NVIDIA/apex)
# Evaluation
We refer to Table 7 from our [paper](https://arxiv.org/abs/2211.01786) & [bigscience/evaluation-results](https://huggingface.co/datasets/bigscience/evaluation-results) for zero-shot results on unseen tasks. The sidebar reports zero-shot performance of the best prompt per dataset config.
# Citation
```bibtex
@article{muennighoff2022crosslingual,
title={Crosslingual generalization through multitask finetuning},
author={Muennighoff, Niklas and Wang, Thomas and Sutawika, Lintang and Roberts, Adam and Biderman, Stella and Scao, Teven Le and Bari, M Saiful and Shen, Sheng and Yong, Zheng-Xin and Schoelkopf, Hailey and others},
journal={arXiv preprint arXiv:2211.01786},
year={2022}
}
``` |
mradermacher/Shark-1-Ogno-9b-passthrough-GGUF | mradermacher | "2024-06-21T14:17:02Z" | 11,710 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"powermove72/Shark-1",
"eren23/OGNO-7b-dpo-truthful",
"en",
"base_model:powermove72/Shark-1-Ogno-9b-passthrough",
"endpoints_compatible",
"region:us"
] | null | "2024-06-21T13:43:53Z" | ---
base_model: powermove72/Shark-1-Ogno-9b-passthrough
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- powermove72/Shark-1
- eren23/OGNO-7b-dpo-truthful
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/powermove72/Shark-1-Ogno-9b-passthrough
<!-- provided-files -->
Weighted/imatrix quants are not available (from me) at this time. If they do not show up within a week or so after the static ones, I have probably not planned for them; feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
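As a quick start, here is a minimal sketch using the `llama-cpp-python` bindings (an assumption, not from the original card); the file name assumes you downloaded the Q4_K_M quant listed below:
```python
# A minimal sketch, assuming the Q4_K_M file from the table below is local.
from llama_cpp import Llama
llm = Llama(model_path="Shark-1-Ogno-9b-passthrough.Q4_K_M.gguf", n_ctx=4096)
out = llm("Q: What is a GGUF file?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```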
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-9b-passthrough-GGUF/resolve/main/Shark-1-Ogno-9b-passthrough.Q2_K.gguf) | Q2_K | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-9b-passthrough-GGUF/resolve/main/Shark-1-Ogno-9b-passthrough.IQ3_XS.gguf) | IQ3_XS | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-9b-passthrough-GGUF/resolve/main/Shark-1-Ogno-9b-passthrough.Q3_K_S.gguf) | Q3_K_S | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-9b-passthrough-GGUF/resolve/main/Shark-1-Ogno-9b-passthrough.IQ3_S.gguf) | IQ3_S | 4.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-9b-passthrough-GGUF/resolve/main/Shark-1-Ogno-9b-passthrough.IQ3_M.gguf) | IQ3_M | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-9b-passthrough-GGUF/resolve/main/Shark-1-Ogno-9b-passthrough.Q3_K_M.gguf) | Q3_K_M | 4.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-9b-passthrough-GGUF/resolve/main/Shark-1-Ogno-9b-passthrough.Q3_K_L.gguf) | Q3_K_L | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-9b-passthrough-GGUF/resolve/main/Shark-1-Ogno-9b-passthrough.IQ4_XS.gguf) | IQ4_XS | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-9b-passthrough-GGUF/resolve/main/Shark-1-Ogno-9b-passthrough.Q4_K_S.gguf) | Q4_K_S | 5.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-9b-passthrough-GGUF/resolve/main/Shark-1-Ogno-9b-passthrough.Q4_K_M.gguf) | Q4_K_M | 5.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-9b-passthrough-GGUF/resolve/main/Shark-1-Ogno-9b-passthrough.Q5_K_S.gguf) | Q5_K_S | 6.3 | |
| [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-9b-passthrough-GGUF/resolve/main/Shark-1-Ogno-9b-passthrough.Q5_K_M.gguf) | Q5_K_M | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-9b-passthrough-GGUF/resolve/main/Shark-1-Ogno-9b-passthrough.Q6_K.gguf) | Q6_K | 7.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-9b-passthrough-GGUF/resolve/main/Shark-1-Ogno-9b-passthrough.Q8_0.gguf) | Q8_0 | 9.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-9b-passthrough-GGUF/resolve/main/Shark-1-Ogno-9b-passthrough.f16.gguf) | f16 | 18.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
timm/vgg19.tv_in1k | timm | "2023-04-25T20:16:16Z" | 11,707 | 2 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1409.1556",
"license:bsd-3-clause",
"region:us"
] | image-classification | "2023-04-25T20:14:08Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: bsd-3-clause
datasets:
- imagenet-1k
---
# Model card for vgg19.tv_in1k
A VGG image classification model. Trained on ImageNet-1k, original torchvision weights.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 143.7
- GMACs: 19.6
- Activations (M): 14.9
- Image size: 224 x 224
- **Papers:**
- Very Deep Convolutional Networks for Large-Scale Image Recognition: https://arxiv.org/abs/1409.1556
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/pytorch/vision
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vgg19.tv_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vgg19.tv_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 224, 224])
# torch.Size([1, 128, 112, 112])
# torch.Size([1, 256, 56, 56])
# torch.Size([1, 512, 28, 28])
# torch.Size([1, 512, 14, 14])
# torch.Size([1, 512, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vgg19.tv_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 512, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{Simonyan2014VeryDC,
title={Very Deep Convolutional Networks for Large-Scale Image Recognition},
author={Karen Simonyan and Andrew Zisserman},
journal={CoRR},
year={2014},
volume={abs/1409.1556}
}
```
|
bhadresh-savani/roberta-base-emotion | bhadresh-savani | "2023-03-22T08:48:07Z" | 11,704 | 2 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"roberta",
"text-classification",
"emotion",
"en",
"dataset:emotion",
"arxiv:1907.11692",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-02T23:29:05Z" | ---
language:
- en
license: apache-2.0
tags:
- text-classification
- emotion
- pytorch
datasets:
- emotion
metrics:
- Accuracy, F1 Score
thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4
model-index:
- name: bhadresh-savani/roberta-base-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: default
split: test
metrics:
- type: accuracy
value: 0.931
name: Accuracy
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjg5OTI4ZTlkY2VmZjYzNGEzZGQ3ZjczYzY5YjJmMGVmZDQ4ZWNiYTAyZTJiZjlmMTU2MjE1NTllMWFhYzU0MiIsInZlcnNpb24iOjF9.dc44cEsbu900M2s64GyVIWKPagBzwI-dPlfvh0NGyJFMGKOcypke9P2ary9fBZITrH3UF6lza3sCh7vWYZFHBQ
- type: precision
value: 0.9168321948556312
name: Precision Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2EzYTcxNTExNGU1MmFiZjE3NGE5MDIyMDU2M2U3OGExOTdjZDE5YWU2NDhmOTJlYWMzY2NkN2U5MmRmZTE0MiIsInZlcnNpb24iOjF9.4U7vJ3ALdUUxySMhVeb4Qa1tSp3wphSIZkRYNMujz-KrOZW8kkcmCde3ioStBg3Qqyf1powYd88uk1R7DuWRBA
- type: precision
value: 0.931
name: Precision Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjhmZGRlYWE5ZTAzMmJiMzlmMWZiM2VlYjdiNzI0NjVmN2M2YzcxM2EzYTg0OTFiZTE1MjVmNzE5NGEzYTg2ZCIsInZlcnNpb24iOjF9.8eCHAK0rlZWnhBNQdh9kcuAeItmDUAgK3KkZ7eC-GyYhi4HT5dZiS6btcC5EjkYVOS4czcjzqxfVz4PuZgtLDQ
- type: precision
value: 0.9357445689014415
name: Precision Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDhhZTdkNzYzMjhjZjc4MTAxNWZiYjgzMjhhNjRiZWRmYjc5YTA0NTQ1MzllMTYxMTVkMDk4OTE0ZGEyMTNhMiIsInZlcnNpb24iOjF9.YIZfj2Eo1nMX2GVSfqJy-Cp7VBubfUh2LuOnU60sG5Lci8FdlNbAanS1IzAyxU3U29lqiTasxfS_yrwAj5cmBQ
- type: recall
value: 0.8743657671177089
name: Recall Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiM2Y2YTcyNzMwYzZiMmM1Yzc4YWZhNDM3ZDQyMjI1NWZhMjQyNmU5NTA0YmE2ZDBiZmY1MmUyZWRlMjRhMjFmYSIsInZlcnNpb24iOjF9.XKlFy_Cx4T4l7Otd8aAwWcI-fJ_dJ6V1Kp3uZm6OWjwCb1Do6mSdPFfwiMeBZZyfEIsNBnguegssZvHsOfTSAQ
- type: recall
value: 0.931
name: Recall Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzgzN2JkNzAzZDRjNjJmZjNkY2RmYzVkMWEzYTMzZDU4NzJlYzBmOWE4MTU0MGU0MTJhM2JjZDdjODhlZDExOCIsInZlcnNpb24iOjF9.9tSVB4yNBdFXpH3equwo1ZaEnVUktO6lm93UEJ-luKhxo6wgS54OLjgDq7IpJYwa3lvYyjy-sxzQEe9ri31WAg
- type: recall
value: 0.931
name: Recall Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGVhZTIyMmVmOTU1YWNjMmZiZjNmOTNlNzlhZTk3NjhlZmMwZGFkZWQxZTlhZWUwZGQyN2JhOWQyNWQ3MTVhOCIsInZlcnNpb24iOjF9.2odv2fK7zH0_S_7wC3obONzjxOipDdjWvddhnGdMnrIN6CiZwLp7XgizpqcWbwAQ_9YJwjC-6wXpbq2jTvN0Bw
- type: f1
value: 0.8821236522209227
name: F1 Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDI0YTUxOTA2M2ZjNGM1OTJlZDAzZTAxNTg4YjY3OWNmMjNmMTk0YWRjZTE2Y2ZmYWI1ZmU3ZmJmNzNjMjBlOCIsInZlcnNpb24iOjF9.P5-TbuEUrCtX9H7F-tKn8LI1RBPhoJwjJm_l853WTSzdLioThAtIK5HBG0xgXT2uB0Q8v94qH2b8cz1j_WonDg
- type: f1
value: 0.931
name: F1 Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjNmNDgyMmFjODYwNjcwOTJiOGM2N2YwYjUyMDk5Yjk2Y2I3NmFmZGFhYjU0NGM2OGUwZmRjNjcxYTU3YzgzNSIsInZlcnNpb24iOjF9.2ZoRJwQWVIcl_Ykxce1MnZ3mSxBGxGeNYFPxt9mivo9yTi3gUE7ua6JRpVEOnOUbevlWxVkUUNnmOPFqBN1sCQ
- type: f1
value: 0.9300782840205046
name: F1 Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGE1OTcxNmNmMjQ3ZDAzYzk0N2Q1MGFjM2VhNWMyYmRjY2E3ZThjODExOTNlNWMxYzdlMWM2MDBiMTZhY2M2OSIsInZlcnNpb24iOjF9.r63SEArCiFB5m0ccV2q_t5uSOtjVnWdz4PfvCYUchm0JlrRC9YAm5oWKeO419wdyFY4rZFe014yv7sRcV-CgBQ
- type: loss
value: 0.15155883133411407
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2M4MmVlNjAzZjhiMWJlNWQxMDg5ZTRiYjFlZGYyMGMyYzU4M2IwY2E1M2E2MzA5NmU5ZjgwZTZmMDI5YjgzMyIsInZlcnNpb24iOjF9.kjgFJohkTxLKtzHJDlBvd6qolGQDSZLbrDE7C07xNGmarhTLc_A3MmLeC4MmQGOl1DxfnHflImIkdqPylyylDA
---
# roberta-base-emotion
## Model description:
[roberta](https://arxiv.org/abs/1907.11692) is BERT pretrained with better design and hyperparameter choices; its authors describe it as a Robustly Optimized BERT pretraining approach.
[roberta-base](https://huggingface.co/roberta-base) was fine-tuned on the emotion dataset using the Hugging Face Trainer with the hyperparameters below:
```
learning rate 2e-5,
batch size 64,
num_train_epochs=8,
```
## Model Performance Comparison on the Emotion Dataset from Twitter:
| Model | Accuracy | F1 Score | Test Samples per Second |
| --- | --- | --- | --- |
| [Distilbert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/distilbert-base-uncased-emotion) | 93.8 | 93.79 | 398.69 |
| [Bert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/bert-base-uncased-emotion) | 94.05 | 94.06 | 190.152 |
| [Roberta-base-emotion](https://huggingface.co/bhadresh-savani/roberta-base-emotion) | 93.95 | 93.97| 195.639 |
| [Albert-base-v2-emotion](https://huggingface.co/bhadresh-savani/albert-base-v2-emotion) | 93.6 | 93.65 | 182.794 |
## How to Use the model:
```python
from transformers import pipeline
classifier = pipeline("text-classification",model='bhadresh-savani/roberta-base-emotion', return_all_scores=True)
prediction = classifier("I love using transformers. The best part is wide range of support and its easy to use", )
print(prediction)
"""
Output:
[[
{'label': 'sadness', 'score': 0.002281982684507966},
{'label': 'joy', 'score': 0.9726489186286926},
{'label': 'love', 'score': 0.021365027874708176},
{'label': 'anger', 'score': 0.0026395076420158148},
{'label': 'fear', 'score': 0.0007162453257478774},
{'label': 'surprise', 'score': 0.0003483477921690792}
]]
"""
```
## Dataset:
[Twitter-Sentiment-Analysis](https://huggingface.co/nlp/viewer/?dataset=emotion).
## Training procedure
[Colab Notebook](https://github.com/bhadreshpsavani/ExploringSentimentalAnalysis/blob/main/SentimentalAnalysisWithDistilbert.ipynb)
Follow the notebook above, changing the model name to roberta.
## Eval results
```json
{
'test_accuracy': 0.9395,
'test_f1': 0.9397328860104454,
'test_loss': 0.14367154240608215,
'test_runtime': 10.2229,
'test_samples_per_second': 195.639,
'test_steps_per_second': 3.13
}
```
## Reference:
* [Natural Language Processing with Transformer By Lewis Tunstall, Leandro von Werra, Thomas Wolf](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/) |
Undi95/MG-FinalMix-72B-GGUF | Undi95 | "2024-06-26T09:30:06Z" | 11,704 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"OG_finetune_merge",
"base_model:Qwen/Qwen2-72B-Instruct",
"base_model:alpindale/magnum-72b-v1",
"endpoints_compatible",
"region:us"
] | null | "2024-06-26T08:56:30Z" | ---
base_model:
- Qwen/Qwen2-72B-Instruct
- alpindale/magnum-72b-v1
library_name: transformers
tags:
- mergekit
- merge
- OG_finetune_merge
---
WIP of a retouched [alpindale/magnum-72b-v1](https://huggingface.co/alpindale/magnum-72b-v1), but I will not use "Magnum" in the name. Call it FinalMix!
I found some issues and am trying to fix them for my own usage, while adding more RP data through merging.
You can create your own quantized files with the [imatrix.dat file](https://huggingface.co/Undi95/MG-FinalMix-72B/blob/main/imatrix.dat), generated with "[wiki.train.raw](https://cosmo.zip/pub/datasets/wikitext-2-raw/)".
Credits to [Alpin](https://huggingface.co/alpindale) and the gang for [magnum-72b-v1](https://huggingface.co/alpindale/magnum-72b-v1), and [Ikari](https://huggingface.co/ikaridev) for his datasets.
### Prompt template ChatML
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
{output}<|im_end|>
```
|
tomaarsen/mpnet-base-nli-matryoshka | tomaarsen | "2024-02-23T11:59:37Z" | 11,699 | 11 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-02-19T08:51:41Z" | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# tomaarsen/mpnet-base-nli-matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('tomaarsen/mpnet-base-nli-matryoshka')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('tomaarsen/mpnet-base-nli-matryoshka')
model = AutoModel.from_pretrained('tomaarsen/mpnet-base-nli-matryoshka')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=tomaarsen/mpnet-base-nli-matryoshka)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8807 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MatryoshkaLoss.MatryoshkaLoss` with parameters:
```
{'loss': 'MultipleNegativesRankingLoss', 'matryoshka_dims': [768, 512, 256, 128, 64], 'matryoshka_weights': [1, 1, 1, 1, 1]}
```
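Because the model was trained with MatryoshkaLoss at the dimensions above, embeddings can be truncated to any of those sizes and re-normalized; a minimal sketch (the chosen dimension is illustrative):
```python
# A sketch of truncating Matryoshka embeddings to one of the trained dims.
import numpy as np
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("tomaarsen/mpnet-base-nli-matryoshka")
emb = model.encode(["This is an example sentence", "Each sentence is converted"])
dim = 256  # any of the matryoshka_dims used during training
emb = emb[:, :dim]
emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)  # re-normalize after truncation
print(emb @ emb.T)  # cosine similarities at the reduced dimension
```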
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 880,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 881,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Fizzarolli/gemma-2-9b-it-GGUF-softcap | Fizzarolli | "2024-06-30T02:18:22Z" | 11,699 | 0 | null | [
"gguf",
"license:gemma",
"region:us"
] | null | "2024-06-30T00:01:02Z" | ---
license: gemma
---
|
aus-396/estopian | aus-396 | "2024-06-23T08:42:18Z" | 11,669 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:lmsys/vicuna-13b-v1.5-16k",
"base_model:KatyTheCutie/EstopianMaid-13B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-23T08:28:27Z" | ---
base_model:
- lmsys/vicuna-13b-v1.5-16k
- KatyTheCutie/EstopianMaid-13B
library_name: transformers
tags:
- mergekit
- merge
---
# estopian
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [lmsys/vicuna-13b-v1.5-16k](https://huggingface.co/lmsys/vicuna-13b-v1.5-16k) as a base.
### Models Merged
The following models were included in the merge:
* [KatyTheCutie/EstopianMaid-13B](https://huggingface.co/KatyTheCutie/EstopianMaid-13B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: lmsys/vicuna-13b-v1.5-16k
dtype: float16
merge_method: dare_ties
models:
- model: lmsys/vicuna-13b-v1.5-16k
# no parameters necessary for the base model
- model: KatyTheCutie/EstopianMaid-13B
parameters:
density: 0.5
weight: 0.5
```
|
mradermacher/huskylm-2.5-8b-i1-GGUF | mradermacher | "2024-06-23T05:31:37Z" | 11,664 | 0 | transformers | [
"transformers",
"gguf",
"llama-3",
"huskylm",
"darkcloudai",
"en",
"dataset:darkcloudai-smallmodel-frontieredition",
"dataset:darkcloudai-webdriver-redditcrawl-2023",
"dataset:darkcloudai-unalignment-truthfulness",
"dataset:darkcloudai-generaldpo",
"dataset:ai2_arc",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"dataset:jondurbin/airoboros-3.2",
"dataset:codeparrot/apps",
"dataset:facebook/belebele",
"dataset:bluemoon-fandom-1-1-rp-cleaned",
"dataset:boolq",
"dataset:camel-ai/biology",
"dataset:camel-ai/chemistry",
"dataset:camel-ai/math",
"dataset:camel-ai/physics",
"dataset:jondurbin/contextual-dpo-v0.1",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:jondurbin/py-dpo-v0.1",
"dataset:jondurbin/truthy-dpo-v0.1",
"dataset:LDJnr/Capybara",
"dataset:jondurbin/cinematika-v0.1",
"dataset:WizardLM/WizardLM_evol_instruct_70k",
"dataset:glaiveai/glaive-function-calling-v2",
"dataset:grimulkan/LimaRP-augmented",
"dataset:lmsys/lmsys-chat-1m",
"dataset:ParisNeo/lollms_aware_dataset",
"dataset:TIGER-Lab/MathInstruct",
"dataset:Muennighoff/natural-instructions",
"dataset:openbookqa",
"dataset:kingbri/PIPPA-shareGPT",
"dataset:piqa",
"dataset:Vezora/Tested-22k-Python-Alpaca",
"dataset:ropes",
"dataset:cakiki/rosetta-code",
"dataset:Open-Orca/SlimOrca",
"dataset:b-mc2/sql-create-context",
"dataset:squad_v2",
"dataset:mattpscott/airoboros-summarization",
"dataset:migtissera/Synthia-v1.3",
"dataset:unalignment/toxic-dpo-v0.2",
"dataset:WhiteRabbitNeo/WRN-Chapter-1",
"dataset:WhiteRabbitNeo/WRN-Chapter-2",
"dataset:winogrande",
"base_model:darkcloudai/huskylm-2.5-8b",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | "2024-06-23T02:31:03Z" | ---
base_model: darkcloudai/huskylm-2.5-8b
datasets:
- darkcloudai-smallmodel-frontieredition
- darkcloudai-webdriver-redditcrawl-2023
- darkcloudai-unalignment-truthfulness
- darkcloudai-generaldpo
- ai2_arc
- allenai/ultrafeedback_binarized_cleaned
- argilla/distilabel-intel-orca-dpo-pairs
- jondurbin/airoboros-3.2
- codeparrot/apps
- facebook/belebele
- bluemoon-fandom-1-1-rp-cleaned
- boolq
- camel-ai/biology
- camel-ai/chemistry
- camel-ai/math
- camel-ai/physics
- jondurbin/contextual-dpo-v0.1
- jondurbin/gutenberg-dpo-v0.1
- jondurbin/py-dpo-v0.1
- jondurbin/truthy-dpo-v0.1
- LDJnr/Capybara
- jondurbin/cinematika-v0.1
- WizardLM/WizardLM_evol_instruct_70k
- glaiveai/glaive-function-calling-v2
- grimulkan/LimaRP-augmented
- lmsys/lmsys-chat-1m
- ParisNeo/lollms_aware_dataset
- TIGER-Lab/MathInstruct
- Muennighoff/natural-instructions
- openbookqa
- kingbri/PIPPA-shareGPT
- piqa
- Vezora/Tested-22k-Python-Alpaca
- ropes
- cakiki/rosetta-code
- Open-Orca/SlimOrca
- b-mc2/sql-create-context
- squad_v2
- mattpscott/airoboros-summarization
- migtissera/Synthia-v1.3
- unalignment/toxic-dpo-v0.2
- WhiteRabbitNeo/WRN-Chapter-1
- WhiteRabbitNeo/WRN-Chapter-2
- winogrande
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- llama-3
- huskylm
- darkcloudai
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/darkcloudai/huskylm-2.5-8b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/huskylm-2.5-8b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/huskylm-2.5-8b-i1-GGUF/resolve/main/huskylm-2.5-8b.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/huskylm-2.5-8b-i1-GGUF/resolve/main/huskylm-2.5-8b.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/huskylm-2.5-8b-i1-GGUF/resolve/main/huskylm-2.5-8b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/huskylm-2.5-8b-i1-GGUF/resolve/main/huskylm-2.5-8b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/huskylm-2.5-8b-i1-GGUF/resolve/main/huskylm-2.5-8b.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/huskylm-2.5-8b-i1-GGUF/resolve/main/huskylm-2.5-8b.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/huskylm-2.5-8b-i1-GGUF/resolve/main/huskylm-2.5-8b.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/huskylm-2.5-8b-i1-GGUF/resolve/main/huskylm-2.5-8b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/huskylm-2.5-8b-i1-GGUF/resolve/main/huskylm-2.5-8b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/huskylm-2.5-8b-i1-GGUF/resolve/main/huskylm-2.5-8b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/huskylm-2.5-8b-i1-GGUF/resolve/main/huskylm-2.5-8b.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/huskylm-2.5-8b-i1-GGUF/resolve/main/huskylm-2.5-8b.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/huskylm-2.5-8b-i1-GGUF/resolve/main/huskylm-2.5-8b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/huskylm-2.5-8b-i1-GGUF/resolve/main/huskylm-2.5-8b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/huskylm-2.5-8b-i1-GGUF/resolve/main/huskylm-2.5-8b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/huskylm-2.5-8b-i1-GGUF/resolve/main/huskylm-2.5-8b.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/huskylm-2.5-8b-i1-GGUF/resolve/main/huskylm-2.5-8b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/huskylm-2.5-8b-i1-GGUF/resolve/main/huskylm-2.5-8b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/huskylm-2.5-8b-i1-GGUF/resolve/main/huskylm-2.5-8b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/huskylm-2.5-8b-i1-GGUF/resolve/main/huskylm-2.5-8b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/huskylm-2.5-8b-i1-GGUF/resolve/main/huskylm-2.5-8b.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
mradermacher/Mistral-7B-SOAP-GGUF | mradermacher | "2024-06-24T20:53:39Z" | 11,660 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:ChenmieNLP/Mistral-7B-SOAP",
"endpoints_compatible",
"region:us"
] | null | "2024-06-24T20:26:19Z" | ---
base_model: ChenmieNLP/Mistral-7B-SOAP
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ChenmieNLP/Mistral-7B-SOAP
<!-- provided-files -->
Weighted/imatrix quants are not available (from me) at this time. If they do not show up within a week or so after the static ones, I have probably not planned for them; feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-SOAP-GGUF/resolve/main/Mistral-7B-SOAP.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-SOAP-GGUF/resolve/main/Mistral-7B-SOAP.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-SOAP-GGUF/resolve/main/Mistral-7B-SOAP.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-SOAP-GGUF/resolve/main/Mistral-7B-SOAP.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-SOAP-GGUF/resolve/main/Mistral-7B-SOAP.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-SOAP-GGUF/resolve/main/Mistral-7B-SOAP.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-SOAP-GGUF/resolve/main/Mistral-7B-SOAP.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-SOAP-GGUF/resolve/main/Mistral-7B-SOAP.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-SOAP-GGUF/resolve/main/Mistral-7B-SOAP.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-SOAP-GGUF/resolve/main/Mistral-7B-SOAP.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-SOAP-GGUF/resolve/main/Mistral-7B-SOAP.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-SOAP-GGUF/resolve/main/Mistral-7B-SOAP.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-SOAP-GGUF/resolve/main/Mistral-7B-SOAP.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-SOAP-GGUF/resolve/main/Mistral-7B-SOAP.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-SOAP-GGUF/resolve/main/Mistral-7B-SOAP.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
intfloat/e5-large-unsupervised | intfloat | "2023-07-27T05:02:57Z" | 11,634 | 1 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"safetensors",
"bert",
"Sentence Transformers",
"sentence-similarity",
"en",
"arxiv:2212.03533",
"arxiv:2104.08663",
"arxiv:2210.07316",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | "2023-01-31T03:03:36Z" | ---
tags:
- Sentence Transformers
- sentence-similarity
- sentence-transformers
language:
- en
license: mit
---
# E5-large-unsupervised
**This model is similar to [e5-large](https://huggingface.co/intfloat/e5-large) but without supervised fine-tuning.**
[Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf).
Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022
This model has 24 layers and the embedding size is 1024.
## Usage
Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset.
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def average_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
# Each input text should start with "query: " or "passage: ".
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = ['query: how much protein should a female eat',
'query: summit define',
"passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."]
tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-large-unsupervised')
model = AutoModel.from_pretrained('intfloat/e5-large-unsupervised')
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
## Training Details
Please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf).
## Benchmark Evaluation
Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results
on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316).
## Support for Sentence Transformers
Below is an example for usage with sentence_transformers.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('intfloat/e5-large-unsupervised')
input_texts = [
'query: how much protein should a female eat',
'query: summit define',
"passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]
embeddings = model.encode(input_texts, normalize_embeddings=True)
```
Package requirements
`pip install sentence_transformers~=2.2.2`
Contributors: [michaelfeil](https://huggingface.co/michaelfeil)
## FAQ
**1. Do I need to add the prefix "query: " and "passage: " to input texts?**
Yes, this is how the model is trained, otherwise you will see a performance degradation.
Here are some rules of thumb:
- Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA, ad-hoc information retrieval.
- Use "query: " prefix for symmetric tasks such as semantic similarity, paraphrase retrieval.
- Use "query: " prefix if you want to use embeddings as features, such as linear probing classification, clustering.
**2. Why are my reproduced results slightly different from reported in the model card?**
Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences.
## Citation
If you find our paper or models helpful, please consider citing as follows:
```
@article{wang2022text,
title={Text Embeddings by Weakly-Supervised Contrastive Pre-training},
author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu},
journal={arXiv preprint arXiv:2212.03533},
year={2022}
}
```
## Limitations
This model only works for English texts. Long texts will be truncated to at most 512 tokens.
|
Orenguteng/Llama-3-8B-Lexi-Uncensored | Orenguteng | "2024-05-27T06:16:40Z" | 11,633 | 131 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"uncensored",
"llama3",
"instruct",
"open",
"conversational",
"license:llama3",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-23T21:14:40Z" | ---
license: llama3
tags:
- uncensored
- llama3
- instruct
- open
model-index:
- name: Llama-3-8B-Lexi-Uncensored
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 59.56
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Orenguteng/Llama-3-8B-Lexi-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 77.88
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Orenguteng/Llama-3-8B-Lexi-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 67.68
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Orenguteng/Llama-3-8B-Lexi-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 47.72
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Orenguteng/Llama-3-8B-Lexi-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 75.85
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Orenguteng/Llama-3-8B-Lexi-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 68.39
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Orenguteng/Llama-3-8B-Lexi-Uncensored
name: Open LLM Leaderboard
---

This model is based on Llama-3-8b-Instruct, and is governed by [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://llama.meta.com/llama3/license/)
Lexi is uncensored, which makes the model compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones.
You are responsible for any content you create using this model. Please use it responsibly.
Lexi is licensed according to Meta's Llama license. I grant permission for any use, including commercial, that is in accordance with Meta's Llama 3 license.
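A minimal usage sketch (not part of the original card); the model follows the Llama-3 Instruct chat format, so the tokenizer's chat template applies:
```python
# A sketch only; dtype and device settings are illustrative assumptions.
import torch
from transformers import pipeline
pipe = pipeline(
    "text-generation",
    model="Orenguteng/Llama-3-8B-Lexi-Uncensored",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [{"role": "user", "content": "Explain in one sentence what an alignment layer is."}]
out = pipe(messages, max_new_tokens=128)
print(out[0]["generated_text"][-1]["content"])
```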
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Orenguteng__Llama-3-8B-Lexi-Uncensored)
| Metric |Value|
|---------------------------------|----:|
|Avg. |66.18|
|AI2 Reasoning Challenge (25-Shot)|59.56|
|HellaSwag (10-Shot) |77.88|
|MMLU (5-Shot) |67.68|
|TruthfulQA (0-shot) |47.72|
|Winogrande (5-shot) |75.85|
|GSM8k (5-shot) |68.39|
|
digiplay/AbsoluteReality_v1.0_diffusers | digiplay | "2023-11-07T18:49:55Z" | 11,630 | 2 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-06-02T08:14:56Z" | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/81458/absolutereality
|
mradermacher/Prox-Llama-3-8B-i1-GGUF | mradermacher | "2024-06-21T09:49:49Z" | 11,630 | 0 | transformers | [
"transformers",
"gguf",
"code",
"cybersecurity",
"penetration testing",
"hacking",
"uncensored",
"unsloth",
"en",
"base_model:openvoid/Prox-Llama-3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-21T05:08:28Z" | ---
base_model: openvoid/Prox-Llama-3-8B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- code
- cybersecurity
- penetration testing
- hacking
- code
- uncensored
- unsloth
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/openvoid/Prox-Llama-3-8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Prox-Llama-3-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Prox-Llama-3-8B-i1-GGUF/resolve/main/Prox-Llama-3-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Prox-Llama-3-8B-i1-GGUF/resolve/main/Prox-Llama-3-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Prox-Llama-3-8B-i1-GGUF/resolve/main/Prox-Llama-3-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Prox-Llama-3-8B-i1-GGUF/resolve/main/Prox-Llama-3-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Prox-Llama-3-8B-i1-GGUF/resolve/main/Prox-Llama-3-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Prox-Llama-3-8B-i1-GGUF/resolve/main/Prox-Llama-3-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Prox-Llama-3-8B-i1-GGUF/resolve/main/Prox-Llama-3-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Prox-Llama-3-8B-i1-GGUF/resolve/main/Prox-Llama-3-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Prox-Llama-3-8B-i1-GGUF/resolve/main/Prox-Llama-3-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Prox-Llama-3-8B-i1-GGUF/resolve/main/Prox-Llama-3-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Prox-Llama-3-8B-i1-GGUF/resolve/main/Prox-Llama-3-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Prox-Llama-3-8B-i1-GGUF/resolve/main/Prox-Llama-3-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Prox-Llama-3-8B-i1-GGUF/resolve/main/Prox-Llama-3-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Prox-Llama-3-8B-i1-GGUF/resolve/main/Prox-Llama-3-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Prox-Llama-3-8B-i1-GGUF/resolve/main/Prox-Llama-3-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Prox-Llama-3-8B-i1-GGUF/resolve/main/Prox-Llama-3-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Prox-Llama-3-8B-i1-GGUF/resolve/main/Prox-Llama-3-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Prox-Llama-3-8B-i1-GGUF/resolve/main/Prox-Llama-3-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Prox-Llama-3-8B-i1-GGUF/resolve/main/Prox-Llama-3-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Prox-Llama-3-8B-i1-GGUF/resolve/main/Prox-Llama-3-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Prox-Llama-3-8B-i1-GGUF/resolve/main/Prox-Llama-3-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
protectai/vishnun-codenlbert-sm-onnx | protectai | "2024-04-26T06:13:10Z" | 11,620 | 1 | transformers | [
"transformers",
"onnx",
"bert",
"text-classification",
"base_model:vishnun/codenlbert-sm",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-classification | "2024-04-11T14:16:33Z" | ---
inference: false
pipeline_tag: text-classification
license: apache-2.0
base_model: vishnun/codenlbert-sm
---
# ONNX version of vishnun/codenlbert-sm
**This model is a conversion of [vishnun/codenlbert-sm](https://huggingface.co/vishnun/codenlbert-sm) to ONNX** format using the [๐ค Optimum](https://huggingface.co/docs/optimum/index) library.
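A minimal usage sketch (an assumption based on the standard Optimum ONNX Runtime API, not taken from the original card):
```python
# A sketch; the example input reflects the base model's code-vs-natural-language task.
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline
model_id = "protectai/vishnun-codenlbert-sm-onnx"
model = ORTModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("def add(a, b): return a + b"))
```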
|
defog/llama-3-sqlcoder-8b | defog | "2024-05-13T09:03:54Z" | 11,619 | 108 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"conversational",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-09T15:58:49Z" | ---
license: cc-by-sa-4.0
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- code
---
A capable language model for text-to-SQL generation for Postgres, Redshift and Snowflake that is on par with the most capable generalist frontier models.

## Model Description
Developed by: Defog, Inc
Model type: Text to SQL
License: CC-BY-SA-4.0
Finetuned from model: Meta-Llama-3-8B-Instruct
## Demo Page
[https://defog.ai/sqlcoder-demo/](https://defog.ai/sqlcoder-demo/)
## Ideal prompt and inference parameters
Set the temperature to 0 and do not sample (use greedy decoding).
### Prompt
```
<|begin_of_text|><|start_header_id|>user<|end_header_id|>
Generate a SQL query to answer this question: `{user_question}`
{instructions}
DDL statements:
{create_table_statements}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
The following SQL query best answers the question `{user_question}`:
```sql
```
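A minimal greedy-decoding sketch using this template (the question, DDL, and loading options are illustrative placeholders):
```python
# A sketch of non-sampling generation with the prompt template above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "defog/llama-3-sqlcoder-8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
question = "How many users signed up in 2023?"
ddl = "CREATE TABLE users (id int, created_at date);"
fence = "`" * 3  # the template ends with an opening sql code fence
prompt = (
    "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
    f"Generate a SQL query to answer this question: `{question}`\n\n"
    f"DDL statements:\n{ddl}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    f"The following SQL query best answers the question `{question}`:\n{fence}sql\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```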
## Evaluation
This model was evaluated on SQL-Eval, a PostgreSQL-based evaluation framework developed by Defog for testing and alignment of model capabilities.
You can read more about the methodology behind SQLEval [here](https://defog.ai/blog/open-sourcing-sqleval/).
## Contact
Contact us on X at [@defogdata](https://twitter.com/defogdata), or on email at [email protected] |
BAAI/bge-base-zh-v1.5 | BAAI | "2023-10-12T03:35:51Z" | 11,615 | 50 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"zh",
"arxiv:2310.07554",
"arxiv:2309.07597",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | feature-extraction | "2023-09-12T05:21:53Z" | ---
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
license: mit
language:
- zh
---
<h1 align="center">FlagEmbedding</h1>
<h4 align="center">
<p>
<a href=#model-list>Model List</a> |
<a href=#frequently-asked-questions>FAQ</a> |
<a href=#usage>Usage</a> |
<a href="#evaluation">Evaluation</a> |
<a href="#train">Train</a> |
<a href="#contact">Contact</a> |
<a href="#citation">Citation</a> |
<a href="#license">License</a>
<p>
</h4>
More details please refer to our Github: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding).
[English](README.md) | [ไธญๆ](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md)
FlagEmbedding can map any text to a low-dimensional dense vector which can be used for tasks like retrieval, classification, clustering, or semantic search.
And it also can be used in vector databases for LLMs.
************* ๐**Updates**๐ *************
- 10/12/2023: Release [LLM-Embedder](./FlagEmbedding/llm_embedder/README.md), a unified embedding model to support diverse retrieval augmentation needs for LLMs. [Paper](https://arxiv.org/pdf/2310.07554.pdf) :fire:
- 09/15/2023: The [technical report](https://arxiv.org/pdf/2309.07597.pdf) of BGE has been released
- 09/15/2023: The [masive training data](https://data.baai.ac.cn/details/BAAI-MTP) of BGE has been released
- 09/12/2023: New models:
- **New reranker model**: release cross-encoder models `BAAI/bge-reranker-base` and `BAAI/bge-reranker-large`, which are more powerful than embedding model. We recommend to use/fine-tune them to re-rank top-k documents returned by embedding models.
- **update embedding model**: release `bge-*-v1.5` embedding model to alleviate the issue of the similarity distribution, and enhance its retrieval ability without instruction.
<details>
<summary>More</summary>
<!-- ### More -->
- 09/07/2023: Update [fine-tune code](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md): Add script to mine hard negatives and support adding instruction during fine-tuning.
- 08/09/2023: BGE Models are integrated into **Langchain**, you can use it like [this](#using-langchain); C-MTEB **leaderboard** is [available](https://huggingface.co/spaces/mteb/leaderboard).
- 08/05/2023: Release base-scale and small-scale models, **best performance among the models of the same size ๐ค**
- 08/02/2023: Release `bge-large-*`(short for BAAI General Embedding) Models, **rank 1st on MTEB and C-MTEB benchmark!** :tada: :tada:
- 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test dataset.
</details>
## Model List
`bge` is short for `BAAI general embedding`.
| Model | Language | | Description | query instruction for retrieval [1] |
|:-------------------------------|:--------:| :--------:| :--------:|:--------:|
| [BAAI/llm-embedder](https://huggingface.co/BAAI/llm-embedder) | English | [Inference](./FlagEmbedding/llm_embedder/README.md) [Fine-tune](./FlagEmbedding/llm_embedder/README.md) | a unified embedding model to support diverse retrieval augmentation needs for LLMs | See [README](./FlagEmbedding/llm_embedder/README.md) |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh-v1.5](https://huggingface.co/BAAI/bge-large-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章：` |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章：` |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章：` |
| [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-en` | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) |a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `为这个句子生成表示以用于检索相关文章：` |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-zh` | `为这个句子生成表示以用于检索相关文章：` |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章：` |
[1\]: If you need to search for passages relevant to a query, we suggest adding the instruction to the query; in other cases, no instruction is needed and you can use the original query directly. In all cases, **no instruction** needs to be added to passages.
[2\]: Different from the embedding model, the reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding. To balance accuracy and time cost, a cross-encoder is widely used to re-rank the top-k documents retrieved by simpler models.
For example, use the bge embedding model to retrieve the top 100 relevant documents, and then use the bge reranker to re-rank those 100 documents to get the final top-3 results.
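For illustration, here is a minimal retrieve-then-rerank sketch along these lines; the corpus, query, and candidate sizes are placeholders, not a prescribed setup:

```python
import numpy as np
from FlagEmbedding import FlagModel, FlagReranker

corpus = ["样例文档-1", "样例文档-2"]  # your document collection
query = "query_1"

# Stage 1: bi-encoder retrieval with the embedding model
embedder = FlagModel('BAAI/bge-base-zh-v1.5',
                     query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章：")
q_emb = embedder.encode_queries([query])
p_emb = embedder.encode(corpus)
scores = (q_emb @ p_emb.T)[0]
candidates = np.argsort(-scores)[:100]  # keep the top-100 candidates

# Stage 2: cross-encoder re-ranking of the candidates
reranker = FlagReranker('BAAI/bge-reranker-base')
rerank_scores = np.array(reranker.compute_score([[query, corpus[i]] for i in candidates]))
top3 = [corpus[i] for i in candidates[np.argsort(-rerank_scores)][:3]]
print(top3)
```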
All models have been uploaded to the Huggingface Hub; you can see them at https://huggingface.co/BAAI.
If you cannot open the Huggingface Hub, you can also download the models at https://model.baai.ac.cn/models.
## Frequently asked questions
<details>
<summary>1. How to fine-tune bge embedding model?</summary>
<!-- ### How to fine-tune bge embedding model? -->
Following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) to prepare data and fine-tune your model.
Some suggestions:
- Mine hard negatives following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune#hard-negatives), which can improve the retrieval performance.
- If you pre-train bge on your own data, the pre-trained model cannot be used to calculate similarity directly; it must first be fine-tuned with contrastive learning.
- If the accuracy of the fine-tuned model is still not high, we recommend using/fine-tuning the cross-encoder model (bge-reranker) to re-rank the top-k results. Hard negatives are also needed to fine-tune the reranker.
</details>
<details>
<summary>2. The similarity score between two dissimilar sentences is higher than 0.5</summary>
<!-- ### The similarity score between two dissimilar sentences is higher than 0.5 -->
**We suggest using bge v1.5, which alleviates the issue of the similarity distribution.**
Since we fine-tune the models by contrastive learning with a temperature of 0.01,
the similarity distribution of the current BGE model roughly falls in the interval \[0.6, 1\].
So a similarity score greater than 0.5 does not indicate that the two sentences are similar.
For downstream tasks, such as passage retrieval or semantic similarity,
**what matters is the relative order of the scores, not the absolute value.**
If you need to filter similar sentences based on a similarity threshold,
please select an appropriate similarity threshold based on the similarity distribution on your data (such as 0.8, 0.85, or even 0.9).
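As a minimal sketch of such filtering (the 0.8 threshold is a placeholder; calibrate it on your own score distribution):

```python
import numpy as np

# cosine scores from normalized embeddings, e.g. embeddings_1 @ embeddings_2.T
scores = np.array([0.62, 0.71, 0.86, 0.93])
threshold = 0.8  # placeholder; choose it from your own data

print(scores >= threshold)  # [False False  True  True]
```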
</details>
<details>
<summary>3. When does the query instruction need to be used</summary>
<!-- ### When does the query instruction need to be used -->
For `bge-*-v1.5`, we improved its retrieval ability when no instruction is used.
Using no instruction causes only a slight degradation in retrieval performance compared with using one.
So for convenience, you can generate embeddings without an instruction in all cases.
For a retrieval task that uses short queries to find long related documents,
it is recommended to add instructions for these short queries.
**The best method to decide whether to add instructions for queries is choosing the setting that achieves better performance on your task.**
In all cases, no instruction needs to be added to the documents/passages.
</details>
## Usage
### Usage for Embedding Model
Here are some examples for using `bge` models with
[FlagEmbedding](#using-flagembedding), [Sentence-Transformers](#using-sentence-transformers), [Langchain](#using-langchain), or [Huggingface Transformers](#using-huggingface-transformers).
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
If this doesn't work for you, see [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md) for other ways to install FlagEmbedding.
```python
from FlagEmbedding import FlagModel
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = FlagModel('BAAI/bge-large-zh-v1.5',
                  query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章：",
                  use_fp16=True)  # Setting use_fp16 to True speeds up computation with a slight performance degradation
embeddings_1 = model.encode(sentences_1)
embeddings_2 = model.encode(sentences_2)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
# for s2p(short query to long passage) retrieval task, suggest to use encode_queries() which will automatically add the instruction to each query
# corpus in retrieval task can still use encode() or encode_corpus(), since they don't need instruction
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
q_embeddings = model.encode_queries(queries)
p_embeddings = model.encode(passages)
scores = q_embeddings @ p_embeddings.T
```
For the value of the argument `query_instruction_for_retrieval`, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list).
By default, FlagModel will use all available GPUs when encoding. Please set `os.environ["CUDA_VISIBLE_DEVICES"]` to select specific GPUs.
You can also set `os.environ["CUDA_VISIBLE_DEVICES"]=""` to make all GPUs unavailable.
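For example, a short sketch of pinning the encoder to a single GPU; the variable must be set before CUDA is initialized, i.e. before the model is created:

```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # use only GPU 0; "" would hide all GPUs

from FlagEmbedding import FlagModel
model = FlagModel('BAAI/bge-base-zh-v1.5', use_fp16=True)
embeddings = model.encode(["样例数据-1"])
```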
#### Using Sentence-Transformers
You can also use the `bge` models with [sentence-transformers](https://www.SBERT.net):
```
pip install -U sentence-transformers
```
```python
from sentence_transformers import SentenceTransformer
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
embeddings_1 = model.encode(sentences_1, normalize_embeddings=True)
embeddings_2 = model.encode(sentences_2, normalize_embeddings=True)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
```
For the s2p (short query to long passage) retrieval task,
each short query should start with an instruction (see the [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list) for the instructions).
The instruction is not needed for passages.
```python
from sentence_transformers import SentenceTransformer
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
instruction = "为这个句子生成表示以用于检索相关文章："
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
q_embeddings = model.encode([instruction+q for q in queries], normalize_embeddings=True)
p_embeddings = model.encode(passages, normalize_embeddings=True)
scores = q_embeddings @ p_embeddings.T
```
#### Using Langchain
You can use `bge` in langchain like this:
```python
from langchain.embeddings import HuggingFaceBgeEmbeddings
model_name = "BAAI/bge-large-en-v1.5"
model_kwargs = {'device': 'cuda'}
encode_kwargs = {'normalize_embeddings': True} # set True to compute cosine similarity
model = HuggingFaceBgeEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs,
    query_instruction="为这个句子生成表示以用于检索相关文章："
)
model.query_instruction = "为这个句子生成表示以用于检索相关文章："
```
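A short usage sketch of the resulting object, assuming LangChain's standard `Embeddings` interface (where `embed_query` prepends the query instruction and `embed_documents` does not):

```python
query_embedding = model.embed_query("what is panda?")
doc_embeddings = model.embed_documents(["doc 1", "doc 2"])
print(len(query_embedding), len(doc_embeddings))
```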
#### Using HuggingFace Transformers
With the transformers package, you can use the model like this: First, you pass your input through the transformer model, then you select the last hidden state of the first token (i.e., [CLS]) as the sentence embedding.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Sentences we want sentence embeddings for
sentences = ["样例数据-1", "样例数据-2"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-zh-v1.5')
model = AutoModel.from_pretrained('BAAI/bge-large-zh-v1.5')
model.eval()
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# for s2p(short query to long passage) retrieval task, add an instruction to query (not add instruction for passages)
# encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = model_output[0][:, 0]
# normalize embeddings
sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:", sentence_embeddings)
```
### Usage for Reranker
Different from the embedding model, the reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding.
You can get a relevance score by feeding a query and a passage to the reranker.
The reranker is optimized with a cross-entropy loss, so the relevance score is not bounded to a specific range.
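If a bounded score is more convenient for downstream thresholding, one common (illustrative) option is to squash the raw logit with a sigmoid:

```python
import torch

raw_score = 5.76  # example unbounded reranker output (a logit)
normalized = torch.sigmoid(torch.tensor(raw_score)).item()
print(normalized)  # ~0.997, now in [0, 1]
```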
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
Get relevance scores (higher scores indicate more relevance):
```python
from FlagEmbedding import FlagReranker
reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
score = reranker.compute_score(['query', 'passage'])
print(score)
scores = reranker.compute_score([['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']])
print(scores)
```
#### Using HuggingFace Transformers
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-large')
model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-large')
model.eval()
pairs = [['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']]
with torch.no_grad():
inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512)
scores = model(**inputs, return_dict=True).logits.view(-1, ).float()
print(scores)
```
## Evaluation
`baai-general-embedding` models achieve **state-of-the-art performance on both the MTEB and C-MTEB leaderboards!**
For more details and evaluation tools, see our [scripts](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md).
- **MTEB**:
| Model Name | Dimension | Sequence Length | Average (56) | Retrieval (15) |Clustering (11) | Pair Classification (3) | Reranking (4) | STS (10) | Summarization (1) | Classification (12) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | **64.23** | **54.29** | 46.08 | 87.12 | 60.03 | 83.11 | 31.61 | 75.97 |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | 63.55 | 53.25 | 45.77 | 86.55 | 58.86 | 82.4 | 31.07 | 75.53 |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | 62.17 |51.68 | 43.82 | 84.92 | 58.36 | 81.59 | 30.12 | 74.14 |
| [bge-large-en](https://huggingface.co/BAAI/bge-large-en) | 1024 | 512 | 63.98 | 53.9 | 46.98 | 85.8 | 59.48 | 81.56 | 32.06 | 76.21 |
| [bge-base-en](https://huggingface.co/BAAI/bge-base-en) | 768 | 512 | 63.36 | 53.0 | 46.32 | 85.86 | 58.7 | 81.84 | 29.27 | 75.27 |
| [gte-large](https://huggingface.co/thenlper/gte-large) | 1024 | 512 | 63.13 | 52.22 | 46.84 | 85.00 | 59.13 | 83.35 | 31.66 | 73.33 |
| [gte-base](https://huggingface.co/thenlper/gte-base) | 768 | 512 | 62.39 | 51.14 | 46.2 | 84.57 | 58.61 | 82.3 | 31.17 | 73.01 |
| [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1024| 512 | 62.25 | 50.56 | 44.49 | 86.03 | 56.61 | 82.05 | 30.19 | 75.24 |
| [bge-small-en](https://huggingface.co/BAAI/bge-small-en) | 384 | 512 | 62.11 | 51.82 | 44.31 | 83.78 | 57.97 | 80.72 | 30.53 | 74.37 |
| [instructor-xl](https://huggingface.co/hkunlp/instructor-xl) | 768 | 512 | 61.79 | 49.26 | 44.74 | 86.62 | 57.29 | 83.06 | 32.32 | 61.79 |
| [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 768 | 512 | 61.5 | 50.29 | 43.80 | 85.73 | 55.91 | 81.05 | 30.28 | 73.84 |
| [gte-small](https://huggingface.co/thenlper/gte-small) | 384 | 512 | 61.36 | 49.46 | 44.89 | 83.54 | 57.7 | 82.07 | 30.42 | 72.31 |
| [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 1536 | 8192 | 60.99 | 49.25 | 45.9 | 84.89 | 56.32 | 80.97 | 30.8 | 70.93 |
| [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 384 | 512 | 59.93 | 49.04 | 39.92 | 84.67 | 54.32 | 80.39 | 31.16 | 72.94 |
| [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 768 | 512 | 59.51 | 42.24 | 43.72 | 85.06 | 56.42 | 82.63 | 30.08 | 73.42 |
| [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 768 | 514 | 57.78 | 43.81 | 43.69 | 83.04 | 59.36 | 80.28 | 27.49 | 65.07 |
| [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 4096 | 2048 | 57.59 | 48.22 | 38.93 | 81.9 | 55.65 | 77.74 | 33.6 | 66.19 |
- **C-MTEB**:
We created the C-MTEB benchmark for Chinese text embeddings; it consists of 31 datasets across 6 tasks.
Please refer to [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md) for a detailed introduction.
| Model | Embedding dimension | Avg | Retrieval | STS | PairClassification | Classification | Reranking | Clustering |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| [**BAAI/bge-large-zh-v1.5**](https://huggingface.co/BAAI/bge-large-zh-v1.5) | 1024 | **64.53** | 70.46 | 56.25 | 81.6 | 69.13 | 65.84 | 48.99 |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | 768 | 63.13 | 69.49 | 53.72 | 79.75 | 68.07 | 65.39 | 47.53 |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | 512 | 57.82 | 61.77 | 49.11 | 70.41 | 63.96 | 60.92 | 44.18 |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | 1024 | 64.20 | 71.53 | 54.98 | 78.94 | 68.32 | 65.11 | 48.39 |
| [bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | 1024 | 63.53 | 70.55 | 53 | 76.77 | 68.58 | 64.91 | 50.01 |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | 768 | 62.96 | 69.53 | 54.12 | 77.5 | 67.07 | 64.91 | 47.63 |
| [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 1024 | 58.79 | 63.66 | 48.44 | 69.89 | 67.34 | 56.00 | 48.23 |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | 512 | 58.27 | 63.07 | 49.45 | 70.35 | 63.64 | 61.48 | 45.09 |
| [m3e-base](https://huggingface.co/moka-ai/m3e-base) | 768 | 57.10 | 56.91 | 50.47 | 63.99 | 67.52 | 59.34 | 47.68 |
| [m3e-large](https://huggingface.co/moka-ai/m3e-large) | 1024 | 57.05 | 54.75 | 50.42 | 64.3 | 68.2 | 59.66 | 48.88 |
| [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 768 | 55.48 | 61.63 | 46.49 | 67.07 | 65.35 | 54.35 | 40.68 |
| [multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 384 | 55.38 | 59.95 | 45.27 | 66.45 | 65.85 | 53.86 | 45.26 |
| [text-embedding-ada-002(OpenAI)](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) | 1536 | 53.02 | 52.0 | 43.35 | 69.56 | 64.31 | 54.28 | 45.68 |
| [luotuo](https://huggingface.co/silk-road/luotuo-bert-medium) | 1024 | 49.37 | 44.4 | 42.78 | 66.62 | 61 | 49.25 | 44.39 |
| [text2vec-base](https://huggingface.co/shibing624/text2vec-base-chinese) | 768 | 47.63 | 38.79 | 43.41 | 67.41 | 62.19 | 49.45 | 37.66 |
| [text2vec-large](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 1024 | 47.36 | 41.94 | 44.97 | 70.86 | 60.66 | 49.16 | 30.02 |
- **Reranking**:
See [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/) for evaluation script.
| Model | T2Reranking | T2RerankingZh2En\* | T2RerankingEn2Zh\* | MMarcoReranking | CMedQAv1 | CMedQAv2 | Avg |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| text2vec-base-multilingual | 64.66 | 62.94 | 62.51 | 14.37 | 48.46 | 48.6 | 50.26 |
| multilingual-e5-small | 65.62 | 60.94 | 56.41 | 29.91 | 67.26 | 66.54 | 57.78 |
| multilingual-e5-large | 64.55 | 61.61 | 54.28 | 28.6 | 67.42 | 67.92 | 57.4 |
| multilingual-e5-base | 64.21 | 62.13 | 54.68 | 29.5 | 66.23 | 66.98 | 57.29 |
| m3e-base | 66.03 | 62.74 | 56.07 | 17.51 | 77.05 | 76.76 | 59.36 |
| m3e-large | 66.13 | 62.72 | 56.1 | 16.46 | 77.76 | 78.27 | 59.57 |
| bge-base-zh-v1.5 | 66.49 | 63.25 | 57.02 | 29.74 | 80.47 | 84.88 | 63.64 |
| bge-large-zh-v1.5 | 65.74 | 63.39 | 57.03 | 28.74 | 83.45 | 85.44 | 63.97 |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | 67.28 | 63.95 | 60.45 | 35.46 | 81.26 | 84.1 | 65.42 |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | 67.6 | 64.03 | 61.44 | 37.16 | 82.15 | 84.18 | 66.09 |
\* : T2RerankingZh2En and T2RerankingEn2Zh are cross-language retrieval tasks.
## Train
### BAAI Embedding
We pre-train the models using [retromae](https://github.com/staoxiao/RetroMAE) and then train them on large-scale pair data using contrastive learning.
**You can fine-tune the embedding model on your data following our [examples](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune).**
We also provide a [pre-train example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/pretrain).
Note that the goal of pre-training is to reconstruct the text; the pre-trained model cannot be used for similarity calculation directly and needs to be fine-tuned first.
For more bge training details, see [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md).
### BGE Reranker
The cross-encoder performs full attention over the input pair,
which is more accurate than the embedding model (i.e., bi-encoder) but also more time-consuming.
Therefore, it can be used to re-rank the top-k documents returned by the embedding model.
We train the cross-encoder on multilingual pair data.
The data format is the same as for the embedding model, so you can fine-tune it easily following our [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker).
For more details, please refer to [./FlagEmbedding/reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker)
## Contact
If you have any questions or suggestions related to this project, feel free to open an issue or pull request.
You can also email Shitao Xiao([email protected]) and Zheng Liu([email protected]).
## Citation
If you find this repository useful, please consider giving it a star :star: and a citation
```
@misc{bge_embedding,
title={C-Pack: Packaged Resources To Advance General Chinese Embedding},
author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff},
year={2023},
eprint={2309.07597},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## License
FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
|
mradermacher/Llama-3-neoAI-8B-Chat-v0.1-GGUF | mradermacher | "2024-06-26T14:38:50Z" | 11,609 | 0 | transformers | [
"transformers",
"gguf",
"ja",
"en",
"base_model:neoai-inc/Llama-3-neoAI-8B-Chat-v0.1",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | "2024-06-26T14:08:14Z" | ---
base_model: neoai-inc/Llama-3-neoAI-8B-Chat-v0.1
language:
- ja
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/neoai-inc/Llama-3-neoAI-8B-Chat-v0.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3-neoAI-8B-Chat-v0.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
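As a quick, illustrative example with the `llama-cpp-python` bindings (an assumption here, not the only option; the file name must match the quant you downloaded from the table below):

```python
from llama_cpp import Llama

# Q4_K_M is the "fast, recommended" trade-off from the table below
llm = Llama(model_path="Llama-3-neoAI-8B-Chat-v0.1.Q4_K_M.gguf", n_ctx=2048)
out = llm("Hello, please introduce yourself.", max_tokens=128)
print(out["choices"][0]["text"])
```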
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-neoAI-8B-Chat-v0.1-GGUF/resolve/main/Llama-3-neoAI-8B-Chat-v0.1.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-neoAI-8B-Chat-v0.1-GGUF/resolve/main/Llama-3-neoAI-8B-Chat-v0.1.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-neoAI-8B-Chat-v0.1-GGUF/resolve/main/Llama-3-neoAI-8B-Chat-v0.1.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-neoAI-8B-Chat-v0.1-GGUF/resolve/main/Llama-3-neoAI-8B-Chat-v0.1.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-neoAI-8B-Chat-v0.1-GGUF/resolve/main/Llama-3-neoAI-8B-Chat-v0.1.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-neoAI-8B-Chat-v0.1-GGUF/resolve/main/Llama-3-neoAI-8B-Chat-v0.1.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-neoAI-8B-Chat-v0.1-GGUF/resolve/main/Llama-3-neoAI-8B-Chat-v0.1.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-neoAI-8B-Chat-v0.1-GGUF/resolve/main/Llama-3-neoAI-8B-Chat-v0.1.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-neoAI-8B-Chat-v0.1-GGUF/resolve/main/Llama-3-neoAI-8B-Chat-v0.1.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-neoAI-8B-Chat-v0.1-GGUF/resolve/main/Llama-3-neoAI-8B-Chat-v0.1.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-neoAI-8B-Chat-v0.1-GGUF/resolve/main/Llama-3-neoAI-8B-Chat-v0.1.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-neoAI-8B-Chat-v0.1-GGUF/resolve/main/Llama-3-neoAI-8B-Chat-v0.1.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-neoAI-8B-Chat-v0.1-GGUF/resolve/main/Llama-3-neoAI-8B-Chat-v0.1.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-neoAI-8B-Chat-v0.1-GGUF/resolve/main/Llama-3-neoAI-8B-Chat-v0.1.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-neoAI-8B-Chat-v0.1-GGUF/resolve/main/Llama-3-neoAI-8B-Chat-v0.1.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
TinyLlama/TinyLlama-1.1B-step-50K-105b | TinyLlama | "2023-09-16T03:06:11Z" | 11,603 | 125 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-09-01T08:59:02Z" | ---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
language:
- en
---
<div align="center">
# TinyLlama-1.1B
</div>
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.
<div align="center">
<img src="./TinyLlama_logo.png" width="300"/>
</div>
We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
#### This Model
This is an intermediate checkpoint with 50K steps and 105B tokens.
#### Releases Schedule
We will be rolling out intermediate checkpoints following the schedule below. We also include some baseline models for comparison.
| Date | HF Checkpoint | Tokens | Step | HellaSwag Acc_norm |
|------------|-------------------------------------------------|--------|------|---------------------|
| Baseline | [StableLM-Alpha-3B](https://huggingface.co/stabilityai/stablelm-base-alpha-3b)| 800B | -- | 38.31 |
| Baseline | [Pythia-1B-intermediate-step-50k-105b](https://huggingface.co/EleutherAI/pythia-1b/tree/step50000) | 105B | 50k | 42.04 |
| Baseline | [Pythia-1B](https://huggingface.co/EleutherAI/pythia-1b) | 300B | 143k | 47.16 |
| 2023-09-04 | [TinyLlama-1.1B-intermediate-step-50k-105b](https://huggingface.co/PY007/TinyLlama-1.1B-step-50K-105b) | 105B | 50k | 43.50 |
| 2023-09-16 | -- | 500B | -- | -- |
| 2023-10-01 | -- | 1T | -- | -- |
| 2023-10-16 | -- | 1.5T | -- | -- |
| 2023-10-31 | -- | 2T | -- | -- |
| 2023-11-15 | -- | 2.5T | -- | -- |
| 2023-12-01 | -- | 3T | -- | -- |
#### How to use
You will need transformers>=4.31.
Do check the [TinyLlama](https://github.com/jzhang38/TinyLlama) GitHub page for more information.
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "PY007/TinyLlama-1.1B-step-50K-105b"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
sequences = pipeline(
    'The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.',
do_sample=True,
top_k=10,
num_return_sequences=1,
repetition_penalty=1.5,
eos_token_id=tokenizer.eos_token_id,
max_length=500,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
``` |
unsloth/llama-2-7b-bnb-4bit | unsloth | "2024-03-22T15:10:44Z" | 11,573 | 6 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"llama2",
"llama-2",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2023-11-29T05:31:08Z" | ---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- unsloth
- transformers
- llama
- llama2
- llama-2
---
# Finetune Mistral, Gemma, Llama 2-5x faster with 70% less memory via Unsloth!
Directly quantized 4bit model with `bitsandbytes`.
We have a Google Colab Tesla T4 notebook for Llama 7b here: https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing
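For reference, here is a minimal sketch of loading this pre-quantized checkpoint directly with `transformers`; it assumes `bitsandbytes` and a CUDA GPU are available:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "unsloth/llama-2-7b-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # 4-bit weights load as stored

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```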
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/u54VK8m8tk)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/buy%20me%20a%20coffee%20button.png" width="200"/>](https://ko-fi.com/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Gemma 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing) | 2.4x faster | 58% less |
| **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **Llama-2 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing) | 2.2x faster | 43% less |
| **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less |
| **CodeLlama 34b** A100 | [▶️ Start on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing) | 1.9x faster | 27% less |
| **Mistral 7b** 1xT4 | [▶️ Start on Kaggle](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook) | 5x faster\* | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster. |
1bitLLM/bitnet_b1_58-large | 1bitLLM | "2024-03-29T11:24:06Z" | 11,568 | 43 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2402.17764",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-29T11:10:00Z" | ---
license: mit
---
This is a reproduction of the <a href="https://arxiv.org/abs/2402.17764"> BitNet b1.58</a> paper. The models are trained with the <a href="https://github.com/togethercomputer/RedPajama-Data">RedPajama dataset</a> for 100B tokens. The hyperparameters, as well as the two-stage LR and weight-decay schedule, are implemented as suggested in the authors' follow-up <a href="https://github.com/microsoft/unilm/blob/master/bitnet/The-Era-of-1-bit-LLMs__Training_Tips_Code_FAQ.pdf">paper</a>. All models are open-source in the <a href="https://huggingface.co/1bitLLM">repo</a>. We will train larger models and/or use more tokens when resources are available.
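For intuition, here is a minimal sketch of the paper's absmean weight quantization, which maps each weight to {-1, 0, +1}; it illustrates the scheme only and is not the training code used for these checkpoints:

```python
import torch

def absmean_ternary(W: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    # Quantize a weight tensor to {-1, 0, +1} as described in BitNet b1.58
    gamma = W.abs().mean()                        # per-tensor scale
    return (W / (gamma + eps)).round().clamp(-1, 1)

W = torch.randn(4, 4)
print(absmean_ternary(W))
```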
## Results
PPL and zero-shot accuracy:
| Models | PPL| ARCe| ARCc| HS | BQ | OQ | PQ | WGe | Avg
|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| FP16 700M (reported) | 12.33 | 54.7 | 23.0 | 37.0 | 60.0 | 20.2 | 68.9 | 54.8 | 45.5 |
| BitNet b1.58 700M (reported) | 12.87 | 51.8 | 21.4 | 35.1 | 58.2 | 20.0 | 68.1 | 55.2 | 44.3 |
| BitNet b1.58 700M (reproduced) | 12.78 | 51.4 | 21.8 | 35.0 | 59.6 | 20.6 | 67.5 | 55.4 | 44.5 |
| FP16 1.3B (reported) | 11.25 | 56.9 | 23.5 | 38.5 | 59.1 | 21.6 | 70.0 | 53.9 | 46.2 |
| BitNet b1.58 1.3B (reported) | 11.29 | 54.9 | 24.2 | 37.7 | 56.7 | 19.6 | 68.8 | 55.8 | 45.4 |
| BitNet b1.58 1.3B (reproduced) | 11.19 | 55.8 | 23.7 | 37.6 | 59.0 | 20.2 | 69.2 | 56.0 | 45.9 |
| FP16 3B (reported) | 10.04 | 62.1 | 25.6 | 43.3 | 61.8 | 24.6 | 72.1 | 58.2 | 49.7 |
| BitNet b1.58 3B (reported) | 9.91 | 61.4 | 28.3 | 42.9 | 61.5 | 26.6 | 71.5 | 59.3 | 50.2 |
| BitNet b1.58 3B (reproduced) | 9.88 | 60.9 | 28.0 | 42.3 | 58.3 | 26.0 | 71.4 | 60.3 | 49.6 |
The differences between the reported numbers and the reproduced results likely stem from variance in training data processing, seeds, or other random factors.
## Evaluation
The evaluation pipelines are from the paper authors. Here are the commands to run the evaluation:
```
pip install lm-eval==0.3.0
```
```
python eval_ppl.py --hf_path 1bitLLM/bitnet_b1_58-3B --seqlen 2048
```
```
python eval_task.py --hf_path 1bitLLM/bitnet_b1_58-3B \
--batch_size 1 \
--tasks \
--output_path result.json \
--num_fewshot 0 \
--ctx_size 2048
```
|
John6666/ebara-pony-v21-sdxl | John6666 | "2024-06-07T12:07:30Z" | 11,566 | 2 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"pony",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-05-26T09:49:30Z" | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- pony
---
Original model is [here](https://huggingface.co/tsukihara/xl_model).
|
fxmarty/sshleifer-tiny-mbart-onnx | fxmarty | "2022-08-23T13:53:53Z" | 11,561 | 1 | transformers | [
"transformers",
"onnx",
"text2text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-08-23T13:52:07Z" | ---
license: apache-2.0
---
This model is a fork of `sshleifer/tiny-mbart` exported to ONNX.
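A hedged loading sketch with `optimum.onnxruntime` (assuming the `optimum[onnxruntime]` package is installed and the tokenizer files are present in this repo):

```python
from optimum.onnxruntime import ORTModelForSeq2SeqLM
from transformers import AutoTokenizer

repo = "fxmarty/sshleifer-tiny-mbart-onnx"
model = ORTModelForSeq2SeqLM.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained(repo)

inputs = tokenizer("Hello world", return_tensors="pt")
print(tokenizer.batch_decode(model.generate(**inputs), skip_special_tokens=True))
```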
|
Alibaba-NLP/gte-Qwen2-7B-instruct | Alibaba-NLP | "2024-06-29T11:18:29Z" | 11,559 | 71 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"qwen2",
"text-generation",
"mteb",
"transformers",
"Qwen2",
"sentence-similarity",
"custom_code",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-06-15T11:24:21Z" | ---
tags:
- mteb
- sentence-transformers
- transformers
- Qwen2
- sentence-similarity
license: apache-2.0
model-index:
- name: gte-qwen2-7B-instruct
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 91.31343283582089
- type: ap
value: 67.64251402604096
- type: f1
value: 87.53372530755692
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 97.497825
- type: ap
value: 96.30329547047529
- type: f1
value: 97.49769793778039
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 62.564
- type: f1
value: 60.975777935041066
- task:
type: Retrieval
dataset:
type: mteb/arguana
name: MTEB ArguAna
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 36.486000000000004
- type: map_at_10
value: 54.842
- type: map_at_100
value: 55.206999999999994
- type: map_at_1000
value: 55.206999999999994
- type: map_at_3
value: 49.893
- type: map_at_5
value: 53.105000000000004
- type: mrr_at_1
value: 37.34
- type: mrr_at_10
value: 55.143
- type: mrr_at_100
value: 55.509
- type: mrr_at_1000
value: 55.509
- type: mrr_at_3
value: 50.212999999999994
- type: mrr_at_5
value: 53.432
- type: ndcg_at_1
value: 36.486000000000004
- type: ndcg_at_10
value: 64.273
- type: ndcg_at_100
value: 65.66199999999999
- type: ndcg_at_1000
value: 65.66199999999999
- type: ndcg_at_3
value: 54.352999999999994
- type: ndcg_at_5
value: 60.131
- type: precision_at_1
value: 36.486000000000004
- type: precision_at_10
value: 9.395000000000001
- type: precision_at_100
value: 0.996
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 22.428
- type: precision_at_5
value: 16.259
- type: recall_at_1
value: 36.486000000000004
- type: recall_at_10
value: 93.95400000000001
- type: recall_at_100
value: 99.644
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 67.283
- type: recall_at_5
value: 81.294
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 56.461169803700564
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 51.73600434466286
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 67.57827065898053
- type: mrr
value: 79.08136569493911
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 83.53324575999243
- type: cos_sim_spearman
value: 81.37173362822374
- type: euclidean_pearson
value: 82.19243335103444
- type: euclidean_spearman
value: 81.33679307304334
- type: manhattan_pearson
value: 82.38752665975699
- type: manhattan_spearman
value: 81.31510583189689
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 87.56818181818181
- type: f1
value: 87.25826722019875
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 50.09239610327673
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 46.64733054606282
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 33.997
- type: map_at_10
value: 48.176
- type: map_at_100
value: 49.82
- type: map_at_1000
value: 49.924
- type: map_at_3
value: 43.626
- type: map_at_5
value: 46.275
- type: mrr_at_1
value: 42.059999999999995
- type: mrr_at_10
value: 53.726
- type: mrr_at_100
value: 54.398
- type: mrr_at_1000
value: 54.416
- type: mrr_at_3
value: 50.714999999999996
- type: mrr_at_5
value: 52.639
- type: ndcg_at_1
value: 42.059999999999995
- type: ndcg_at_10
value: 55.574999999999996
- type: ndcg_at_100
value: 60.744
- type: ndcg_at_1000
value: 61.85699999999999
- type: ndcg_at_3
value: 49.363
- type: ndcg_at_5
value: 52.44
- type: precision_at_1
value: 42.059999999999995
- type: precision_at_10
value: 11.101999999999999
- type: precision_at_100
value: 1.73
- type: precision_at_1000
value: 0.218
- type: precision_at_3
value: 24.464
- type: precision_at_5
value: 18.026
- type: recall_at_1
value: 33.997
- type: recall_at_10
value: 70.35900000000001
- type: recall_at_100
value: 91.642
- type: recall_at_1000
value: 97.977
- type: recall_at_3
value: 52.76
- type: recall_at_5
value: 61.148
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 35.884
- type: map_at_10
value: 48.14
- type: map_at_100
value: 49.5
- type: map_at_1000
value: 49.63
- type: map_at_3
value: 44.646
- type: map_at_5
value: 46.617999999999995
- type: mrr_at_1
value: 44.458999999999996
- type: mrr_at_10
value: 53.751000000000005
- type: mrr_at_100
value: 54.37800000000001
- type: mrr_at_1000
value: 54.415
- type: mrr_at_3
value: 51.815
- type: mrr_at_5
value: 52.882
- type: ndcg_at_1
value: 44.458999999999996
- type: ndcg_at_10
value: 54.157
- type: ndcg_at_100
value: 58.362
- type: ndcg_at_1000
value: 60.178
- type: ndcg_at_3
value: 49.661
- type: ndcg_at_5
value: 51.74999999999999
- type: precision_at_1
value: 44.458999999999996
- type: precision_at_10
value: 10.248
- type: precision_at_100
value: 1.5890000000000002
- type: precision_at_1000
value: 0.207
- type: precision_at_3
value: 23.928
- type: precision_at_5
value: 16.878999999999998
- type: recall_at_1
value: 35.884
- type: recall_at_10
value: 64.798
- type: recall_at_100
value: 82.345
- type: recall_at_1000
value: 93.267
- type: recall_at_3
value: 51.847
- type: recall_at_5
value: 57.601
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 39.383
- type: map_at_10
value: 53.714
- type: map_at_100
value: 54.838
- type: map_at_1000
value: 54.87800000000001
- type: map_at_3
value: 50.114999999999995
- type: map_at_5
value: 52.153000000000006
- type: mrr_at_1
value: 45.016
- type: mrr_at_10
value: 56.732000000000006
- type: mrr_at_100
value: 57.411
- type: mrr_at_1000
value: 57.431
- type: mrr_at_3
value: 54.044000000000004
- type: mrr_at_5
value: 55.639
- type: ndcg_at_1
value: 45.016
- type: ndcg_at_10
value: 60.228
- type: ndcg_at_100
value: 64.277
- type: ndcg_at_1000
value: 65.07
- type: ndcg_at_3
value: 54.124
- type: ndcg_at_5
value: 57.147000000000006
- type: precision_at_1
value: 45.016
- type: precision_at_10
value: 9.937
- type: precision_at_100
value: 1.288
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 24.471999999999998
- type: precision_at_5
value: 16.991
- type: recall_at_1
value: 39.383
- type: recall_at_10
value: 76.175
- type: recall_at_100
value: 93.02
- type: recall_at_1000
value: 98.60900000000001
- type: recall_at_3
value: 60.265
- type: recall_at_5
value: 67.46600000000001
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 27.426000000000002
- type: map_at_10
value: 37.397000000000006
- type: map_at_100
value: 38.61
- type: map_at_1000
value: 38.678000000000004
- type: map_at_3
value: 34.150999999999996
- type: map_at_5
value: 36.137
- type: mrr_at_1
value: 29.944
- type: mrr_at_10
value: 39.654
- type: mrr_at_100
value: 40.638000000000005
- type: mrr_at_1000
value: 40.691
- type: mrr_at_3
value: 36.817
- type: mrr_at_5
value: 38.524
- type: ndcg_at_1
value: 29.944
- type: ndcg_at_10
value: 43.094
- type: ndcg_at_100
value: 48.789
- type: ndcg_at_1000
value: 50.339999999999996
- type: ndcg_at_3
value: 36.984
- type: ndcg_at_5
value: 40.248
- type: precision_at_1
value: 29.944
- type: precision_at_10
value: 6.78
- type: precision_at_100
value: 1.024
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 15.895000000000001
- type: precision_at_5
value: 11.39
- type: recall_at_1
value: 27.426000000000002
- type: recall_at_10
value: 58.464000000000006
- type: recall_at_100
value: 84.193
- type: recall_at_1000
value: 95.52000000000001
- type: recall_at_3
value: 42.172
- type: recall_at_5
value: 50.101
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 19.721
- type: map_at_10
value: 31.604
- type: map_at_100
value: 32.972
- type: map_at_1000
value: 33.077
- type: map_at_3
value: 27.218999999999998
- type: map_at_5
value: 29.53
- type: mrr_at_1
value: 25.0
- type: mrr_at_10
value: 35.843
- type: mrr_at_100
value: 36.785000000000004
- type: mrr_at_1000
value: 36.842000000000006
- type: mrr_at_3
value: 32.193
- type: mrr_at_5
value: 34.264
- type: ndcg_at_1
value: 25.0
- type: ndcg_at_10
value: 38.606
- type: ndcg_at_100
value: 44.272
- type: ndcg_at_1000
value: 46.527
- type: ndcg_at_3
value: 30.985000000000003
- type: ndcg_at_5
value: 34.43
- type: precision_at_1
value: 25.0
- type: precision_at_10
value: 7.811
- type: precision_at_100
value: 1.203
- type: precision_at_1000
value: 0.15
- type: precision_at_3
value: 15.423
- type: precision_at_5
value: 11.791
- type: recall_at_1
value: 19.721
- type: recall_at_10
value: 55.625
- type: recall_at_100
value: 79.34400000000001
- type: recall_at_1000
value: 95.208
- type: recall_at_3
value: 35.19
- type: recall_at_5
value: 43.626
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 33.784
- type: map_at_10
value: 47.522
- type: map_at_100
value: 48.949999999999996
- type: map_at_1000
value: 49.038
- type: map_at_3
value: 43.284
- type: map_at_5
value: 45.629
- type: mrr_at_1
value: 41.482
- type: mrr_at_10
value: 52.830999999999996
- type: mrr_at_100
value: 53.559999999999995
- type: mrr_at_1000
value: 53.588
- type: mrr_at_3
value: 50.016000000000005
- type: mrr_at_5
value: 51.614000000000004
- type: ndcg_at_1
value: 41.482
- type: ndcg_at_10
value: 54.569
- type: ndcg_at_100
value: 59.675999999999995
- type: ndcg_at_1000
value: 60.989000000000004
- type: ndcg_at_3
value: 48.187000000000005
- type: ndcg_at_5
value: 51.183
- type: precision_at_1
value: 41.482
- type: precision_at_10
value: 10.221
- type: precision_at_100
value: 1.486
- type: precision_at_1000
value: 0.17500000000000002
- type: precision_at_3
value: 23.548
- type: precision_at_5
value: 16.805
- type: recall_at_1
value: 33.784
- type: recall_at_10
value: 69.798
- type: recall_at_100
value: 90.098
- type: recall_at_1000
value: 98.176
- type: recall_at_3
value: 52.127
- type: recall_at_5
value: 59.861
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 28.038999999999998
- type: map_at_10
value: 41.904
- type: map_at_100
value: 43.36
- type: map_at_1000
value: 43.453
- type: map_at_3
value: 37.785999999999994
- type: map_at_5
value: 40.105000000000004
- type: mrr_at_1
value: 35.046
- type: mrr_at_10
value: 46.926
- type: mrr_at_100
value: 47.815000000000005
- type: mrr_at_1000
value: 47.849000000000004
- type: mrr_at_3
value: 44.273
- type: mrr_at_5
value: 45.774
- type: ndcg_at_1
value: 35.046
- type: ndcg_at_10
value: 48.937000000000005
- type: ndcg_at_100
value: 54.544000000000004
- type: ndcg_at_1000
value: 56.069
- type: ndcg_at_3
value: 42.858000000000004
- type: ndcg_at_5
value: 45.644
- type: precision_at_1
value: 35.046
- type: precision_at_10
value: 9.452
- type: precision_at_100
value: 1.429
- type: precision_at_1000
value: 0.173
- type: precision_at_3
value: 21.346999999999998
- type: precision_at_5
value: 15.342
- type: recall_at_1
value: 28.038999999999998
- type: recall_at_10
value: 64.59700000000001
- type: recall_at_100
value: 87.735
- type: recall_at_1000
value: 97.41300000000001
- type: recall_at_3
value: 47.368
- type: recall_at_5
value: 54.93900000000001
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 28.17291666666667
- type: map_at_10
value: 40.025749999999995
- type: map_at_100
value: 41.39208333333333
- type: map_at_1000
value: 41.499249999999996
- type: map_at_3
value: 36.347
- type: map_at_5
value: 38.41391666666667
- type: mrr_at_1
value: 33.65925
- type: mrr_at_10
value: 44.085499999999996
- type: mrr_at_100
value: 44.94116666666667
- type: mrr_at_1000
value: 44.9855
- type: mrr_at_3
value: 41.2815
- type: mrr_at_5
value: 42.91491666666666
- type: ndcg_at_1
value: 33.65925
- type: ndcg_at_10
value: 46.430833333333325
- type: ndcg_at_100
value: 51.761
- type: ndcg_at_1000
value: 53.50899999999999
- type: ndcg_at_3
value: 40.45133333333333
- type: ndcg_at_5
value: 43.31483333333334
- type: precision_at_1
value: 33.65925
- type: precision_at_10
value: 8.4995
- type: precision_at_100
value: 1.3210000000000004
- type: precision_at_1000
value: 0.16591666666666666
- type: precision_at_3
value: 19.165083333333335
- type: precision_at_5
value: 13.81816666666667
- type: recall_at_1
value: 28.17291666666667
- type: recall_at_10
value: 61.12624999999999
- type: recall_at_100
value: 83.97266666666667
- type: recall_at_1000
value: 95.66550000000001
- type: recall_at_3
value: 44.661249999999995
- type: recall_at_5
value: 51.983333333333334
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 24.681
- type: map_at_10
value: 34.892
- type: map_at_100
value: 35.996
- type: map_at_1000
value: 36.083
- type: map_at_3
value: 31.491999999999997
- type: map_at_5
value: 33.632
- type: mrr_at_1
value: 28.528
- type: mrr_at_10
value: 37.694
- type: mrr_at_100
value: 38.613
- type: mrr_at_1000
value: 38.668
- type: mrr_at_3
value: 34.714
- type: mrr_at_5
value: 36.616
- type: ndcg_at_1
value: 28.528
- type: ndcg_at_10
value: 40.703
- type: ndcg_at_100
value: 45.993
- type: ndcg_at_1000
value: 47.847
- type: ndcg_at_3
value: 34.622
- type: ndcg_at_5
value: 38.035999999999994
- type: precision_at_1
value: 28.528
- type: precision_at_10
value: 6.902
- type: precision_at_100
value: 1.0370000000000001
- type: precision_at_1000
value: 0.126
- type: precision_at_3
value: 15.798000000000002
- type: precision_at_5
value: 11.655999999999999
- type: recall_at_1
value: 24.681
- type: recall_at_10
value: 55.81
- type: recall_at_100
value: 79.785
- type: recall_at_1000
value: 92.959
- type: recall_at_3
value: 39.074
- type: recall_at_5
value: 47.568
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 18.627
- type: map_at_10
value: 27.872000000000003
- type: map_at_100
value: 29.237999999999996
- type: map_at_1000
value: 29.363
- type: map_at_3
value: 24.751
- type: map_at_5
value: 26.521
- type: mrr_at_1
value: 23.021
- type: mrr_at_10
value: 31.924000000000003
- type: mrr_at_100
value: 32.922000000000004
- type: mrr_at_1000
value: 32.988
- type: mrr_at_3
value: 29.192
- type: mrr_at_5
value: 30.798
- type: ndcg_at_1
value: 23.021
- type: ndcg_at_10
value: 33.535
- type: ndcg_at_100
value: 39.732
- type: ndcg_at_1000
value: 42.201
- type: ndcg_at_3
value: 28.153
- type: ndcg_at_5
value: 30.746000000000002
- type: precision_at_1
value: 23.021
- type: precision_at_10
value: 6.459
- type: precision_at_100
value: 1.1320000000000001
- type: precision_at_1000
value: 0.153
- type: precision_at_3
value: 13.719000000000001
- type: precision_at_5
value: 10.193000000000001
- type: recall_at_1
value: 18.627
- type: recall_at_10
value: 46.463
- type: recall_at_100
value: 74.226
- type: recall_at_1000
value: 91.28500000000001
- type: recall_at_3
value: 31.357000000000003
- type: recall_at_5
value: 38.067
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 31.457
- type: map_at_10
value: 42.888
- type: map_at_100
value: 44.24
- type: map_at_1000
value: 44.327
- type: map_at_3
value: 39.588
- type: map_at_5
value: 41.423
- type: mrr_at_1
value: 37.126999999999995
- type: mrr_at_10
value: 47.083000000000006
- type: mrr_at_100
value: 47.997
- type: mrr_at_1000
value: 48.044
- type: mrr_at_3
value: 44.574000000000005
- type: mrr_at_5
value: 46.202
- type: ndcg_at_1
value: 37.126999999999995
- type: ndcg_at_10
value: 48.833
- type: ndcg_at_100
value: 54.327000000000005
- type: ndcg_at_1000
value: 56.011
- type: ndcg_at_3
value: 43.541999999999994
- type: ndcg_at_5
value: 46.127
- type: precision_at_1
value: 37.126999999999995
- type: precision_at_10
value: 8.376999999999999
- type: precision_at_100
value: 1.2309999999999999
- type: precision_at_1000
value: 0.146
- type: precision_at_3
value: 20.211000000000002
- type: precision_at_5
value: 14.16
- type: recall_at_1
value: 31.457
- type: recall_at_10
value: 62.369
- type: recall_at_100
value: 85.444
- type: recall_at_1000
value: 96.65599999999999
- type: recall_at_3
value: 47.961
- type: recall_at_5
value: 54.676
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 27.139999999999997
- type: map_at_10
value: 38.801
- type: map_at_100
value: 40.549
- type: map_at_1000
value: 40.802
- type: map_at_3
value: 35.05
- type: map_at_5
value: 36.884
- type: mrr_at_1
value: 33.004
- type: mrr_at_10
value: 43.864
- type: mrr_at_100
value: 44.667
- type: mrr_at_1000
value: 44.717
- type: mrr_at_3
value: 40.777
- type: mrr_at_5
value: 42.319
- type: ndcg_at_1
value: 33.004
- type: ndcg_at_10
value: 46.022
- type: ndcg_at_100
value: 51.542
- type: ndcg_at_1000
value: 53.742000000000004
- type: ndcg_at_3
value: 39.795
- type: ndcg_at_5
value: 42.272
- type: precision_at_1
value: 33.004
- type: precision_at_10
value: 9.012
- type: precision_at_100
value: 1.7770000000000001
- type: precision_at_1000
value: 0.26
- type: precision_at_3
value: 19.038
- type: precision_at_5
value: 13.675999999999998
- type: recall_at_1
value: 27.139999999999997
- type: recall_at_10
value: 60.961
- type: recall_at_100
value: 84.451
- type: recall_at_1000
value: 98.113
- type: recall_at_3
value: 43.001
- type: recall_at_5
value: 49.896
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 17.936
- type: map_at_10
value: 27.399
- type: map_at_100
value: 28.632
- type: map_at_1000
value: 28.738000000000003
- type: map_at_3
value: 24.456
- type: map_at_5
value: 26.06
- type: mrr_at_1
value: 19.224
- type: mrr_at_10
value: 28.998
- type: mrr_at_100
value: 30.11
- type: mrr_at_1000
value: 30.177
- type: mrr_at_3
value: 26.247999999999998
- type: mrr_at_5
value: 27.708
- type: ndcg_at_1
value: 19.224
- type: ndcg_at_10
value: 32.911
- type: ndcg_at_100
value: 38.873999999999995
- type: ndcg_at_1000
value: 41.277
- type: ndcg_at_3
value: 27.142
- type: ndcg_at_5
value: 29.755
- type: precision_at_1
value: 19.224
- type: precision_at_10
value: 5.6930000000000005
- type: precision_at_100
value: 0.9259999999999999
- type: precision_at_1000
value: 0.126
- type: precision_at_3
value: 12.138
- type: precision_at_5
value: 8.909
- type: recall_at_1
value: 17.936
- type: recall_at_10
value: 48.096
- type: recall_at_100
value: 75.389
- type: recall_at_1000
value: 92.803
- type: recall_at_3
value: 32.812999999999995
- type: recall_at_5
value: 38.851
- task:
type: Retrieval
dataset:
type: mteb/climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 22.076999999999998
- type: map_at_10
value: 35.44
- type: map_at_100
value: 37.651
- type: map_at_1000
value: 37.824999999999996
- type: map_at_3
value: 30.764999999999997
- type: map_at_5
value: 33.26
- type: mrr_at_1
value: 50.163000000000004
- type: mrr_at_10
value: 61.207
- type: mrr_at_100
value: 61.675000000000004
- type: mrr_at_1000
value: 61.692
- type: mrr_at_3
value: 58.60999999999999
- type: mrr_at_5
value: 60.307
- type: ndcg_at_1
value: 50.163000000000004
- type: ndcg_at_10
value: 45.882
- type: ndcg_at_100
value: 53.239999999999995
- type: ndcg_at_1000
value: 55.852000000000004
- type: ndcg_at_3
value: 40.514
- type: ndcg_at_5
value: 42.038
- type: precision_at_1
value: 50.163000000000004
- type: precision_at_10
value: 13.466000000000001
- type: precision_at_100
value: 2.164
- type: precision_at_1000
value: 0.266
- type: precision_at_3
value: 29.707
- type: precision_at_5
value: 21.694
- type: recall_at_1
value: 22.076999999999998
- type: recall_at_10
value: 50.193
- type: recall_at_100
value: 74.993
- type: recall_at_1000
value: 89.131
- type: recall_at_3
value: 35.472
- type: recall_at_5
value: 41.814
- task:
type: Retrieval
dataset:
type: mteb/dbpedia
name: MTEB DBPedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 9.953
- type: map_at_10
value: 24.515
- type: map_at_100
value: 36.173
- type: map_at_1000
value: 38.351
- type: map_at_3
value: 16.592000000000002
- type: map_at_5
value: 20.036
- type: mrr_at_1
value: 74.25
- type: mrr_at_10
value: 81.813
- type: mrr_at_100
value: 82.006
- type: mrr_at_1000
value: 82.011
- type: mrr_at_3
value: 80.875
- type: mrr_at_5
value: 81.362
- type: ndcg_at_1
value: 62.5
- type: ndcg_at_10
value: 52.42
- type: ndcg_at_100
value: 56.808
- type: ndcg_at_1000
value: 63.532999999999994
- type: ndcg_at_3
value: 56.654
- type: ndcg_at_5
value: 54.18300000000001
- type: precision_at_1
value: 74.25
- type: precision_at_10
value: 42.699999999999996
- type: precision_at_100
value: 13.675
- type: precision_at_1000
value: 2.664
- type: precision_at_3
value: 60.5
- type: precision_at_5
value: 52.800000000000004
- type: recall_at_1
value: 9.953
- type: recall_at_10
value: 30.253999999999998
- type: recall_at_100
value: 62.516000000000005
- type: recall_at_1000
value: 84.163
- type: recall_at_3
value: 18.13
- type: recall_at_5
value: 22.771
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 79.455
- type: f1
value: 74.16798697647569
- task:
type: Retrieval
dataset:
type: mteb/fever
name: MTEB FEVER
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 87.531
- type: map_at_10
value: 93.16799999999999
- type: map_at_100
value: 93.341
- type: map_at_1000
value: 93.349
- type: map_at_3
value: 92.444
- type: map_at_5
value: 92.865
- type: mrr_at_1
value: 94.014
- type: mrr_at_10
value: 96.761
- type: mrr_at_100
value: 96.762
- type: mrr_at_1000
value: 96.762
- type: mrr_at_3
value: 96.672
- type: mrr_at_5
value: 96.736
- type: ndcg_at_1
value: 94.014
- type: ndcg_at_10
value: 95.112
- type: ndcg_at_100
value: 95.578
- type: ndcg_at_1000
value: 95.68900000000001
- type: ndcg_at_3
value: 94.392
- type: ndcg_at_5
value: 94.72500000000001
- type: precision_at_1
value: 94.014
- type: precision_at_10
value: 11.065
- type: precision_at_100
value: 1.157
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 35.259
- type: precision_at_5
value: 21.599
- type: recall_at_1
value: 87.531
- type: recall_at_10
value: 97.356
- type: recall_at_100
value: 98.965
- type: recall_at_1000
value: 99.607
- type: recall_at_3
value: 95.312
- type: recall_at_5
value: 96.295
- task:
type: Retrieval
dataset:
type: mteb/fiqa
name: MTEB FiQA2018
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 32.055
- type: map_at_10
value: 53.114
- type: map_at_100
value: 55.235
- type: map_at_1000
value: 55.345
- type: map_at_3
value: 45.854
- type: map_at_5
value: 50.025
- type: mrr_at_1
value: 60.34
- type: mrr_at_10
value: 68.804
- type: mrr_at_100
value: 69.309
- type: mrr_at_1000
value: 69.32199999999999
- type: mrr_at_3
value: 66.40899999999999
- type: mrr_at_5
value: 67.976
- type: ndcg_at_1
value: 60.34
- type: ndcg_at_10
value: 62.031000000000006
- type: ndcg_at_100
value: 68.00500000000001
- type: ndcg_at_1000
value: 69.286
- type: ndcg_at_3
value: 56.355999999999995
- type: ndcg_at_5
value: 58.687
- type: precision_at_1
value: 60.34
- type: precision_at_10
value: 17.176
- type: precision_at_100
value: 2.36
- type: precision_at_1000
value: 0.259
- type: precision_at_3
value: 37.14
- type: precision_at_5
value: 27.809
- type: recall_at_1
value: 32.055
- type: recall_at_10
value: 70.91
- type: recall_at_100
value: 91.83
- type: recall_at_1000
value: 98.871
- type: recall_at_3
value: 51.202999999999996
- type: recall_at_5
value: 60.563
- task:
type: Retrieval
dataset:
type: mteb/hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 43.68
- type: map_at_10
value: 64.389
- type: map_at_100
value: 65.24
- type: map_at_1000
value: 65.303
- type: map_at_3
value: 61.309000000000005
- type: map_at_5
value: 63.275999999999996
- type: mrr_at_1
value: 87.36
- type: mrr_at_10
value: 91.12
- type: mrr_at_100
value: 91.227
- type: mrr_at_1000
value: 91.229
- type: mrr_at_3
value: 90.57600000000001
- type: mrr_at_5
value: 90.912
- type: ndcg_at_1
value: 87.36
- type: ndcg_at_10
value: 73.076
- type: ndcg_at_100
value: 75.895
- type: ndcg_at_1000
value: 77.049
- type: ndcg_at_3
value: 68.929
- type: ndcg_at_5
value: 71.28
- type: precision_at_1
value: 87.36
- type: precision_at_10
value: 14.741000000000001
- type: precision_at_100
value: 1.694
- type: precision_at_1000
value: 0.185
- type: precision_at_3
value: 43.043
- type: precision_at_5
value: 27.681
- type: recall_at_1
value: 43.68
- type: recall_at_10
value: 73.707
- type: recall_at_100
value: 84.7
- type: recall_at_1000
value: 92.309
- type: recall_at_3
value: 64.564
- type: recall_at_5
value: 69.203
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 96.75399999999999
- type: ap
value: 95.29389839242187
- type: f1
value: 96.75348377433475
- task:
type: Retrieval
dataset:
type: mteb/msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: map_at_1
value: 25.176
- type: map_at_10
value: 38.598
- type: map_at_100
value: 39.707
- type: map_at_1000
value: 39.744
- type: map_at_3
value: 34.566
- type: map_at_5
value: 36.863
- type: mrr_at_1
value: 25.874000000000002
- type: mrr_at_10
value: 39.214
- type: mrr_at_100
value: 40.251
- type: mrr_at_1000
value: 40.281
- type: mrr_at_3
value: 35.291
- type: mrr_at_5
value: 37.545
- type: ndcg_at_1
value: 25.874000000000002
- type: ndcg_at_10
value: 45.98
- type: ndcg_at_100
value: 51.197
- type: ndcg_at_1000
value: 52.073
- type: ndcg_at_3
value: 37.785999999999994
- type: ndcg_at_5
value: 41.870000000000005
- type: precision_at_1
value: 25.874000000000002
- type: precision_at_10
value: 7.181
- type: precision_at_100
value: 0.979
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 16.051000000000002
- type: precision_at_5
value: 11.713
- type: recall_at_1
value: 25.176
- type: recall_at_10
value: 68.67699999999999
- type: recall_at_100
value: 92.55
- type: recall_at_1000
value: 99.164
- type: recall_at_3
value: 46.372
- type: recall_at_5
value: 56.16
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 99.03784769721841
- type: f1
value: 98.97791641821495
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 91.88326493388054
- type: f1
value: 73.74809928034335
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 85.41358439811701
- type: f1
value: 83.503679460639
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 89.77135171486215
- type: f1
value: 88.89843747468366
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 46.22695362087359
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 44.132372165849425
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 33.35680810650402
- type: mrr
value: 34.72625715637218
- task:
type: Retrieval
dataset:
type: mteb/nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 7.165000000000001
- type: map_at_10
value: 15.424
- type: map_at_100
value: 20.28
- type: map_at_1000
value: 22.065
- type: map_at_3
value: 11.236
- type: map_at_5
value: 13.025999999999998
- type: mrr_at_1
value: 51.702999999999996
- type: mrr_at_10
value: 59.965
- type: mrr_at_100
value: 60.667
- type: mrr_at_1000
value: 60.702999999999996
- type: mrr_at_3
value: 58.772000000000006
- type: mrr_at_5
value: 59.267
- type: ndcg_at_1
value: 49.536
- type: ndcg_at_10
value: 40.6
- type: ndcg_at_100
value: 37.848
- type: ndcg_at_1000
value: 46.657
- type: ndcg_at_3
value: 46.117999999999995
- type: ndcg_at_5
value: 43.619
- type: precision_at_1
value: 51.393
- type: precision_at_10
value: 30.31
- type: precision_at_100
value: 9.972
- type: precision_at_1000
value: 2.329
- type: precision_at_3
value: 43.137
- type: precision_at_5
value: 37.585
- type: recall_at_1
value: 7.165000000000001
- type: recall_at_10
value: 19.689999999999998
- type: recall_at_100
value: 39.237
- type: recall_at_1000
value: 71.417
- type: recall_at_3
value: 12.247
- type: recall_at_5
value: 14.902999999999999
- task:
type: Retrieval
dataset:
type: mteb/nq
name: MTEB NQ
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 42.653999999999996
- type: map_at_10
value: 59.611999999999995
- type: map_at_100
value: 60.32300000000001
- type: map_at_1000
value: 60.336
- type: map_at_3
value: 55.584999999999994
- type: map_at_5
value: 58.19
- type: mrr_at_1
value: 47.683
- type: mrr_at_10
value: 62.06700000000001
- type: mrr_at_100
value: 62.537
- type: mrr_at_1000
value: 62.544999999999995
- type: mrr_at_3
value: 59.178
- type: mrr_at_5
value: 61.034
- type: ndcg_at_1
value: 47.654
- type: ndcg_at_10
value: 67.001
- type: ndcg_at_100
value: 69.73899999999999
- type: ndcg_at_1000
value: 69.986
- type: ndcg_at_3
value: 59.95700000000001
- type: ndcg_at_5
value: 64.025
- type: precision_at_1
value: 47.654
- type: precision_at_10
value: 10.367999999999999
- type: precision_at_100
value: 1.192
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 26.651000000000003
- type: precision_at_5
value: 18.459
- type: recall_at_1
value: 42.653999999999996
- type: recall_at_10
value: 86.619
- type: recall_at_100
value: 98.04899999999999
- type: recall_at_1000
value: 99.812
- type: recall_at_3
value: 68.987
- type: recall_at_5
value: 78.158
- task:
type: Retrieval
dataset:
type: mteb/quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 72.538
- type: map_at_10
value: 86.702
- type: map_at_100
value: 87.31
- type: map_at_1000
value: 87.323
- type: map_at_3
value: 83.87
- type: map_at_5
value: 85.682
- type: mrr_at_1
value: 83.31
- type: mrr_at_10
value: 89.225
- type: mrr_at_100
value: 89.30399999999999
- type: mrr_at_1000
value: 89.30399999999999
- type: mrr_at_3
value: 88.44300000000001
- type: mrr_at_5
value: 89.005
- type: ndcg_at_1
value: 83.32000000000001
- type: ndcg_at_10
value: 90.095
- type: ndcg_at_100
value: 91.12
- type: ndcg_at_1000
value: 91.179
- type: ndcg_at_3
value: 87.606
- type: ndcg_at_5
value: 89.031
- type: precision_at_1
value: 83.32000000000001
- type: precision_at_10
value: 13.641
- type: precision_at_100
value: 1.541
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 38.377
- type: precision_at_5
value: 25.162000000000003
- type: recall_at_1
value: 72.538
- type: recall_at_10
value: 96.47200000000001
- type: recall_at_100
value: 99.785
- type: recall_at_1000
value: 99.99900000000001
- type: recall_at_3
value: 89.278
- type: recall_at_5
value: 93.367
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 73.55219145406065
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 74.13437105242755
- task:
type: Retrieval
dataset:
type: mteb/scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.873
- type: map_at_10
value: 17.944
- type: map_at_100
value: 21.171
- type: map_at_1000
value: 21.528
- type: map_at_3
value: 12.415
- type: map_at_5
value: 15.187999999999999
- type: mrr_at_1
value: 33.800000000000004
- type: mrr_at_10
value: 46.455
- type: mrr_at_100
value: 47.378
- type: mrr_at_1000
value: 47.394999999999996
- type: mrr_at_3
value: 42.367
- type: mrr_at_5
value: 44.972
- type: ndcg_at_1
value: 33.800000000000004
- type: ndcg_at_10
value: 28.907
- type: ndcg_at_100
value: 39.695
- type: ndcg_at_1000
value: 44.582
- type: ndcg_at_3
value: 26.949
- type: ndcg_at_5
value: 23.988
- type: precision_at_1
value: 33.800000000000004
- type: precision_at_10
value: 15.079999999999998
- type: precision_at_100
value: 3.056
- type: precision_at_1000
value: 0.42100000000000004
- type: precision_at_3
value: 25.167
- type: precision_at_5
value: 21.26
- type: recall_at_1
value: 6.873
- type: recall_at_10
value: 30.568
- type: recall_at_100
value: 62.062
- type: recall_at_1000
value: 85.37700000000001
- type: recall_at_3
value: 15.312999999999999
- type: recall_at_5
value: 21.575
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 82.37009118256057
- type: cos_sim_spearman
value: 79.27986395671529
- type: euclidean_pearson
value: 79.18037715442115
- type: euclidean_spearman
value: 79.28004791561621
- type: manhattan_pearson
value: 79.34062972800541
- type: manhattan_spearman
value: 79.43106695543402
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 87.48474767383833
- type: cos_sim_spearman
value: 79.54505388752513
- type: euclidean_pearson
value: 83.43282704179565
- type: euclidean_spearman
value: 79.54579919925405
- type: manhattan_pearson
value: 83.77564492427952
- type: manhattan_spearman
value: 79.84558396989286
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 88.803698035802
- type: cos_sim_spearman
value: 88.83451367754881
- type: euclidean_pearson
value: 88.28939285711628
- type: euclidean_spearman
value: 88.83528996073112
- type: manhattan_pearson
value: 88.28017412671795
- type: manhattan_spearman
value: 88.9228828016344
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 85.27469288153428
- type: cos_sim_spearman
value: 83.87477064876288
- type: euclidean_pearson
value: 84.2601737035379
- type: euclidean_spearman
value: 83.87431082479074
- type: manhattan_pearson
value: 84.3621547772745
- type: manhattan_spearman
value: 84.12094375000423
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 88.12749863201587
- type: cos_sim_spearman
value: 88.54287568368565
- type: euclidean_pearson
value: 87.90429700607999
- type: euclidean_spearman
value: 88.5437689576261
- type: manhattan_pearson
value: 88.19276653356833
- type: manhattan_spearman
value: 88.99995393814679
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 85.68398747560902
- type: cos_sim_spearman
value: 86.48815303460574
- type: euclidean_pearson
value: 85.52356631237954
- type: euclidean_spearman
value: 86.486391949551
- type: manhattan_pearson
value: 85.67267981761788
- type: manhattan_spearman
value: 86.7073696332485
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 88.9057107443124
- type: cos_sim_spearman
value: 88.7312168757697
- type: euclidean_pearson
value: 88.72810439714794
- type: euclidean_spearman
value: 88.71976185854771
- type: manhattan_pearson
value: 88.50433745949111
- type: manhattan_spearman
value: 88.51726175544195
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 67.59391795109886
- type: cos_sim_spearman
value: 66.87613008631367
- type: euclidean_pearson
value: 69.23198488262217
- type: euclidean_spearman
value: 66.85427723013692
- type: manhattan_pearson
value: 69.50730124841084
- type: manhattan_spearman
value: 67.10404669820792
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 87.0820605344619
- type: cos_sim_spearman
value: 86.8518089863434
- type: euclidean_pearson
value: 86.31087134689284
- type: euclidean_spearman
value: 86.8518520517941
- type: manhattan_pearson
value: 86.47203796160612
- type: manhattan_spearman
value: 87.1080149734421
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 89.09255369305481
- type: mrr
value: 97.10323445617563
- task:
type: Retrieval
dataset:
type: mteb/scifact
name: MTEB SciFact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 61.260999999999996
- type: map_at_10
value: 74.043
- type: map_at_100
value: 74.37700000000001
- type: map_at_1000
value: 74.384
- type: map_at_3
value: 71.222
- type: map_at_5
value: 72.875
- type: mrr_at_1
value: 64.333
- type: mrr_at_10
value: 74.984
- type: mrr_at_100
value: 75.247
- type: mrr_at_1000
value: 75.25500000000001
- type: mrr_at_3
value: 73.167
- type: mrr_at_5
value: 74.35000000000001
- type: ndcg_at_1
value: 64.333
- type: ndcg_at_10
value: 79.06
- type: ndcg_at_100
value: 80.416
- type: ndcg_at_1000
value: 80.55600000000001
- type: ndcg_at_3
value: 74.753
- type: ndcg_at_5
value: 76.97500000000001
- type: precision_at_1
value: 64.333
- type: precision_at_10
value: 10.567
- type: precision_at_100
value: 1.1199999999999999
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 29.889
- type: precision_at_5
value: 19.533
- type: recall_at_1
value: 61.260999999999996
- type: recall_at_10
value: 93.167
- type: recall_at_100
value: 99.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 81.667
- type: recall_at_5
value: 87.394
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.71980198019801
- type: cos_sim_ap
value: 92.81616007802704
- type: cos_sim_f1
value: 85.17548454688318
- type: cos_sim_precision
value: 89.43894389438944
- type: cos_sim_recall
value: 81.3
- type: dot_accuracy
value: 99.71980198019801
- type: dot_ap
value: 92.81398760591358
- type: dot_f1
value: 85.17548454688318
- type: dot_precision
value: 89.43894389438944
- type: dot_recall
value: 81.3
- type: euclidean_accuracy
value: 99.71980198019801
- type: euclidean_ap
value: 92.81560637245072
- type: euclidean_f1
value: 85.17548454688318
- type: euclidean_precision
value: 89.43894389438944
- type: euclidean_recall
value: 81.3
- type: manhattan_accuracy
value: 99.73069306930694
- type: manhattan_ap
value: 93.14005487480794
- type: manhattan_f1
value: 85.56263269639068
- type: manhattan_precision
value: 91.17647058823529
- type: manhattan_recall
value: 80.60000000000001
- type: max_accuracy
value: 99.73069306930694
- type: max_ap
value: 93.14005487480794
- type: max_f1
value: 85.56263269639068
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 79.86443362395185
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 49.40897096662564
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 55.66040806627947
- type: mrr
value: 56.58670475766064
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.51015090598575
- type: cos_sim_spearman
value: 31.35016454939226
- type: dot_pearson
value: 31.5150068731
- type: dot_spearman
value: 31.34790869023487
- task:
type: Retrieval
dataset:
type: mteb/trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.254
- type: map_at_10
value: 2.064
- type: map_at_100
value: 12.909
- type: map_at_1000
value: 31.761
- type: map_at_3
value: 0.738
- type: map_at_5
value: 1.155
- type: mrr_at_1
value: 96.0
- type: mrr_at_10
value: 98.0
- type: mrr_at_100
value: 98.0
- type: mrr_at_1000
value: 98.0
- type: mrr_at_3
value: 98.0
- type: mrr_at_5
value: 98.0
- type: ndcg_at_1
value: 93.0
- type: ndcg_at_10
value: 82.258
- type: ndcg_at_100
value: 64.34
- type: ndcg_at_1000
value: 57.912
- type: ndcg_at_3
value: 90.827
- type: ndcg_at_5
value: 86.79
- type: precision_at_1
value: 96.0
- type: precision_at_10
value: 84.8
- type: precision_at_100
value: 66.0
- type: precision_at_1000
value: 25.356
- type: precision_at_3
value: 94.667
- type: precision_at_5
value: 90.4
- type: recall_at_1
value: 0.254
- type: recall_at_10
value: 2.1950000000000003
- type: recall_at_100
value: 16.088
- type: recall_at_1000
value: 54.559000000000005
- type: recall_at_3
value: 0.75
- type: recall_at_5
value: 1.191
- task:
type: Retrieval
dataset:
type: mteb/touche2020
name: MTEB Touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 2.976
- type: map_at_10
value: 11.389000000000001
- type: map_at_100
value: 18.429000000000002
- type: map_at_1000
value: 20.113
- type: map_at_3
value: 6.483
- type: map_at_5
value: 8.770999999999999
- type: mrr_at_1
value: 40.816
- type: mrr_at_10
value: 58.118
- type: mrr_at_100
value: 58.489999999999995
- type: mrr_at_1000
value: 58.489999999999995
- type: mrr_at_3
value: 53.061
- type: mrr_at_5
value: 57.041
- type: ndcg_at_1
value: 40.816
- type: ndcg_at_10
value: 30.567
- type: ndcg_at_100
value: 42.44
- type: ndcg_at_1000
value: 53.480000000000004
- type: ndcg_at_3
value: 36.016
- type: ndcg_at_5
value: 34.257
- type: precision_at_1
value: 42.857
- type: precision_at_10
value: 25.714
- type: precision_at_100
value: 8.429
- type: precision_at_1000
value: 1.5939999999999999
- type: precision_at_3
value: 36.735
- type: precision_at_5
value: 33.878
- type: recall_at_1
value: 2.976
- type: recall_at_10
value: 17.854999999999997
- type: recall_at_100
value: 51.833
- type: recall_at_1000
value: 86.223
- type: recall_at_3
value: 7.887
- type: recall_at_5
value: 12.026
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 85.1174
- type: ap
value: 30.169441069345748
- type: f1
value: 69.79254701873245
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 72.58347481607245
- type: f1
value: 72.74877295564937
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 53.90586138221305
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.35769207844072
- type: cos_sim_ap
value: 77.9645072410354
- type: cos_sim_f1
value: 71.32352941176471
- type: cos_sim_precision
value: 66.5903890160183
- type: cos_sim_recall
value: 76.78100263852242
- type: dot_accuracy
value: 87.37557370209214
- type: dot_ap
value: 77.96250046429908
- type: dot_f1
value: 71.28932757557064
- type: dot_precision
value: 66.95249130938586
- type: dot_recall
value: 76.22691292875989
- type: euclidean_accuracy
value: 87.35173153722357
- type: euclidean_ap
value: 77.96520460741593
- type: euclidean_f1
value: 71.32470733210104
- type: euclidean_precision
value: 66.91329479768785
- type: euclidean_recall
value: 76.35883905013192
- type: manhattan_accuracy
value: 87.25636287774931
- type: manhattan_ap
value: 77.77752485611796
- type: manhattan_f1
value: 71.18148599269183
- type: manhattan_precision
value: 66.10859728506787
- type: manhattan_recall
value: 77.0976253298153
- type: max_accuracy
value: 87.37557370209214
- type: max_ap
value: 77.96520460741593
- type: max_f1
value: 71.32470733210104
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.38176737687739
- type: cos_sim_ap
value: 86.58811861657401
- type: cos_sim_f1
value: 79.09430644097604
- type: cos_sim_precision
value: 75.45085977911366
- type: cos_sim_recall
value: 83.10748383122882
- type: dot_accuracy
value: 89.38370784336554
- type: dot_ap
value: 86.58840606004333
- type: dot_f1
value: 79.10179860068133
- type: dot_precision
value: 75.44546153308643
- type: dot_recall
value: 83.13058207576223
- type: euclidean_accuracy
value: 89.38564830985369
- type: euclidean_ap
value: 86.58820721061164
- type: euclidean_f1
value: 79.09070942235888
- type: euclidean_precision
value: 75.38729937194697
- type: euclidean_recall
value: 83.17677856482906
- type: manhattan_accuracy
value: 89.40699344122326
- type: manhattan_ap
value: 86.60631843011362
- type: manhattan_f1
value: 79.14949970570925
- type: manhattan_precision
value: 75.78191039729502
- type: manhattan_recall
value: 82.83030489682784
- type: max_accuracy
value: 89.40699344122326
- type: max_ap
value: 86.60631843011362
- type: max_f1
value: 79.14949970570925
- task:
type: STS
dataset:
type: C-MTEB/AFQMC
name: MTEB AFQMC
config: default
split: validation
revision: b44c3b011063adb25877c13823db83bb193913c4
metrics:
- type: cos_sim_pearson
value: 65.58442135663871
- type: cos_sim_spearman
value: 72.2538631361313
- type: euclidean_pearson
value: 70.97255486607429
- type: euclidean_spearman
value: 72.25374250228647
- type: manhattan_pearson
value: 70.83250199989911
- type: manhattan_spearman
value: 72.14819496536272
- task:
type: STS
dataset:
type: C-MTEB/ATEC
name: MTEB ATEC
config: default
split: test
revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865
metrics:
- type: cos_sim_pearson
value: 59.99478404929932
- type: cos_sim_spearman
value: 62.61836216999812
- type: euclidean_pearson
value: 66.86429811933593
- type: euclidean_spearman
value: 62.6183520374191
- type: manhattan_pearson
value: 66.8063778911633
- type: manhattan_spearman
value: 62.569607573241115
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (zh)
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 53.98400000000001
- type: f1
value: 51.21447361350723
- task:
type: STS
dataset:
type: C-MTEB/BQ
name: MTEB BQ
config: default
split: test
revision: e3dda5e115e487b39ec7e618c0c6a29137052a55
metrics:
- type: cos_sim_pearson
value: 79.11941660686553
- type: cos_sim_spearman
value: 81.25029594540435
- type: euclidean_pearson
value: 82.06973504238826
- type: euclidean_spearman
value: 81.2501989488524
- type: manhattan_pearson
value: 82.10094630392753
- type: manhattan_spearman
value: 81.27987244392389
- task:
type: Clustering
dataset:
type: C-MTEB/CLSClusteringP2P
name: MTEB CLSClusteringP2P
config: default
split: test
revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476
metrics:
- type: v_measure
value: 47.07270168705156
- task:
type: Clustering
dataset:
type: C-MTEB/CLSClusteringS2S
name: MTEB CLSClusteringS2S
config: default
split: test
revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f
metrics:
- type: v_measure
value: 45.98511703185043
- task:
type: Reranking
dataset:
type: C-MTEB/CMedQAv1-reranking
name: MTEB CMedQAv1
config: default
split: test
revision: 8d7f1e942507dac42dc58017c1a001c3717da7df
metrics:
- type: map
value: 88.19895157194931
- type: mrr
value: 90.21424603174603
- task:
type: Reranking
dataset:
type: C-MTEB/CMedQAv2-reranking
name: MTEB CMedQAv2
config: default
split: test
revision: 23d186750531a14a0357ca22cd92d712fd512ea0
metrics:
- type: map
value: 88.03317320980119
- type: mrr
value: 89.9461507936508
- task:
type: Retrieval
dataset:
type: C-MTEB/CmedqaRetrieval
name: MTEB CmedqaRetrieval
config: default
split: dev
revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301
metrics:
- type: map_at_1
value: 29.037000000000003
- type: map_at_10
value: 42.001
- type: map_at_100
value: 43.773
- type: map_at_1000
value: 43.878
- type: map_at_3
value: 37.637
- type: map_at_5
value: 40.034
- type: mrr_at_1
value: 43.136
- type: mrr_at_10
value: 51.158
- type: mrr_at_100
value: 52.083
- type: mrr_at_1000
value: 52.12
- type: mrr_at_3
value: 48.733
- type: mrr_at_5
value: 50.025
- type: ndcg_at_1
value: 43.136
- type: ndcg_at_10
value: 48.685
- type: ndcg_at_100
value: 55.513
- type: ndcg_at_1000
value: 57.242000000000004
- type: ndcg_at_3
value: 43.329
- type: ndcg_at_5
value: 45.438
- type: precision_at_1
value: 43.136
- type: precision_at_10
value: 10.56
- type: precision_at_100
value: 1.6129999999999998
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 24.064
- type: precision_at_5
value: 17.269000000000002
- type: recall_at_1
value: 29.037000000000003
- type: recall_at_10
value: 59.245000000000005
- type: recall_at_100
value: 87.355
- type: recall_at_1000
value: 98.74000000000001
- type: recall_at_3
value: 42.99
- type: recall_at_5
value: 49.681999999999995
- task:
type: PairClassification
dataset:
type: C-MTEB/CMNLI
name: MTEB Cmnli
config: default
split: validation
revision: 41bc36f332156f7adc9e38f53777c959b2ae9766
metrics:
- type: cos_sim_accuracy
value: 82.68190018039687
- type: cos_sim_ap
value: 90.18017125327886
- type: cos_sim_f1
value: 83.64080906868193
- type: cos_sim_precision
value: 79.7076890489303
- type: cos_sim_recall
value: 87.98223053542202
- type: dot_accuracy
value: 82.68190018039687
- type: dot_ap
value: 90.18782350103646
- type: dot_f1
value: 83.64242087729039
- type: dot_precision
value: 79.65313028764805
- type: dot_recall
value: 88.05237315875614
- type: euclidean_accuracy
value: 82.68190018039687
- type: euclidean_ap
value: 90.1801957900632
- type: euclidean_f1
value: 83.63636363636364
- type: euclidean_precision
value: 79.52772506852203
- type: euclidean_recall
value: 88.19265840542437
- type: manhattan_accuracy
value: 82.14070956103427
- type: manhattan_ap
value: 89.96178420101427
- type: manhattan_f1
value: 83.21087838578791
- type: manhattan_precision
value: 78.35605121850475
- type: manhattan_recall
value: 88.70703764320785
- type: max_accuracy
value: 82.68190018039687
- type: max_ap
value: 90.18782350103646
- type: max_f1
value: 83.64242087729039
- task:
type: Retrieval
dataset:
type: C-MTEB/CovidRetrieval
name: MTEB CovidRetrieval
config: default
split: dev
revision: 1271c7809071a13532e05f25fb53511ffce77117
metrics:
- type: map_at_1
value: 72.234
- type: map_at_10
value: 80.10000000000001
- type: map_at_100
value: 80.36
- type: map_at_1000
value: 80.363
- type: map_at_3
value: 78.315
- type: map_at_5
value: 79.607
- type: mrr_at_1
value: 72.392
- type: mrr_at_10
value: 80.117
- type: mrr_at_100
value: 80.36999999999999
- type: mrr_at_1000
value: 80.373
- type: mrr_at_3
value: 78.469
- type: mrr_at_5
value: 79.633
- type: ndcg_at_1
value: 72.392
- type: ndcg_at_10
value: 83.651
- type: ndcg_at_100
value: 84.749
- type: ndcg_at_1000
value: 84.83000000000001
- type: ndcg_at_3
value: 80.253
- type: ndcg_at_5
value: 82.485
- type: precision_at_1
value: 72.392
- type: precision_at_10
value: 9.557
- type: precision_at_100
value: 1.004
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 28.732000000000003
- type: precision_at_5
value: 18.377
- type: recall_at_1
value: 72.234
- type: recall_at_10
value: 94.573
- type: recall_at_100
value: 99.368
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 85.669
- type: recall_at_5
value: 91.01700000000001
- task:
type: Retrieval
dataset:
type: C-MTEB/DuRetrieval
name: MTEB DuRetrieval
config: default
split: dev
revision: a1a333e290fe30b10f3f56498e3a0d911a693ced
metrics:
- type: map_at_1
value: 26.173999999999996
- type: map_at_10
value: 80.04
- type: map_at_100
value: 82.94500000000001
- type: map_at_1000
value: 82.98100000000001
- type: map_at_3
value: 55.562999999999995
- type: map_at_5
value: 69.89800000000001
- type: mrr_at_1
value: 89.5
- type: mrr_at_10
value: 92.996
- type: mrr_at_100
value: 93.06400000000001
- type: mrr_at_1000
value: 93.065
- type: mrr_at_3
value: 92.658
- type: mrr_at_5
value: 92.84599999999999
- type: ndcg_at_1
value: 89.5
- type: ndcg_at_10
value: 87.443
- type: ndcg_at_100
value: 90.253
- type: ndcg_at_1000
value: 90.549
- type: ndcg_at_3
value: 85.874
- type: ndcg_at_5
value: 84.842
- type: precision_at_1
value: 89.5
- type: precision_at_10
value: 41.805
- type: precision_at_100
value: 4.827
- type: precision_at_1000
value: 0.49
- type: precision_at_3
value: 76.85
- type: precision_at_5
value: 64.8
- type: recall_at_1
value: 26.173999999999996
- type: recall_at_10
value: 89.101
- type: recall_at_100
value: 98.08099999999999
- type: recall_at_1000
value: 99.529
- type: recall_at_3
value: 57.902
- type: recall_at_5
value: 74.602
- task:
type: Retrieval
dataset:
type: C-MTEB/EcomRetrieval
name: MTEB EcomRetrieval
config: default
split: dev
revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9
metrics:
- type: map_at_1
value: 56.10000000000001
- type: map_at_10
value: 66.15299999999999
- type: map_at_100
value: 66.625
- type: map_at_1000
value: 66.636
- type: map_at_3
value: 63.632999999999996
- type: map_at_5
value: 65.293
- type: mrr_at_1
value: 56.10000000000001
- type: mrr_at_10
value: 66.15299999999999
- type: mrr_at_100
value: 66.625
- type: mrr_at_1000
value: 66.636
- type: mrr_at_3
value: 63.632999999999996
- type: mrr_at_5
value: 65.293
- type: ndcg_at_1
value: 56.10000000000001
- type: ndcg_at_10
value: 71.146
- type: ndcg_at_100
value: 73.27799999999999
- type: ndcg_at_1000
value: 73.529
- type: ndcg_at_3
value: 66.09
- type: ndcg_at_5
value: 69.08999999999999
- type: precision_at_1
value: 56.10000000000001
- type: precision_at_10
value: 8.68
- type: precision_at_100
value: 0.964
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 24.4
- type: precision_at_5
value: 16.1
- type: recall_at_1
value: 56.10000000000001
- type: recall_at_10
value: 86.8
- type: recall_at_100
value: 96.39999999999999
- type: recall_at_1000
value: 98.3
- type: recall_at_3
value: 73.2
- type: recall_at_5
value: 80.5
- task:
type: Classification
dataset:
type: C-MTEB/IFlyTek-classification
name: MTEB IFlyTek
config: default
split: validation
revision: 421605374b29664c5fc098418fe20ada9bd55f8a
metrics:
- type: accuracy
value: 54.52096960369373
- type: f1
value: 40.930845295808695
- task:
type: Classification
dataset:
type: C-MTEB/JDReview-classification
name: MTEB JDReview
config: default
split: test
revision: b7c64bd89eb87f8ded463478346f76731f07bf8b
metrics:
- type: accuracy
value: 86.51031894934334
- type: ap
value: 55.9516014323483
- type: f1
value: 81.54813679326381
- task:
type: STS
dataset:
type: C-MTEB/LCQMC
name: MTEB LCQMC
config: default
split: test
revision: 17f9b096f80380fce5ed12a9be8be7784b337daf
metrics:
- type: cos_sim_pearson
value: 69.67437838574276
- type: cos_sim_spearman
value: 73.81314174653045
- type: euclidean_pearson
value: 72.63430276680275
- type: euclidean_spearman
value: 73.81358736777001
- type: manhattan_pearson
value: 72.58743833842829
- type: manhattan_spearman
value: 73.7590419009179
- task:
type: Reranking
dataset:
type: C-MTEB/Mmarco-reranking
name: MTEB MMarcoReranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 31.648613483640254
- type: mrr
value: 30.37420634920635
- task:
type: Retrieval
dataset:
type: C-MTEB/MMarcoRetrieval
name: MTEB MMarcoRetrieval
config: default
split: dev
revision: 539bbde593d947e2a124ba72651aafc09eb33fc2
metrics:
- type: map_at_1
value: 73.28099999999999
- type: map_at_10
value: 81.977
- type: map_at_100
value: 82.222
- type: map_at_1000
value: 82.22699999999999
- type: map_at_3
value: 80.441
- type: map_at_5
value: 81.46600000000001
- type: mrr_at_1
value: 75.673
- type: mrr_at_10
value: 82.41000000000001
- type: mrr_at_100
value: 82.616
- type: mrr_at_1000
value: 82.621
- type: mrr_at_3
value: 81.094
- type: mrr_at_5
value: 81.962
- type: ndcg_at_1
value: 75.673
- type: ndcg_at_10
value: 85.15599999999999
- type: ndcg_at_100
value: 86.151
- type: ndcg_at_1000
value: 86.26899999999999
- type: ndcg_at_3
value: 82.304
- type: ndcg_at_5
value: 84.009
- type: precision_at_1
value: 75.673
- type: precision_at_10
value: 10.042
- type: precision_at_100
value: 1.052
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 30.673000000000002
- type: precision_at_5
value: 19.326999999999998
- type: recall_at_1
value: 73.28099999999999
- type: recall_at_10
value: 94.446
- type: recall_at_100
value: 98.737
- type: recall_at_1000
value: 99.649
- type: recall_at_3
value: 86.984
- type: recall_at_5
value: 91.024
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (zh-CN)
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 81.08607935440484
- type: f1
value: 78.24879986066307
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (zh-CN)
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 86.05917955615332
- type: f1
value: 85.05279279434997
- task:
type: Retrieval
dataset:
type: C-MTEB/MedicalRetrieval
name: MTEB MedicalRetrieval
config: default
split: dev
revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6
metrics:
- type: map_at_1
value: 56.2
- type: map_at_10
value: 62.57899999999999
- type: map_at_100
value: 63.154999999999994
- type: map_at_1000
value: 63.193
- type: map_at_3
value: 61.217
- type: map_at_5
value: 62.012
- type: mrr_at_1
value: 56.3
- type: mrr_at_10
value: 62.629000000000005
- type: mrr_at_100
value: 63.205999999999996
- type: mrr_at_1000
value: 63.244
- type: mrr_at_3
value: 61.267
- type: mrr_at_5
value: 62.062
- type: ndcg_at_1
value: 56.2
- type: ndcg_at_10
value: 65.592
- type: ndcg_at_100
value: 68.657
- type: ndcg_at_1000
value: 69.671
- type: ndcg_at_3
value: 62.808
- type: ndcg_at_5
value: 64.24499999999999
- type: precision_at_1
value: 56.2
- type: precision_at_10
value: 7.5
- type: precision_at_100
value: 0.899
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 22.467000000000002
- type: precision_at_5
value: 14.180000000000001
- type: recall_at_1
value: 56.2
- type: recall_at_10
value: 75.0
- type: recall_at_100
value: 89.9
- type: recall_at_1000
value: 97.89999999999999
- type: recall_at_3
value: 67.4
- type: recall_at_5
value: 70.89999999999999
- task:
type: Classification
dataset:
type: C-MTEB/MultilingualSentiment-classification
name: MTEB MultilingualSentiment
config: default
split: validation
revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a
metrics:
- type: accuracy
value: 76.87666666666667
- type: f1
value: 76.7317686219665
- task:
type: PairClassification
dataset:
type: C-MTEB/OCNLI
name: MTEB Ocnli
config: default
split: validation
revision: 66e76a618a34d6d565d5538088562851e6daa7ec
metrics:
- type: cos_sim_accuracy
value: 79.64266377910124
- type: cos_sim_ap
value: 84.78274442344829
- type: cos_sim_f1
value: 81.16947472745292
- type: cos_sim_precision
value: 76.47058823529412
- type: cos_sim_recall
value: 86.48363252375924
- type: dot_accuracy
value: 79.64266377910124
- type: dot_ap
value: 84.7851404063692
- type: dot_f1
value: 81.16947472745292
- type: dot_precision
value: 76.47058823529412
- type: dot_recall
value: 86.48363252375924
- type: euclidean_accuracy
value: 79.64266377910124
- type: euclidean_ap
value: 84.78068373762378
- type: euclidean_f1
value: 81.14794656110837
- type: euclidean_precision
value: 76.35009310986965
- type: euclidean_recall
value: 86.58922914466737
- type: manhattan_accuracy
value: 79.48023822414727
- type: manhattan_ap
value: 84.72928897427576
- type: manhattan_f1
value: 81.32084770823064
- type: manhattan_precision
value: 76.24768946395564
- type: manhattan_recall
value: 87.11721224920802
- type: max_accuracy
value: 79.64266377910124
- type: max_ap
value: 84.7851404063692
- type: max_f1
value: 81.32084770823064
- task:
type: Classification
dataset:
type: C-MTEB/OnlineShopping-classification
name: MTEB OnlineShopping
config: default
split: test
revision: e610f2ebd179a8fda30ae534c3878750a96db120
metrics:
- type: accuracy
value: 94.3
- type: ap
value: 92.8664032274438
- type: f1
value: 94.29311102997727
- task:
type: STS
dataset:
type: C-MTEB/PAWSX
name: MTEB PAWSX
config: default
split: test
revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1
metrics:
- type: cos_sim_pearson
value: 48.51392279882909
- type: cos_sim_spearman
value: 54.06338895994974
- type: euclidean_pearson
value: 52.58480559573412
- type: euclidean_spearman
value: 54.06417276612201
- type: manhattan_pearson
value: 52.69525121721343
- type: manhattan_spearman
value: 54.048147455389675
- task:
type: STS
dataset:
type: C-MTEB/QBQTC
name: MTEB QBQTC
config: default
split: test
revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7
metrics:
- type: cos_sim_pearson
value: 29.728387290757325
- type: cos_sim_spearman
value: 31.366121633635284
- type: euclidean_pearson
value: 29.14588368552961
- type: euclidean_spearman
value: 31.36764411112844
- type: manhattan_pearson
value: 29.63517350523121
- type: manhattan_spearman
value: 31.94157020583762
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (zh)
config: zh
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 63.64868296271406
- type: cos_sim_spearman
value: 66.12800618164744
- type: euclidean_pearson
value: 63.21405767340238
- type: euclidean_spearman
value: 66.12786567790748
- type: manhattan_pearson
value: 64.04300276525848
- type: manhattan_spearman
value: 66.5066857145652
- task:
type: STS
dataset:
type: C-MTEB/STSB
name: MTEB STSB
config: default
split: test
revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0
metrics:
- type: cos_sim_pearson
value: 81.2302623912794
- type: cos_sim_spearman
value: 81.16833673266562
- type: euclidean_pearson
value: 79.47647843876024
- type: euclidean_spearman
value: 81.16944349524972
- type: manhattan_pearson
value: 79.84947238492208
- type: manhattan_spearman
value: 81.64626599410026
- task:
type: Reranking
dataset:
type: C-MTEB/T2Reranking
name: MTEB T2Reranking
config: default
split: dev
revision: 76631901a18387f85eaa53e5450019b87ad58ef9
metrics:
- type: map
value: 67.80129586475687
- type: mrr
value: 77.77402311635554
- task:
type: Retrieval
dataset:
type: C-MTEB/T2Retrieval
name: MTEB T2Retrieval
config: default
split: dev
revision: 8731a845f1bf500a4f111cf1070785c793d10e64
metrics:
- type: map_at_1
value: 28.666999999999998
- type: map_at_10
value: 81.063
- type: map_at_100
value: 84.504
- type: map_at_1000
value: 84.552
- type: map_at_3
value: 56.897
- type: map_at_5
value: 70.073
- type: mrr_at_1
value: 92.087
- type: mrr_at_10
value: 94.132
- type: mrr_at_100
value: 94.19800000000001
- type: mrr_at_1000
value: 94.19999999999999
- type: mrr_at_3
value: 93.78999999999999
- type: mrr_at_5
value: 94.002
- type: ndcg_at_1
value: 92.087
- type: ndcg_at_10
value: 87.734
- type: ndcg_at_100
value: 90.736
- type: ndcg_at_1000
value: 91.184
- type: ndcg_at_3
value: 88.78
- type: ndcg_at_5
value: 87.676
- type: precision_at_1
value: 92.087
- type: precision_at_10
value: 43.46
- type: precision_at_100
value: 5.07
- type: precision_at_1000
value: 0.518
- type: precision_at_3
value: 77.49000000000001
- type: precision_at_5
value: 65.194
- type: recall_at_1
value: 28.666999999999998
- type: recall_at_10
value: 86.632
- type: recall_at_100
value: 96.646
- type: recall_at_1000
value: 98.917
- type: recall_at_3
value: 58.333999999999996
- type: recall_at_5
value: 72.974
- task:
type: Classification
dataset:
type: C-MTEB/TNews-classification
name: MTEB TNews
config: default
split: validation
revision: 317f262bf1e6126357bbe89e875451e4b0938fe4
metrics:
- type: accuracy
value: 52.971999999999994
- type: f1
value: 50.2898280984929
- task:
type: Clustering
dataset:
type: C-MTEB/ThuNewsClusteringP2P
name: MTEB ThuNewsClusteringP2P
config: default
split: test
revision: 5798586b105c0434e4f0fe5e767abe619442cf93
metrics:
- type: v_measure
value: 86.0797948663824
- task:
type: Clustering
dataset:
type: C-MTEB/ThuNewsClusteringS2S
name: MTEB ThuNewsClusteringS2S
config: default
split: test
revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d
metrics:
- type: v_measure
value: 85.10759092255017
- task:
type: Retrieval
dataset:
type: C-MTEB/VideoRetrieval
name: MTEB VideoRetrieval
config: default
split: dev
revision: 58c2597a5943a2ba48f4668c3b90d796283c5639
metrics:
- type: map_at_1
value: 65.60000000000001
- type: map_at_10
value: 74.773
- type: map_at_100
value: 75.128
- type: map_at_1000
value: 75.136
- type: map_at_3
value: 73.05
- type: map_at_5
value: 74.13499999999999
- type: mrr_at_1
value: 65.60000000000001
- type: mrr_at_10
value: 74.773
- type: mrr_at_100
value: 75.128
- type: mrr_at_1000
value: 75.136
- type: mrr_at_3
value: 73.05
- type: mrr_at_5
value: 74.13499999999999
- type: ndcg_at_1
value: 65.60000000000001
- type: ndcg_at_10
value: 78.84299999999999
- type: ndcg_at_100
value: 80.40899999999999
- type: ndcg_at_1000
value: 80.57
- type: ndcg_at_3
value: 75.40599999999999
- type: ndcg_at_5
value: 77.351
- type: precision_at_1
value: 65.60000000000001
- type: precision_at_10
value: 9.139999999999999
- type: precision_at_100
value: 0.984
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 27.400000000000002
- type: precision_at_5
value: 17.380000000000003
- type: recall_at_1
value: 65.60000000000001
- type: recall_at_10
value: 91.4
- type: recall_at_100
value: 98.4
- type: recall_at_1000
value: 99.6
- type: recall_at_3
value: 82.19999999999999
- type: recall_at_5
value: 86.9
- task:
type: Classification
dataset:
type: C-MTEB/waimai-classification
name: MTEB Waimai
config: default
split: test
revision: 339287def212450dcaa9df8c22bf93e9980c7023
metrics:
- type: accuracy
value: 89.47
- type: ap
value: 75.59561751845389
- type: f1
value: 87.95207751382563
  - task:
      type: Clustering
    dataset:
      type: lyon-nlp/alloprof
      name: MTEB AlloProfClusteringP2P
      config: default
      split: test
      revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
    metrics:
    - type: v_measure
      value: 76.05592323841036
  - task:
      type: Clustering
    dataset:
      type: lyon-nlp/alloprof
      name: MTEB AlloProfClusteringS2S
      config: default
      split: test
      revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
    metrics:
    - type: v_measure
      value: 64.51718058866508
  - task:
      type: Reranking
    dataset:
      type: lyon-nlp/mteb-fr-reranking-alloprof-s2p
      name: MTEB AlloprofReranking
      config: default
      split: test
      revision: 666fdacebe0291776e86f29345663dfaf80a0db9
    metrics:
    - type: map
      value: 73.08278490943373
    - type: mrr
      value: 74.66561454570449
  - task:
      type: Retrieval
    dataset:
      type: lyon-nlp/alloprof
      name: MTEB AlloprofRetrieval
      config: default
      split: test
      revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
    metrics:
    - type: map_at_1
      value: 38.912
    - type: map_at_10
      value: 52.437999999999995
    - type: map_at_100
      value: 53.38
    - type: map_at_1000
      value: 53.427
    - type: map_at_3
      value: 48.879
    - type: map_at_5
      value: 50.934000000000005
    - type: mrr_at_1
      value: 44.085
    - type: mrr_at_10
      value: 55.337
    - type: mrr_at_100
      value: 56.016999999999996
    - type: mrr_at_1000
      value: 56.043
    - type: mrr_at_3
      value: 52.55499999999999
    - type: mrr_at_5
      value: 54.20399999999999
    - type: ndcg_at_1
      value: 44.085
    - type: ndcg_at_10
      value: 58.876
    - type: ndcg_at_100
      value: 62.714000000000006
    - type: ndcg_at_1000
      value: 63.721000000000004
    - type: ndcg_at_3
      value: 52.444
    - type: ndcg_at_5
      value: 55.692
    - type: precision_at_1
      value: 44.085
    - type: precision_at_10
      value: 9.21
    - type: precision_at_100
      value: 1.164
    - type: precision_at_1000
      value: 0.128
    - type: precision_at_3
      value: 23.043
    - type: precision_at_5
      value: 15.898000000000001
    - type: recall_at_1
      value: 38.912
    - type: recall_at_10
      value: 75.577
    - type: recall_at_100
      value: 92.038
    - type: recall_at_1000
      value: 99.325
    - type: recall_at_3
      value: 58.592
    - type: recall_at_5
      value: 66.235
  - task:
      type: Classification
    dataset:
      type: mteb/amazon_reviews_multi
      name: MTEB AmazonReviewsClassification (fr)
      config: fr
      split: test
      revision: 1399c76144fd37290681b995c656ef9b2e06e26d
    metrics:
    - type: accuracy
      value: 55.532000000000004
    - type: f1
      value: 52.5783943471605
  - task:
      type: Retrieval
    dataset:
      type: maastrichtlawtech/bsard
      name: MTEB BSARDRetrieval
      config: default
      split: test
      revision: 5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59
    metrics:
    - type: map_at_1
      value: 8.108
    - type: map_at_10
      value: 14.710999999999999
    - type: map_at_100
      value: 15.891
    - type: map_at_1000
      value: 15.983
    - type: map_at_3
      value: 12.237
    - type: map_at_5
      value: 13.679
    - type: mrr_at_1
      value: 8.108
    - type: mrr_at_10
      value: 14.710999999999999
    - type: mrr_at_100
      value: 15.891
    - type: mrr_at_1000
      value: 15.983
    - type: mrr_at_3
      value: 12.237
    - type: mrr_at_5
      value: 13.679
    - type: ndcg_at_1
      value: 8.108
    - type: ndcg_at_10
      value: 18.796
    - type: ndcg_at_100
      value: 25.098
    - type: ndcg_at_1000
      value: 27.951999999999998
    - type: ndcg_at_3
      value: 13.712
    - type: ndcg_at_5
      value: 16.309
    - type: precision_at_1
      value: 8.108
    - type: precision_at_10
      value: 3.198
    - type: precision_at_100
      value: 0.626
    - type: precision_at_1000
      value: 0.086
    - type: precision_at_3
      value: 6.006
    - type: precision_at_5
      value: 4.865
    - type: recall_at_1
      value: 8.108
    - type: recall_at_10
      value: 31.982
    - type: recall_at_100
      value: 62.613
    - type: recall_at_1000
      value: 86.036
    - type: recall_at_3
      value: 18.018
    - type: recall_at_5
      value: 24.324
  - task:
      type: Clustering
    dataset:
      type: lyon-nlp/clustering-hal-s2s
      name: MTEB HALClusteringS2S
      config: default
      split: test
      revision: e06ebbbb123f8144bef1a5d18796f3dec9ae2915
    metrics:
    - type: v_measure
      value: 30.833269778867116
  - task:
      type: Clustering
    dataset:
      type: mlsum
      name: MTEB MLSUMClusteringP2P
      config: default
      split: test
      revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
    metrics:
    - type: v_measure
      value: 50.0281928004713
  - task:
      type: Clustering
    dataset:
      type: mlsum
      name: MTEB MLSUMClusteringS2S
      config: default
      split: test
      revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
    metrics:
    - type: v_measure
      value: 43.699961510636534
- dataset:
config: fr
name: MTEB MTOPDomainClassification (fr)
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
split: test
type: mteb/mtop_domain
metrics:
- type: accuracy
value: 96.68963357344191
- type: f1
value: 96.45175170820961
task:
type: Classification
- dataset:
config: fr
name: MTEB MTOPIntentClassification (fr)
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
split: test
type: mteb/mtop_intent
metrics:
- type: accuracy
value: 87.46946445349202
- type: f1
value: 65.79860440988624
task:
type: Classification
- dataset:
config: fra
name: MTEB MasakhaNEWSClassification (fra)
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
split: test
type: masakhane/masakhanews
metrics:
- type: accuracy
value: 82.60663507109005
- type: f1
value: 77.20462646604777
task:
type: Classification
- dataset:
config: fra
name: MTEB MasakhaNEWSClusteringP2P (fra)
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
split: test
type: masakhane/masakhanews
metrics:
- type: v_measure
value: 60.19311264967803
task:
type: Clustering
- dataset:
config: fra
name: MTEB MasakhaNEWSClusteringS2S (fra)
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
split: test
type: masakhane/masakhanews
metrics:
- type: v_measure
value: 63.6235764409785
task:
type: Clustering
- dataset:
config: fr
name: MTEB MassiveIntentClassification (fr)
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 81.65097511768661
- type: f1
value: 78.77796091490924
task:
type: Classification
- dataset:
config: fr
name: MTEB MassiveScenarioClassification (fr)
revision: 7d571f92784cd94a019292a1f45445077d0ef634
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 86.64425016812373
- type: f1
value: 85.4912728670017
task:
type: Classification
- dataset:
config: fr
name: MTEB MintakaRetrieval (fr)
revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e
split: test
type: jinaai/mintakaqa
metrics:
- type: map_at_1
value: 35.913000000000004
- type: map_at_10
value: 48.147
- type: map_at_100
value: 48.91
- type: map_at_1000
value: 48.949
- type: map_at_3
value: 45.269999999999996
- type: map_at_5
value: 47.115
- type: mrr_at_1
value: 35.913000000000004
- type: mrr_at_10
value: 48.147
- type: mrr_at_100
value: 48.91
- type: mrr_at_1000
value: 48.949
- type: mrr_at_3
value: 45.269999999999996
- type: mrr_at_5
value: 47.115
- type: ndcg_at_1
value: 35.913000000000004
- type: ndcg_at_10
value: 54.03
- type: ndcg_at_100
value: 57.839
- type: ndcg_at_1000
value: 58.925000000000004
- type: ndcg_at_3
value: 48.217999999999996
- type: ndcg_at_5
value: 51.56699999999999
- type: precision_at_1
value: 35.913000000000004
- type: precision_at_10
value: 7.244000000000001
- type: precision_at_100
value: 0.9039999999999999
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 18.905
- type: precision_at_5
value: 12.981000000000002
- type: recall_at_1
value: 35.913000000000004
- type: recall_at_10
value: 72.441
- type: recall_at_100
value: 90.41799999999999
- type: recall_at_1000
value: 99.099
- type: recall_at_3
value: 56.716
- type: recall_at_5
value: 64.90599999999999
task:
type: Retrieval
- dataset:
config: fr
name: MTEB OpusparcusPC (fr)
revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a
split: test
type: GEM/opusparcus
metrics:
- type: cos_sim_accuracy
value: 99.90069513406156
- type: cos_sim_ap
value: 100.0
- type: cos_sim_f1
value: 99.95032290114257
- type: cos_sim_precision
value: 100.0
- type: cos_sim_recall
value: 99.90069513406156
- type: dot_accuracy
value: 99.90069513406156
- type: dot_ap
value: 100.0
- type: dot_f1
value: 99.95032290114257
- type: dot_precision
value: 100.0
- type: dot_recall
value: 99.90069513406156
- type: euclidean_accuracy
value: 99.90069513406156
- type: euclidean_ap
value: 100.0
- type: euclidean_f1
value: 99.95032290114257
- type: euclidean_precision
value: 100.0
- type: euclidean_recall
value: 99.90069513406156
- type: manhattan_accuracy
value: 99.90069513406156
- type: manhattan_ap
value: 100.0
- type: manhattan_f1
value: 99.95032290114257
- type: manhattan_precision
value: 100.0
- type: manhattan_recall
value: 99.90069513406156
- type: max_accuracy
value: 99.90069513406156
- type: max_ap
value: 100.0
- type: max_f1
value: 99.95032290114257
task:
type: PairClassification
- dataset:
config: fr
name: MTEB PawsX (fr)
revision: 8a04d940a42cd40658986fdd8e3da561533a3646
split: test
type: paws-x
metrics:
- type: cos_sim_accuracy
value: 75.25
- type: cos_sim_ap
value: 80.86376001270014
- type: cos_sim_f1
value: 73.65945437441204
- type: cos_sim_precision
value: 64.02289452166802
- type: cos_sim_recall
value: 86.71096345514951
- type: dot_accuracy
value: 75.25
- type: dot_ap
value: 80.93686107633002
- type: dot_f1
value: 73.65945437441204
- type: dot_precision
value: 64.02289452166802
- type: dot_recall
value: 86.71096345514951
- type: euclidean_accuracy
value: 75.25
- type: euclidean_ap
value: 80.86379136218862
- type: euclidean_f1
value: 73.65945437441204
- type: euclidean_precision
value: 64.02289452166802
- type: euclidean_recall
value: 86.71096345514951
- type: manhattan_accuracy
value: 75.3
- type: manhattan_ap
value: 80.87826606097734
- type: manhattan_f1
value: 73.68421052631581
- type: manhattan_precision
value: 64.0
- type: manhattan_recall
value: 86.82170542635659
- type: max_accuracy
value: 75.3
- type: max_ap
value: 80.93686107633002
- type: max_f1
value: 73.68421052631581
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICKFr
revision: e077ab4cf4774a1e36d86d593b150422fafd8e8a
split: test
type: Lajavaness/SICK-fr
metrics:
- type: cos_sim_pearson
value: 81.42349425981143
- type: cos_sim_spearman
value: 78.90454327031226
- type: euclidean_pearson
value: 78.39086497435166
- type: euclidean_spearman
value: 78.9046133980509
- type: manhattan_pearson
value: 78.63743094286502
- type: manhattan_spearman
value: 79.12136348449269
task:
type: STS
- dataset:
config: fr
name: MTEB STS22 (fr)
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cos_sim_pearson
value: 81.452697919749
- type: cos_sim_spearman
value: 82.58116836039301
- type: euclidean_pearson
value: 81.04038478932786
- type: euclidean_spearman
value: 82.58116836039301
- type: manhattan_pearson
value: 81.37075396187771
- type: manhattan_spearman
value: 82.73678231355368
task:
type: STS
- dataset:
config: fr
name: MTEB STSBenchmarkMultilingualSTS (fr)
revision: 93d57ef91790589e3ce9c365164337a8a78b7632
split: test
type: stsb_multi_mt
metrics:
- type: cos_sim_pearson
value: 85.7419764013806
- type: cos_sim_spearman
value: 85.46085808849622
- type: euclidean_pearson
value: 83.70449639870063
- type: euclidean_spearman
value: 85.46159013076233
- type: manhattan_pearson
value: 83.95259510313929
- type: manhattan_spearman
value: 85.8029724659458
task:
type: STS
- dataset:
config: default
name: MTEB SummEvalFr
revision: b385812de6a9577b6f4d0f88c6a6e35395a94054
split: test
type: lyon-nlp/summarization-summeval-fr-p2p
metrics:
- type: cos_sim_pearson
value: 32.61063271753325
- type: cos_sim_spearman
value: 31.454589417353603
- type: dot_pearson
value: 32.6106288643431
- type: dot_spearman
value: 31.454589417353603
task:
type: Summarization
- dataset:
config: default
name: MTEB SyntecReranking
revision: b205c5084a0934ce8af14338bf03feb19499c84d
split: test
type: lyon-nlp/mteb-fr-reranking-syntec-s2p
metrics:
- type: map
value: 84.31666666666666
- type: mrr
value: 84.31666666666666
task:
type: Reranking
- dataset:
config: default
name: MTEB SyntecRetrieval
revision: 77f7e271bf4a92b24fce5119f3486b583ca016ff
split: test
type: lyon-nlp/mteb-fr-retrieval-syntec-s2p
metrics:
- type: map_at_1
value: 63.0
- type: map_at_10
value: 73.471
- type: map_at_100
value: 73.87
- type: map_at_1000
value: 73.87
- type: map_at_3
value: 70.5
- type: map_at_5
value: 73.05
- type: mrr_at_1
value: 63.0
- type: mrr_at_10
value: 73.471
- type: mrr_at_100
value: 73.87
- type: mrr_at_1000
value: 73.87
- type: mrr_at_3
value: 70.5
- type: mrr_at_5
value: 73.05
- type: ndcg_at_1
value: 63.0
- type: ndcg_at_10
value: 78.255
- type: ndcg_at_100
value: 79.88
- type: ndcg_at_1000
value: 79.88
- type: ndcg_at_3
value: 72.702
- type: ndcg_at_5
value: 77.264
- type: precision_at_1
value: 63.0
- type: precision_at_10
value: 9.3
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 26.333000000000002
- type: precision_at_5
value: 18.0
- type: recall_at_1
value: 63.0
- type: recall_at_10
value: 93.0
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 79.0
- type: recall_at_5
value: 90.0
task:
type: Retrieval
- dataset:
config: fr
name: MTEB XPQARetrieval (fr)
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
split: test
type: jinaai/xpqa
metrics:
- type: map_at_1
value: 40.338
- type: map_at_10
value: 61.927
- type: map_at_100
value: 63.361999999999995
- type: map_at_1000
value: 63.405
- type: map_at_3
value: 55.479
- type: map_at_5
value: 59.732
- type: mrr_at_1
value: 63.551
- type: mrr_at_10
value: 71.006
- type: mrr_at_100
value: 71.501
- type: mrr_at_1000
value: 71.509
- type: mrr_at_3
value: 69.07
- type: mrr_at_5
value: 70.165
- type: ndcg_at_1
value: 63.551
- type: ndcg_at_10
value: 68.297
- type: ndcg_at_100
value: 73.13199999999999
- type: ndcg_at_1000
value: 73.751
- type: ndcg_at_3
value: 62.999
- type: ndcg_at_5
value: 64.89
- type: precision_at_1
value: 63.551
- type: precision_at_10
value: 15.661
- type: precision_at_100
value: 1.9789999999999999
- type: precision_at_1000
value: 0.207
- type: precision_at_3
value: 38.273
- type: precision_at_5
value: 27.61
- type: recall_at_1
value: 40.338
- type: recall_at_10
value: 77.267
- type: recall_at_100
value: 95.892
- type: recall_at_1000
value: 99.75500000000001
- type: recall_at_3
value: 60.36
- type: recall_at_5
value: 68.825
task:
type: Retrieval
- dataset:
config: default
name: MTEB 8TagsClustering
revision: None
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: v_measure
value: 51.36126303874126
task:
type: Clustering
- dataset:
config: default
name: MTEB AllegroReviews
revision: None
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 67.13717693836979
- type: f1
value: 57.27609848003782
task:
type: Classification
- dataset:
config: default
name: MTEB ArguAna-PL
revision: 63fc86750af76253e8c760fc9e534bbf24d260a2
split: test
type: clarin-knext/arguana-pl
metrics:
- type: map_at_1
value: 35.276999999999994
- type: map_at_10
value: 51.086
- type: map_at_100
value: 51.788000000000004
- type: map_at_1000
value: 51.791
- type: map_at_3
value: 46.147
- type: map_at_5
value: 49.078
- type: mrr_at_1
value: 35.917
- type: mrr_at_10
value: 51.315999999999995
- type: mrr_at_100
value: 52.018
- type: mrr_at_1000
value: 52.022
- type: mrr_at_3
value: 46.349000000000004
- type: mrr_at_5
value: 49.297000000000004
- type: ndcg_at_1
value: 35.276999999999994
- type: ndcg_at_10
value: 59.870999999999995
- type: ndcg_at_100
value: 62.590999999999994
- type: ndcg_at_1000
value: 62.661
- type: ndcg_at_3
value: 49.745
- type: ndcg_at_5
value: 55.067
- type: precision_at_1
value: 35.276999999999994
- type: precision_at_10
value: 8.791
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 20.057
- type: precision_at_5
value: 14.637
- type: recall_at_1
value: 35.276999999999994
- type: recall_at_10
value: 87.909
- type: recall_at_100
value: 99.14699999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 60.171
- type: recall_at_5
value: 73.18599999999999
task:
type: Retrieval
- dataset:
config: default
name: MTEB CBD
revision: None
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 78.03000000000002
- type: ap
value: 29.12548553897622
- type: f1
value: 66.54857118886073
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: None
split: test
type: PL-MTEB/cdsce-pairclassification
metrics:
- type: cos_sim_accuracy
value: 89.0
- type: cos_sim_ap
value: 76.75437826834582
- type: cos_sim_f1
value: 66.4850136239782
- type: cos_sim_precision
value: 68.92655367231639
- type: cos_sim_recall
value: 64.21052631578948
- type: dot_accuracy
value: 89.0
- type: dot_ap
value: 76.75437826834582
- type: dot_f1
value: 66.4850136239782
- type: dot_precision
value: 68.92655367231639
- type: dot_recall
value: 64.21052631578948
- type: euclidean_accuracy
value: 89.0
- type: euclidean_ap
value: 76.75437826834582
- type: euclidean_f1
value: 66.4850136239782
- type: euclidean_precision
value: 68.92655367231639
- type: euclidean_recall
value: 64.21052631578948
- type: manhattan_accuracy
value: 89.0
- type: manhattan_ap
value: 76.66074220647083
- type: manhattan_f1
value: 66.47058823529412
- type: manhattan_precision
value: 75.33333333333333
- type: manhattan_recall
value: 59.473684210526315
- type: max_accuracy
value: 89.0
- type: max_ap
value: 76.75437826834582
- type: max_f1
value: 66.4850136239782
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: None
split: test
type: PL-MTEB/cdscr-sts
metrics:
- type: cos_sim_pearson
value: 93.12903172428328
- type: cos_sim_spearman
value: 92.66381487060741
- type: euclidean_pearson
value: 90.37278396708922
- type: euclidean_spearman
value: 92.66381487060741
- type: manhattan_pearson
value: 90.32503296540962
- type: manhattan_spearman
value: 92.6902938354313
task:
type: STS
- dataset:
config: default
name: MTEB DBPedia-PL
revision: 76afe41d9af165cc40999fcaa92312b8b012064a
split: test
type: clarin-knext/dbpedia-pl
metrics:
- type: map_at_1
value: 8.83
- type: map_at_10
value: 18.326
- type: map_at_100
value: 26.496
- type: map_at_1000
value: 28.455000000000002
- type: map_at_3
value: 12.933
- type: map_at_5
value: 15.168000000000001
- type: mrr_at_1
value: 66.0
- type: mrr_at_10
value: 72.76700000000001
- type: mrr_at_100
value: 73.203
- type: mrr_at_1000
value: 73.219
- type: mrr_at_3
value: 71.458
- type: mrr_at_5
value: 72.246
- type: ndcg_at_1
value: 55.375
- type: ndcg_at_10
value: 41.3
- type: ndcg_at_100
value: 45.891
- type: ndcg_at_1000
value: 52.905
- type: ndcg_at_3
value: 46.472
- type: ndcg_at_5
value: 43.734
- type: precision_at_1
value: 66.0
- type: precision_at_10
value: 33.074999999999996
- type: precision_at_100
value: 11.094999999999999
- type: precision_at_1000
value: 2.374
- type: precision_at_3
value: 48.583
- type: precision_at_5
value: 42.0
- type: recall_at_1
value: 8.83
- type: recall_at_10
value: 22.587
- type: recall_at_100
value: 50.61600000000001
- type: recall_at_1000
value: 73.559
- type: recall_at_3
value: 13.688
- type: recall_at_5
value: 16.855
task:
type: Retrieval
- dataset:
config: default
name: MTEB FiQA-PL
revision: 2e535829717f8bf9dc829b7f911cc5bbd4e6608e
split: test
type: clarin-knext/fiqa-pl
metrics:
- type: map_at_1
value: 20.587
- type: map_at_10
value: 33.095
- type: map_at_100
value: 35.24
- type: map_at_1000
value: 35.429
- type: map_at_3
value: 28.626
- type: map_at_5
value: 31.136999999999997
- type: mrr_at_1
value: 40.586
- type: mrr_at_10
value: 49.033
- type: mrr_at_100
value: 49.952999999999996
- type: mrr_at_1000
value: 49.992
- type: mrr_at_3
value: 46.553
- type: mrr_at_5
value: 48.035
- type: ndcg_at_1
value: 40.586
- type: ndcg_at_10
value: 41.046
- type: ndcg_at_100
value: 48.586
- type: ndcg_at_1000
value: 51.634
- type: ndcg_at_3
value: 36.773
- type: ndcg_at_5
value: 38.389
- type: precision_at_1
value: 40.586
- type: precision_at_10
value: 11.466
- type: precision_at_100
value: 1.909
- type: precision_at_1000
value: 0.245
- type: precision_at_3
value: 24.434
- type: precision_at_5
value: 18.426000000000002
- type: recall_at_1
value: 20.587
- type: recall_at_10
value: 47.986000000000004
- type: recall_at_100
value: 75.761
- type: recall_at_1000
value: 94.065
- type: recall_at_3
value: 33.339
- type: recall_at_5
value: 39.765
task:
type: Retrieval
- dataset:
config: default
name: MTEB HotpotQA-PL
revision: a0bd479ac97b4ccb5bd6ce320c415d0bb4beb907
split: test
type: clarin-knext/hotpotqa-pl
metrics:
- type: map_at_1
value: 40.878
- type: map_at_10
value: 58.775999999999996
- type: map_at_100
value: 59.632
- type: map_at_1000
value: 59.707
- type: map_at_3
value: 56.074
- type: map_at_5
value: 57.629
- type: mrr_at_1
value: 81.756
- type: mrr_at_10
value: 86.117
- type: mrr_at_100
value: 86.299
- type: mrr_at_1000
value: 86.30600000000001
- type: mrr_at_3
value: 85.345
- type: mrr_at_5
value: 85.832
- type: ndcg_at_1
value: 81.756
- type: ndcg_at_10
value: 67.608
- type: ndcg_at_100
value: 70.575
- type: ndcg_at_1000
value: 71.99600000000001
- type: ndcg_at_3
value: 63.723
- type: ndcg_at_5
value: 65.70700000000001
- type: precision_at_1
value: 81.756
- type: precision_at_10
value: 13.619
- type: precision_at_100
value: 1.5939999999999999
- type: precision_at_1000
value: 0.178
- type: precision_at_3
value: 39.604
- type: precision_at_5
value: 25.332
- type: recall_at_1
value: 40.878
- type: recall_at_10
value: 68.096
- type: recall_at_100
value: 79.696
- type: recall_at_1000
value: 89.082
- type: recall_at_3
value: 59.406000000000006
- type: recall_at_5
value: 63.329
task:
type: Retrieval
- dataset:
config: default
name: MTEB MSMARCO-PL
revision: 8634c07806d5cce3a6138e260e59b81760a0a640
split: test
type: clarin-knext/msmarco-pl
metrics:
- type: map_at_1
value: 2.1839999999999997
- type: map_at_10
value: 11.346
- type: map_at_100
value: 30.325000000000003
- type: map_at_1000
value: 37.806
- type: map_at_3
value: 4.842
- type: map_at_5
value: 6.891
- type: mrr_at_1
value: 86.047
- type: mrr_at_10
value: 89.14699999999999
- type: mrr_at_100
value: 89.46600000000001
- type: mrr_at_1000
value: 89.46600000000001
- type: mrr_at_3
value: 89.14699999999999
- type: mrr_at_5
value: 89.14699999999999
- type: ndcg_at_1
value: 67.829
- type: ndcg_at_10
value: 62.222
- type: ndcg_at_100
value: 55.337
- type: ndcg_at_1000
value: 64.076
- type: ndcg_at_3
value: 68.12700000000001
- type: ndcg_at_5
value: 64.987
- type: precision_at_1
value: 86.047
- type: precision_at_10
value: 69.535
- type: precision_at_100
value: 32.93
- type: precision_at_1000
value: 6.6049999999999995
- type: precision_at_3
value: 79.845
- type: precision_at_5
value: 75.349
- type: recall_at_1
value: 2.1839999999999997
- type: recall_at_10
value: 12.866
- type: recall_at_100
value: 43.505
- type: recall_at_1000
value: 72.366
- type: recall_at_3
value: 4.947
- type: recall_at_5
value: 7.192
task:
type: Retrieval
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 80.75319435104238
- type: f1
value: 77.58961444860606
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: 7d571f92784cd94a019292a1f45445077d0ef634
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 85.54472091459313
- type: f1
value: 84.29498563572106
task:
type: Classification
- dataset:
config: default
name: MTEB NFCorpus-PL
revision: 9a6f9567fda928260afed2de480d79c98bf0bec0
split: test
type: clarin-knext/nfcorpus-pl
metrics:
- type: map_at_1
value: 4.367
- type: map_at_10
value: 10.38
- type: map_at_100
value: 13.516
- type: map_at_1000
value: 14.982000000000001
- type: map_at_3
value: 7.367
- type: map_at_5
value: 8.59
- type: mrr_at_1
value: 41.486000000000004
- type: mrr_at_10
value: 48.886
- type: mrr_at_100
value: 49.657000000000004
- type: mrr_at_1000
value: 49.713
- type: mrr_at_3
value: 46.904
- type: mrr_at_5
value: 48.065000000000005
- type: ndcg_at_1
value: 40.402
- type: ndcg_at_10
value: 30.885
- type: ndcg_at_100
value: 28.393
- type: ndcg_at_1000
value: 37.428
- type: ndcg_at_3
value: 35.394999999999996
- type: ndcg_at_5
value: 33.391999999999996
- type: precision_at_1
value: 41.486000000000004
- type: precision_at_10
value: 23.437
- type: precision_at_100
value: 7.638
- type: precision_at_1000
value: 2.0389999999999997
- type: precision_at_3
value: 32.817
- type: precision_at_5
value: 28.915999999999997
- type: recall_at_1
value: 4.367
- type: recall_at_10
value: 14.655000000000001
- type: recall_at_100
value: 29.665999999999997
- type: recall_at_1000
value: 62.073
- type: recall_at_3
value: 8.51
- type: recall_at_5
value: 10.689
task:
type: Retrieval
- dataset:
config: default
name: MTEB NQ-PL
revision: f171245712cf85dd4700b06bef18001578d0ca8d
split: test
type: clarin-knext/nq-pl
metrics:
- type: map_at_1
value: 28.616000000000003
- type: map_at_10
value: 41.626000000000005
- type: map_at_100
value: 42.689
- type: map_at_1000
value: 42.733
- type: map_at_3
value: 37.729
- type: map_at_5
value: 39.879999999999995
- type: mrr_at_1
value: 32.068000000000005
- type: mrr_at_10
value: 44.029
- type: mrr_at_100
value: 44.87
- type: mrr_at_1000
value: 44.901
- type: mrr_at_3
value: 40.687
- type: mrr_at_5
value: 42.625
- type: ndcg_at_1
value: 32.068000000000005
- type: ndcg_at_10
value: 48.449999999999996
- type: ndcg_at_100
value: 53.13
- type: ndcg_at_1000
value: 54.186
- type: ndcg_at_3
value: 40.983999999999995
- type: ndcg_at_5
value: 44.628
- type: precision_at_1
value: 32.068000000000005
- type: precision_at_10
value: 7.9750000000000005
- type: precision_at_100
value: 1.061
- type: precision_at_1000
value: 0.116
- type: precision_at_3
value: 18.404999999999998
- type: precision_at_5
value: 13.111
- type: recall_at_1
value: 28.616000000000003
- type: recall_at_10
value: 66.956
- type: recall_at_100
value: 87.657
- type: recall_at_1000
value: 95.548
- type: recall_at_3
value: 47.453
- type: recall_at_5
value: 55.87800000000001
task:
type: Retrieval
- dataset:
config: default
name: MTEB PAC
revision: None
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 69.04141326382856
- type: ap
value: 77.47589122111044
- type: f1
value: 66.6332277374775
task:
type: Classification
- dataset:
config: default
name: MTEB PPC
revision: None
split: test
type: PL-MTEB/ppc-pairclassification
metrics:
- type: cos_sim_accuracy
value: 86.4
- type: cos_sim_ap
value: 94.1044939667201
- type: cos_sim_f1
value: 88.78048780487805
- type: cos_sim_precision
value: 87.22044728434504
- type: cos_sim_recall
value: 90.39735099337747
- type: dot_accuracy
value: 86.4
- type: dot_ap
value: 94.1044939667201
- type: dot_f1
value: 88.78048780487805
- type: dot_precision
value: 87.22044728434504
- type: dot_recall
value: 90.39735099337747
- type: euclidean_accuracy
value: 86.4
- type: euclidean_ap
value: 94.1044939667201
- type: euclidean_f1
value: 88.78048780487805
- type: euclidean_precision
value: 87.22044728434504
- type: euclidean_recall
value: 90.39735099337747
- type: manhattan_accuracy
value: 86.4
- type: manhattan_ap
value: 94.11438365697387
- type: manhattan_f1
value: 88.77968877968877
- type: manhattan_precision
value: 87.84440842787681
- type: manhattan_recall
value: 89.73509933774835
- type: max_accuracy
value: 86.4
- type: max_ap
value: 94.11438365697387
- type: max_f1
value: 88.78048780487805
task:
type: PairClassification
- dataset:
config: default
name: MTEB PSC
revision: None
split: test
type: PL-MTEB/psc-pairclassification
metrics:
- type: cos_sim_accuracy
value: 97.86641929499072
- type: cos_sim_ap
value: 99.36904211868182
- type: cos_sim_f1
value: 96.56203288490283
- type: cos_sim_precision
value: 94.72140762463343
- type: cos_sim_recall
value: 98.47560975609755
- type: dot_accuracy
value: 97.86641929499072
- type: dot_ap
value: 99.36904211868183
- type: dot_f1
value: 96.56203288490283
- type: dot_precision
value: 94.72140762463343
- type: dot_recall
value: 98.47560975609755
- type: euclidean_accuracy
value: 97.86641929499072
- type: euclidean_ap
value: 99.36904211868183
- type: euclidean_f1
value: 96.56203288490283
- type: euclidean_precision
value: 94.72140762463343
- type: euclidean_recall
value: 98.47560975609755
- type: manhattan_accuracy
value: 98.14471243042672
- type: manhattan_ap
value: 99.43359540492416
- type: manhattan_f1
value: 96.98795180722892
- type: manhattan_precision
value: 95.83333333333334
- type: manhattan_recall
value: 98.17073170731707
- type: max_accuracy
value: 98.14471243042672
- type: max_ap
value: 99.43359540492416
- type: max_f1
value: 96.98795180722892
task:
type: PairClassification
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: None
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 89.39058171745152
- type: f1
value: 86.8552093529568
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: None
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 74.97975708502024
- type: f1
value: 58.73081628832407
task:
type: Classification
- dataset:
config: default
name: MTEB Quora-PL
revision: 0be27e93455051e531182b85e85e425aba12e9d4
split: test
type: clarin-knext/quora-pl
metrics:
- type: map_at_1
value: 64.917
- type: map_at_10
value: 78.74600000000001
- type: map_at_100
value: 79.501
- type: map_at_1000
value: 79.524
- type: map_at_3
value: 75.549
- type: map_at_5
value: 77.495
- type: mrr_at_1
value: 74.9
- type: mrr_at_10
value: 82.112
- type: mrr_at_100
value: 82.314
- type: mrr_at_1000
value: 82.317
- type: mrr_at_3
value: 80.745
- type: mrr_at_5
value: 81.607
- type: ndcg_at_1
value: 74.83999999999999
- type: ndcg_at_10
value: 83.214
- type: ndcg_at_100
value: 84.997
- type: ndcg_at_1000
value: 85.207
- type: ndcg_at_3
value: 79.547
- type: ndcg_at_5
value: 81.46600000000001
- type: precision_at_1
value: 74.83999999999999
- type: precision_at_10
value: 12.822
- type: precision_at_100
value: 1.506
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 34.903
- type: precision_at_5
value: 23.16
- type: recall_at_1
value: 64.917
- type: recall_at_10
value: 92.27199999999999
- type: recall_at_100
value: 98.715
- type: recall_at_1000
value: 99.854
- type: recall_at_3
value: 82.04599999999999
- type: recall_at_5
value: 87.2
task:
type: Retrieval
- dataset:
config: default
name: MTEB SCIDOCS-PL
revision: 45452b03f05560207ef19149545f168e596c9337
split: test
type: clarin-knext/scidocs-pl
metrics:
- type: map_at_1
value: 3.51
- type: map_at_10
value: 9.046999999999999
- type: map_at_100
value: 10.823
- type: map_at_1000
value: 11.144
- type: map_at_3
value: 6.257
- type: map_at_5
value: 7.648000000000001
- type: mrr_at_1
value: 17.299999999999997
- type: mrr_at_10
value: 27.419
- type: mrr_at_100
value: 28.618
- type: mrr_at_1000
value: 28.685
- type: mrr_at_3
value: 23.817
- type: mrr_at_5
value: 25.927
- type: ndcg_at_1
value: 17.299999999999997
- type: ndcg_at_10
value: 16.084
- type: ndcg_at_100
value: 23.729
- type: ndcg_at_1000
value: 29.476999999999997
- type: ndcg_at_3
value: 14.327000000000002
- type: ndcg_at_5
value: 13.017999999999999
- type: precision_at_1
value: 17.299999999999997
- type: precision_at_10
value: 8.63
- type: precision_at_100
value: 1.981
- type: precision_at_1000
value: 0.336
- type: precision_at_3
value: 13.4
- type: precision_at_5
value: 11.700000000000001
- type: recall_at_1
value: 3.51
- type: recall_at_10
value: 17.518
- type: recall_at_100
value: 40.275
- type: recall_at_1000
value: 68.203
- type: recall_at_3
value: 8.155
- type: recall_at_5
value: 11.875
task:
type: Retrieval
- dataset:
config: default
name: MTEB SICK-E-PL
revision: None
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics:
- type: cos_sim_accuracy
value: 86.30248675091724
- type: cos_sim_ap
value: 83.6756734006714
- type: cos_sim_f1
value: 74.97367497367497
- type: cos_sim_precision
value: 73.91003460207612
- type: cos_sim_recall
value: 76.06837606837607
- type: dot_accuracy
value: 86.30248675091724
- type: dot_ap
value: 83.6756734006714
- type: dot_f1
value: 74.97367497367497
- type: dot_precision
value: 73.91003460207612
- type: dot_recall
value: 76.06837606837607
- type: euclidean_accuracy
value: 86.30248675091724
- type: euclidean_ap
value: 83.67566984333091
- type: euclidean_f1
value: 74.97367497367497
- type: euclidean_precision
value: 73.91003460207612
- type: euclidean_recall
value: 76.06837606837607
- type: manhattan_accuracy
value: 86.28210354667753
- type: manhattan_ap
value: 83.64216119130171
- type: manhattan_f1
value: 74.92152075340078
- type: manhattan_precision
value: 73.4107997265892
- type: manhattan_recall
value: 76.49572649572649
- type: max_accuracy
value: 86.30248675091724
- type: max_ap
value: 83.6756734006714
- type: max_f1
value: 74.97367497367497
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: None
split: test
type: PL-MTEB/sickr-pl-sts
metrics:
- type: cos_sim_pearson
value: 82.23295940859121
- type: cos_sim_spearman
value: 78.89329160768719
- type: euclidean_pearson
value: 79.56019107076818
- type: euclidean_spearman
value: 78.89330209904084
- type: manhattan_pearson
value: 79.76098513973719
- type: manhattan_spearman
value: 79.05490162570123
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cos_sim_pearson
value: 37.732606308062486
- type: cos_sim_spearman
value: 41.01645667030284
- type: euclidean_pearson
value: 26.61722556367085
- type: euclidean_spearman
value: 41.01645667030284
- type: manhattan_pearson
value: 26.60917378970807
- type: manhattan_spearman
value: 41.51335727617614
task:
type: STS
- dataset:
config: default
name: MTEB SciFact-PL
revision: 47932a35f045ef8ed01ba82bf9ff67f6e109207e
split: test
type: clarin-knext/scifact-pl
metrics:
- type: map_at_1
value: 54.31700000000001
- type: map_at_10
value: 65.564
- type: map_at_100
value: 66.062
- type: map_at_1000
value: 66.08699999999999
- type: map_at_3
value: 62.592999999999996
- type: map_at_5
value: 63.888
- type: mrr_at_1
value: 56.99999999999999
- type: mrr_at_10
value: 66.412
- type: mrr_at_100
value: 66.85900000000001
- type: mrr_at_1000
value: 66.88
- type: mrr_at_3
value: 64.22200000000001
- type: mrr_at_5
value: 65.206
- type: ndcg_at_1
value: 56.99999999999999
- type: ndcg_at_10
value: 70.577
- type: ndcg_at_100
value: 72.879
- type: ndcg_at_1000
value: 73.45
- type: ndcg_at_3
value: 65.5
- type: ndcg_at_5
value: 67.278
- type: precision_at_1
value: 56.99999999999999
- type: precision_at_10
value: 9.667
- type: precision_at_100
value: 1.083
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 26.0
- type: precision_at_5
value: 16.933
- type: recall_at_1
value: 54.31700000000001
- type: recall_at_10
value: 85.056
- type: recall_at_100
value: 95.667
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 71.0
- type: recall_at_5
value: 75.672
task:
type: Retrieval
- dataset:
config: default
name: MTEB TRECCOVID-PL
revision: 81bcb408f33366c2a20ac54adafad1ae7e877fdd
split: test
type: clarin-knext/trec-covid-pl
metrics:
- type: map_at_1
value: 0.245
- type: map_at_10
value: 2.051
- type: map_at_100
value: 12.009
- type: map_at_1000
value: 27.448
- type: map_at_3
value: 0.721
- type: map_at_5
value: 1.13
- type: mrr_at_1
value: 88.0
- type: mrr_at_10
value: 93.0
- type: mrr_at_100
value: 93.0
- type: mrr_at_1000
value: 93.0
- type: mrr_at_3
value: 93.0
- type: mrr_at_5
value: 93.0
- type: ndcg_at_1
value: 85.0
- type: ndcg_at_10
value: 80.303
- type: ndcg_at_100
value: 61.23499999999999
- type: ndcg_at_1000
value: 52.978
- type: ndcg_at_3
value: 84.419
- type: ndcg_at_5
value: 82.976
- type: precision_at_1
value: 88.0
- type: precision_at_10
value: 83.39999999999999
- type: precision_at_100
value: 61.96
- type: precision_at_1000
value: 22.648
- type: precision_at_3
value: 89.333
- type: precision_at_5
value: 87.2
- type: recall_at_1
value: 0.245
- type: recall_at_10
value: 2.193
- type: recall_at_100
value: 14.938
- type: recall_at_1000
value: 48.563
- type: recall_at_3
value: 0.738
- type: recall_at_5
value: 1.173
task:
type: Retrieval
---
## gte-Qwen2-7B-instruct
**gte-Qwen2-7B-instruct** is the latest model in the gte (General Text Embedding) family, and it ranks **No. 1** in both the English and Chinese evaluations of the [Massive Text Embedding Benchmark (MTEB)](https://huggingface.co/spaces/mteb/leaderboard) (as of June 16, 2024).
Recently, the [**Qwen team**](https://huggingface.co/Qwen) released the Qwen2 series, and we trained the **gte-Qwen2-7B-instruct** model on the [Qwen2-7B](https://huggingface.co/Qwen/Qwen2-7B) LLM. Compared to [gte-Qwen1.5-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen1.5-7B-instruct), **gte-Qwen2-7B-instruct** uses the same training data and training strategies during the finetuning stage; the only difference is the upgraded Qwen2-7B base model. Given the improvements of the Qwen2 series over Qwen1.5, we expect consistent performance gains in the embedding model as well.
The model incorporates several key advancements:
- Integration of bidirectional attention mechanisms, enriching its contextual understanding.
- Instruction tuning, applied only on the query side for streamlined efficiency.
- Comprehensive training across a vast, multilingual text corpus spanning diverse domains and scenarios. This training leverages both weakly supervised and supervised data, ensuring the model's applicability across numerous languages and a wide array of downstream tasks.
## Model Information
- Model Size: 7B
- Embedding Dimension: 3584
- Max Input Tokens: 32k
## Requirements
```
transformers>=4.39.2
flash_attn>=2.5.6
```
## Usage
### Sentence Transformers
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("Alibaba-NLP/gte-Qwen2-7B-instruct", trust_remote_code=True)
# In case you want to reduce the maximum length:
model.max_seq_length = 8192
queries = [
"how much protein should a female eat",
"summit define",
]
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments.",
]
query_embeddings = model.encode(queries, prompt_name="query")
document_embeddings = model.encode(documents)
scores = (query_embeddings @ document_embeddings.T) * 100
print(scores.tolist())
```
See [config_sentence_transformers.json](config_sentence_transformers.json) for all pre-built prompt names. Alternatively, you can use `model.encode(queries, prompt="Instruct: ...\nQuery: ")` to pass a custom prompt of your choice.
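For example, a custom retrieval instruction can be passed directly (the instruction wording below is illustrative — any one-sentence task description works):
```python
# Hypothetical instruction; adjust the task description to your use case.
custom_prompt = "Instruct: Given a question, retrieve passages that answer it\nQuery: "
query_embeddings = model.encode(queries, prompt=custom_prompt)
```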
### Transformers
```python
import torch
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def last_token_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
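    # Pool each sequence by its final non-padding token. If every sequence's
    # last position is attended to (mask == 1), the batch is left-padded and the
    # last column already holds the final tokens; otherwise gather per row.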
left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0])
if left_padding:
return last_hidden_states[:, -1]
else:
sequence_lengths = attention_mask.sum(dim=1) - 1
batch_size = last_hidden_states.shape[0]
return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]
def get_detailed_instruct(task_description: str, query: str) -> str:
return f'Instruct: {task_description}\nQuery: {query}'
# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
get_detailed_instruct(task, 'how much protein should a female eat'),
get_detailed_instruct(task, 'summit define')
]
# No need to add instruction for retrieval documents
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]
input_texts = queries + documents
tokenizer = AutoTokenizer.from_pretrained('Alibaba-NLP/gte-Qwen2-7B-instruct', trust_remote_code=True)
model = AutoModel.from_pretrained('Alibaba-NLP/gte-Qwen2-7B-instruct', trust_remote_code=True)
max_length = 8192
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=max_length, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = last_token_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
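The snippet above runs on CPU by default. For a 7B model, GPU inference in half precision is more practical — a minimal sketch, assuming a CUDA device (these lines are an addition, not part of the original example):
```python
# Sketch: reload in fp16 on a CUDA GPU (assumes sufficient GPU memory).
model = AutoModel.from_pretrained(
    'Alibaba-NLP/gte-Qwen2-7B-instruct',
    trust_remote_code=True,
    torch_dtype=torch.float16,
).to('cuda')
batch_dict = {k: v.to('cuda') for k, v in batch_dict.items()}
outputs = model(**batch_dict)
```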
## Evaluation
### MTEB & C-MTEB
You can use [scripts/eval_mteb.py](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct/blob/main/scripts/eval_mteb.py) to reproduce the following results of **gte-Qwen2-7B-instruct** on MTEB (English) and C-MTEB (Chinese):
| Model Name | MTEB(56) | C-MTEB(35) | MTEB-fr(26) | MTEB-pl(26) |
|:----:|:---------:|:----------:|:----------:|:----------:|
| [bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 64.23 | - | - | - |
| [bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 63.55 | - | - | - |
| [gte-large-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) | 65.39 | - | - | - |
| [gte-base-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5) | 64.11 | - | - | - |
| [mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) | 64.68 | - | - | - |
| [acge_text_embedding](https://huggingface.co/aspire/acge_text_embedding) | - | 69.07 | - | - |
| [stella-mrl-large-zh-v3.5-1792d](https://huggingface.co/infgrad/stella-mrl-large-zh-v3.5-1792d) | - | 68.55 | - | - |
| [gte-large-zh](https://huggingface.co/thenlper/gte-large-zh) | - | 66.72 | - | - |
| [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 59.45 | 56.21 | - | - |
| [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 61.50 | 58.81 | - | - |
| [e5-mistral-7b-instruct](https://huggingface.co/intfloat/e5-mistral-7b-instruct) | 66.63 | 60.81 | - | - |
| [gte-Qwen1.5-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen1.5-7B-instruct) | 67.34 | 69.52 | - | - |
| [NV-Embed-v1](https://huggingface.co/nvidia/NV-Embed-v1) | 69.32 | - | - | - |
| [**gte-Qwen2-7B-instruct**](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) | **70.24** | **72.05** | **68.25** | **67.86** |
| [gte-Qwen2-1.5B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct) | 67.16 | 67.65 | 66.60 | 64.04 |
### GTE Models
The gte series has consistently offered two types of models: encoder-only models (based on the BERT architecture) and decoder-only models (based on the LLM architecture).
| Models | Language | Max Sequence Length | Dimension | Model Size (Memory Usage, fp32) |
|:-------------------------------------------------------------------------------------:|:--------:|:-----: |:---------:|:-------------------------------:|
| [GTE-large-zh](https://huggingface.co/thenlper/gte-large-zh) | Chinese | 512 | 1024 | 1.25GB |
| [GTE-base-zh](https://huggingface.co/thenlper/gte-base-zh) | Chinese | 512 | 512 | 0.41GB |
| [GTE-small-zh](https://huggingface.co/thenlper/gte-small-zh) | Chinese | 512 | 512 | 0.12GB |
| [GTE-large](https://huggingface.co/thenlper/gte-large) | English | 512 | 1024 | 1.25GB |
| [GTE-base](https://huggingface.co/thenlper/gte-base) | English | 512 | 512 | 0.21GB |
| [GTE-small](https://huggingface.co/thenlper/gte-small) | English | 512 | 384 | 0.10GB |
| [GTE-large-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) | English | 8192 | 1024 | 1.74GB |
| [GTE-base-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5) | English | 8192 | 768 | 0.51GB |
| [GTE-Qwen1.5-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen1.5-7B-instruct) | Multilingual | 32000 | 4096 | 26.45GB |
| [GTE-Qwen2-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) | Multilingual | 32000 | 3584 | 26.45GB |
| [GTE-Qwen2-1.5B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct) | Multilingual | 32000 | 1536 | 6.62GB |
## Citation
If you find our paper or models helpful, please consider citing:
```
@article{li2023towards,
title={Towards general text embeddings with multi-stage contrastive learning},
author={Li, Zehan and Zhang, Xin and Zhang, Yanzhao and Long, Dingkun and Xie, Pengjun and Zhang, Meishan},
journal={arXiv preprint arXiv:2308.03281},
year={2023}
}
``` |
MrezaPRZ/sql-encoder | MrezaPRZ | "2024-02-16T22:28:30Z" | 11,558 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-classification",
"generated_from_trainer",
"base_model:deepseek-ai/deepseek-coder-1.3b-instruct",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-classification | "2024-02-16T20:18:55Z" | ---
license: other
tags:
- generated_from_trainer
base_model: deepseek-ai/deepseek-coder-1.3b-instruct
model-index:
- name: encoder
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# encoder
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-1.3b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0670
- Mse: 1.0575
- Rmse: 1.0283
- Mae: 0.9076
## Model description
More information needed
## Intended uses & limitations
More information needed
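No usage snippet is provided. Given the `text-classification` pipeline tag and the regression-style metrics (MSE/RMSE/MAE) reported above, the model can presumably be loaded through the standard sequence-classification interface — the following is a sketch under those assumptions, not a documented API:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumption: a Llama-based model with a regression head, inferred from the
# MSE/RMSE/MAE metrics; label semantics are not documented.
tokenizer = AutoTokenizer.from_pretrained("MrezaPRZ/sql-encoder")
model = AutoModelForSequenceClassification.from_pretrained("MrezaPRZ/sql-encoder")

inputs = tokenizer("SELECT name FROM users WHERE id = 1;", return_tensors="pt")
score = model(**inputs).logits  # (1, num_labels); a single scalar for regression
print(score)
```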
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 2
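For reference, these settings map onto the standard `TrainingArguments` roughly as follows (a sketch — the actual training script is not published, and `output_dir` is illustrative):
```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters; dataset and Trainer wiring are unknown.
args = TrainingArguments(
    output_dir="encoder",  # illustrative
    learning_rate=5e-5,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.01,
    num_train_epochs=2,
)
```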
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Rmse | Mae |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|
| 0.0988 | 0.21 | 3000 | 0.0928 | 1.4795 | 1.2164 | 1.1099 |
| 0.0793 | 0.42 | 6000 | 0.0867 | 1.4370 | 1.1987 | 1.1076 |
| 0.0702 | 0.63 | 9000 | 0.0777 | 0.7554 | 0.8691 | 0.7701 |
| 0.0634 | 0.84 | 12000 | 0.0716 | 1.0950 | 1.0464 | 0.9449 |
| 0.0563 | 1.05 | 15000 | 0.0686 | 0.9966 | 0.9983 | 0.8899 |
| 0.0484 | 1.26 | 18000 | 0.0673 | 1.0653 | 1.0321 | 0.9161 |
| 0.0466 | 1.47 | 21000 | 0.0671 | 1.0877 | 1.0429 | 0.9219 |
| 0.0462 | 1.68 | 24000 | 0.0670 | 1.0613 | 1.0302 | 0.9090 |
| 0.046 | 1.89 | 27000 | 0.0670 | 1.0575 | 1.0283 | 0.9076 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0.dev20230605+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
mradermacher/Hansoldeco-Gemma-2-9b-it-v0.1-GGUF | mradermacher | "2024-07-02T03:37:17Z" | 11,554 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:sosoai/Hansoldeco-Gemma-2-9b-it-v0.1",
"endpoints_compatible",
"region:us"
] | null | "2024-07-02T02:18:04Z" | ---
base_model: sosoai/Hansoldeco-Gemma-2-9b-it-v0.1
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/sosoai/Hansoldeco-Gemma-2-9b-it-v0.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Hansoldeco-Gemma-2-9b-it-v0.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
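For a quick local test of a single-file quant, something like the following works (a sketch, assuming the `llama-cpp-python` bindings; the filename matches the Q4_K_M entry in the table below):
```python
from llama_cpp import Llama

# Assumption: the Q4_K_M file from the table below, downloaded locally.
llm = Llama(model_path="Hansoldeco-Gemma-2-9b-it-v0.1.Q4_K_M.gguf", n_ctx=4096)
out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```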
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Hansoldeco-Gemma-2-9b-it-v0.1-GGUF/resolve/main/Hansoldeco-Gemma-2-9b-it-v0.1.Q2_K.gguf) | Q2_K | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Hansoldeco-Gemma-2-9b-it-v0.1-GGUF/resolve/main/Hansoldeco-Gemma-2-9b-it-v0.1.IQ3_XS.gguf) | IQ3_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Hansoldeco-Gemma-2-9b-it-v0.1-GGUF/resolve/main/Hansoldeco-Gemma-2-9b-it-v0.1.IQ3_S.gguf) | IQ3_S | 4.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Hansoldeco-Gemma-2-9b-it-v0.1-GGUF/resolve/main/Hansoldeco-Gemma-2-9b-it-v0.1.Q3_K_S.gguf) | Q3_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Hansoldeco-Gemma-2-9b-it-v0.1-GGUF/resolve/main/Hansoldeco-Gemma-2-9b-it-v0.1.IQ3_M.gguf) | IQ3_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Hansoldeco-Gemma-2-9b-it-v0.1-GGUF/resolve/main/Hansoldeco-Gemma-2-9b-it-v0.1.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Hansoldeco-Gemma-2-9b-it-v0.1-GGUF/resolve/main/Hansoldeco-Gemma-2-9b-it-v0.1.Q3_K_L.gguf) | Q3_K_L | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Hansoldeco-Gemma-2-9b-it-v0.1-GGUF/resolve/main/Hansoldeco-Gemma-2-9b-it-v0.1.IQ4_XS.gguf) | IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Hansoldeco-Gemma-2-9b-it-v0.1-GGUF/resolve/main/Hansoldeco-Gemma-2-9b-it-v0.1.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hansoldeco-Gemma-2-9b-it-v0.1-GGUF/resolve/main/Hansoldeco-Gemma-2-9b-it-v0.1.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hansoldeco-Gemma-2-9b-it-v0.1-GGUF/resolve/main/Hansoldeco-Gemma-2-9b-it-v0.1.Q5_K_S.gguf) | Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Hansoldeco-Gemma-2-9b-it-v0.1-GGUF/resolve/main/Hansoldeco-Gemma-2-9b-it-v0.1.Q5_K_M.gguf) | Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Hansoldeco-Gemma-2-9b-it-v0.1-GGUF/resolve/main/Hansoldeco-Gemma-2-9b-it-v0.1.Q6_K.gguf) | Q6_K | 7.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Hansoldeco-Gemma-2-9b-it-v0.1-GGUF/resolve/main/Hansoldeco-Gemma-2-9b-it-v0.1.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Hansoldeco-Gemma-2-9b-it-v0.1-GGUF/resolve/main/Hansoldeco-Gemma-2-9b-it-v0.1.f16.gguf) | f16 | 18.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Aura_Revived_Base-i1-GGUF | mradermacher | "2024-06-24T16:43:56Z" | 11,548 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:jeiku/Aura_Revived_Base",
"endpoints_compatible",
"region:us"
] | null | "2024-06-24T15:03:33Z" | ---
base_model: jeiku/Aura_Revived_Base
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/jeiku/Aura_Revived_Base
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Aura_Revived_Base-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Aura_Revived_Base-i1-GGUF/resolve/main/Aura_Revived_Base.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Aura_Revived_Base-i1-GGUF/resolve/main/Aura_Revived_Base.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Aura_Revived_Base-i1-GGUF/resolve/main/Aura_Revived_Base.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Aura_Revived_Base-i1-GGUF/resolve/main/Aura_Revived_Base.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Aura_Revived_Base-i1-GGUF/resolve/main/Aura_Revived_Base.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Aura_Revived_Base-i1-GGUF/resolve/main/Aura_Revived_Base.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Aura_Revived_Base-i1-GGUF/resolve/main/Aura_Revived_Base.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Aura_Revived_Base-i1-GGUF/resolve/main/Aura_Revived_Base.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Aura_Revived_Base-i1-GGUF/resolve/main/Aura_Revived_Base.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Aura_Revived_Base-i1-GGUF/resolve/main/Aura_Revived_Base.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Aura_Revived_Base-i1-GGUF/resolve/main/Aura_Revived_Base.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Aura_Revived_Base-i1-GGUF/resolve/main/Aura_Revived_Base.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Aura_Revived_Base-i1-GGUF/resolve/main/Aura_Revived_Base.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Aura_Revived_Base-i1-GGUF/resolve/main/Aura_Revived_Base.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Aura_Revived_Base-i1-GGUF/resolve/main/Aura_Revived_Base.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Aura_Revived_Base-i1-GGUF/resolve/main/Aura_Revived_Base.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Aura_Revived_Base-i1-GGUF/resolve/main/Aura_Revived_Base.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Aura_Revived_Base-i1-GGUF/resolve/main/Aura_Revived_Base.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Aura_Revived_Base-i1-GGUF/resolve/main/Aura_Revived_Base.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Aura_Revived_Base-i1-GGUF/resolve/main/Aura_Revived_Base.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Aura_Revived_Base-i1-GGUF/resolve/main/Aura_Revived_Base.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
mradermacher/Llama-3-Stheno-Instruct-8B-GGUF | mradermacher | "2024-06-24T17:08:11Z" | 11,544 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:mpasila/Llama-3-Stheno-Instruct-8B",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | "2024-06-24T16:03:51Z" | ---
base_model: mpasila/Llama-3-Stheno-Instruct-8B
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mpasila/Llama-3-Stheno-Instruct-8B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3-Stheno-Instruct-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Stheno-Instruct-8B-GGUF/resolve/main/Llama-3-Stheno-Instruct-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Stheno-Instruct-8B-GGUF/resolve/main/Llama-3-Stheno-Instruct-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Stheno-Instruct-8B-GGUF/resolve/main/Llama-3-Stheno-Instruct-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Stheno-Instruct-8B-GGUF/resolve/main/Llama-3-Stheno-Instruct-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Stheno-Instruct-8B-GGUF/resolve/main/Llama-3-Stheno-Instruct-8B.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Stheno-Instruct-8B-GGUF/resolve/main/Llama-3-Stheno-Instruct-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Stheno-Instruct-8B-GGUF/resolve/main/Llama-3-Stheno-Instruct-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Stheno-Instruct-8B-GGUF/resolve/main/Llama-3-Stheno-Instruct-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Stheno-Instruct-8B-GGUF/resolve/main/Llama-3-Stheno-Instruct-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Stheno-Instruct-8B-GGUF/resolve/main/Llama-3-Stheno-Instruct-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Stheno-Instruct-8B-GGUF/resolve/main/Llama-3-Stheno-Instruct-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Stheno-Instruct-8B-GGUF/resolve/main/Llama-3-Stheno-Instruct-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Stheno-Instruct-8B-GGUF/resolve/main/Llama-3-Stheno-Instruct-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Stheno-Instruct-8B-GGUF/resolve/main/Llama-3-Stheno-Instruct-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Stheno-Instruct-8B-GGUF/resolve/main/Llama-3-Stheno-Instruct-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/SMaid-v0.3-i1-GGUF | mradermacher | "2024-06-22T19:17:00Z" | 11,542 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Alsebay/SMaid-v0.3",
"endpoints_compatible",
"region:us"
] | null | "2024-06-22T18:00:09Z" | ---
base_model: Alsebay/SMaid-v0.3
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Alsebay/SMaid-v0.3
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/SMaid-v0.3-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SMaid-v0.3-i1-GGUF/resolve/main/SMaid-v0.3.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/SMaid-v0.3-i1-GGUF/resolve/main/SMaid-v0.3.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/SMaid-v0.3-i1-GGUF/resolve/main/SMaid-v0.3.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/SMaid-v0.3-i1-GGUF/resolve/main/SMaid-v0.3.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/SMaid-v0.3-i1-GGUF/resolve/main/SMaid-v0.3.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/SMaid-v0.3-i1-GGUF/resolve/main/SMaid-v0.3.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/SMaid-v0.3-i1-GGUF/resolve/main/SMaid-v0.3.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/SMaid-v0.3-i1-GGUF/resolve/main/SMaid-v0.3.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SMaid-v0.3-i1-GGUF/resolve/main/SMaid-v0.3.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/SMaid-v0.3-i1-GGUF/resolve/main/SMaid-v0.3.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/SMaid-v0.3-i1-GGUF/resolve/main/SMaid-v0.3.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SMaid-v0.3-i1-GGUF/resolve/main/SMaid-v0.3.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/SMaid-v0.3-i1-GGUF/resolve/main/SMaid-v0.3.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/SMaid-v0.3-i1-GGUF/resolve/main/SMaid-v0.3.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/SMaid-v0.3-i1-GGUF/resolve/main/SMaid-v0.3.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/SMaid-v0.3-i1-GGUF/resolve/main/SMaid-v0.3.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/SMaid-v0.3-i1-GGUF/resolve/main/SMaid-v0.3.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/SMaid-v0.3-i1-GGUF/resolve/main/SMaid-v0.3.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SMaid-v0.3-i1-GGUF/resolve/main/SMaid-v0.3.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/SMaid-v0.3-i1-GGUF/resolve/main/SMaid-v0.3.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/SMaid-v0.3-i1-GGUF/resolve/main/SMaid-v0.3.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
mradermacher/llama-3-8b-gpt-4o-GGUF | mradermacher | "2024-06-29T17:35:42Z" | 11,511 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"dataset:ruslandev/tagengo-subset-gpt-4o",
"base_model:ruslandev/llama-3-8b-gpt-4o",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-29T16:36:15Z" | ---
base_model: ruslandev/llama-3-8b-gpt-4o
datasets:
- ruslandev/tagengo-subset-gpt-4o
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ruslandev/llama-3-8b-gpt-4o
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-gpt-4o-GGUF/resolve/main/llama-3-8b-gpt-4o.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-gpt-4o-GGUF/resolve/main/llama-3-8b-gpt-4o.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-gpt-4o-GGUF/resolve/main/llama-3-8b-gpt-4o.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-gpt-4o-GGUF/resolve/main/llama-3-8b-gpt-4o.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-gpt-4o-GGUF/resolve/main/llama-3-8b-gpt-4o.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-gpt-4o-GGUF/resolve/main/llama-3-8b-gpt-4o.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-gpt-4o-GGUF/resolve/main/llama-3-8b-gpt-4o.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-gpt-4o-GGUF/resolve/main/llama-3-8b-gpt-4o.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-gpt-4o-GGUF/resolve/main/llama-3-8b-gpt-4o.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-gpt-4o-GGUF/resolve/main/llama-3-8b-gpt-4o.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-gpt-4o-GGUF/resolve/main/llama-3-8b-gpt-4o.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-gpt-4o-GGUF/resolve/main/llama-3-8b-gpt-4o.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-gpt-4o-GGUF/resolve/main/llama-3-8b-gpt-4o.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-gpt-4o-GGUF/resolve/main/llama-3-8b-gpt-4o.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-gpt-4o-GGUF/resolve/main/llama-3-8b-gpt-4o.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
climatebert/distilroberta-base-climate-f | climatebert | "2023-05-04T13:05:20Z" | 11,496 | 32 | transformers | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"climate",
"en",
"arxiv:2110.12010",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:05Z" | ---
language: en
license: apache-2.0
tags:
- climate
---
# Model Card for distilroberta-base-climate-f
## Model Description
This is the ClimateBERT language model based on the FULL-SELECT sample selection strategy.
*Note: We generally recommend choosing this language model over those based on the other sample selection strategies (unless you have good reasons not to). This is also the only language model we will update from time to time.*
Using the [DistilRoBERTa](https://huggingface.co/distilroberta-base) model as starting point, the ClimateBERT Language Model is additionally pre-trained on a text corpus comprising climate-related research paper abstracts, corporate and general news and reports from companies. The underlying methodology can be found in our [language model research paper](https://arxiv.org/abs/2110.12010).
*Update September 2, 2022: Now additionally pre-trained on an even larger text corpus, comprising >2M paragraphs. If you are looking for the language model before the update (i.e. for reproducibility), just use an older commit like [6be4fbd](https://huggingface.co/climatebert/distilroberta-base-climate-f/tree/6be4fbd3fedfd78ccb3c730c1f166947fbc940ba).*
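As an illustration (not part of the original card), the checkpoint can be queried with the standard `transformers` fill-mask pipeline; the example sentence below is made up.
```python
# Minimal sketch: query the masked-language-model head directly.
from transformers import pipeline

# DistilRoBERTa-based checkpoint, so the mask token is <mask>.
fill = pipeline("fill-mask", model="climatebert/distilroberta-base-climate-f")
for pred in fill("Companies must disclose their climate-related <mask>."):
    print(f"{pred['token_str'].strip()!r}: {pred['score']:.3f}")
```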
## Climate performance model card
| distilroberta-base-climate-f | |
|--------------------------------------------------------------------------|----------------|
| 1. Is the resulting model publicly available? | Yes |
| 2. How much time does the training of the final model take? | 48 hours |
| 3. How much time did all experiments take (incl. hyperparameter search)? | 350 hours |
| 4. What was the power of GPU and CPU? | 0.7 kW |
| 5. At which geo location were the computations performed? | Germany |
| 6. What was the energy mix at the geo location? | 470 gCO2eq/kWh |
| 7. How much CO2eq was emitted to train the final model? | 15.79 kg |
| 8. How much CO2eq was emitted for all experiments? | 115.15 kg |
| 9. What is the average CO2eq emission for the inference of one sample? | 0.62 mg |
| 10. Which positive environmental impact can be expected from this work? | This work can be categorized as a building-block tool following Jin et al (2021). It supports the training of NLP models in the field of climate change and can thereby have a positive environmental impact in the future. |
| 11. Comments | Block pruning could decrease CO2eq emissions |
## Citation Information
```bibtex
@inproceedings{wkbl2022climatebert,
title={{ClimateBERT: A Pretrained Language Model for Climate-Related Text}},
author={Webersinke, Nicolas and Kraus, Mathias and Bingler, Julia and Leippold, Markus},
booktitle={Proceedings of AAAI 2022 Fall Symposium: The Role of AI in Responding to Climate Challenges},
year={2022},
doi={https://doi.org/10.48550/arXiv.2212.13631},
}
``` |
mradermacher/Qwen-IronMan-GGUF | mradermacher | "2024-07-02T16:30:20Z" | 11,492 | 0 | transformers | [
"transformers",
"gguf",
"rolp",
"ironman",
"ko",
"base_model:choah/Qwen-IronMan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-07-01T20:09:10Z" | ---
base_model: choah/Qwen-IronMan
language:
- ko
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- rolp
- ironman
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/choah/Qwen-IronMan
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen-IronMan-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen-IronMan-GGUF/resolve/main/Qwen-IronMan.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-IronMan-GGUF/resolve/main/Qwen-IronMan.IQ3_XS.gguf) | IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-IronMan-GGUF/resolve/main/Qwen-IronMan.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-IronMan-GGUF/resolve/main/Qwen-IronMan.IQ3_S.gguf) | IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen-IronMan-GGUF/resolve/main/Qwen-IronMan.IQ3_M.gguf) | IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-IronMan-GGUF/resolve/main/Qwen-IronMan.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-IronMan-GGUF/resolve/main/Qwen-IronMan.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-IronMan-GGUF/resolve/main/Qwen-IronMan.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-IronMan-GGUF/resolve/main/Qwen-IronMan.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen-IronMan-GGUF/resolve/main/Qwen-IronMan.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen-IronMan-GGUF/resolve/main/Qwen-IronMan.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-IronMan-GGUF/resolve/main/Qwen-IronMan.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-IronMan-GGUF/resolve/main/Qwen-IronMan.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-IronMan-GGUF/resolve/main/Qwen-IronMan.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/L3-Uncen-Merger-Omelette-RP-v0.1-8B-GGUF | mradermacher | "2024-06-24T20:36:26Z" | 11,490 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"not-for-all-audiences",
"nsfw",
"rp",
"roleplay",
"role-play",
"en",
"base_model:Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.1-8B",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | "2024-06-24T15:12:29Z" | ---
base_model: Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.1-8B
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- mergekit
- merge
- not-for-all-audiences
- nsfw
- rp
- roleplay
- role-play
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.1-8B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-v0.1-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-v0.1-8B-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-v0.1-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-v0.1-8B-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-v0.1-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-v0.1-8B-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-v0.1-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-v0.1-8B-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-v0.1-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-v0.1-8B-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-v0.1-8B.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-v0.1-8B-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-v0.1-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-v0.1-8B-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-v0.1-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-v0.1-8B-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-v0.1-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-v0.1-8B-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-v0.1-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-v0.1-8B-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-v0.1-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-v0.1-8B-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-v0.1-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-v0.1-8B-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-v0.1-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-v0.1-8B-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-v0.1-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-v0.1-8B-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-v0.1-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/L3-Uncen-Merger-Omelette-RP-v0.1-8B-GGUF/resolve/main/L3-Uncen-Merger-Omelette-RP-v0.1-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
gaianet/DeepSeek-Coder-V2-Lite-Instruct-GGUF | gaianet | "2024-06-25T06:35:21Z" | 11,490 | 0 | transformers | [
"transformers",
"gguf",
"deepseek_v2",
"text-generation",
"custom_code",
"base_model:deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct",
"license:other",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-06-25T06:21:15Z" | ---
base_model: deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct
inference: false
license: other
license_name: deepseek-license
license_link: LICENSE
model_creator: DeepSeek-ai
model_name: DeepSeek-Coder-V2-Lite-Instruct
model_type: deepseek
quantized_by: Second State Inc.
---

# DeepSeek-Coder-V2-Lite-Instruct-GGUF
## Original Model
[deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct)
## Run with Gaianet
**Prompt template**
prompt template: `deepseek-coder-2`
**Context size**
chat_ctx_size: `128000`
**Run with GaiaNet**
- Quick start: https://docs.gaianet.ai/node-guide/quick-start
- Customize your node: https://docs.gaianet.ai/node-guide/customize
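Once a node is up (see the links above), it exposes an OpenAI-compatible API, so the model can be queried roughly as below. This is a hedged sketch, not from the original card: the base URL and model name are placeholders that depend on your own node's configuration.
```python
import requests

BASE_URL = "http://localhost:8080/v1"  # placeholder; substitute your node's address

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    json={
        "model": "DeepSeek-Coder-V2-Lite-Instruct",  # placeholder model name
        "messages": [
            {"role": "user", "content": "Write a Python function that reverses a string."},
        ],
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```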
*Quantized with llama.cpp b3212*
|
legraphista/Qwen2-57B-A14B-Instruct-GGUF | legraphista | "2024-06-07T13:09:01Z" | 11,489 | 0 | gguf | [
"gguf",
"chat",
"quantized",
"GGUF",
"quantization",
"static",
"16bit",
"8bit",
"6bit",
"5bit",
"4bit",
"3bit",
"2bit",
"text-generation",
"en",
"base_model:Qwen/Qwen2-57B-A14B-Instruct",
"license:apache-2.0",
"region:us"
] | text-generation | "2024-06-07T09:42:49Z" | ---
base_model: Qwen/Qwen2-57B-A14B-Instruct
inference: false
language:
- en
library_name: gguf
license: apache-2.0
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- chat
- quantized
- GGUF
- quantization
- static
- 16bit
- 8bit
- 6bit
- 5bit
- 4bit
- 3bit
- 2bit
---
# Qwen2-57B-A14B-Instruct-GGUF
_Llama.cpp static quantization of Qwen/Qwen2-57B-A14B-Instruct_
Original Model: [Qwen/Qwen2-57B-A14B-Instruct](https://huggingface.co/Qwen/Qwen2-57B-A14B-Instruct)
Original dtype: `BF16` (`bfloat16`)
Quantized by: [https://github.com/ggerganov/llama.cpp/tree/master](https://github.com/ggerganov/llama.cpp/tree/master)
IMatrix dataset: [here](https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt)
- [Files](#files)
- [Common Quants](#common-quants)
- [All Quants](#all-quants)
- [Downloading using huggingface-cli](#downloading-using-huggingface-cli)
- [Inference](#inference)
- [Simple chat template](#simple-chat-template)
- [Chat template with system prompt](#chat-template-with-system-prompt)
- [Llama.cpp](#llama-cpp)
- [FAQ](#faq)
- [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere)
- [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf)
---
## Files
### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [Qwen2-57B-A14B-Instruct.Q8_0/*](https://huggingface.co/legraphista/Qwen2-57B-A14B-Instruct-GGUF/tree/main/Qwen2-57B-A14B-Instruct.Q8_0) | Q8_0 | 61.02GB | ✅ Available | ⚪ Static | ✂ Yes |
| [Qwen2-57B-A14B-Instruct.Q6_K/*](https://huggingface.co/legraphista/Qwen2-57B-A14B-Instruct-GGUF/tree/main/Qwen2-57B-A14B-Instruct.Q6_K) | Q6_K | 47.12GB | ✅ Available | ⚪ Static | ✂ Yes |
| [Qwen2-57B-A14B-Instruct.Q4_K.gguf](https://huggingface.co/legraphista/Qwen2-57B-A14B-Instruct-GGUF/blob/main/Qwen2-57B-A14B-Instruct.Q4_K.gguf) | Q4_K | 34.85GB | ✅ Available | ⚪ Static | 📦 No |
| [Qwen2-57B-A14B-Instruct.Q3_K.gguf](https://huggingface.co/legraphista/Qwen2-57B-A14B-Instruct-GGUF/blob/main/Qwen2-57B-A14B-Instruct.Q3_K.gguf) | Q3_K | 27.51GB | ✅ Available | ⚪ Static | 📦 No |
| [Qwen2-57B-A14B-Instruct.Q2_K.gguf](https://huggingface.co/legraphista/Qwen2-57B-A14B-Instruct-GGUF/blob/main/Qwen2-57B-A14B-Instruct.Q2_K.gguf) | Q2_K | 21.06GB | ✅ Available | ⚪ Static | 📦 No |
### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [Qwen2-57B-A14B-Instruct.BF16/*](https://huggingface.co/legraphista/Qwen2-57B-A14B-Instruct-GGUF/tree/main/Qwen2-57B-A14B-Instruct.BF16) | BF16 | 114.84GB | ✅ Available | ⚪ Static | ✂ Yes |
| [Qwen2-57B-A14B-Instruct.FP16/*](https://huggingface.co/legraphista/Qwen2-57B-A14B-Instruct-GGUF/tree/main/Qwen2-57B-A14B-Instruct.FP16) | F16 | 114.84GB | ✅ Available | ⚪ Static | ✂ Yes |
| [Qwen2-57B-A14B-Instruct.Q8_0/*](https://huggingface.co/legraphista/Qwen2-57B-A14B-Instruct-GGUF/tree/main/Qwen2-57B-A14B-Instruct.Q8_0) | Q8_0 | 61.02GB | ✅ Available | ⚪ Static | ✂ Yes |
| [Qwen2-57B-A14B-Instruct.Q6_K/*](https://huggingface.co/legraphista/Qwen2-57B-A14B-Instruct-GGUF/tree/main/Qwen2-57B-A14B-Instruct.Q6_K) | Q6_K | 47.12GB | ✅ Available | ⚪ Static | ✂ Yes |
| [Qwen2-57B-A14B-Instruct.Q5_K.gguf](https://huggingface.co/legraphista/Qwen2-57B-A14B-Instruct-GGUF/blob/main/Qwen2-57B-A14B-Instruct.Q5_K.gguf) | Q5_K | 40.80GB | ✅ Available | ⚪ Static | 📦 No |
| [Qwen2-57B-A14B-Instruct.Q5_K_S.gguf](https://huggingface.co/legraphista/Qwen2-57B-A14B-Instruct-GGUF/blob/main/Qwen2-57B-A14B-Instruct.Q5_K_S.gguf) | Q5_K_S | 39.57GB | ✅ Available | ⚪ Static | 📦 No |
| [Qwen2-57B-A14B-Instruct.Q4_K.gguf](https://huggingface.co/legraphista/Qwen2-57B-A14B-Instruct-GGUF/blob/main/Qwen2-57B-A14B-Instruct.Q4_K.gguf) | Q4_K | 34.85GB | ✅ Available | ⚪ Static | 📦 No |
| [Qwen2-57B-A14B-Instruct.Q4_K_S.gguf](https://huggingface.co/legraphista/Qwen2-57B-A14B-Instruct-GGUF/blob/main/Qwen2-57B-A14B-Instruct.Q4_K_S.gguf) | Q4_K_S | 32.71GB | ✅ Available | ⚪ Static | 📦 No |
| [Qwen2-57B-A14B-Instruct.IQ4_NL.gguf](https://huggingface.co/legraphista/Qwen2-57B-A14B-Instruct-GGUF/blob/main/Qwen2-57B-A14B-Instruct.IQ4_NL.gguf) | IQ4_NL | 32.72GB | ✅ Available | ⚪ Static | 📦 No |
| [Qwen2-57B-A14B-Instruct.IQ4_XS.gguf](https://huggingface.co/legraphista/Qwen2-57B-A14B-Instruct-GGUF/blob/main/Qwen2-57B-A14B-Instruct.IQ4_XS.gguf) | IQ4_XS | 31.00GB | ✅ Available | ⚪ Static | 📦 No |
| [Qwen2-57B-A14B-Instruct.Q3_K.gguf](https://huggingface.co/legraphista/Qwen2-57B-A14B-Instruct-GGUF/blob/main/Qwen2-57B-A14B-Instruct.Q3_K.gguf) | Q3_K | 27.51GB | ✅ Available | ⚪ Static | 📦 No |
| [Qwen2-57B-A14B-Instruct.Q3_K_L.gguf](https://huggingface.co/legraphista/Qwen2-57B-A14B-Instruct-GGUF/blob/main/Qwen2-57B-A14B-Instruct.Q3_K_L.gguf) | Q3_K_L | 29.79GB | ✅ Available | ⚪ Static | 📦 No |
| [Qwen2-57B-A14B-Instruct.Q3_K_S.gguf](https://huggingface.co/legraphista/Qwen2-57B-A14B-Instruct-GGUF/blob/main/Qwen2-57B-A14B-Instruct.Q3_K_S.gguf) | Q3_K_S | 24.91GB | ✅ Available | ⚪ Static | 📦 No |
| [Qwen2-57B-A14B-Instruct.IQ3_M.gguf](https://huggingface.co/legraphista/Qwen2-57B-A14B-Instruct-GGUF/blob/main/Qwen2-57B-A14B-Instruct.IQ3_M.gguf) | IQ3_M | 25.23GB | ✅ Available | ⚪ Static | 📦 No |
| [Qwen2-57B-A14B-Instruct.IQ3_S.gguf](https://huggingface.co/legraphista/Qwen2-57B-A14B-Instruct-GGUF/blob/main/Qwen2-57B-A14B-Instruct.IQ3_S.gguf) | IQ3_S | 24.92GB | ✅ Available | ⚪ Static | 📦 No |
| [Qwen2-57B-A14B-Instruct.IQ3_XS.gguf](https://huggingface.co/legraphista/Qwen2-57B-A14B-Instruct-GGUF/blob/main/Qwen2-57B-A14B-Instruct.IQ3_XS.gguf) | IQ3_XS | 23.60GB | ✅ Available | ⚪ Static | 📦 No |
| [Qwen2-57B-A14B-Instruct.Q2_K.gguf](https://huggingface.co/legraphista/Qwen2-57B-A14B-Instruct-GGUF/blob/main/Qwen2-57B-A14B-Instruct.Q2_K.gguf) | Q2_K | 21.06GB | ✅ Available | ⚪ Static | 📦 No |
## Downloading using huggingface-cli
If you do not have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:
```
huggingface-cli download legraphista/Qwen2-57B-A14B-Instruct-GGUF --include "Qwen2-57B-A14B-Instruct.Q8_0.gguf" --local-dir ./
```
If the model file is big, it has been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download legraphista/Qwen2-57B-A14B-Instruct-GGUF --include "Qwen2-57B-A14B-Instruct.Q8_0/*" --local-dir ./
# see FAQ for merging GGUF's
```
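Equivalently (an illustrative sketch, not from the original card), the same downloads can be scripted with the `huggingface_hub` Python API:
```python
from huggingface_hub import hf_hub_download, snapshot_download

# Single-file quant:
path = hf_hub_download(
    repo_id="legraphista/Qwen2-57B-A14B-Instruct-GGUF",
    filename="Qwen2-57B-A14B-Instruct.Q4_K.gguf",
)

# Split quant: fetch the whole chunk folder, then merge (see the FAQ below).
folder = snapshot_download(
    repo_id="legraphista/Qwen2-57B-A14B-Instruct-GGUF",
    allow_patterns=["Qwen2-57B-A14B-Instruct.Q8_0/*"],
)
print(path, folder)
```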
---
## Inference
### Simple chat template
```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
{user_prompt}<|im_end|>
<|im_start|>assistant
{assistant_response}<|im_end|>
<|im_start|>user
{next_user_prompt}<|im_end|>
```
### Chat template with system prompt
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{user_prompt}<|im_end|>
<|im_start|>assistant
{assistant_response}<|im_end|>
<|im_start|>user
{next_user_prompt}<|im_end|>
```
### Llama.cpp
```
llama.cpp/main -m Qwen2-57B-A14B-Instruct.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
```
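The templates above can also be rendered programmatically; here is a small illustrative helper (not part of the original card) that builds the prompt string passed to llama.cpp:
```python
# Illustrative helper: render the ChatML-style template shown above into a
# single prompt string for llama.cpp's -p flag.
def render_chatml(messages, system_prompt="You are a helpful assistant."):
    parts = [f"<|im_start|>system\n{system_prompt}<|im_end|>"]
    for role, text in messages:
        parts.append(f"<|im_start|>{role}\n{text}<|im_end|>")
    parts.append("<|im_start|>assistant\n")  # generation continues from here
    return "\n".join(parts)

prompt = render_chatml([("user", "Summarize what GGUF is in one sentence.")])
print(prompt)
```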
---
## FAQ
### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results).
### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
- To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
- Download the appropriate zip for your system from the latest release
- Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `Qwen2-57B-A14B-Instruct.Q8_0`)
3. Run `gguf-split --merge Qwen2-57B-A14B-Instruct.Q8_0/Qwen2-57B-A14B-Instruct.Q8_0-00001-of-XXXXX.gguf Qwen2-57B-A14B-Instruct.Q8_0.gguf`
- Make sure to point `gguf-split` to the first chunk of the split.
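If you prefer to drive the merge from Python, a thin wrapper over the same CLI could look like this (a sketch; the paths are placeholders for wherever you downloaded the chunks):
```python
import subprocess
from pathlib import Path

chunks = Path("Qwen2-57B-A14B-Instruct.Q8_0")  # placeholder chunk folder
first = sorted(chunks.glob("*-00001-of-*.gguf"))[0]  # gguf-split wants the first chunk
subprocess.run(
    ["gguf-split", "--merge", str(first), "Qwen2-57B-A14B-Instruct.Q8_0.gguf"],
    check=True,
)
```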
---
Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)! |
QuantFactory/Meta-Llama-3-8B-GGUF | QuantFactory | "2024-04-20T16:20:22Z" | 11,488 | 97 | null | [
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"text-generation",
"en",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:other",
"region:us"
] | text-generation | "2024-04-18T16:42:45Z" | ---
language:
- en
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
license: other
license_name: llama3
license_link: LICENSE
base_model: meta-llama/Meta-Llama-3-8B
---
# Meta-Llama-3-8B-GGUF
- This is a GGUF quantized version of [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
## Model Details
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8B and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
**Model developers** Meta
**Variations** Llama 3 comes in two sizes (8B and 70B parameters) in pre-trained and instruction tuned variants.
**Input** Models input text only.
**Output** Models generate text and code only.
**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
<table>
<tr>
<td>
</td>
<td><strong>Training Data</strong>
</td>
<td><strong>Params</strong>
</td>
<td><strong>Context length</strong>
</td>
<td><strong>GQA</strong>
</td>
<td><strong>Token count</strong>
</td>
<td><strong>Knowledge cutoff</strong>
</td>
</tr>
<tr>
<td rowspan="2" >Llama 3
</td>
<td rowspan="2" >A new mix of publicly available online data.
</td>
<td>8B
</td>
<td>8k
</td>
<td>Yes
</td>
<td rowspan="2" >15T+
</td>
<td>March, 2023
</td>
</tr>
<tr>
<td>70B
</td>
<td>8k
</td>
<td>Yes
</td>
<td>December, 2023
</td>
</tr>
</table>
**Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date** April 18, 2024.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
**Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
|
facebook/s2t-small-librispeech-asr | facebook | "2023-09-06T19:14:59Z" | 11,464 | 22 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"speech_to_text",
"automatic-speech-recognition",
"speech",
"audio",
"hf-asr-leaderboard",
"en",
"dataset:librispeech_asr",
"arxiv:2010.05171",
"arxiv:1904.08779",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-03-02T23:29:05Z" | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: mit
pipeline_tag: automatic-speech-recognition
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: s2t-small-librispeech-asr
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 4.3
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 9.0
---
# S2T-SMALL-LIBRISPEECH-ASR
`s2t-small-librispeech-asr` is a Speech to Text Transformer (S2T) model trained for automatic speech recognition (ASR).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text)
## Model description
S2T is an end-to-end sequence-to-sequence transformer model. It is trained with standard
autoregressive cross-entropy loss and generates the transcripts autoregressively.
## Intended uses & limitations
This model can be used for end-to-end speech recognition (ASR).
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
transcripts by passing the speech features to the model.
*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*
*Note: The feature extractor depends on [torchaudio](https://github.com/pytorch/audio) and the tokenizer depends on [sentencepiece](https://github.com/google/sentencepiece)
so be sure to install those packages before running the examples.*
You could either install those as extra speech dependencies with
`pip install "transformers[speech, sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")
ds = load_dataset(
"patrickvonplaten/librispeech_asr_dummy",
"clean",
split="validation"
)
input_features = processor(
ds[0]["audio"]["array"],
sampling_rate=16_000,
return_tensors="pt"
).input_features # Batch size 1
generated_ids = model.generate(input_features=input_features)
transcription = processor.batch_decode(generated_ids)
```
#### Evaluation on LibriSpeech Test
The following script shows how to evaluate this model on the [LibriSpeech](https://huggingface.co/datasets/librispeech_asr)
*"clean"* and *"other"* test dataset.
```python
from datasets import load_dataset
from evaluate import load
from transformers import Speech2TextForConditionalGeneration, Speech2TextProcessor
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") # change to "other" for other test dataset
wer = load("wer")
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr").to("cuda")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr", do_upper_case=True)
def map_to_pred(batch):
features = processor(batch["audio"]["array"], sampling_rate=16000, padding=True, return_tensors="pt")
input_features = features.input_features.to("cuda")
attention_mask = features.attention_mask.to("cuda")
gen_tokens = model.generate(input_features=input_features, attention_mask=attention_mask)
batch["transcription"] = processor.batch_decode(gen_tokens, skip_special_tokens=True)[0]
return batch
result = librispeech_eval.map(map_to_pred, remove_columns=["audio"])
print("WER:", wer.compute(predictions=result["transcription"], references=result["text"]))
```
*Result (WER)*:
| "clean" | "other" |
|:-------:|:-------:|
| 4.3 | 9.0 |
## Training data
The S2T-SMALL-LIBRISPEECH-ASR is trained on [LibriSpeech ASR Corpus](https://www.openslr.org/12), a dataset consisting of
approximately 1000 hours of 16kHz read English speech.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 10,000.
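A rough sketch of that feature pipeline with torchaudio (an illustration, not the exact fairseq preprocessing code):
```python
# 80-channel log mel filter bank features plus utterance-level CMVN.
import torchaudio

waveform, sample_rate = torchaudio.load("utterance.flac")  # placeholder file
fbank = torchaudio.compliance.kaldi.fbank(
    waveform, num_mel_bins=80, sample_frequency=sample_rate
)  # shape: (num_frames, 80), Kaldi-compatible log mel energies
cmvn = (fbank - fbank.mean(dim=0)) / (fbank.std(dim=0) + 1e-8)  # per-utterance CMVN
print(cmvn.shape)
```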
### Training
The model is trained with standard autoregressive cross-entropy loss and using [SpecAugment](https://arxiv.org/abs/1904.08779).
The encoder receives speech features, and the decoder generates the transcripts autoregressively.
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
year = {2020},
}
``` |
mradermacher/OpenCAI-7B-V2-i1-GGUF | mradermacher | "2024-06-23T00:01:10Z" | 11,464 | 0 | transformers | [
"transformers",
"gguf",
"art",
"not-for-all-audiences",
"en",
"dataset:Norquinal/OpenCAI",
"base_model:Norquinal/OpenCAI-7B-V2",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-21T04:22:27Z" | ---
base_model: Norquinal/OpenCAI-7B-V2
datasets: Norquinal/OpenCAI
language: en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- art
- not-for-all-audiences
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Norquinal/OpenCAI-7B-V2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/OpenCAI-7B-V2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OpenCAI-7B-V2-i1-GGUF/resolve/main/OpenCAI-7B-V2.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/OpenCAI-7B-V2-i1-GGUF/resolve/main/OpenCAI-7B-V2.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/OpenCAI-7B-V2-i1-GGUF/resolve/main/OpenCAI-7B-V2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCAI-7B-V2-i1-GGUF/resolve/main/OpenCAI-7B-V2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCAI-7B-V2-i1-GGUF/resolve/main/OpenCAI-7B-V2.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCAI-7B-V2-i1-GGUF/resolve/main/OpenCAI-7B-V2.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCAI-7B-V2-i1-GGUF/resolve/main/OpenCAI-7B-V2.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenCAI-7B-V2-i1-GGUF/resolve/main/OpenCAI-7B-V2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OpenCAI-7B-V2-i1-GGUF/resolve/main/OpenCAI-7B-V2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCAI-7B-V2-i1-GGUF/resolve/main/OpenCAI-7B-V2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenCAI-7B-V2-i1-GGUF/resolve/main/OpenCAI-7B-V2.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/OpenCAI-7B-V2-i1-GGUF/resolve/main/OpenCAI-7B-V2.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCAI-7B-V2-i1-GGUF/resolve/main/OpenCAI-7B-V2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenCAI-7B-V2-i1-GGUF/resolve/main/OpenCAI-7B-V2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenCAI-7B-V2-i1-GGUF/resolve/main/OpenCAI-7B-V2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCAI-7B-V2-i1-GGUF/resolve/main/OpenCAI-7B-V2.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/OpenCAI-7B-V2-i1-GGUF/resolve/main/OpenCAI-7B-V2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/OpenCAI-7B-V2-i1-GGUF/resolve/main/OpenCAI-7B-V2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenCAI-7B-V2-i1-GGUF/resolve/main/OpenCAI-7B-V2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCAI-7B-V2-i1-GGUF/resolve/main/OpenCAI-7B-V2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCAI-7B-V2-i1-GGUF/resolve/main/OpenCAI-7B-V2.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
shibing624/text2vec-base-chinese-paraphrase | shibing624 | "2024-02-19T08:39:45Z" | 11,461 | 67 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"ernie",
"feature-extraction",
"text2vec",
"sentence-similarity",
"transformers",
"zh",
"dataset:shibing624/nli-zh-all",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2023-06-19T12:48:16Z" | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- text2vec
- feature-extraction
- sentence-similarity
- transformers
datasets:
- shibing624/nli-zh-all
language:
- zh
metrics:
- spearmanr
library_name: sentence-transformers
---
# shibing624/text2vec-base-chinese-paraphrase
This is a CoSENT (Cosine Sentence) model: shibing624/text2vec-base-chinese-paraphrase.
It maps sentences to a 768 dimensional dense vector space and can be used for tasks
like sentence embeddings, text matching or semantic search.
- training dataset: https://huggingface.co/datasets/shibing624/nli-zh-all/tree/main/text2vec-base-chinese-paraphrase-dataset
- base model: nghuyong/ernie-3.0-base-zh
- max_seq_length: 256
- best epoch: 5
- sentence embedding dim: 768
## Evaluation
For an automated evaluation of this model, see the *Evaluation Benchmark*: [text2vec](https://github.com/shibing624/text2vec)
### Release Models
- Chinese matching evaluation results of the models released by this project:
| Arch | BaseModel | Model | ATEC | BQ | LCQMC | PAWSX | STS-B | SOHU-dd | SOHU-dc | Avg | QPS |
|:-----------|:----------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------|:-----:|:-----:|:-----:|:-----:|:-----:|:-------:|:-------:|:---------:|:-----:|
| Word2Vec | word2vec | [w2v-light-tencent-chinese](https://ai.tencent.com/ailab/nlp/en/download.html) | 20.00 | 31.49 | 59.46 | 2.57 | 55.78 | 55.04 | 20.70 | 35.03 | 23769 |
| SBERT | xlm-roberta-base | [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) | 18.42 | 38.52 | 63.96 | 10.14 | 78.90 | 63.01 | 52.28 | 46.46 | 3138 |
| Instructor | hfl/chinese-roberta-wwm-ext | [moka-ai/m3e-base](https://huggingface.co/moka-ai/m3e-base) | 41.27 | 63.81 | 74.87 | 12.20 | 76.96 | 75.83 | 60.55 | 57.93 | 2980 |
| CoSENT | hfl/chinese-macbert-base | [shibing624/text2vec-base-chinese](https://huggingface.co/shibing624/text2vec-base-chinese) | 31.93 | 42.67 | 70.16 | 17.21 | 79.30 | 70.27 | 50.42 | 51.61 | 3008 |
| CoSENT | hfl/chinese-lert-large | [GanymedeNil/text2vec-large-chinese](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 32.61 | 44.59 | 69.30 | 14.51 | 79.44 | 73.01 | 59.04 | 53.12 | 2092 |
| CoSENT | nghuyong/ernie-3.0-base-zh | [shibing624/text2vec-base-chinese-sentence](https://huggingface.co/shibing624/text2vec-base-chinese-sentence) | 43.37 | 61.43 | 73.48 | 38.90 | 78.25 | 70.60 | 53.08 | 59.87 | 3089 |
| CoSENT | nghuyong/ernie-3.0-base-zh | [shibing624/text2vec-base-chinese-paraphrase](https://huggingface.co/shibing624/text2vec-base-chinese-paraphrase) | 44.89 | 63.58 | 74.24 | 40.90 | 78.93 | 76.70 | 63.30 | **63.08** | 3066 |
| CoSENT | sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 | [shibing624/text2vec-base-multilingual](https://huggingface.co/shibing624/text2vec-base-multilingual) | 32.39 | 50.33 | 65.64 | 32.56 | 74.45 | 68.88 | 51.17 | 53.67 | 4004 |
Notes:
- Evaluation metric: Spearman correlation.
- The `shibing624/text2vec-base-chinese` model was trained with the CoSENT method on Chinese STS-B data, starting from `hfl/chinese-macbert-base`, and reaches good results on the Chinese STS-B test set. Run [examples/training_sup_text_matching_model.py](https://github.com/shibing624/text2vec/blob/master/examples/training_sup_text_matching_model.py) to train such a model; the model files have been uploaded to the HF model hub. Recommended for general-purpose Chinese semantic matching tasks.
- The `shibing624/text2vec-base-chinese-sentence` model was trained with the CoSENT method on the manually curated Chinese STS dataset [shibing624/nli-zh-all/text2vec-base-chinese-sentence-dataset](https://huggingface.co/datasets/shibing624/nli-zh-all/tree/main/text2vec-base-chinese-sentence-dataset), starting from `nghuyong/ernie-3.0-base-zh`, and reaches good results on the Chinese NLI test sets. Run [examples/training_sup_text_matching_model_jsonl_data.py](https://github.com/shibing624/text2vec/blob/master/examples/training_sup_text_matching_model_jsonl_data.py) to train such a model; the model files have been uploaded to the HF model hub. Recommended for Chinese s2s (sentence vs. sentence) semantic matching tasks.
- The `shibing624/text2vec-base-chinese-paraphrase` model was trained with the CoSENT method on the manually curated Chinese STS dataset [shibing624/nli-zh-all/text2vec-base-chinese-paraphrase-dataset](https://huggingface.co/datasets/shibing624/nli-zh-all/tree/main/text2vec-base-chinese-paraphrase-dataset), starting from `nghuyong/ernie-3.0-base-zh`. Relative to [shibing624/nli-zh-all/text2vec-base-chinese-sentence-dataset](https://huggingface.co/datasets/shibing624/nli-zh-all/tree/main/text2vec-base-chinese-sentence-dataset), the dataset adds s2p (sentence-to-paraphrase) data, which strengthens its long-text representation ability, and the model reaches SOTA on the Chinese NLI test sets. Run [examples/training_sup_text_matching_model_jsonl_data.py](https://github.com/shibing624/text2vec/blob/master/examples/training_sup_text_matching_model_jsonl_data.py) to train such a model; the model files have been uploaded to the HF model hub. Recommended for Chinese s2p (sentence vs. paragraph) semantic matching tasks.
- The `sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2` model was trained with SBERT; it is the multilingual version of the `paraphrase-MiniLM-L12-v2` model and supports Chinese, English, and other languages.
- `w2v-light-tencent-chinese` is a Word2Vec model built from Tencent word vectors; it loads on CPU and suits literal Chinese text matching and cold-start scenarios with little data.
## Usage (text2vec)
Using this model becomes easy when you have [text2vec](https://github.com/shibing624/text2vec) installed:
```
pip install -U text2vec
```
Then you can use the model like this:
```python
from text2vec import SentenceModel
sentences = ['如何更换花呗绑定银行卡', '花呗更改绑定银行卡']
model = SentenceModel('shibing624/text2vec-base-chinese-paraphrase')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [text2vec](https://github.com/shibing624/text2vec), you can use the model like this:
First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
Install transformers:
```
pip install transformers
```
Then load model and predict:
```python
from transformers import BertTokenizer, BertModel
import torch
# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] # First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Load model from HuggingFace Hub
tokenizer = BertTokenizer.from_pretrained('shibing624/text2vec-base-chinese-paraphrase')
model = BertModel.from_pretrained('shibing624/text2vec-base-chinese-paraphrase')
sentences = ['如何更换花呗绑定银行卡', '花呗更改绑定银行卡']
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
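As a follow-up usage note (not in the original card), sentence similarity is then just the cosine between the pooled vectors:
```python
# Cosine similarity between the two pooled sentence vectors computed above.
import torch.nn.functional as F

sim = F.cosine_similarity(sentence_embeddings[0:1], sentence_embeddings[1:2])
print(f"cosine similarity: {sim.item():.4f}")
```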
## Usage (sentence-transformers)
[sentence-transformers](https://github.com/UKPLab/sentence-transformers) is a popular library to compute dense vector representations for sentences.
Install sentence-transformers:
```
pip install -U sentence-transformers
```
Then load model and predict:
```python
from sentence_transformers import SentenceTransformer
m = SentenceTransformer("shibing624/text2vec-base-chinese-paraphrase")
sentences = ['如何更换花呗绑定银行卡', '花呗更改绑定银行卡']
sentence_embeddings = m.encode(sentences)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Full Model Architecture
```
CoSENT(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: ErnieModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_mean_tokens': True})
)
```
## Intended uses
Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
By default, input text longer than 256 word pieces is truncated.
## Training procedure
### Pre-training
We use the pretrained [`nghuyong/ernie-3.0-base-zh`](https://huggingface.co/nghuyong/ernie-3.0-base-zh) model.
Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity from each
possible sentence pairs from the batch.
We then apply the rank loss by comparing with true pairs and false pairs.
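A compact sketch of a CoSENT-style ranking loss (an illustration of the idea, not the project's exact implementation):
```python
import torch

def cosent_loss(scores: torch.Tensor, labels: torch.Tensor, scale: float = 20.0):
    """scores: cosine similarity per pair; labels: 1 = similar, 0 = dissimilar."""
    # s_neg - s_pos for every (positive, negative) combination in the batch.
    diff = scores[None, :] - scores[:, None]
    mask = labels[:, None] > labels[None, :]
    logits = scale * diff[mask]
    zero = torch.zeros(1, device=scores.device)
    # log(1 + sum(exp(...))): pushes positive-pair scores above negative-pair scores.
    return torch.logsumexp(torch.cat([zero, logits]), dim=0)
```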
## Citing & Authors
This model was trained by [text2vec](https://github.com/shibing624/text2vec).
If you find this model helpful, feel free to cite:
```bibtex
@software{text2vec,
author = {Ming Xu},
title = {text2vec: A Tool for Text to Vector},
year = {2023},
url = {https://github.com/shibing624/text2vec},
}
``` |
skratos115/qwen2-7b-OpenDevin-f16 | skratos115 | "2024-06-28T18:35:39Z" | 11,433 | 0 | null | [
"gguf",
"text-generation",
"qwen2",
"instruct",
"unsloth",
"OpenDevin",
"dataset:xingyaoww/opendevin-code-act",
"license:mit",
"region:us"
] | text-generation | "2024-06-27T21:16:33Z" | ---
license: mit
tags:
- text-generation
- qwen2
- instruct
- unsloth
- OpenDevin
datasets:
- xingyaoww/opendevin-code-act
---
## Qwen2.7b.OpenDevin
brought to you by skratos115 (HF) / Kingatlas115 (GH) in collaboration with the official OpenDevin Team ~xingyaoww
# Qwen2-7B-Instruct with OpenDevin Tool Calling
## Overview
This project involves the fine-tuning of the `Qwen2-7B-Instruct` model using the [opendevin-code-act dataset](https://huggingface.co/datasets/xingyaoww/opendevin-code-act) with the help of Unsloth. The primary goal is to develop a more powerful LLM capable of effectively using the CodeAct framework for tool calling. This is still in early development and should not be used in production. We are working on building a bigger dataset for tool paths/trajectories and could use all the help we can get: by using the feedback integration, you help us build better trajectories, which we will release to the public under the MIT license for OSS model training.
Read more here: https://x.com/gneubig/status/1802740786242420896 and http://www.linkedin.com/feed/update/urn:li:activity:7208507606728929280/
## Model Details
- **Model Name**: Qwen2-7B-Instruct
- **Dataset**: [opendevin-code-act](https://huggingface.co/datasets/xingyaoww/opendevin-code-act)
- **Training Platform**: Unsloth
Provided are the full merged files, as well as quantized f16, q4_k_m, q5_k_m, and q8_0 GGUF files.
I used the qwen2.7b.OD.q4_k_m.gguf file for my testing and got it to write me a simple script; more testing to come.
## Running the Model
You can run this model using `vLLM` or `ollama`. The following instructions are for using `ollama`.
### Prerequisites
- Docker
- Hugging Face `transformers` library (version >= 4.37.0 is recommended)
Run the q4_k_m quant: `ollama run skratos115/qwen2-7b-opendevin-q4_k_m`
or the f16 variant: `ollama run skratos115/qwen2-7b-opendevin-f16`
### Running with Ollama
1. **Install Docker**: Ensure you have Docker installed on your machine.
2. **Pull the Latest Hugging Face Transformers**:
pip install "transformers>=4.37.0"
3. **Set Up Your Workspace**:
WORKSPACE_BASE=$(pwd)/workspace
4. **Run the Docker Command**:
docker run -it \
--pull=always \
-e SANDBOX_USER_ID=$(id -u) \
-e PERSIST_SANDBOX="true" \
-e LLM_API_KEY="ollama" \
-e LLM_BASE_URL="http://[yourIPhere or 0.0.0.0]:11434" \
-e SSH_PASSWORD="make something up here" \
-e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
-v $WORKSPACE_BASE:/opt/workspace_base \
-v /var/run/docker.sock:/var/run/docker.sock \
-p 3000:3000 \
--add-host host.docker.internal:host-gateway \
--name opendevin-app-$(date +%Y%m%d%H%M%S) \
ghcr.io/opendevin/opendevin:main
Replace `[yourIPhere or 0.0.0.0]` with your actual IP address or use `0.0.0.0` for localhost.
## Early Development
This project is in its early stages, and we are continuously working to improve the model and its capabilities. Contributions and feedback are welcome.
## Support my work
Right now all of my work has been funded personally. If you like my work and can help support growth in the AI community, consider joining or donating to my Patreon.
[Patreon Link](https://www.patreon.com/atlasaisecurity)
## License
This project is licensed under the [MIT License](LICENSE).
|
arnavgrg/mistral-7b-nf4-fp16-upscaled | arnavgrg | "2024-02-05T01:13:34Z" | 11,428 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-04T21:57:36Z" | ---
license: apache-2.0
tags:
- text-generation-inference
---
This is an upscaled fp16 variant of the original Mistral-7B base model by Mistral AI after it has been loaded with nf4 4-bit quantization via bitsandbytes.
The main idea here is to upscale the Linear4bit layers to fp16 so that the quantization/dequantization cost doesn't have to be paid for each forward pass at inference time.
_Note: The quantization operation to nf4 is not lossless, so the model weights for the linear layers are lossy, which means that this model will not work as well as the official base model._
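For illustration, here is a minimal sketch of how such an upscaled checkpoint could be produced, assuming bitsandbytes' `Linear4bit` module and `dequantize_4bit` helper (exact signatures may vary across bitsandbytes versions):
```python
import torch
import torch.nn as nn
import bitsandbytes as bnb
from bitsandbytes.functional import dequantize_4bit
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Load the base model with nf4 4-bit quantization (the lossy step).
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)

# Replace every Linear4bit layer with a plain fp16 nn.Linear holding the
# dequantized weights, so no dequantization happens per forward pass.
for module in model.modules():
    for child_name, child in module.named_children():
        if isinstance(child, bnb.nn.Linear4bit):
            fp16_weight = dequantize_4bit(
                child.weight.data, child.weight.quant_state
            ).to(torch.float16)
            new_linear = nn.Linear(
                child.in_features, child.out_features, bias=child.bias is not None
            )
            new_linear.weight = nn.Parameter(fp16_weight, requires_grad=False)
            if child.bias is not None:
                new_linear.bias = nn.Parameter(child.bias.data.to(torch.float16))
            setattr(module, child_name, new_linear)
```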
To use this model, you can just load it via `transformers` in fp16:
```python
import torch
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(
"arnavgrg/mistral-7b-nf4-fp16-upscaled",
device_map="auto",
torch_dtype=torch.float16,
)
``` |
allenai/ivila-row-layoutlm-finetuned-s2vl-v2 | allenai | "2022-10-03T22:05:24Z" | 11,424 | 2 | transformers | [
"transformers",
"pytorch",
"layoutlm",
"token-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2022-07-05T23:54:29Z" | ---
language: en
---
|
mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF | mradermacher | "2024-06-26T13:42:59Z" | 11,422 | 0 | transformers | [
"transformers",
"gguf",
"axolotl",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"science",
"physics",
"chemistry",
"biology",
"math",
"qwen",
"qwen2",
"en",
"dataset:allenai/ai2_arc",
"dataset:camel-ai/physics",
"dataset:camel-ai/chemistry",
"dataset:camel-ai/biology",
"dataset:camel-ai/math",
"dataset:metaeval/reclor",
"dataset:openbookqa",
"dataset:mandyyyyii/scibench",
"dataset:derek-thomas/ScienceQA",
"dataset:TIGER-Lab/ScienceEval",
"dataset:jondurbin/airoboros-3.2",
"dataset:LDJnr/Capybara",
"dataset:Cot-Alpaca-GPT4-From-OpenHermes-2.5",
"dataset:STEM-AI-mtl/Electrical-engineering",
"dataset:knowrohit07/saraswati-stem",
"dataset:sablo/oasst2_curated",
"dataset:lmsys/lmsys-chat-1m",
"dataset:TIGER-Lab/MathInstruct",
"dataset:bigbio/med_qa",
"dataset:meta-math/MetaMathQA-40K",
"dataset:piqa",
"dataset:scibench",
"dataset:sciq",
"dataset:Open-Orca/SlimOrca",
"dataset:migtissera/Synthia-v1.3",
"dataset:allenai/WildChat",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:openchat/openchat_sharegpt4_dataset",
"dataset:teknium/GPTeacher-General-Instruct",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:totally-not-an-llm/EverythingLM-data-V3",
"dataset:HuggingFaceH4/no_robots",
"dataset:OpenAssistant/oasst_top1_2023-08-25",
"dataset:WizardLM/WizardLM_evol_instruct_70k",
"dataset:abacusai/SystemChat-1.1",
"dataset:H-D-T/Buzz-V1.2",
"base_model:Weyaxi/Einstein-v7-Qwen2-7B",
"license:other",
"endpoints_compatible",
"region:us"
] | null | "2024-06-26T09:00:15Z" | ---
base_model: Weyaxi/Einstein-v7-Qwen2-7B
datasets:
- allenai/ai2_arc
- camel-ai/physics
- camel-ai/chemistry
- camel-ai/biology
- camel-ai/math
- metaeval/reclor
- openbookqa
- mandyyyyii/scibench
- derek-thomas/ScienceQA
- TIGER-Lab/ScienceEval
- jondurbin/airoboros-3.2
- LDJnr/Capybara
- Cot-Alpaca-GPT4-From-OpenHermes-2.5
- STEM-AI-mtl/Electrical-engineering
- knowrohit07/saraswati-stem
- sablo/oasst2_curated
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- bigbio/med_qa
- meta-math/MetaMathQA-40K
- openbookqa
- piqa
- metaeval/reclor
- derek-thomas/ScienceQA
- scibench
- sciq
- Open-Orca/SlimOrca
- migtissera/Synthia-v1.3
- TIGER-Lab/ScienceEval
- allenai/WildChat
- microsoft/orca-math-word-problems-200k
- openchat/openchat_sharegpt4_dataset
- teknium/GPTeacher-General-Instruct
- m-a-p/CodeFeedback-Filtered-Instruction
- totally-not-an-llm/EverythingLM-data-V3
- HuggingFaceH4/no_robots
- OpenAssistant/oasst_top1_2023-08-25
- WizardLM/WizardLM_evol_instruct_70k
- abacusai/SystemChat-1.1
- H-D-T/Buzz-V1.2
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- axolotl
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- science
- physics
- chemistry
- biology
- math
- qwen
- qwen2
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Weyaxi/Einstein-v7-Qwen2-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
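For the multi-part case, joining the parts is a plain byte-wise concatenation. A quick sketch with illustrative file names (the quants in this repo are single files, so this only matters for split downloads):
```shell
# Join split GGUF parts back into a single file (names are illustrative).
cat model.i1-Q6_K.gguf.part1of2 model.i1-Q6_K.gguf.part2of2 > model.i1-Q6_K.gguf
```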
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF/resolve/main/Einstein-v7-Qwen2-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
RichardErkhov/instructlab_-_granite-7b-lab-gguf | RichardErkhov | "2024-06-22T23:21:14Z" | 11,418 | 0 | null | [
"gguf",
"arxiv:2403.01081",
"region:us"
] | null | "2024-06-22T13:18:23Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
granite-7b-lab - GGUF
- Model creator: https://huggingface.co/instructlab/
- Original model: https://huggingface.co/instructlab/granite-7b-lab/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [granite-7b-lab.Q2_K.gguf](https://huggingface.co/RichardErkhov/instructlab_-_granite-7b-lab-gguf/blob/main/granite-7b-lab.Q2_K.gguf) | Q2_K | 2.36GB |
| [granite-7b-lab.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/instructlab_-_granite-7b-lab-gguf/blob/main/granite-7b-lab.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [granite-7b-lab.IQ3_S.gguf](https://huggingface.co/RichardErkhov/instructlab_-_granite-7b-lab-gguf/blob/main/granite-7b-lab.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [granite-7b-lab.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/instructlab_-_granite-7b-lab-gguf/blob/main/granite-7b-lab.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [granite-7b-lab.IQ3_M.gguf](https://huggingface.co/RichardErkhov/instructlab_-_granite-7b-lab-gguf/blob/main/granite-7b-lab.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [granite-7b-lab.Q3_K.gguf](https://huggingface.co/RichardErkhov/instructlab_-_granite-7b-lab-gguf/blob/main/granite-7b-lab.Q3_K.gguf) | Q3_K | 3.07GB |
| [granite-7b-lab.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/instructlab_-_granite-7b-lab-gguf/blob/main/granite-7b-lab.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [granite-7b-lab.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/instructlab_-_granite-7b-lab-gguf/blob/main/granite-7b-lab.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [granite-7b-lab.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/instructlab_-_granite-7b-lab-gguf/blob/main/granite-7b-lab.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [granite-7b-lab.Q4_0.gguf](https://huggingface.co/RichardErkhov/instructlab_-_granite-7b-lab-gguf/blob/main/granite-7b-lab.Q4_0.gguf) | Q4_0 | 3.56GB |
| [granite-7b-lab.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/instructlab_-_granite-7b-lab-gguf/blob/main/granite-7b-lab.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [granite-7b-lab.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/instructlab_-_granite-7b-lab-gguf/blob/main/granite-7b-lab.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [granite-7b-lab.Q4_K.gguf](https://huggingface.co/RichardErkhov/instructlab_-_granite-7b-lab-gguf/blob/main/granite-7b-lab.Q4_K.gguf) | Q4_K | 3.8GB |
| [granite-7b-lab.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/instructlab_-_granite-7b-lab-gguf/blob/main/granite-7b-lab.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [granite-7b-lab.Q4_1.gguf](https://huggingface.co/RichardErkhov/instructlab_-_granite-7b-lab-gguf/blob/main/granite-7b-lab.Q4_1.gguf) | Q4_1 | 3.95GB |
| [granite-7b-lab.Q5_0.gguf](https://huggingface.co/RichardErkhov/instructlab_-_granite-7b-lab-gguf/blob/main/granite-7b-lab.Q5_0.gguf) | Q5_0 | 4.33GB |
| [granite-7b-lab.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/instructlab_-_granite-7b-lab-gguf/blob/main/granite-7b-lab.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [granite-7b-lab.Q5_K.gguf](https://huggingface.co/RichardErkhov/instructlab_-_granite-7b-lab-gguf/blob/main/granite-7b-lab.Q5_K.gguf) | Q5_K | 4.45GB |
| [granite-7b-lab.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/instructlab_-_granite-7b-lab-gguf/blob/main/granite-7b-lab.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [granite-7b-lab.Q5_1.gguf](https://huggingface.co/RichardErkhov/instructlab_-_granite-7b-lab-gguf/blob/main/granite-7b-lab.Q5_1.gguf) | Q5_1 | 4.72GB |
| [granite-7b-lab.Q6_K.gguf](https://huggingface.co/RichardErkhov/instructlab_-_granite-7b-lab-gguf/blob/main/granite-7b-lab.Q6_K.gguf) | Q6_K | 5.15GB |
| [granite-7b-lab.Q8_0.gguf](https://huggingface.co/RichardErkhov/instructlab_-_granite-7b-lab-gguf/blob/main/granite-7b-lab.Q8_0.gguf) | Q8_0 | 6.67GB |
Original model description:
---
pipeline_tag: text-generation
tags:
- granite
- ibm
- lab
- labrador
- labradorite
license: apache-2.0
language:
- en
base_model: ibm/granite-7b-base
---
# Model Card for Granite-7b-lab [Paper](https://arxiv.org/abs/2403.01081)
### Overview

### Performance
| Model | Alignment | Base | Teacher | MTBench (Avg) * | MMLU(5-shot) |
| --- | --- | --- | --- | --- | --- |
| [Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) | RLHF | Llama-2-13b | Human Annotators | 6.65 |54.58 |
| [Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b) | Progressive Training | Llama-2-13b | GPT-4 | 6.15 | 60.37 * |
| [WizardLM-13B-V1.2](https://huggingface.co/WizardLM/WizardLM-13B-V1.2) | Evol-Instruct | Llama-2-13b | GPT-4 | 7.20 | 54.83 |
| [Labradorite-13b](https://huggingface.co/ibm/labradorite-13b) | Large-scale Alignment for chatBots (LAB) | Llama-2-13b | Mixtral-8x7B-Instruct | 7.23 | 58.89 |
| [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | SFT | Mistral-7B-v0.1 | - | 6.84 | 60.37 |
| [zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) | SFT/DPO | Mistral-7B-v0.1 | GPT-4 | 7.34 | 61.07 |
| [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | SFT | Mistral-7B-v0.1 | - | 7.6** | 60.78 |
| [Merlinite-7b-lab](https://huggingface.co/instructlab/merlinite-7b-lab) | Large-scale Alignment for chatBots (LAB) | Mistral-7B-v0.1 | Mixtral-8x7B-Instruct | 7.66 |64.88 |
| Granite-7b-lab | Large-scale Alignment for chatBots (LAB) | Granite-7b-base| Mixtral-8x7B-Instruct | 6.69 | 51.91 |
[*] Numbers for models other than Merlinite-7b-lab, Granite-7b-lab and [Labradorite-13b](https://huggingface.co/ibm/labradorite-13b) are taken from [lmsys/chatbot-arena-leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard)
[**] Numbers taken from [MistralAI Release Blog](https://mistral.ai/news/la-plateforme/)
### Method
LAB: **L**arge-scale **A**lignment for chat**B**ots is a novel synthetic data-based alignment tuning method for LLMs from IBM Research. Granite-7b-lab is a Granite-7b-base derivative model trained with the LAB methodology, using Mixtral-8x7b-Instruct as a teacher model.
LAB consists of three key components:
1. Taxonomy-driven data curation process
2. Large-scale synthetic data generator
3. Two-phased-training with replay buffers

LAB approach allows for adding new knowledge and skills, in an incremental fashion, to an already pre-trained model without suffering from catastrophic forgetting.
Taxonomy is a tree of seed examples that are used to prompt a teacher model to generate synthetic data. Taxonomy allows the data curator or the model designer to easily specify a diverse set of the knowledge-domains and skills that they would like to include in their LLM. At a high level, these can be categorized into three high-level bins - knowledge, foundational skills, and compositional skills. The leaf nodes of the taxonomy are tasks associated with one or more seed examples.

During the synthetic data generation, **unlike previous approaches where seed examples are uniformly drawn from the entire pool (i.e. self-instruct), we use the taxonomy to drive the sampling process**: For each knowledge/skill, we only use the local examples within the leaf node as seeds to prompt the teacher model.
This makes the teacher model better exploit the task distributions defined by the local examples of each node, while the diversity in the taxonomy itself ensures the entire generation covers a wide range of tasks, as illustrated below. In turn, this allows for using Mixtral 8x7B as the teacher model for generation while performing very competitively with models such as ORCA-2, WizardLM, and Zephyr Beta that rely on synthetic data generated by much larger and more capable models like GPT-4.

For adding new domain-specific knowledge, we provide an external knowledge source (document) and prompt the model to generate questions and answers based on the document.
Foundational skills such as reasoning and compositional skills such as creative writing are generated through in-context learning using the seed examples from the taxonomy.
Additionally, to ensure the data is high-quality and safe, we employ steps to check the questions and answers to ensure that they are grounded and safe. This is done using the same teacher model that generated the data.
Our training consists of two major phases: knowledge tuning and skills tuning.
There are two steps in knowledge tuning: the first step learns simple knowledge (short samples) and the second step learns complicated knowledge (longer samples).
The second step uses a replay buffer with data from the first step.
Both foundational skills and compositional skills are learned during the skills tuning phase, where a replay buffer of data from the knowledge phase is used.
Importantly, we use a set of hyper-parameters for training that are very different from standard small-scale supervised fine-tuning: larger batch size and carefully optimized learning rate and scheduler.

## Model description
- **Model Name**: Granite-7b-lab
- **Language(s):** Primarily English
- **License:** Apache 2.0
- **Base model:** [ibm/granite-7b-base](https://huggingface.co/ibm/granite-7b-base)
- **Teacher Model:** [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
## Prompt Template
```python
sys_prompt = "You are an AI language model developed by IBM Research. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."
prompt = f'<|system|>\n{sys_prompt}\n<|user|>\n{inputs}\n<|assistant|>\n'
stop_token = '<|endoftext|>'
```
We advise utilizing the system prompt employed during the model's training for optimal inference performance, as there could be performance variations based on the provided instructions.
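As an illustration, here is a minimal `transformers` sketch that applies the template above (the question and generation settings are illustrative, not from the card):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("instructlab/granite-7b-lab")
model = AutoModelForCausalLM.from_pretrained(
    "instructlab/granite-7b-lab", torch_dtype=torch.bfloat16, device_map="auto"
)

sys_prompt = ("You are an AI language model developed by IBM Research. "
              "You are a cautious assistant. You carefully follow instructions. "
              "You are helpful and harmless and you follow ethical guidelines "
              "and promote positive behavior.")
prompt = f"<|system|>\n{sys_prompt}\n<|user|>\nWhat is the LAB method?\n<|assistant|>\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
# Strip the prompt tokens and decode only the newly generated answer.
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```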
**Bias, Risks, and Limitations**
Granite-7b-lab is a base model and has not undergone any safety alignment; therefore it may produce problematic outputs. In the absence of adequate safeguards and RLHF, there exists a risk of malicious utilization of these models for generating disinformation or harmful content. Caution is urged against complete reliance on a specific language model for crucial decisions or impactful information, as preventing these models from fabricating content is not straightforward. Additionally, it remains uncertain whether smaller models might exhibit increased susceptibility to hallucination in ungrounded generation scenarios due to their reduced sizes and memorization capacities. This aspect is currently an active area of research, and we anticipate more rigorous exploration, comprehension, and mitigations in this domain.
|
mradermacher/Logical-elyza-llama2-7b-fast-instruct-GGUF | mradermacher | "2024-06-29T17:12:44Z" | 11,415 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:Manual-Dataset-Creation-Project/Malum-130",
"base_model:Manual-Dataset-Creation-Project/Logical-elyza-llama2-7b-fast-instruct",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | "2024-06-29T16:47:44Z" | ---
base_model: Manual-Dataset-Creation-Project/Logical-elyza-llama2-7b-fast-instruct
datasets:
- Manual-Dataset-Creation-Project/Malum-130
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Manual-Dataset-Creation-Project/Logical-elyza-llama2-7b-fast-instruct
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
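As one way to use them, a quant from the table below can be run directly with llama.cpp's CLI. A sketch (binary name and flags depend on your llama.cpp build and are an assumption here):
```shell
# Run a quant with llama.cpp (file name and flags are illustrative).
./llama-cli -m Logical-elyza-llama2-7b-fast-instruct.Q4_K_M.gguf \
    -p "Hello, how are you?" -n 128
```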
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Logical-elyza-llama2-7b-fast-instruct-GGUF/resolve/main/Logical-elyza-llama2-7b-fast-instruct.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Logical-elyza-llama2-7b-fast-instruct-GGUF/resolve/main/Logical-elyza-llama2-7b-fast-instruct.IQ3_XS.gguf) | IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Logical-elyza-llama2-7b-fast-instruct-GGUF/resolve/main/Logical-elyza-llama2-7b-fast-instruct.IQ3_S.gguf) | IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Logical-elyza-llama2-7b-fast-instruct-GGUF/resolve/main/Logical-elyza-llama2-7b-fast-instruct.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Logical-elyza-llama2-7b-fast-instruct-GGUF/resolve/main/Logical-elyza-llama2-7b-fast-instruct.IQ3_M.gguf) | IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Logical-elyza-llama2-7b-fast-instruct-GGUF/resolve/main/Logical-elyza-llama2-7b-fast-instruct.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Logical-elyza-llama2-7b-fast-instruct-GGUF/resolve/main/Logical-elyza-llama2-7b-fast-instruct.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Logical-elyza-llama2-7b-fast-instruct-GGUF/resolve/main/Logical-elyza-llama2-7b-fast-instruct.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Logical-elyza-llama2-7b-fast-instruct-GGUF/resolve/main/Logical-elyza-llama2-7b-fast-instruct.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Logical-elyza-llama2-7b-fast-instruct-GGUF/resolve/main/Logical-elyza-llama2-7b-fast-instruct.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Logical-elyza-llama2-7b-fast-instruct-GGUF/resolve/main/Logical-elyza-llama2-7b-fast-instruct.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Logical-elyza-llama2-7b-fast-instruct-GGUF/resolve/main/Logical-elyza-llama2-7b-fast-instruct.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Logical-elyza-llama2-7b-fast-instruct-GGUF/resolve/main/Logical-elyza-llama2-7b-fast-instruct.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Logical-elyza-llama2-7b-fast-instruct-GGUF/resolve/main/Logical-elyza-llama2-7b-fast-instruct.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Logical-elyza-llama2-7b-fast-instruct-GGUF/resolve/main/Logical-elyza-llama2-7b-fast-instruct.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Snowflake/snowflake-arctic-instruct | Snowflake | "2024-05-21T01:08:33Z" | 11,413 | 340 | transformers | [
"transformers",
"safetensors",
"arctic",
"text-generation",
"snowflake",
"moe",
"conversational",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-04-21T05:26:08Z" | ---
license: apache-2.0
tags:
- snowflake
- arctic
- moe
---
## Model Details
Arctic is a dense-MoE Hybrid transformer architecture pre-trained from scratch by the Snowflake AI
Research Team. We are releasing model checkpoints for both the base and instruct-tuned versions of
Arctic under an Apache-2.0 license. This means you can use them freely in your own research,
prototypes, and products. Please see our blog
[Snowflake Arctic: The Best LLM for Enterprise AI โ Efficiently Intelligent, Truly Open](https://www.snowflake.com/blog/arctic-open-efficient-foundation-language-models-snowflake/)
for more information on Arctic and links to other relevant resources such as our series of cookbooks
covering topics around training your own custom MoE models, how to produce high-quality training data,
and much more.
* [Arctic-Base](https://huggingface.co/Snowflake/snowflake-arctic-base/)
* [Arctic-Instruct](https://huggingface.co/Snowflake/snowflake-arctic-instruct/)
For the latest details about Snowflake Arctic including tutorials, etc., please refer to our GitHub repo:
* https://github.com/Snowflake-Labs/snowflake-arctic
Try a live demo with our [Streamlit app](https://huggingface.co/spaces/Snowflake/snowflake-arctic-st-demo).
**Model developers** Snowflake AI Research Team
**License** Apache-2.0
**Input** Models input text only.
**Output** Models generate text and code only.
**Model Release Date** April, 24th 2024.
## Model Architecture
Arctic combines a 10B dense transformer model with a residual 128x3.66B MoE MLP resulting in 480B
total and 17B active parameters chosen using a top-2 gating. For more details about Arctic's model
Architecture, training process, data, etc. [see our series of cookbooks](https://www.snowflake.com/en/data-cloud/arctic/cookbook/).
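As a quick back-of-the-envelope check of those numbers (a sketch; all figures are taken from the sentence above):
```python
# Approximate parameter accounting for Arctic's dense-MoE hybrid design.
dense = 10e9          # dense transformer parameters
experts = 128         # number of MoE experts
expert_size = 3.66e9  # parameters per expert MLP
top_k = 2             # experts activated per token (top-2 gating)

total = dense + experts * expert_size  # ~478B, i.e. the quoted 480B total
active = dense + top_k * expert_size   # ~17.3B, i.e. the quoted 17B active
print(f"total = {total / 1e9:.0f}B, active = {active / 1e9:.1f}B")
```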
## Usage
Arctic is currently supported with `transformers` by leveraging the
[custom code feature](https://huggingface.co/docs/transformers/en/custom_models#using-a-model-with-custom-code),
to use this you simply need to add `trust_remote_code=True` to your AutoTokenizer and AutoModelForCausalLM calls.
However, we recommend that you use a `transformers` version at or above 4.39:
```shell
pip install "transformers>=4.39.0"
```
Arctic leverages several features from [DeepSpeed](https://github.com/microsoft/DeepSpeed), you will need to
install the DeepSpeed 0.14.2 or higher to get all of these required features:
```shell
pip install "deepspeed>=0.14.2"
```
### Inference examples
Due to the model size we recommend using a single 8xH100 instance from your
favorite cloud provider such as: AWS [p5.48xlarge](https://aws.amazon.com/ec2/instance-types/p5/),
Azure [ND96isr_H100_v5](https://learn.microsoft.com/en-us/azure/virtual-machines/nd-h100-v5-series), etc.
In this example we are using FP8 quantization provided by DeepSpeed in the backend, we can also use FP6
quantization by specifying `q_bits=6` in the `QuantizationConfig` config. The `"150GiB"` setting
for max_memory is required until we can get DeepSpeed's FP quantization supported natively as a
[HFQuantizer](https://huggingface.co/docs/transformers/main/en/hf_quantizer#build-a-new-hfquantizer-class) which we
are actively working on.
```python
import os
# enable hf_transfer for faster ckpt download
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from deepspeed.linear.config import QuantizationConfig
tokenizer = AutoTokenizer.from_pretrained(
"Snowflake/snowflake-arctic-instruct",
trust_remote_code=True
)
quant_config = QuantizationConfig(q_bits=8)
model = AutoModelForCausalLM.from_pretrained(
"Snowflake/snowflake-arctic-instruct",
trust_remote_code=True,
low_cpu_mem_usage=True,
device_map="auto",
ds_quantization_config=quant_config,
max_memory={i: "150GiB" for i in range(8)},
torch_dtype=torch.bfloat16)
content = "5x + 35 = 7x - 60 + 10. Solve for x"
messages = [{"role": "user", "content": content}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to("cuda")
outputs = model.generate(input_ids=input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
```
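To try the FP6 path mentioned above instead, only the quantization config needs to change:
```python
quant_config = QuantizationConfig(q_bits=6)  # FP6 instead of the FP8 used above
```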
The Arctic GitHub page has additional code snippets and examples around running inference:
* Example with pure-HF: https://github.com/Snowflake-Labs/snowflake-arctic/blob/main/inference
* Tutorial using vLLM: https://github.com/Snowflake-Labs/snowflake-arctic/tree/main/inference/vllm |
RichardErkhov/jsfs11_-_MixtureofMerges-MoE-4x7b-v3-gguf | RichardErkhov | "2024-06-21T08:38:27Z" | 11,408 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-21T06:58:06Z" | Entry not found |
TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ | TheBloke | "2024-01-16T11:05:43Z" | 11,407 | 25 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"synthetic data",
"distillation",
"conversational",
"en",
"base_model:NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2024-01-16T08:42:54Z" | ---
base_model: NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO
inference: false
language:
- en
license: apache-2.0
model-index:
- name: Nous-Hermes-2-Mixtral-8x7B-DPO
results: []
model_creator: NousResearch
model_name: Nous Hermes 2 Mixtral 8X7B DPO
model_type: mixtral
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
tags:
- Mixtral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Nous Hermes 2 Mixtral 8X7B DPO - GPTQ
- Model creator: [NousResearch](https://huggingface.co/NousResearch)
- Original model: [Nous Hermes 2 Mixtral 8X7B DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO)
<!-- description start -->
# Description
This repo contains GPTQ model files for [NousResearch's Nous Hermes 2 Mixtral 8X7B DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF)
* [NousResearch's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 23.81 GB | No | 4-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 24.70 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 27.42 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-3bit--1g-actorder_True](https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ/tree/gptq-3bit--1g-actorder_True) | 3 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 18.01 GB | No | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
| [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 18.85 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 47.04 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 48.10 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ:gptq-4bit-128g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ`:
```shell
mkdir Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ
huggingface-cli download TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ --local-dir Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ
huggingface-cli download TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ --local-dir Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ:gptq-4bit-128g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(
prompt_template,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-128g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, ้ฟๆ, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjรคreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: NousResearch's Nous Hermes 2 Mixtral 8X7B DPO
# Nous Hermes 2 - Mixtral 8x7B - DPO

## Model description
Nous Hermes 2 Mixtral 8x7B DPO is the new flagship Nous Research model trained over the [Mixtral 8x7B MoE LLM](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1).
The model was trained on over 1,000,000 entries of primarily GPT-4 generated data, as well as other high quality data from open datasets across the AI landscape, achieving state of the art performance on a variety of tasks.
This is the SFT + DPO version of Mixtral Hermes 2, we have also released an SFT only version, for people to find which works best for them, which can be found here: https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT
## We are grateful to Together.ai for sponsoring our compute during the many experiments both training Mixtral and working on DPO!
# Table of Contents
1. [Example Outputs](#example-outputs)
2. [Benchmark Results](#benchmark-results)
- GPT4All
- AGIEval
- BigBench
- Comparison to Mixtral-Instruct
3. [Prompt Format](#prompt-format)
4. [Inference Example Code](#inference-code)
5. [Quantized Models](#quantized-models)
## Example Outputs
### Writing Code for Data Visualization

### Writing Cyberpunk Psychedelic Poems

### Performing Backtranslation to Create Prompts from Input Text

## Benchmark Results
Nous-Hermes 2 on Mixtral 8x7B is a major improvement across the board on the benchmarks below compared to the base Mixtral model, and is the first model to beat the flagship Mixtral Finetune by MistralAI.
## GPT4All:
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.5990|ยฑ |0.0143|
| | |acc_norm|0.6425|ยฑ |0.0140|
|arc_easy | 0|acc |0.8657|ยฑ |0.0070|
| | |acc_norm|0.8636|ยฑ |0.0070|
|boolq | 1|acc |0.8783|ยฑ |0.0057|
|hellaswag | 0|acc |0.6661|ยฑ |0.0047|
| | |acc_norm|0.8489|ยฑ |0.0036|
|openbookqa | 0|acc |0.3440|ยฑ |0.0213|
| | |acc_norm|0.4660|ยฑ |0.0223|
|piqa | 0|acc |0.8324|ยฑ |0.0087|
| | |acc_norm|0.8379|ยฑ |0.0086|
|winogrande | 0|acc |0.7616|ยฑ |0.0120|
```
Average: 75.70
## AGIEval:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2402|ยฑ |0.0269|
| | |acc_norm|0.2520|ยฑ |0.0273|
|agieval_logiqa_en | 0|acc |0.4117|ยฑ |0.0193|
| | |acc_norm|0.4055|ยฑ |0.0193|
|agieval_lsat_ar | 0|acc |0.2348|ยฑ |0.0280|
| | |acc_norm|0.2087|ยฑ |0.0269|
|agieval_lsat_lr | 0|acc |0.5549|ยฑ |0.0220|
| | |acc_norm|0.5294|ยฑ |0.0221|
|agieval_lsat_rc | 0|acc |0.6617|ยฑ |0.0289|
| | |acc_norm|0.6357|ยฑ |0.0294|
|agieval_sat_en | 0|acc |0.8010|ยฑ |0.0279|
| | |acc_norm|0.7913|ยฑ |0.0284|
|agieval_sat_en_without_passage| 0|acc |0.4806|ยฑ |0.0349|
| | |acc_norm|0.4612|ยฑ |0.0348|
|agieval_sat_math | 0|acc |0.4909|ยฑ |0.0338|
| | |acc_norm|0.4000|ยฑ |0.0331|
```
Average: 46.05
## BigBench:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.6105|ยฑ |0.0355|
|bigbench_date_understanding | 0|multiple_choice_grade|0.7182|ยฑ |0.0235|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.5736|ยฑ |0.0308|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.4596|ยฑ |0.0263|
| | |exact_str_match |0.0000|ยฑ |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.3500|ยฑ |0.0214|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2500|ยฑ |0.0164|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.5200|ยฑ |0.0289|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.3540|ยฑ |0.0214|
|bigbench_navigate | 0|multiple_choice_grade|0.5000|ยฑ |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.6900|ยฑ |0.0103|
|bigbench_ruin_names | 0|multiple_choice_grade|0.6317|ยฑ |0.0228|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2535|ยฑ |0.0138|
|bigbench_snarks | 0|multiple_choice_grade|0.7293|ยฑ |0.0331|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.6744|ยฑ |0.0149|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.7400|ยฑ |0.0139|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2176|ยฑ |0.0117|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1543|ยฑ |0.0086|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.5200|ยฑ |0.0289|
```
Average: 49.70
# Benchmark Comparison Charts
## GPT4All

## AGI-Eval

## BigBench Reasoning Test

## Comparison to Mixtral Instruct:
Our benchmarks show gains in many benchmarks against Mixtral Instruct v0.1, on average, beating the flagship Mixtral model.

# Prompt Format
Nous Hermes 2 uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.
This format enables OpenAI endpoint compatibility, and people familiar with the ChatGPT API will be familiar with the format, as it is the same used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are Hermes 2."},
{"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(**gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
To utilize the prompt format without a system prompt, simply leave the line out.
When quantized versions of the model are released, I recommend using LM Studio for chatting with Nous Hermes 2. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:

# Inference Code
Here is example code using HuggingFace Transformers to run inference with the model (note: even in 4bit, it will require more than 24GB of VRAM)
```python
# Code to inference Hermes with HF Transformers
# Requires the pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import LlamaTokenizer, MixtralForCausalLM
import bitsandbytes, flash_attn  # imported only to confirm the packages are installed

tokenizer = LlamaTokenizer.from_pretrained('NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO', trust_remote_code=True)
model = MixtralForCausalLM.from_pretrained(
    "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO",
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_8bit=False,
    load_in_4bit=True,
    use_flash_attention_2=True
)

prompts = [
    """<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]

for chat in prompts:
    print(chat)
    input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
    # Generate up to 750 new tokens, sampling with the card's suggested settings
    generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
    # Decode only the newly generated tokens, skipping special tokens
    response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(f"Response: {response}")
```
# Quantized Models:
## All sizes of GGUF Quantizations are available here:
### SFT+DPO Version - https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF
### SFT Only Version - https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT-GGUF
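As a hedged usage sketch (not part of the original card), a downloaded GGUF file can be run locally with the `llama-cpp-python` bindings; the filename below is a hypothetical placeholder for whichever quantization you fetch from the repositories above:
```python
# Hedged sketch: run a GGUF quant with llama-cpp-python.
# The model_path is a placeholder filename, not an exact artifact name.
from llama_cpp import Llama

llm = Llama(model_path="Nous-Hermes-2-Mixtral-8x7B-DPO.Q4_K_M.gguf", n_ctx=4096)
result = llm.create_chat_completion(messages=[
    {"role": "system", "content": "You are Hermes 2."},
    {"role": "user", "content": "Hello, who are you?"},
])
print(result["choices"][0]["message"]["content"])
```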
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
|
Yntec/LEOSAMsFilmGirlUltra | Yntec | "2024-05-26T23:43:43Z" | 11,400 | 2 | diffusers | [
"diffusers",
"safetensors",
"Base Model",
"Film",
"Portraits",
"LEOSAM",
"text-to-image",
"stable-diffusion",
"stable-diffusion-diffusers",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-05-26T01:10:24Z" | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Base Model
- Film
- Portraits
- LEOSAM
- text-to-image
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
---
# LEOSAM's FilmGirlUltra
If you are tired of always seeing the same faces that SD1.5 models output, this model is for you (even though it's also SD1.5). This 768x768 version of the model uses the EulerDiscreteScheduler.
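As a minimal sketch (not from the original page) of how this checkpoint might be loaded with diffusers — the scheduler swap mirrors the EulerDiscreteScheduler noted above, and the prompt is illustrative:
```python
# Hedged sketch: load the pipeline and sample at 768x768 with the
# EulerDiscreteScheduler this version of the model uses.
import torch
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/LEOSAMsFilmGirlUltra", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe(
    "analog 1990 movie screenshot film grain portrait",  # illustrative prompt
    width=768,
    height=768,
).images[0]
image.save("filmgirl.png")
```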
Samples and prompts:

(Click for larger)
Top left: portrait of a girl posing with 1990s style hair pulled back in french twist wearing streetwear 90s fashion: oversize denim jacket and t-shirt, baggy ripped jeans, canvas sneakers at clothes store hdr, cinematic shot, feminine colors, 90s style, HairDetail,, feminine colors, ambient lighting, quarter turn,1/4 body pose
Top right: 90s VHS TV Photograph of young Joaquin Phoenix as The joker. Death Prank behind the scenes
Bottom left: kodachrome camera transparency, dramatic lighting with wife and daughter enjoying pie with candles. sitting with a pretty cute little girl, Gift Birthday Theme by Gil_Elvgren and Haddon_Sundblom
Bottom right: analog 1990 movie screenshot film grain portrait, Santa Claus sitting with pretty cute little girl in Zone 51, cute faces and eyes, closeup, Extraterrestrial, PARTY HARD BACKGROUND, Space Ship Delivering Presents, Space Ship Decorated With Garlands and Balls, Snowstorm
Original page: https://civitai.com/models/33208/leosams-filmgirl-ultra
DPM++ 512x512 version: https://huggingface.co/sam749/LEOSAM-s-ilmGirl-ltra-Ultra-ase-odel |