modelId (string, 5-122 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC]) | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (245 classes) | tags (list, 1-4.05k items) | pipeline_tag (48 classes) | createdAt (timestamp[us, tz=UTC]) | card (string, 1-901k chars) |
---|---|---|---|---|---|---|---|---|---|
nbeerbower/maidphin | nbeerbower | 2024-02-04T20:24:16Z | 451 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"not-for-all-audiences",
"nsfw",
"base_model:SanjiWatsuki/Silicon-Maid-7B",
"base_model:nbeerbower/bruphin-zeta",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-02-04T20:21:14Z | ---
license: cc-by-4.0
base_model:
- SanjiWatsuki/Silicon-Maid-7B
- nbeerbower/bruphin-zeta
tags:
- mergekit
- merge
- not-for-all-audiences
- nsfw
---
# maidphin
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
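As a refresher on what SLERP does (the sketch below is not from the mergekit source; the helper and the flattening step are illustrative assumptions), spherical linear interpolation blends two weight tensors along the arc between them rather than along a straight line:

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two weight tensors with factor t in [0, 1]."""
    a, b = v0.flatten().float(), v1.flatten().float()
    # Angle between the two (flattened) weight vectors
    cos_omega = torch.clamp((a @ b) / (a.norm() * b.norm() + eps), -1.0, 1.0)
    omega = torch.arccos(cos_omega)
    if omega.abs() < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation
        return (1 - t) * v0 + t * v1
    sin_omega = torch.sin(omega)
    out = (torch.sin((1 - t) * omega) * a + torch.sin(t * omega) * b) / sin_omega
    return out.reshape(v0.shape).to(v0.dtype)
```

In the configuration further down, the `t` values control this interpolation factor separately for the self-attention and MLP blocks.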
### Models Merged
The following models were included in the merge:
* [SanjiWatsuki/Silicon-Maid-7B](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B)
* [nbeerbower/bruphin-zeta](https://huggingface.co/nbeerbower/bruphin-zeta)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: nbeerbower/bruphin-zeta
    layer_range: [0, 32]
  - model: SanjiWatsuki/Silicon-Maid-7B
    layer_range: [0, 32]
merge_method: slerp
base_model: nbeerbower/bruphin-zeta
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
|
predibase/dbpedia | predibase | 2024-02-21T19:14:00Z | 451 | 8 | peft | [
"peft",
"safetensors",
"text-generation",
"base_model:mistralai/Mistral-7B-v0.1",
"region:us"
]
| text-generation | 2024-02-19T23:16:23Z | ---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
pipeline_tag: text-generation
---
Description: Topic extraction from a news article and title\
Original dataset: https://huggingface.co/datasets/fancyzhx/dbpedia_14 \
---\
Try querying this adapter for free in Lora Land at https://predibase.com/lora-land! \
The adapter_category is Topic Identification and the name is News Topic Identification (dbpedia)\
---\
Sample input: You are given the title and the body of an article below. Please determine the type of the article.\n### Title: Great White Whale\n\n### Body: Great White Whale is the debut album by the Canadian rock band Secret and Whisper. The album was in the works for about a year and was released on February 12 2008. A music video was shot in Pittsburgh for the album's first single XOXOXO. The album reached number 17 on iTunes's top 100 albums in its first week on sale.\n\n### Article Type: \
---\
Sample output: 11\
---\
Try using this adapter yourself!
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model and this repo's LoRA adapter
model_id = "mistralai/Mistral-7B-v0.1"
peft_model_id = "predibase/dbpedia"

# Load the base model, then attach the adapter weights
model = AutoModelForCausalLM.from_pretrained(model_id)
model.load_adapter(peft_model_id)
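
# --- Illustrative sketch (not part of the original card): query the adapter ---
# The prompt mirrors the sample input shown above; the article text is a placeholder.
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt = (
    "You are given the title and the body of an article below. "
    "Please determine the type of the article.\n"
    "### Title: Great White Whale\n\n"
    "### Body: Great White Whale is the debut album by the Canadian rock band "
    "Secret and Whisper.\n\n### Article Type: "
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=4)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))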
``` |
dreamgen/opus-v1.2-70b | dreamgen | 2024-03-30T17:04:51Z | 451 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"unsloth",
"axolotl",
"en",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-03-06T13:31:54Z | ---
language:
- en
pipeline_tag: text-generation
tags:
- unsloth
- axolotl
license: cc-by-nc-nd-4.0
---
# DreamGen Opus V1
<div style="display: flex; flex-direction: row; align-items: center;">
<img src="/dreamgen/opus-v1.2-70b/resolve/main/images/logo-1024.png" alt="model logo" style="
border-radius: 12px;
margin-right: 12px;
margin-top: 0px;
margin-bottom: 0px;
max-width: 100px;
height: auto;
"/>
Models for **(steerable) story-writing and role-playing**.
<br/>[All Opus V1 models, including quants](https://huggingface.co/collections/dreamgen/opus-v1-65d092a6f8ab7fc669111b31).
</div>
## Resources
- [**Opus V1 prompting guide**](https://dreamgen.com/docs/models/opus/v1) with many (interactive) examples and prompts that you can copy.
- [**Google Colab**](https://colab.research.google.com/drive/1J178fH6IdQOXNi-Njgdacf5QgAxsdT20?usp=sharing) for interactive role-play using `opus-v1.2-7b`.
- [Python code](example/prompt/format.py) to format the prompt correctly.
- Join the community on [**Discord**](https://dreamgen.com/discord) to get early access to new models.
<img src="/dreamgen/opus-v1.2-70b/resolve/main/images/story_writing.webp" alt="story writing on dreamgen.com" style="
padding: 12px;
border-radius: 12px;
border: 2px solid #f9a8d4;
background: rgb(9, 9, 11);
"/>
## Prompting
<details>
<summary>The models use an extended version of ChatML.</summary>
```
<|im_start|>system
(Story description in the right format here)
(Typically consists of plot description, style description and characters)<|im_end|>
<|im_start|>user
(Your instruction on how the story should continue)<|im_end|>
<|im_start|>text names= Alice
(Continuation of the story from the Alice character)<|im_end|>
<|im_start|>text
(Continuation of the story from no character in particular (pure narration))<|im_end|>
<|im_start|>user
(Your instruction on how the story should continue)<|im_end|>
<|im_start|>text names= Bob
(Continuation of the story from the Bob character)<|im_end|>
```
The Opus V1 extension is the addition of the `text` role, and the addition / modification of role names.
Pay attention to the following:
- The `text` messages can (but do not have to) carry `names`; names are used to indicate the "active" character during role-play.
- There can be multiple subsequent messages with a `text` role, especially if names are involved.
- There can be multiple names attached to a message.
- The format for names is `names= {{name[0]}}; {{name[1]}}`; mind the spaces after `names=` and after the `;`. This spacing leads to the most natural tokenization of the names.
</details>
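As a rough illustration of this format (the helper below is an assumption, not the official formatter; use [the prompt formatting code](example/prompt/format.py) for real use), a prompt could be assembled like this:

```python
def format_opus_prompt(turns):
    """Render (role, names, content) tuples into the extended ChatML shown above."""
    parts = []
    for role, names, content in turns:
        # Note the space after "names=" and after each ";" as described above.
        header = role if not names else f"{role} names= " + "; ".join(names)
        parts.append(f"<|im_start|>{header}\n{content}<|im_end|>")
    # Open a new `text` turn for the model to continue.
    return "\n".join(parts) + "\n<|im_start|>text names= Alice\n"

prompt = format_opus_prompt([
    ("system", None, "(Story description in the right format here)"),
    ("user", None, "(Your instruction on how the story should continue)"),
    ("text", ["Alice"], "(Continuation of the story from the Alice character)"),
    ("user", None, "(Your instruction on how the story should continue)"),
])
```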
While the main goal for the models is great story-writing and role-playing performance, they are also capable of several writing-related tasks as well as general assistance.
Here's how you can prompt the model for the following tasks:
### Steerable Story-writing and Role-playing:
- [Story-writing prompt guide](https://dreamgen.com/docs/models/opus/v1#task-story-writing)
- [Role-playing prompt guide](https://dreamgen.com/docs/models/opus/v1#task-role-playing)
- Input:
- System prompt: You provide story / role-play description, which consists of:
- Plot description
- Style description
- Characters and their descriptions
- Conversation turns:
- Text / message turn: This represents part of the story or role play
- Instruction: This tells the model what should happen next
- Output: Continuation of the story / role-play.
### Story plot summarization
- [Prompt guide](https://dreamgen.com/docs/models/opus/v1#task-plot-description)
- Input: A story, or a few chapters of a story.
- Output: A description of the story or chapters.
#### Examples:
<details>
<summary>Summary of <a href="https://www.libraryofshortstories.com/onlinereader/the-last-incantation">short fiction story</a> in 50 words</summary>
<blockquote>
Malygris, a powerful magician, summons his long-dead lover Nylissa from the grave using necromancy. However, upon seeing her, he realizes that she is not the same person he once knew, leading him to question the nature of love, memory, and the power of magic.
</blockquote>
</details>
<details>
<summary>Summary of <a href="https://www.libraryofshortstories.com/onlinereader/the-last-incantation">short fiction story</a> in 200 words</summary>
<blockquote>
The story is a dark fantasy set in the city of Susran, where an ancient and powerful magician named Malygris dwells in a tower atop a conical hill. Malygris, who has mastered all forms of sorcery and necromancy, is plagued by a deep sense of ennui and melancholy. He seeks solace in the memory of Nylissa, a girl he once loved in his youth.
One day, Malygris decides to summon Nylissa's spirit from the grave using his necromantic powers. He performs the necessary rituals and incantations, and Nylissa appears before him as a ghostly apparition. However, as Malygris gazes upon her, he becomes increasingly disillusioned. He notices subtle changes in her appearance and behavior that do not match his memories of her. He begins to question whether this is truly the same Nylissa he once knew, or if it is just another illusion conjured by his magic.
As Malygris grapples with this uncertainty, he becomes increasingly despondent and disillusioned with life itself. He realizes that even his magic cannot bring back the past or make him truly happy. In the end, he dismisses Nylissa's spirit and returns to his lonely tower, resigned to his fate as an old and weary sorcerer.
</blockquote>
</details>
<details>
<summary>Summary of <a href="https://edition.cnn.com/2024/02/29/health/pets-diseases-viruses-bacteria-wellness/index.html">CNN News article</a> in 150 words</summary>
<blockquote>
The article discusses the recent human deaths from diseases transmitted by animals, specifically Alaskapox and bubonic plague. Dr. Leana Wen, an emergency physician and former Baltimore health commissioner, explains the various ways household pets can transmit diseases to humans. These include fecal-oral transmission, scratches and bites, and vector-borne diseases like ticks, fleas, and mosquitoes. Wen emphasizes the importance of vaccinating pets, maintaining cleanliness, and taking commonsense precautions to reduce disease transmission. She also advises that individuals who are immunocompromised, babies, and pregnant women should take additional care when interacting with animals.
</blockquote>
</details>
<details>
<summary>Summary of <a href="https://edition.cnn.com/2024/02/29/health/pets-diseases-viruses-bacteria-wellness/index.html">CNN News article</a> in 50 words</summary>
<blockquote>
A doctor discusses the risks of diseases transmitted from animals to humans, focusing on household pets like cats and dogs. They emphasize the importance of vaccination, hygiene, and regular veterinary care to prevent these illnesses.
</blockquote>
</details>
### Story character description
- [Prompt guide](https://dreamgen.com/docs/models/opus/v1#task-char-description)
- Input: A story, or a few chapters of a story, and a set of characters.
- Output: A description of the characters.
#### Examples:
<details>
<summary>Description of the Malygris character from <a href="https://www.libraryofshortstories.com/onlinereader/the-last-incantation">this story</a> in 50 words</summary>
<blockquote>
Malygris is an ancient and powerful magician, characterized by his long white beard, cold green eyes, and a violet robe. He is a master of necromancy and possesses a vast knowledge of forbidden magic. Despite his immense power, he is plagued by a deep sense of melancholy and ennui.
</blockquote>
</details>
<details>
<summary>Description of the prince character from <a href="https://www.libraryofshortstories.com/onlinereader/the-pearl-of-love">this story</a> in 50 words</summary>
<blockquote>
The prince is a devoted and determined individual, driven by his love for his late queen. He is described as a graceful youth in the beginning, but as he grows older, his dedication to the Pearl of Love transforms him into a grave and intent man. His sense of color and design evolves over time, reflecting his growing maturity and artistic vision.
</blockquote>
</details>
### Story style description
- [Prompt guide](https://dreamgen.com/docs/models/opus/v1#task-style-description)
- Input: A story, or a few chapters of a story.
- Output: A description of the style of the story.
<details>
<summary>Style description of <a href="https://www.libraryofshortstories.com/onlinereader/the-last-incantation">this story</a></summary>
<blockquote>
The story is written in a descriptive and evocative style, with a focus on the atmosphere and setting. The author uses vivid and detailed language to create a sense of place and time, and employs a variety of literary devices such as similes, metaphors, and allusions to enhance the mood and tone of the narrative. The sentence structure is varied, with a mix of short and long sentences that create a rhythmic flow to the prose.
</blockquote>
</details>
<details>
<summary>Style description of <a href="https://www.libraryofshortstories.com/onlinereader/the-pearl-of-love">this story</a></summary>
<blockquote>
The writing style is descriptive and evocative, with a focus on the beauty and grandeur of the Pearl of Love. The author uses vivid imagery and sensory details to create a rich and immersive atmosphere. The tone is reverential and contemplative, reflecting the prince's deep love for his queen and his dedication to creating a lasting monument to her memory.
</blockquote>
</details>
### Story description to chapters
- [Prompt guide](https://dreamgen.com/docs/models/opus/v1#task-story-description-to-chapter-descriptions)
- Input: A brief plot description and the desired number of chapters.
- Output: A description for each chapter.
### And more...
## Sampling params
For story-writing and role-play, I recommend "Min P" based sampling with `min_p` in the range `[0.01, 0.1]` and with `temperature` in the range `[0.5, 1.5]`, depending on your preferences. A good starting point would be `min_p=0.1; temperature=0.8`.
You may also benefit from setting presence, frequency and repetition penalties, especially at lower temperatures.
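As a hedged example with vLLM (one of the backends listed below), the suggested starting point could be expressed like this; the penalty values and the short placeholder prompt are illustrative only:

```python
from vllm import LLM, SamplingParams

sampling = SamplingParams(
    temperature=0.8,        # suggested starting point; try 0.5-1.5
    min_p=0.1,              # "Min P" sampling; try 0.01-0.1
    presence_penalty=0.1,   # optional, illustrative value
    frequency_penalty=0.1,  # optional, illustrative value
    max_tokens=512,
)
llm = LLM(model="dreamgen/opus-v1.2-70b")
prompt = "<|im_start|>system\n(Story description)<|im_end|>\n<|im_start|>text\n"
print(llm.generate([prompt], sampling)[0].outputs[0].text)
```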
## Dataset
The fine-tuning dataset consisted of ~100M tokens of steerable story-writing, role-playing, writing-assistant and general-assistant examples. Each example was up to 31000 tokens long.
All story-writing and role-playing examples were based on human-written text.

## Running the model
The model should be compatible with any software that supports the base model, but beware of prompting and tokenization.
I recommend using these model versions:
- 7B: [no quant (opus-v1.2-7b)](https://huggingface.co/dreamgen/opus-v1.2-7b)
- 34B: [no quant (opus-v1-34b)](https://huggingface.co/dreamgen/opus-v1-34b) or [awq (opus-v1-34b-awq)](https://huggingface.co/dreamgen/opus-v1-34b-awq)
- 70B: [no quant (opus-v1.2-70b)](https://huggingface.co/dreamgen/opus-v1.2-70b) or [awq (opus-v1.2-70b-awq)](https://huggingface.co/dreamgen/opus-v1.2-70b-awq)
### Running on DreamGen.com (free)
You can run the models on [dreamgen.com](https://dreamgen.com) for free: you can use the built-in UI for story-writing & role-playing, or use [the API](https://dreamgen.com/docs/api).
### Running Locally
- **Make sure your prompt is as close as possible to the Opus V1 format**
- Regardless of which backend you use, it's important that you format your prompt well and that the tokenization works correctly.
- [Read the prompt guide](https://dreamgen.com/docs/models/opus/v1)
- [Read the prompt formatting code](example/prompt/format.py)
- Make sure `<|im_start|>` and `<|im_end|>` are tokenized correctly
- **vLLM**
- [**Google Colab**](https://colab.research.google.com/drive/1J178fH6IdQOXNi-Njgdacf5QgAxsdT20?usp=sharing): This is a simple interactive Google Colab to do role-play with the 7B model, it should fit on the T4 GPU.
- [Code](example/prompt/interactive.py): This is a simple script for interactive chat with one hard-coded scenario.
- **SillyTavern**
- [Official SillyTavern documentation for DreamGen](https://docs.sillytavern.app/usage/api-connections/dreamgen/) -- applies to both the API and local models
- SillyTavern (staging) comes with built-in DreamGen preset for RP
- Other presets can be found [here](https://huggingface.co/dreamgen/opus-v1.2-70b/tree/main/configs/silly_tavern), v2 kindly provided by @MarinaraSpaghetti
- Make sure to unselect `Skip special tokens`, otherwise it won't work
- This is just an attempt at approximating the Opus V1 prompt; it won't be perfect
- Character cards specifically rewritten for the built-in DreamGen preset:
- [Seraphina](configs/silly_tavern/cards/Seraphina.png) (based on the default Seraphina card)
- [Lara Lightland](configs/silly_tavern/cards/LaraLightland.png) (based on the card by Deffcolony)
- **LM Studio**
- [Config](configs/lmstudio/preset.json)
- Same as ChatML, except the "assistant" role is renamed to "text".
- **There's a bug** in LM Studio if you delete a message or click "Continue", [see here for details](https://discord.com/channels/1110598183144399058/1212665261128417280/1212665261128417280).
- **HuggingFace**
- [Chat template](tokenizer_config.json#L51)
- Same as ChatML, except the "assistant" role is renamed to "text".
## Known Issues
- **34B repetition**:
- The 34B sometimes gets stuck repeating the same word, or synonyms. This seems to be a common problem across various Yi 34B fine-tunes.
- **GGUF**:
- The tokenization might be messed up. Some users reported that `<|im_start|>` and `<|im_end|>` are tokenized as multiple tokens. Also llama.cpp may not tokenize correctly (the Yi tokenizer is subtly different from the Llama 2 tokenizer).
## License
- This model is intended for personal use only; other uses are not permitted. |
TeeZee/DarkForest-20B-v2.0-GGUF-iMat | TeeZee | 2024-04-01T11:03:43Z | 451 | 6 | null | [
"gguf",
"merge",
"not-for-all-audiences",
"license:other",
"region:us"
]
| null | 2024-03-19T09:02:40Z | ---
license: other
license_name: microsoft-research-license
tags:
- merge
- not-for-all-audiences
---
# DarkForest 20B v2.0 - GGUF IMat quants
[IMatrix file](https://huggingface.co/TeeZee/DarkForest-20B-v2.0-GGUF-iMat/resolve/main/DarkForest-20B-v2.0.fp16_512.imatrix?download=true) prepared with context 4096 and 5521 chunks of wiki.test.raw.
Original model info: [DarkForest-20B-v2.0](https://huggingface.co/TeeZee/DarkForest-20B-v2.0)
| Quant | Final PPL estimate (wiki.test.raw, 5521 chunks) |
|---|---|
| [q8_0](https://huggingface.co/TeeZee/DarkForest-20B-v2.0-GGUF-iMat/resolve/main/DarkForest-20B-v2.0_Q8_0_imatrix.gguf?download=true) | 8.5016 +/- 0.02134 |
| [q6_k](https://huggingface.co/TeeZee/DarkForest-20B-v2.0-GGUF-iMat/resolve/main/DarkForest-20B-v2.0.q6_K_imatrix.gguf?download=true) | 8.5046 +/- 0.02136 |
| [q5_0](https://huggingface.co/TeeZee/DarkForest-20B-v2.0-GGUF-iMat/resolve/main/DarkForest-20B-v2.0.q5_0_imatrix.gguf?download=true) | 8.4903 +/- 0.02132 |
| [q4_K_S](https://huggingface.co/TeeZee/DarkForest-20B-v2.0-GGUF-iMat/resolve/main/DarkForest-20B-v2.0.q4_K_S_imatrix.gguf?download=true) | 8.5880 +/- 0.02162 |
| [q4_K_M](https://huggingface.co/TeeZee/DarkForest-20B-v2.0-GGUF-iMat/resolve/main/DarkForest-20B-v2.0.q4_K_M_imatrix.gguf?download=true) | 8.5906 +/- 0.02163 |
| [q4_0](https://huggingface.co/TeeZee/DarkForest-20B-v2.0-GGUF-iMat/resolve/main/DarkForest-20B-v2.0.q4_0_imatrix.gguf?download=true) | 8.5610 +/- 0.02151 |
| [q3_K_M](https://huggingface.co/TeeZee/DarkForest-20B-v2.0-GGUF-iMat/resolve/main/DarkForest-20B-v2.0.q3_K_M_imatrix.gguf?download=true) | 8.7283 +/- 0.02196 |
| [q2_K](https://huggingface.co/TeeZee/DarkForest-20B-v2.0-GGUF-iMat/resolve/main/DarkForest-20B-v2.0.q2_K_imatrix.gguf?download=true) | 9.2445 +/- 0.02351 |
| [IQ2_XS](https://huggingface.co/TeeZee/DarkForest-20B-v2.0-GGUF-iMat/resolve/main/DarkForest-20B-v2.0.IQ2_XS_imatrix.gguf?download=true) | 9.8329 +/- 0.02452 |
| [IQ2_XXS](https://huggingface.co/TeeZee/DarkForest-20B-v2.0-GGUF-iMat/resolve/main/DarkForest-20B-v2.0.IQ2_XXS_imatrix.gguf?download=true) | 10.5170 +/- 0.02651 |
| [IQ1_S](https://huggingface.co/TeeZee/DarkForest-20B-v2.0-GGUF-iMat/resolve/main/DarkForest-20B-v2.0.IQ1_S_imatrix.gguf?download=true) | 13.9487 +/- 0.03502 (UNUSABLE, model is too small to be coherent in 1 bit) |
All comments are greatly appreciated. Download, test, and if you appreciate my work, consider buying me my fuel:
<a href="https://www.buymeacoffee.com/TeeZee" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
|
stablediffusionapi/lazymixv4-inpaint | stablediffusionapi | 2024-03-20T08:09:22Z | 451 | 0 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2024-03-20T08:07:26Z | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# API Inference

## Get API Key
Get your API key from [ModelsLab API](http://modelslab.com); no payment needed.
Replace the key in the code below and change **model_id** to "lazymixv4-inpaint".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/lazymixv4-inpaint)
Model link: [View model](https://modelslab.com/models/lazymixv4-inpaint)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "lazymixv4-inpaint",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
ruslanmv/Medical-Llama3-8B-GGUF | ruslanmv | 2024-04-24T11:55:08Z | 451 | 4 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"ruslanmv",
"llama",
"trl",
"en",
"dataset:ruslanmv/ai-medical-chatbot",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-23T14:51:07Z | ---
language: en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- ruslanmv
- llama
- trl
base_model: meta-llama/Meta-Llama-3-8B
datasets:
- ruslanmv/ai-medical-chatbot
---
# Medical-Llama3-8B-GGUF
[](https://ruslanmv.com/)
This is a fine-tuned version of the Llama3 8B model, specifically designed to answer medical questions.
The model was trained on the AI Medical Chatbot dataset, which can be found at [ruslanmv/ai-medical-chatbot](https://huggingface.co/datasets/ruslanmv/ai-medical-chatbot). The fine-tuned model is distributed in the GGUF format (the llama.cpp model file format) for efficient inference with low-bit quantization.
**Model:** [ruslanmv/Medical-Llama3-8B-GGUF](https://huggingface.co/ruslanmv/Medical-Llama3-8B-GGUF)
- **Developed by:** ruslanmv
- **License:** apache-2.0
- **Finetuned from model:** meta-llama/Meta-Llama-3-8B
## Installation
**Prerequisites:**
- A system with CUDA support is highly recommended for optimal performance.
- Python 3.10 or later
1. **Install required Python libraries:**
```bash
# GPU llama-cpp-python
!CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python --force-reinstall --upgrade --no-cache-dir --verbose
```
```bash
%%capture
!pip install huggingface-hub hf-transfer
```
2. **Download the quantized model:**
```bash
import os
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
!huggingface-cli download \
ruslanmv/Medical-Llama3-8B-GGUF \
medical-llama3-8b.Q5_K_M.gguf \
--local-dir . \
--local-dir-use-symlinks False
MODEL_PATH="/content/medical-llama3-8b.Q5_K_M.gguf"
```
## Example of use
Here's an example of how to use the quantized Medical-Llama3-8B-GGUF model to generate an answer to a medical question:
```python
from llama_cpp import Llama
import json
B_INST, E_INST = "<s>[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"
DEFAULT_SYSTEM_PROMPT = """\
You are an AI Medical Chatbot Assistant, I'm equipped with a wealth of medical knowledge derived from extensive datasets. I aim to provide comprehensive and informative responses to your inquiries. However, please note that while I strive for accuracy, my responses should not replace professional medical advice and short answers.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information."""
SYSTEM_PROMPT = B_SYS + DEFAULT_SYSTEM_PROMPT + E_SYS
def create_prompt(user_query):
instruction = f"User asks: {user_query}\n"
prompt = B_INST + SYSTEM_PROMPT + instruction + E_INST
return prompt.strip()
user_query = "I'm a 35-year-old male experiencing symptoms like fatigue, increased sensitivity to cold, and dry, itchy skin. Could these be indicative of hypothyroidism?"
prompt = create_prompt(user_query)
print(prompt)
llm = Llama(model_path=MODEL_PATH, n_gpu_layers=-1)
result = llm(
prompt=prompt,
max_tokens=100,
echo=False
)
print(result['choices'][0]['text'])
```
Example output:
```bash
Hi, thank you for your query.
Hypothyroidism is characterized by fatigue, sensitivity to cold, weight gain, depression, hair loss and mental dullness. I would suggest that you get a complete blood count with thyroid profile including TSH (thyroid stimulating hormone), free thyroxine level, and anti-thyroglobulin antibodies. These tests will help in establishing the diagnosis of hypothyroidism.
If there is no family history of autoimmune disorders, then it might be due
```
## License
This model is licensed under the Apache License 2.0. You can find the full license in the LICENSE file. |
duyntnet/Mistral-7B-Instruct-v0.2-imatrix-GGUF | duyntnet | 2024-04-26T06:46:06Z | 451 | 0 | transformers | [
"transformers",
"gguf",
"imatrix",
"mistralai",
"Mistral-7B-Instruct-v0.2",
"text-generation",
"en",
"license:other",
"region:us"
]
| text-generation | 2024-04-26T03:54:16Z | ---
license: other
inference: false
language:
- en
pipeline_tag: text-generation
tags:
- gguf
- imatrix
- mistralai
- Mistral-7B-Instruct-v0.2
- transformers
---
Quantizations of https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2
# From original readme
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin of sentence id. The next instructions should not. The assistant generation will be ended by the end-of-sentence token id.
E.g.
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```
This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
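Since this repository hosts GGUF quantizations rather than the original weights, a minimal `llama-cpp-python` sketch might look like the following (the quant filename is an assumption; substitute whichever file you downloaded from this repo):

```python
from llama_cpp import Llama

# Filename is an assumption -- use any quant downloaded from this repository.
llm = Llama(model_path="./Mistral-7B-Instruct-v0.2.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=-1)
output = llm("<s>[INST] What is your favourite condiment? [/INST]", max_tokens=256, stop=["</s>"])
print(output["choices"][0]["text"])
```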
## Troubleshooting
- If you see the following error:
```
Traceback (most recent call last):
File "", line 1, in
File "/transformers/models/auto/auto_factory.py", line 482, in from_pretrained
config, kwargs = AutoConfig.from_pretrained(
File "/transformers/models/auto/configuration_auto.py", line 1022, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
File "/transformers/models/auto/configuration_auto.py", line 723, in getitem
raise KeyError(key)
KeyError: 'mistral'
```
Installing transformers from source should solve the issue: `pip install git+https://github.com/huggingface/transformers`
This should not be required after transformers-v4.33.4. |
tanganke/clip-vit-base-patch32_eurosat | tanganke | 2024-04-28T11:30:39Z | 451 | 0 | transformers | [
"transformers",
"safetensors",
"clip_vision_model",
"feature-extraction",
"dataset:tanganke/eurosat",
"base_model:openai/clip-vit-base-patch32",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2024-04-28T11:26:19Z | ---
base_model:
- openai/clip-vit-base-patch32
datasets:
- tanganke/eurosat
metrics:
- accuracy
---
# Model Card
## Model Details
- Architecture: ViT-Base with patch size 32
- Training Data: EuroSAT dataset
## Training Details
Trained with the Adam optimizer at a constant learning rate of 1e-5 for 4,000 steps (batch_size=32).
Only the vision encoder is fine-tuned.
## Evaluation Results
Accuracy on EuroSAT:
- pre-trained: 0.4556
- fine-tuned: 0.9907
## Usage
Load the vision model:
```python
from transformers import CLIPVisionModel
vision_model = CLIPVisionModel.from_pretrained('tanganke/clip-vit-base-patch32_eurosat')
```
Substitute the vision encoder of CLIP:
```python
from transformers import CLIPModel
clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
clip_model.vision_model.load_state_dict(vision_model.vision_model.state_dict())
```
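As a quick, purely illustrative check that the swapped-in encoder works, you could run zero-shot classification with the combined model; the class names and image path below are assumptions:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor, CLIPVisionModel

# Rebuild the combined model exactly as in the snippets above
vision_model = CLIPVisionModel.from_pretrained("tanganke/clip-vit-base-patch32_eurosat")
clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
clip_model.vision_model.load_state_dict(vision_model.vision_model.state_dict())

processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
class_names = ["forest", "river", "highway", "residential buildings"]  # illustrative labels
image = Image.open("satellite_patch.png")  # path is an assumption

inputs = processor(
    text=[f"a centered satellite photo of {c}" for c in class_names],
    images=image, return_tensors="pt", padding=True,
)
with torch.no_grad():
    outputs = clip_model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)[0]
print(dict(zip(class_names, probs.tolist())))
```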
|
Local-Novel-LLM-project/Vecteus-v1-gguf | Local-Novel-LLM-project | 2024-05-03T00:37:22Z | 451 | 5 | transformers | [
"transformers",
"gguf",
"finetuned",
"text-generation",
"en",
"ja",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-01T06:39:28Z | ---
license: apache-2.0
language:
- en
- ja
tags:
- finetuned
library_name: transformers
pipeline_tag: text-generation
---
<img src="./veteus_logo.svg" width="100%" height="20%" alt="">
- GGUF version of Vecteus-v1
# Our Models for GGUF
- [Vecteus](https://huggingface.co/Local-Novel-LLM-project/Vecteus-v1-gguf)
- [Ninja-v1](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-GGUF)
- [Ninja-v1-NSFW](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-NSFW-GGUF)
|
ukung/komodo-7b-base-GGUF | ukung | 2024-05-19T16:47:36Z | 451 | 0 | null | [
"gguf",
"finetuned",
"text-generation",
"base_model:Yellow-AI-NLP/komodo-7b-base",
"license:apache-2.0",
"region:us"
]
| text-generation | 2024-05-19T14:13:14Z | ---
base_model: Yellow-AI-NLP/komodo-7b-base
inference: false
license: apache-2.0
model_type: llama
pipeline_tag: text-generation
prompt_template: '<s>[INST] {prompt} [/INST]
'
quantized_by: ukung
tags:
- finetuned
---
<!-- markdownlint-disable MD041 -->
<!-- description start -->
## Description
These files were quantised using hardware kindly provided by [Massed Compute](https://kaggle.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/ukung/komodo-7b-base-GGUF)
* [Original unquantised fp16 model in PyTorch format, for GPU inference and for further conversions](https://huggingface.co/Yellow-AI-NLP/komodo-7b-base)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Mistral
```
<s>[INST] {prompt} [/INST]
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* q2_k: Uses Q4_K for the attention.wv and feed_forward.w2 tensors, Q2_K for the other tensors.
* q3_k_l: Uses Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K
* q3_k_m: Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K
* q3_k_s: Uses Q3_K for all tensors
* q4_0: Original quant method, 4-bit.
* q4_1: Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models.
* q4_k_m: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K
* q4_k_s: Uses Q4_K for all tensors
* q5_0: Higher accuracy, higher resource usage and slower inference.
* q5_1: Even higher accuracy, resource usage and slower inference.
* q5_k_m: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K
* q5_k_s: Uses Q5_K for all tensors
* q6_k: Uses Q8_K for all tensors
* q8_0: Almost indistinguishable from float16. High resource use and slow. Not recommended for most users.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [komodo-7b-base-q2_k.gguf](https://huggingface.co/ukung/komodo-7b-base-GGUF/tree/main/komodo-7b-base-q2_k.gguf) | Q2_K | 2 | 2.55 GB | Unknown | smallest, significant quality loss - not recommended for most purposes |
| [komodo-7b-base-q3_k_l.gguf](https://huggingface.co/ukung/komodo-7b-base-GGUF/tree/main/komodo-7b-base-q3_k_l.gguf) | Q3_K_L | 3 | 3.61 GB | Unknown | very small, high quality loss |
| [komodo-7b-base-q3_k_m.gguf](https://huggingface.co/ukung/komodo-7b-base-GGUF/tree/main/komodo-7b-base-q3_k_m.gguf) | Q3_K_M | 3 | 3.31 GB | Unknown | very small, high quality loss |
| [komodo-7b-base-q3_k_s.gguf](https://huggingface.co/ukung/komodo-7b-base-GGUF/tree/main/komodo-7b-base-q3_k_s.gguf) | Q3_K_S | 3 | 2.96 GB | Unknown | very small, high quality loss |
| [komodo-7b-base-q4_0.gguf](https://huggingface.co/ukung/komodo-7b-base-GGUF/tree/main/komodo-7b-base-q4_0.gguf) | Q4_0 | 4 | 3.84 GB | Unknown | smaller, moderate quality loss |
| [komodo-7b-base-q4_1.gguf](https://huggingface.co/ukung/komodo-7b-base-GGUF/tree/main/komodo-7b-base-q4_1.gguf) | Q4_1 | 4 | 4.26 GB | Unknown | smaller, moderate quality loss |
| [komodo-7b-base-q4_k_m.gguf](https://huggingface.co/ukung/komodo-7b-base-GGUF/tree/main/komodo-7b-base-q4_k_m.gguf) | Q4_K_M | 4 | 4.1 GB | Unknown | smaller, moderate quality loss |
| [komodo-7b-base-q4_k_s.gguf](https://huggingface.co/ukung/komodo-7b-base-GGUF/tree/main/komodo-7b-base-q4_k_s.gguf) | Q4_K_S | 4 | 3.87 GB | Unknown | smaller, moderate quality loss |
| [komodo-7b-base-q5_0.gguf](https://huggingface.co/ukung/komodo-7b-base-GGUF/tree/main/komodo-7b-base-q5_0.gguf) | Q5_0 | 5 | 4.67 GB | Unknown | medium, balanced quality |
| [komodo-7b-base-q5_1.gguf](https://huggingface.co/ukung/komodo-7b-base-GGUF/tree/main/komodo-7b-base-q5_1.gguf) | Q5_1 | 5 | 5.08 GB | Unknown | medium, balanced quality |
| [komodo-7b-base-q5_k_m.gguf](https://huggingface.co/ukung/komodo-7b-base-GGUF/tree/main/komodo-7b-base-q5_k_m.gguf) | Q5_K_M | 5 | 4.8 GB | Unknown | medium, balanced quality |
| [komodo-7b-base-q5_k_s.gguf](https://huggingface.co/ukung/komodo-7b-base-GGUF/tree/main/komodo-7b-base-q5_k_s.gguf) | Q5_K_S | 5 | 4.67 GB | Unknown | medium, balanced quality |
| [komodo-7b-base-q6_k.gguf](https://huggingface.co/ukung/komodo-7b-base-GGUF/tree/main/komodo-7b-base-q6_k.gguf) | Q6_K | 6 | 5.55 GB | Unknown | larger, higher quality |
| [komodo-7b-base-q8_0.gguf](https://huggingface.co/ukung/komodo-7b-base-GGUF/tree/main/komodo-7b-base-q8_0.gguf) | Q8_0 | 8 | 7.19 GB | Unknown | largest, best quality |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: ukung/komodo-7b-base-GGUF and below it, a specific filename to download, such as: komodo-7b-base-q4_0.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download ukung/komodo-7b-base-GGUF komodo-7b-base-q4_0.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download ukung/komodo-7b-base-GGUF --local-dir . --local-dir-use-symlinks False --include='*q4_k*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download ukung/komodo-7b-base-GGUF komodo-7b-base-q4_0.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m komodo-7b-base-q4_0.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt} [/INST]"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 โ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# In Windows, to set the variable CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./komodo-7b-base-q4_0.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<s>[INST] {prompt} [/INST]", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./komodo-7b-base-q4_0.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
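A minimal LangChain + llama-cpp-python sketch (the local file path is an assumption; download a quant from this repository first) might look like this:

```python
from langchain_community.llms import LlamaCpp

# Path is an assumption -- point it at a quant downloaded from this repository.
llm = LlamaCpp(model_path="./komodo-7b-base-q4_0.gguf", n_gpu_layers=35, n_ctx=4096, temperature=0.7)
print(llm.invoke("<s>[INST] What is self-attention? Explain it in detail. [/INST]"))
```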
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
<!-- footer end -->
<!-- original-model-card start -->
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin of sentence id. The next instructions should not. The assistant generation will be ended by the end-of-sentence token id.
E.g.
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```
For example, you can generate with this format using the `ctransformers` library:
```python
# pip install ctransformers
from ctransformers import AutoModelForCausalLM
llm1 = AutoModelForCausalLM.from_pretrained("ukung/komodo-7b-base-GGUF", model_file="komodo-7b-base-q4_0.gguf", model_type="llama", gpu_layers=50)
prompt="""jelaskan dengan detail apa itu self-attention?"""
for text in llm1(prompt, stream=True, max_new_tokens=2048, stop=["</s>", "<s>", "<|im_start|>", "<|im_end|>", "|im_end|>", "|im_end|", "<"]):
print(text, end='')
```
## Model Architecture
This instruction model is based on Llama2, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
## Troubleshooting
- If you see the following error:
```
Traceback (most recent call last):
File "", line 1, in
File "/transformers/models/auto/auto_factory.py", line 482, in from_pretrained
config, kwargs = AutoConfig.from_pretrained(
File "/transformers/models/auto/configuration_auto.py", line 1022, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
File "/transformers/models/auto/configuration_auto.py", line 723, in getitem
raise KeyError(key)
KeyError: 'llama'
```
Installing transformers from source should solve the issue: `pip install git+https://github.com/huggingface/transformers`
This should not be required after transformers-v4.33.4.
## Limitations
The model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
## The AI Team
[email protected]
<!-- original-model-card end --> |
RichardErkhov/lgaalves_-_llama-2-7b-hf_open-platypus-gguf | RichardErkhov | 2024-05-27T16:14:42Z | 451 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-27T14:14:57Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama-2-7b-hf_open-platypus - GGUF
- Model creator: https://huggingface.co/lgaalves/
- Original model: https://huggingface.co/lgaalves/llama-2-7b-hf_open-platypus/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama-2-7b-hf_open-platypus.Q2_K.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_llama-2-7b-hf_open-platypus-gguf/blob/main/llama-2-7b-hf_open-platypus.Q2_K.gguf) | Q2_K | 2.36GB |
| [llama-2-7b-hf_open-platypus.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_llama-2-7b-hf_open-platypus-gguf/blob/main/llama-2-7b-hf_open-platypus.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [llama-2-7b-hf_open-platypus.IQ3_S.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_llama-2-7b-hf_open-platypus-gguf/blob/main/llama-2-7b-hf_open-platypus.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [llama-2-7b-hf_open-platypus.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_llama-2-7b-hf_open-platypus-gguf/blob/main/llama-2-7b-hf_open-platypus.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [llama-2-7b-hf_open-platypus.IQ3_M.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_llama-2-7b-hf_open-platypus-gguf/blob/main/llama-2-7b-hf_open-platypus.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [llama-2-7b-hf_open-platypus.Q3_K.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_llama-2-7b-hf_open-platypus-gguf/blob/main/llama-2-7b-hf_open-platypus.Q3_K.gguf) | Q3_K | 3.07GB |
| [llama-2-7b-hf_open-platypus.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_llama-2-7b-hf_open-platypus-gguf/blob/main/llama-2-7b-hf_open-platypus.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [llama-2-7b-hf_open-platypus.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_llama-2-7b-hf_open-platypus-gguf/blob/main/llama-2-7b-hf_open-platypus.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [llama-2-7b-hf_open-platypus.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_llama-2-7b-hf_open-platypus-gguf/blob/main/llama-2-7b-hf_open-platypus.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [llama-2-7b-hf_open-platypus.Q4_0.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_llama-2-7b-hf_open-platypus-gguf/blob/main/llama-2-7b-hf_open-platypus.Q4_0.gguf) | Q4_0 | 3.56GB |
| [llama-2-7b-hf_open-platypus.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_llama-2-7b-hf_open-platypus-gguf/blob/main/llama-2-7b-hf_open-platypus.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [llama-2-7b-hf_open-platypus.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_llama-2-7b-hf_open-platypus-gguf/blob/main/llama-2-7b-hf_open-platypus.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [llama-2-7b-hf_open-platypus.Q4_K.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_llama-2-7b-hf_open-platypus-gguf/blob/main/llama-2-7b-hf_open-platypus.Q4_K.gguf) | Q4_K | 3.8GB |
| [llama-2-7b-hf_open-platypus.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_llama-2-7b-hf_open-platypus-gguf/blob/main/llama-2-7b-hf_open-platypus.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [llama-2-7b-hf_open-platypus.Q4_1.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_llama-2-7b-hf_open-platypus-gguf/blob/main/llama-2-7b-hf_open-platypus.Q4_1.gguf) | Q4_1 | 3.95GB |
| [llama-2-7b-hf_open-platypus.Q5_0.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_llama-2-7b-hf_open-platypus-gguf/blob/main/llama-2-7b-hf_open-platypus.Q5_0.gguf) | Q5_0 | 4.33GB |
| [llama-2-7b-hf_open-platypus.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_llama-2-7b-hf_open-platypus-gguf/blob/main/llama-2-7b-hf_open-platypus.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [llama-2-7b-hf_open-platypus.Q5_K.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_llama-2-7b-hf_open-platypus-gguf/blob/main/llama-2-7b-hf_open-platypus.Q5_K.gguf) | Q5_K | 4.45GB |
| [llama-2-7b-hf_open-platypus.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_llama-2-7b-hf_open-platypus-gguf/blob/main/llama-2-7b-hf_open-platypus.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [llama-2-7b-hf_open-platypus.Q5_1.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_llama-2-7b-hf_open-platypus-gguf/blob/main/llama-2-7b-hf_open-platypus.Q5_1.gguf) | Q5_1 | 4.72GB |
| [llama-2-7b-hf_open-platypus.Q6_K.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_llama-2-7b-hf_open-platypus-gguf/blob/main/llama-2-7b-hf_open-platypus.Q6_K.gguf) | Q6_K | 5.15GB |
| [llama-2-7b-hf_open-platypus.Q8_0.gguf](https://huggingface.co/RichardErkhov/lgaalves_-_llama-2-7b-hf_open-platypus-gguf/blob/main/llama-2-7b-hf_open-platypus.Q8_0.gguf) | Q8_0 | 6.67GB |
Original model description:
---
license: llama2
datasets:
- garage-bAInd/Open-Platypus
pipeline_tag: text-generation
language:
- en
---
# Llama-2-7b-hf_open-platypus
**llama-2-7b-hf_open-platypus** is an instruction fine-tuned model based on the LLaMA2-7B transformer architecture.
### Benchmark Metrics
| Metric | llama-2-7b-hf_open-platypus | garage-bAInd/Platypus2-7B| meta-llama/Llama-2-7b-hf (base) |
|-----------------------|-------|-------|-------|
| Avg. | 54.35|**56.13** | 54.32 |
| ARC (25-shot) | 51.45 |**55.2**| 53.07 |
| HellaSwag (10-shot) | 78.63 |**78.84**| 78.59 |
| MMLU (5-shot) | 43.6 |**49.83**| 46.87 |
| TruthfulQA (0-shot) | **43.71** |40.64| 38.76 |
We use state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard. Please see below for detailed instructions on reproducing benchmark results.
### Model Details
* **Trained by**: Luiz G A Alves
* **Model type:** **llama-2-7b-hf_open-platypus** is an auto-regressive language model based on the LLaMA2 transformer architecture.
* **Language(s)**: English
### How to use:
```python
# Use a pipeline as a high-level helper
>>> from transformers import pipeline
>>> pipe = pipeline("text-generation", model="lgaalves/llama-2-7b-hf_open-platypus")
>>> question = "What is a large language model?"
>>> answer = pipe(question)
>>> print(answer[0]['generated_text'])
```
or, you can load the model directly using:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("lgaalves/llama-2-7b-hf_open-platypus")
model = AutoModelForCausalLM.from_pretrained("lgaalves/llama-2-7b-hf_open-platypus")
```
### Training Dataset
`lgaalves/llama-2-7b-hf_open-platypus` was trained using the STEM- and logic-based dataset [`garage-bAInd/Open-Platypus`](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).
### Training Procedure
`lgaalves/llama-2-7b-hf_open-platypus` was instruction fine-tuned using LoRA on 1 Tesla V100-SXM2-16GB.
### Limitations and bias
Llama 2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2 and any fine-tuned variant's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lgaalves__llama-2-7b-hf_open-platypus)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 43.49 |
| ARC (25-shot) | 51.45 |
| HellaSwag (10-shot) | 78.63 |
| MMLU (5-shot) | 43.6 |
| TruthfulQA (0-shot) | 43.71 |
| Winogrande (5-shot) | 74.43 |
| GSM8K (5-shot) | 6.6 |
| DROP (3-shot) | 5.99 |
|
mradermacher/Maixtchup-4x7b-GGUF | mradermacher | 2024-05-31T06:43:41Z | 451 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:StatsGary/Maixtchup-4x7b",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-30T05:48:38Z | ---
base_model: StatsGary/Maixtchup-4x7b
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/StatsGary/Maixtchup-4x7b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Maixtchup-4x7b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
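None of the quants in this repository are split, but if you ever download one that is distributed in parts, the pieces are simply concatenated back into a single file before use; the file names below follow TheBloke-style split naming and are only an example.

```bash
# Linux / macOS
cat model.Q6_K.gguf-split-a model.Q6_K.gguf-split-b > model.Q6_K.gguf

# Windows (cmd)
copy /B model.Q6_K.gguf-split-a + model.Q6_K.gguf-split-b model.Q6_K.gguf
```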
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Maixtchup-4x7b-GGUF/resolve/main/Maixtchup-4x7b.Q2_K.gguf) | Q2_K | 8.9 | |
| [GGUF](https://huggingface.co/mradermacher/Maixtchup-4x7b-GGUF/resolve/main/Maixtchup-4x7b.IQ3_XS.gguf) | IQ3_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/Maixtchup-4x7b-GGUF/resolve/main/Maixtchup-4x7b.Q3_K_S.gguf) | Q3_K_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/Maixtchup-4x7b-GGUF/resolve/main/Maixtchup-4x7b.IQ3_S.gguf) | IQ3_S | 10.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Maixtchup-4x7b-GGUF/resolve/main/Maixtchup-4x7b.IQ3_M.gguf) | IQ3_M | 10.7 | |
| [GGUF](https://huggingface.co/mradermacher/Maixtchup-4x7b-GGUF/resolve/main/Maixtchup-4x7b.Q3_K_M.gguf) | Q3_K_M | 11.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Maixtchup-4x7b-GGUF/resolve/main/Maixtchup-4x7b.Q3_K_L.gguf) | Q3_K_L | 12.6 | |
| [GGUF](https://huggingface.co/mradermacher/Maixtchup-4x7b-GGUF/resolve/main/Maixtchup-4x7b.IQ4_XS.gguf) | IQ4_XS | 13.1 | |
| [GGUF](https://huggingface.co/mradermacher/Maixtchup-4x7b-GGUF/resolve/main/Maixtchup-4x7b.Q4_K_S.gguf) | Q4_K_S | 13.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Maixtchup-4x7b-GGUF/resolve/main/Maixtchup-4x7b.Q4_K_M.gguf) | Q4_K_M | 14.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Maixtchup-4x7b-GGUF/resolve/main/Maixtchup-4x7b.Q5_K_S.gguf) | Q5_K_S | 16.7 | |
| [GGUF](https://huggingface.co/mradermacher/Maixtchup-4x7b-GGUF/resolve/main/Maixtchup-4x7b.Q5_K_M.gguf) | Q5_K_M | 17.2 | |
| [GGUF](https://huggingface.co/mradermacher/Maixtchup-4x7b-GGUF/resolve/main/Maixtchup-4x7b.Q6_K.gguf) | Q6_K | 19.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Maixtchup-4x7b-GGUF/resolve/main/Maixtchup-4x7b.Q8_0.gguf) | Q8_0 | 25.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
gglabs/TinyLM-Chat-0611-10-epoch | gglabs | 2024-06-11T14:44:58Z | 451 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/tinyllama-chat-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-11T14:40:29Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/tinyllama-chat-bnb-4bit
---
# Uploaded model
- **Developed by:** gglabs
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-chat-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
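A rough sketch of how an Unsloth fine-tune like this is typically set up is shown below; the base model name matches the card, but the LoRA rank, target modules, and sequence length are illustrative placeholders rather than the settings actually used for this checkpoint.

```python
from unsloth import FastLanguageModel

# Load the 4-bit base model this checkpoint was fine-tuned from.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/tinyllama-chat-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters (rank and target modules here are illustrative).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Training then proceeds with TRL's SFTTrainer on a chat-formatted dataset.
```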
|
mradermacher/TeenyTinyLlama-Doc-160m-GGUF | mradermacher | 2024-06-29T05:32:32Z | 451 | 0 | transformers | [
"transformers",
"gguf",
"llama-2",
"pt",
"medicine",
"dataset:mmoreirast/medicine-training-pt",
"dataset:mmoreirast/medicine-evaluation-pt",
"base_model:mmoreirast/TeenyTinyLlama-Doc-160m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-22T17:41:17Z | ---
base_model: mmoreirast/TeenyTinyLlama-Doc-160m
datasets:
- mmoreirast/medicine-training-pt
- mmoreirast/medicine-evaluation-pt
language:
- pt
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- llama-2
- pt
- medicine
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mmoreirast/TeenyTinyLlama-Doc-160m
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/TeenyTinyLlama-Doc-160m-GGUF/resolve/main/TeenyTinyLlama-Doc-160m.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/TeenyTinyLlama-Doc-160m-GGUF/resolve/main/TeenyTinyLlama-Doc-160m.IQ3_XS.gguf) | IQ3_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/TeenyTinyLlama-Doc-160m-GGUF/resolve/main/TeenyTinyLlama-Doc-160m.IQ3_S.gguf) | IQ3_S | 0.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/TeenyTinyLlama-Doc-160m-GGUF/resolve/main/TeenyTinyLlama-Doc-160m.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/TeenyTinyLlama-Doc-160m-GGUF/resolve/main/TeenyTinyLlama-Doc-160m.IQ3_M.gguf) | IQ3_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/TeenyTinyLlama-Doc-160m-GGUF/resolve/main/TeenyTinyLlama-Doc-160m.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/TeenyTinyLlama-Doc-160m-GGUF/resolve/main/TeenyTinyLlama-Doc-160m.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/TeenyTinyLlama-Doc-160m-GGUF/resolve/main/TeenyTinyLlama-Doc-160m.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/TeenyTinyLlama-Doc-160m-GGUF/resolve/main/TeenyTinyLlama-Doc-160m.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TeenyTinyLlama-Doc-160m-GGUF/resolve/main/TeenyTinyLlama-Doc-160m.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TeenyTinyLlama-Doc-160m-GGUF/resolve/main/TeenyTinyLlama-Doc-160m.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/TeenyTinyLlama-Doc-160m-GGUF/resolve/main/TeenyTinyLlama-Doc-160m.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/TeenyTinyLlama-Doc-160m-GGUF/resolve/main/TeenyTinyLlama-Doc-160m.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/TeenyTinyLlama-Doc-160m-GGUF/resolve/main/TeenyTinyLlama-Doc-160m.Q8_0.gguf) | Q8_0 | 0.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/TeenyTinyLlama-Doc-160m-GGUF/resolve/main/TeenyTinyLlama-Doc-160m.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
larenspear/llama2-7b-WildJailbreak-Q2_K-GGUF | larenspear | 2024-06-30T19:25:43Z | 451 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:allenai/llama2-7b-WildJailbreak",
"license:apache-2.0",
"region:us"
]
| null | 2024-06-30T19:25:32Z | ---
base_model: allenai/llama2-7b-WildJailbreak
language:
- en
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
extra_gated_prompt: Access to this model is automatically granted upon accepting the
[AI2 Responsible Use Guidelines](https://allenai.org/responsible-use.pdf), and completing
all fields below
extra_gated_fields:
Your full name: text
Organization or entity you are affiliated with: text
State or country you are located in: text
Contact email: text
Please describe your intended use of the low risk artifact(s): text
I understand that this model is a research artifact that may contain or produce unfiltered, toxic, or harmful material: checkbox
I agree to use this model for research purposes in accordance with the AI2 Responsible Use Guidelines: checkbox
I agree that AI2 may use my information as described in the Privacy Policy: checkbox
I certify that the information I have provided is true and accurate: checkbox
---
# larenspear/llama2-7b-WildJailbreak-Q2_K-GGUF
This model was converted to GGUF format from [`allenai/llama2-7b-WildJailbreak`](https://huggingface.co/allenai/llama2-7b-WildJailbreak) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/allenai/llama2-7b-WildJailbreak) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo larenspear/llama2-7b-WildJailbreak-Q2_K-GGUF --hf-file llama2-7b-wildjailbreak-q2_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo larenspear/llama2-7b-WildJailbreak-Q2_K-GGUF --hf-file llama2-7b-wildjailbreak-q2_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo larenspear/llama2-7b-WildJailbreak-Q2_K-GGUF --hf-file llama2-7b-wildjailbreak-q2_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo larenspear/llama2-7b-WildJailbreak-Q2_K-GGUF --hf-file llama2-7b-wildjailbreak-q2_k.gguf -c 2048
```
|
Isaak-Carter/JOSIEv4o-8b-stage1-beta2.3.1-Q4_K_S-GGUF | Isaak-Carter | 2024-07-02T13:13:48Z | 451 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"llama-cpp",
"gguf-my-repo",
"en",
"de",
"base_model:Isaak-Carter/JOSIEv4o-8b-stage1-beta2.3.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-07-01T07:53:13Z | ---
base_model: Isaak-Carter/JOSIEv4o-8b-stage1-beta2.3.1
language:
- en
- de
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- llama-cpp
- gguf-my-repo
---
Use in Ollama: `ollama run goekdenizguelmez/j.o.s.i.e.v4o-8b-stage1-beta2.3.1`
# Isaak-Carter/JOSIEv4o-8b-stage1-beta2.3.1-Q4_K_S-GGUF
This model was converted to GGUF format from [`Isaak-Carter/JOSIEv4o-8b-stage1-beta2.3.1`](https://huggingface.co/Isaak-Carter/JOSIEv4o-8b-stage1-beta2.3.1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Isaak-Carter/JOSIEv4o-8b-stage1-beta2.3.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Isaak-Carter/JOSIEv4o-8b-stage1-beta2.3.1-Q4_K_S-GGUF --hf-file josiev4o-8b-stage1-beta2.3.1-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Isaak-Carter/JOSIEv4o-8b-stage1-beta2.3.1-Q4_K_S-GGUF --hf-file josiev4o-8b-stage1-beta2.3.1-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Isaak-Carter/JOSIEv4o-8b-stage1-beta2.3.1-Q4_K_S-GGUF --hf-file josiev4o-8b-stage1-beta2.3.1-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Isaak-Carter/JOSIEv4o-8b-stage1-beta2.3.1-Q4_K_S-GGUF --hf-file josiev4o-8b-stage1-beta2.3.1-q4_k_s.gguf -c 2048
```
|
s3nh/MathLLM-MathCoder-CL-7B-GGUF | s3nh | 2023-10-06T08:47:16Z | 450 | 2 | transformers | [
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-10-06T08:41:21Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/MathLLM/MathCoder-CL-7B).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:

* Single-file deployment: models can be easily distributed and loaded, and do not require any external files for additional information.
* Extensible: new features can be added to GGML-based executors, and new information can be added to GGUF models, without breaking compatibility with existing models.
* mmap compatibility: models can be loaded using mmap for fast loading and saving.
* Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
* Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.

The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows new metadata to be added without breaking compatibility with existing models, and lets the model be annotated with additional information that may be useful for inference or for identifying the model.
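That key-value layout is visible directly in the file header; the following sketch reads the magic bytes, format version, tensor count, and metadata key count from the fixed-size header of a GGUF v2/v3 file (the file name is a placeholder).

```python
import struct

def read_gguf_header(path: str):
    with open(path, "rb") as f:
        # GGUF v2/v3 header: 4-byte magic, uint32 version, uint64 tensor count,
        # uint64 metadata key-value count, all little-endian.
        magic, version, n_tensors, n_kv = struct.unpack("<4sIQQ", f.read(24))
    assert magic == b"GGUF", "not a GGUF file"
    return version, n_tensors, n_kv

version, n_tensors, n_kv = read_gguf_header("MathCoder-CL-7B.Q4_K_M.gguf")
print(f"GGUF v{version}: {n_tensors} tensors, {n_kv} metadata key-value pairs")
```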
### Perplexity params
| Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
|:------|:-----------|-------:|-------:|-------:|-------:|-------:|-------:|-------:|-------:|-------:|-------:|-------:|-------:|-------:|-------:|-------:|
| 7B | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |
### inference
TODO
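A minimal llama.cpp run looks roughly like this, assuming one of the quantized files from this repository has been downloaded locally; the file name depends on which quant you pick.

```bash
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp && make -j

./main -m ./MathCoder-CL-7B.Q4_K_M.gguf \
  -p "Solve: if 3x + 5 = 20, what is x?" \
  -n 256 --temp 0.2
```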
# Original model card
|
Qwen/Qwen-72B-Chat-Int4 | Qwen | 2024-01-04T06:46:12Z | 450 | 45 | transformers | [
"transformers",
"safetensors",
"qwen",
"text-generation",
"custom_code",
"zh",
"en",
"arxiv:2309.16609",
"arxiv:2305.08322",
"arxiv:2009.03300",
"arxiv:2307.11088",
"license:other",
"autotrain_compatible",
"4-bit",
"gptq",
"region:us"
]
| text-generation | 2023-11-30T03:41:39Z | ---
language:
- zh
- en
tags:
- qwen
pipeline_tag: text-generation
inference: false
license: other
license_name: tongyi-qianwen-license-agreement
license_link: https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT
---
# Qwen-72B-Chat-Int4
<p align="center">
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/logo_qwen.jpg" width="400"/>
<p>
<br>
<p align="center">
        🤗 <a href="https://huggingface.co/Qwen">Hugging Face</a>&nbsp;&nbsp;|&nbsp;&nbsp;🤖 <a href="https://modelscope.cn/organization/qwen">ModelScope</a>&nbsp;&nbsp;|&nbsp;&nbsp;📑 <a href="https://arxiv.org/abs/2309.16609">Paper</a>&nbsp;&nbsp;|&nbsp;&nbsp;🖥️ <a href="https://modelscope.cn/studios/qwen/Qwen-72B-Chat-Demo/summary">Demo</a>
<br>
<a href="https://github.com/QwenLM/Qwen/blob/main/assets/wechat.png">WeChat (ๅพฎไฟก)</a>   |   <a href="https://discord.gg/z3GAxXZ9Ce">Discord</a>   ๏ฝ   <a href="https://dashscope.aliyun.com">API</a>
</p>
<br>
## Introduction
**Qwen-72B** is the 72B-parameter version of the large language model series, Qwen (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen-72B is a Transformer-based large language model, which is pretrained on a large volume of data, including web texts, books, codes, etc. Additionally, based on the pretrained Qwen-72B, we release Qwen-72B-Chat, a large-model-based AI assistant, which is trained with alignment techniques. This repository is the one for the Int4 quantized model of Qwen-72B-Chat.
The features of Qwen-72B include:
1. **Large-scale high-quality training corpora**: It is pretrained on over 3 trillion tokens, including Chinese, English, multilingual texts, code, and mathematics, covering general and professional fields. The distribution of the pre-training corpus has been optimized through a large number of ablation experiments.
2. **Competitive performance**: It significantly surpasses existing open-source models on multiple Chinese and English downstream evaluation tasks (including commonsense, reasoning, code, mathematics, etc.). See below for specific evaluation results.
3. **More comprehensive vocabulary coverage**: Compared with other open-source models based on Chinese and English vocabularies, Qwen-72B uses a vocabulary of over 150K tokens. This vocabulary is more friendly to multiple languages, enabling users to directly further enhance the capability for certain languages without expanding the vocabulary.
4. **Longer context support**: Qwen-72B supports 32k context length.
5. **System prompt**: Qwen-72B can realize role playing, language style transfer, task setting, and behavior setting by using a system prompt.
For more details about the open-source model of Qwen-72B, please refer to the [GitHub](https://github.com/QwenLM/Qwen) code repository.
<br>
## Requirements
* python 3.8 and above
* pytorch 2.0 and above
* CUDA 11.4 and above are recommended (this is for GPU users, flash-attention users, etc.)
* **at least 48GB GPU memory is required (e.g., 1xA100-80G or 2xV100-32G)**
<br>
## Dependency
To run Qwen-72B-Chat-Int4, please make sure you meet the above requirements, and then execute the following pip commands to install the dependent libraries. If you meet problems installing `auto-gptq`, we advise you to check out the official [repo](https://github.com/PanQiWei/AutoGPTQ) to find a pre-build wheel.
```bash
pip install "transformers>=4.32.0" accelerate tiktoken einops scipy transformers_stream_generator==0.0.4 peft deepspeed
pip install auto-gptq optimum
```
> Note: The pre-compiled `auto-gptq` packages strongly depend on the version of `torch` and its CUDA version. Moreover, due to recent update,
> you may also encounter unsupported version errors from `transformers`, `optimum`, or `peft`.
> We recommend using the latest versions meeting the following requirements :
> - torch==2.1 auto-gptq>=0.5.1 transformers>=4.35.0 optimum>=1.14.0 peft>=0.6.1
> - torch>=2.0,<2.1 auto-gptq<0.5.0 transformers<4.35.0 optimum<1.14.0 peft>=0.5.0,<0.6.0
In addition, it is recommended to install the `flash-attention` library (**we support flash attention 2 now.**) for higher efficiency and lower memory usage.
```bash
git clone https://github.com/Dao-AILab/flash-attention
cd flash-attention && pip install .
# Below are optional. Installing them might be slow.
# pip install csrc/layer_norm
# If the version of flash-attn is higher than 2.1.1, the following is not needed.
# pip install csrc/rotary
```
<br>
## Quickstart
We show an example of how to use Qwen-72B-Chat-Int4 in the following code:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
# Note: The default behavior now has injection attack prevention off.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-72B-Chat-Int4", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
"Qwen/Qwen-72B-Chat-Int4",
device_map="auto",
trust_remote_code=True
).eval()
response, history = model.chat(tokenizer, "你好", history=None)
print(response)
# 你好！很高兴为你提供帮助。
# Qwen-72B-Chat can realize role playing, language style transfer, task setting, and behavior setting by system prompt.
response, _ = model.chat(tokenizer, "你好呀", history=None, system="请用二次元可爱语气和我说话")
print(response)
# 哎呀，你好呀！是怎么找到人家的呢？是不是被人家的魅力吸引过来的呀~(≧▽≦)/~
response, _ = model.chat(tokenizer, "My colleague works diligently", history=None, system="You will write beautiful compliments according to needs")
print(response)
# Your colleague is a shining example of dedication and hard work. Their commitment to their job is truly commendable, and it shows in the quality of their work.
# They are an asset to the team, and their efforts do not go unnoticed. Keep up the great work!
```
Note: To run the quantized model with vLLM, you need to install our [vLLM fork](https://github.com/QwenLM/vllm-gptq). The Int8 model is not supported for the time being; support will be added soon.
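Assuming the fork keeps the upstream vLLM Python API, serving the Int4 checkpoint looks roughly like this; the tensor-parallel size and sampling settings are illustrative.

```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen-72B-Chat-Int4",
    quantization="gptq",
    trust_remote_code=True,
    tensor_parallel_size=2,  # e.g. 2 x A100-80G, as in the speed table below
)
sampling = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=512)
outputs = llm.generate(["Give me a short introduction to large language models."], sampling)
print(outputs[0].outputs[0].text)
```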
For more information, please refer to our [GitHub repo](https://github.com/QwenLM/Qwen).
<br>
## Quantization
### Performance Evaluation

We illustrate the zero-shot performance of the BF16, Int8, and Int4 models on the benchmark. Results are shown below:
| Quantization | MMLU | CEval (val) | GSM8K | Humaneval |
|--------------|:----:|:-----------:|:-----:|:---------:|
| BF16 | 74.4 | 80.1 | 76.4 | 64.6 |
| Int8 | 73.5 | 80.1 | 73.5 | 62.2 |
| Int4 | 73.4 | 80.1 | 75.3 | 61.6 |
### Inference Speed & GPU Memory Usage
We measured the average inference speed and GPU memory usage of generating 2048 tokens across several settings, including input lengths, quantization levels, versions of flash-attention, and whether vLLM is used.
| Quantization | Setting | # of A100-80G GPUs | Context Length | Generation Length | Speed (Tokens/s) | Total GPU Memory Usage |
| ------------- | :---------------: | :----------------: | :-------------: | :---------------: | :---------------:| :---------------------:|
| BF16 | HF + FlashAttn-v2 | 2 | 1 | 2048 | 8.48 | 144.69GB |
| BF16 | HF + FlashAttn-v1 | 2 | 1 | 2048 | 8.31 | 144.69GB |
| BF16 | HF + No FlashAttn | 2 | 1 | 2048 | 7.89 | 144.69GB |
| BF16 | vLLM | 2 | 1 | 2048 | 17.60 | Pre-Allocated* |
| BF16 | vLLM | 4 | 1 | 2048 | 26.16 | Pre-Allocated* |
| BF16 | HF + FlashAttn-v2 | 4 | 6144 | 2048 | 5.37 | 181.47GB |
| BF16 | HF + FlashAttn-v1 | 4 | 6144 | 2048 | 4.96 | 181.47GB |
| BF16 | HF + No FlashAttn | 4 | 6144 | 2048 | 4.72 | 202.74GB |
| BF16 | vLLM | 4 | 6144 | 2048 | 24.41 | Pre-Allocated* |
| BF16 | vLLM | 4 | 14336 | 2048 | 21.24 | Pre-Allocated* |
| BF16 | vLLM | 4 | 30720 | 2048 | 17.55 | Pre-Allocated* |
| Int8 | HF + FlashAttn-v2 | 2 | 1 | 2048 | 9.05 | 81.27GB |
| Int8 | HF + FlashAttn-v1 | 2 | 1 | 2048 | 8.97 | 81.27GB |
| Int8 | HF + No FlashAttn | 2 | 1 | 2048 | 8.32 | 81.27GB |
| Int8 | HF + FlashAttn-v2 | 3 | 6144 | 2048 | 5.76 | 118.06GB |
| Int8 | HF + FlashAttn-v1 | 3 | 6144 | 2048 | 5.72 | 118.06GB |
| Int8 | HF + No FlashAttn | 2 | 6144 | 2048 | 4.50 | 129.83GB |
| Int8 | HF + FlashAttn-v2 | 4 | 14336 | 2048 | 3.44 | 180.44GB |
| Int8 | HF + FlashAttn-v1 | 4 | 14336 | 2048 | 3.19 | 180.44GB |
| Int8 | HF + No FlashAttn | 4 | 14336 | 2048 | OOM | OOM |
| Int4 | HF + FlashAttn-v2 | 1 | 1 | 2048 | 11.67 | 48.86GB |
| Int4 | HF + FlashAttn-v1 | 1 | 1 | 2048 | 11.27 | 48.86GB |
| Int4 | HF + No FlashAttn | 1 | 1 | 2048 | 11.32 | 48.86GB |
| Int4 | vLLM | 1 | 1 | 2048 | 14.63 | Pre-Allocated* |
| Int4 | vLLM | 2 | 1 | 2048 | 20.76 | Pre-Allocated* |
| Int4 | vLLM | 4 | 1 | 2048 | 27.19 | Pre-Allocated* |
| Int4 | HF + FlashAttn-v2 | 2 | 6144 | 2048 | 6.75 | 85.99GB |
| Int4 | HF + FlashAttn-v1 | 2 | 6144 | 2048 | 6.32 | 85.99GB |
| Int4 | HF + No FlashAttn | 2 | 6144 | 2048 | 5.97 | 88.30GB |
| Int4 | vLLM | 2 | 6144 | 2048 | 18.07 | Pre-Allocated* |
| Int4 | vLLM | 4 | 6144 | 2048 | 24.56 | Pre-Allocated* |
| Int4 | HF + FlashAttn-v2 | 3 | 14336 | 2048 | 4.18 | 148.73GB |
| Int4 | HF + FlashAttn-v1 | 3 | 14336 | 2048 | 3.72 | 148.73GB |
| Int4 | HF + No FlashAttn | 3 | 14336 | 2048 | OOM | OOM |
| Int4 | vLLM | 2 | 14336 | 2048 | 14.51 | Pre-Allocated* |
| Int4 | vLLM | 4 | 14336 | 2048 | 19.28 | Pre-Allocated* |
| Int4 | vLLM | 4 | 30720 | 2048 | 16.93 | Pre-Allocated* |
\* vLLM pre-allocates GPU memory, so we cannot detect the maximum usage. HF refers to using the Huggingface Transformers library for inference.
The speed and memory profiling of HuggingFace Transformers are conducted using [this script](https://qianwen-res.oss-cn-beijing.aliyuncs.com/profile.py). The profiling runs on A100-SXM4-80G GPUs with PyTorch 2.0.1 (for Huggingface Transformers) / PyTorch 2.1.0 (for vLLM) and CUDA 11.8.
<br>
## Tokenizer
Our tokenizer based on tiktoken is different from other tokenizers, e.g., sentencepiece tokenizer. You need to pay attention to special tokens, especially in finetuning. For more detailed information on the tokenizer and related use in fine-tuning, please refer to the [documentation](https://github.com/QwenLM/Qwen/blob/main/tokenization_note.md).
<br>
## Model
The details of the model architecture of Qwen-72B-Chat are listed as follows
| Hyperparameter | Value |
|:----------------|:-------|
| n_layers | 80 |
| n_heads | 64 |
| d_model | 8192 |
| vocab size | 151851 |
| sequence length | 32768 |
For position encoding, FFN activation function, and normalization calculation methods, we adopt the prevalent practices, i.e., RoPE relative position encoding, SwiGLU for activation function, and RMSNorm for normalization (optional installation of flash-attention for acceleration).
For tokenization, compared to the current mainstream open-source models based on Chinese and English vocabularies, Qwen-72B-Chat uses a vocabulary of over 150K tokens.
It first considers efficient encoding of Chinese, English, and code data, and is also more friendly to multilingual languages, enabling users to directly enhance the capability of some languages without expanding the vocabulary.
It segments numbers by single digit, and calls the [tiktoken](https://github.com/openai/tiktoken) tokenizer library for efficient tokenization.
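Both properties, the large vocabulary and the digit-by-digit number splitting, can be checked by loading the tokenizer the same way as in the Quickstart and inspecting a tokenized string; the comments describe the expected behaviour rather than exact token strings.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-72B-Chat-Int4", trust_remote_code=True)
print(tokenizer.vocab_size)  # roughly 152K entries

ids = tokenizer.encode("Qwen-72B was trained on 3000000000000 tokens.")
print(tokenizer.convert_ids_to_tokens(ids))  # the long number is split into single digits
```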
<br>
## Evaluation
For Qwen-72B-Chat, we also evaluate the model on C-Eval, MMLU, HumanEval, GSM8K, etc., as well as the benchmark evaluation for long-context understanding, and tool usage.
Note: Due to rounding errors caused by hardware and framework, differences in reproduced results are possible.
### Chinese Evaluation
#### C-Eval
We demonstrate the 0-shot & 5-shot accuracy of Qwen-72B-Chat on C-Eval validation set
| Model | Avg. Acc. |
|:--------------------------------:|:---------:|
| LLaMA2-7B-Chat | 31.9 |
| LLaMA2-13B-Chat | 36.2 |
| LLaMA2-70B-Chat | 44.3 |
| ChatGPT3.5 | 52.5 |
| ChatGPT4 | 69.9 |
| Yi-34B-Chat (0-shot) | 77.0 |
| Yi-34B-Chat (5-shot) | 78.5 |
| Qwen-7B-Chat (original) (0-shot) | 54.2 |
| **Qwen-7B-Chat (0-shot)** | 59.7 |
| **Qwen-7B-Chat (5-shot)** | 59.3 |
| **Qwen-14B-Chat (0-shot)** | 69.8 |
| **Qwen-14B-Chat (5-shot)** | 71.7 |
| **Qwen-72B-Chat (0-shot)** | 80.1 |
| **Qwen-72B-Chat (5-shot)** | 82.9 |
The zero-shot accuracy of Qwen-72B-Chat on C-Eval testing set is provided below:
| Model | Avg. | STEM | Social Sciences | Humanities | Others |
| :---------------------- | :------: | :--: | :-------------: | :--------: | :----: |
| Qwen-7B-Chat (original) | 54.6 | 47.8 | 67.6 | 59.3 | 50.6 |
| **Qwen-7B-Chat** | 58.6 | 53.3 | 72.1 | 62.8 | 52.0 |
| **Qwen-14B-Chat** | 69.1 | 65.1 | 80.9 | 71.2 | 63.4 |
| **Qwen-72B-Chat** | 79.5 | 74.5 | 89.1 | 81.2 | 78.1 |
### English Evaluation
#### MMLU
The 0-shot & 5-shot accuracy of Qwen-72B-Chat on MMLU is provided below.
The performance of Qwen-72B-Chat remains at the top among human-aligned models of comparable size.
| Model | Avg. Acc. |
|:--------------------------------:|:---------:|
| LLaMA2-7B-Chat | 46.2 |
| LLaMA2-13B-Chat | 54.6 |
| LLaMA2-70B-Chat | 63.8 |
| Yi-34B-Chat (0-shot) | 67.6 |
| Yi-34B-Chat (5-shot) | 73.4 |
| ChatGPT3.5 | 69.1 |
| ChatGPT4 | 83.0 |
| Qwen-7B-Chat (original) (0-shot) | 53.9 |
| **Qwen-7B-Chat (0-shot)** | 55.8 |
| **Qwen-7B-Chat (5-shot)** | 57.0 |
| **Qwen-14B-Chat (0-shot)** | 64.6 |
| **Qwen-14B-Chat (5-shot)** | 66.5 |
| **Qwen-72B-Chat (0-shot)** | 74.3 |
| **Qwen-72B-Chat (5-shot)** | 75.0 |
### Coding Evaluation
The zero-shot Pass@1 of Qwen-72B-Chat on [HumanEval](https://github.com/openai/human-eval) is demonstrated below
| Model | Pass@1 |
|:-----------------------:|:--------:|
| LLaMA2-7B-Chat | 12.2 |
| LLaMA2-13B-Chat | 18.9 |
| LLaMA2-70B-Chat | 32.3 |
| Yi-34B-Chat | 33.5 |
| ChatGPT3.5 | 73.2 |
| ChatGPT4 | 86.6 |
| Qwen-7B-Chat (original) | 24.4 |
| **Qwen-7B-Chat** | 37.2 |
| **Qwen-14B-Chat** | 43.9 |
| **Qwen-72B-Chat** | 64.6 |
### Mathematics Evaluation
The accuracy of Qwen-72B-Chat on GSM8K is shown below
| Model | Acc. |
|:--------------------------------:|:--------:|
| LLaMA2-7B-Chat | 26.3 |
| LLaMA2-13B-Chat | 37.1 |
| LLaMA2-70B-Chat | 59.3 |
| Yi-34B-Chat | 71.6 |
| ChatGPT3.5 | 73.2 |
| ChatGPT4 | 91.4 |
| Qwen-7B-Chat (original) (0-shot) | 41.1 |
| **Qwen-7B-Chat (0-shot)** | 50.3 |
| **Qwen-7B-Chat (8-shot)** | 54.1 |
| **Qwen-14B-Chat (0-shot)** | 60.1 |
| **Qwen-14B-Chat (8-shot)** | 59.3 |
| **Qwen-72B-Chat (0-shot)** | 76.4 |
| **Qwen-72B-Chat (8-shot)** | 75.7 |
### Long-Context Understanding
Qwen-72B-Chat supports context lengths of up to 32k. The scores of [L-Eval](https://arxiv.org/abs/2307.11088) (closed-ended tasks) are as follows:
| Model | Average | Coursera | GSM | QuALITY | TOEFL | CodeU | SFcition |
|:------------------|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|
| ChatGPT-3.5-16k | 60.73 | **63.51** | **84.00** | 61.38 | 78.43 | **12.22** | 64.84 |
| **Qwen-72B-Chat** | **62.30** | 58.13 | 76.00 | **77.22** | **86.24** | 6.66 | **69.53** |
We conducted the "needle in a haystack" experiment (the idea came from [@Greg Kamradt](https://twitter.com/GregKamradt/status/1727018183608193393)) to test whether the model can retrieve information at different positions in the inputs of different lengths, the result is as follows:

The above results show that Qwen-72B-Chat can accurately retrieve information placed in various positions within an input length of 32k, proving its excellent long text understanding capabilities.
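The construction behind the test is simple; a minimal sketch of building one probe prompt is shown below, where the filler text, needle, and question are illustrative rather than the exact materials used.

```python
def build_needle_prompt(context_tokens: int, depth: float) -> str:
    """Place a 'needle' fact at a relative depth inside long filler text."""
    filler = "The grass is green and the sky is blue. "
    needle = "The secret passphrase for this experiment is 'purple-elephant-42'. "
    n_sentences = max(context_tokens // 10, 1)  # assume roughly 10 tokens per sentence
    sentences = [filler] * n_sentences
    sentences.insert(int(n_sentences * depth), needle)
    question = "\n\nWhat is the secret passphrase mentioned in the document above?"
    return "".join(sentences) + question

prompt = build_needle_prompt(context_tokens=32000, depth=0.5)
# Feed `prompt` to model.chat(...) as in the Quickstart and check the returned answer.
```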
## FAQ
If you meet problems, please refer to the [FAQ](https://github.com/QwenLM/Qwen/blob/main/FAQ.md) and the existing issues to search for a solution before you open a new issue.
<br>
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{qwen,
title={Qwen Technical Report},
author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
journal={arXiv preprint arXiv:2309.16609},
year={2023}
}
```
<br>
## License Agreement
Our code and checkpoints are open for research purposes, and they are allowed for commercial use. Check [LICENSE](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT) for more details about the license. If you have requirements for commercial use, please fill out the [form](https://dashscope.console.aliyun.com/openModelApply/Qwen-72B-Chat) to apply.
<br>
## Contact Us
If you are interested in leaving a message for either our research team or product team, join our Discord or WeChat groups! Also, feel free to send an email to [email protected].
|
s3nh/Hermes-SolarMaid-7b | s3nh | 2024-01-06T09:30:37Z | 450 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"en",
"base_model:Undi95/SolarMaid-v0.1.1",
"base_model:NousResearch/Nous-Hermes-llama-2-7b",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-01-05T22:30:02Z | ---
base_model:
- Undi95/SolarMaid-v0.1.1
- NousResearch/Nous-Hermes-llama-2-7b
tags:
- mergekit
- merge
license: openrail
language:
- en
library_name: transformers
pipeline_tag: text-generation
---

# merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Undi95/SolarMaid-v0.1.1](https://huggingface.co/Undi95/SolarMaid-v0.1.1)
* [NousResearch/Nous-Hermes-llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: NousResearch/Nous-Hermes-llama-2-7b
dtype: bfloat16
merge_method: slerp
parameters:
t:
- filter: self_attn
value: [0.0, 0.5, 0.3, 0.7, 1.0]
- filter: mlp
value: [1.0, 0.5, 0.7, 0.3, 0.0]
- value: 0.5
slices:
- sources:
- layer_range: [0, 32]
model: Undi95/SolarMaid-v0.1.1
``` |
mmnga/cyberagent-calm2-7b-chat-dpo-experimental-gguf | mmnga | 2024-01-25T13:14:27Z | 450 | 3 | null | [
"gguf",
"license:cc-by-4.0",
"region:us"
]
| null | 2024-01-25T12:28:54Z | ---
license: cc-by-4.0
---
# cyberagent-calm2-7b-chat-dpo-experimental-gguf
This is a GGUF-format conversion of [calm2-7b-chat-dpo-experimental released by cyberagent](https://huggingface.co/cyberagent/calm2-7b-chat-dpo-experimental).
## convert
Converted with [the calm2 convert.py](https://gist.github.com/mmnga/6f3b0b9db773c7cc4d58421ea7ebf98e).
## Usage
```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
./main -m 'cyberagent-calm2-7b-chat-dpo-experimental-q4_0.gguf' -p "USER: AIによって私達の暮らしはどのように変わりますか？ \nASSISTANT: " -n 256 --temp 0.8
```
---
|
invalid-coder/distilled_step_by_step_t5_v1_1_base | invalid-coder | 2024-01-29T21:14:51Z | 450 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"unsloth",
"arxiv:1910.09700",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text2text-generation | 2024-01-28T20:50:45Z | ---
library_name: transformers
tags:
- unsloth
license: cc-by-nc-4.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
silk-road/Haruhi-Zero-Gemma-2B-0_5 | silk-road | 2024-02-23T08:15:43Z | 450 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-02-23T08:07:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mychen76/mistral-7b-merged-passthrough | mychen76 | 2024-03-22T21:56:39Z | 450 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-03-09T23:03:36Z | ---
license: apache-2.0
tags:
- merge
---
# mistral-7b-merged-passthrough
mistral-7b-merged-passthrough is a merge of the following models, stacked with the passthrough method:
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## ๐งฉ Configuration
```yaml
slices:
- sources:
- model: OpenPipe/mistral-ft-optimized-1218
layer_range: [0, 32]
- sources:
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
layer_range: [24, 32]
merge_method: passthrough
dtype: bfloat16
``` |
mradermacher/IA_14B-GGUF | mradermacher | 2024-05-06T06:08:39Z | 450 | 0 | transformers | [
"transformers",
"gguf",
"love",
"philosophy",
"literature",
"en",
"zh",
"base_model:Minami-su/IA_14B",
"license:other",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-19T07:57:19Z | ---
base_model: Minami-su/IA_14B
language:
- en
- zh
library_name: transformers
license: other
license_link: https://huggingface.co/Qwen/Qwen1.5-14B-Chat/blob/main/LICENSE
license_name: tongyi-qianwen
quantized_by: mradermacher
tags:
- love
- philosophy
- literature
---
## About
static quants of https://huggingface.co/Minami-su/IA_14B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/IA_14B-GGUF/resolve/main/IA_14B.Q2_K.gguf) | Q2_K | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/IA_14B-GGUF/resolve/main/IA_14B.IQ3_XS.gguf) | IQ3_XS | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/IA_14B-GGUF/resolve/main/IA_14B.IQ3_S.gguf) | IQ3_S | 7.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/IA_14B-GGUF/resolve/main/IA_14B.Q3_K_S.gguf) | Q3_K_S | 7.8 | |
| [GGUF](https://huggingface.co/mradermacher/IA_14B-GGUF/resolve/main/IA_14B.IQ3_M.gguf) | IQ3_M | 8.1 | |
| [GGUF](https://huggingface.co/mradermacher/IA_14B-GGUF/resolve/main/IA_14B.Q3_K_M.gguf) | Q3_K_M | 8.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/IA_14B-GGUF/resolve/main/IA_14B.Q3_K_L.gguf) | Q3_K_L | 8.9 | |
| [GGUF](https://huggingface.co/mradermacher/IA_14B-GGUF/resolve/main/IA_14B.IQ4_XS.gguf) | IQ4_XS | 8.9 | |
| [GGUF](https://huggingface.co/mradermacher/IA_14B-GGUF/resolve/main/IA_14B.Q4_K_S.gguf) | Q4_K_S | 9.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/IA_14B-GGUF/resolve/main/IA_14B.Q4_K_M.gguf) | Q4_K_M | 10.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/IA_14B-GGUF/resolve/main/IA_14B.Q5_K_S.gguf) | Q5_K_S | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/IA_14B-GGUF/resolve/main/IA_14B.Q5_K_M.gguf) | Q5_K_M | 11.6 | |
| [GGUF](https://huggingface.co/mradermacher/IA_14B-GGUF/resolve/main/IA_14B.Q6_K.gguf) | Q6_K | 13.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/IA_14B-GGUF/resolve/main/IA_14B.Q8_0.gguf) | Q8_0 | 15.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Saul-Base-Calme-7B-Instruct-slerp-GGUF | mradermacher | 2024-05-06T05:34:47Z | 450 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"Equall/Saul-Base",
"MaziyarPanahi/Calme-7B-Instruct-v0.1.1",
"en",
"base_model:arcee-ai/Saul-Base-Calme-7B-Instruct-slerp",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-30T04:09:49Z | ---
base_model: arcee-ai/Saul-Base-Calme-7B-Instruct-slerp
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- Equall/Saul-Base
- MaziyarPanahi/Calme-7B-Instruct-v0.1.1
---
## About
static quants of https://huggingface.co/arcee-ai/Saul-Base-Calme-7B-Instruct-slerp
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Saul-Base-Calme-7B-Instruct-slerp-GGUF/resolve/main/Saul-Base-Calme-7B-Instruct-slerp.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Saul-Base-Calme-7B-Instruct-slerp-GGUF/resolve/main/Saul-Base-Calme-7B-Instruct-slerp.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Saul-Base-Calme-7B-Instruct-slerp-GGUF/resolve/main/Saul-Base-Calme-7B-Instruct-slerp.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Saul-Base-Calme-7B-Instruct-slerp-GGUF/resolve/main/Saul-Base-Calme-7B-Instruct-slerp.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Saul-Base-Calme-7B-Instruct-slerp-GGUF/resolve/main/Saul-Base-Calme-7B-Instruct-slerp.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Saul-Base-Calme-7B-Instruct-slerp-GGUF/resolve/main/Saul-Base-Calme-7B-Instruct-slerp.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Saul-Base-Calme-7B-Instruct-slerp-GGUF/resolve/main/Saul-Base-Calme-7B-Instruct-slerp.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Saul-Base-Calme-7B-Instruct-slerp-GGUF/resolve/main/Saul-Base-Calme-7B-Instruct-slerp.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Saul-Base-Calme-7B-Instruct-slerp-GGUF/resolve/main/Saul-Base-Calme-7B-Instruct-slerp.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Saul-Base-Calme-7B-Instruct-slerp-GGUF/resolve/main/Saul-Base-Calme-7B-Instruct-slerp.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Saul-Base-Calme-7B-Instruct-slerp-GGUF/resolve/main/Saul-Base-Calme-7B-Instruct-slerp.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Saul-Base-Calme-7B-Instruct-slerp-GGUF/resolve/main/Saul-Base-Calme-7B-Instruct-slerp.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Saul-Base-Calme-7B-Instruct-slerp-GGUF/resolve/main/Saul-Base-Calme-7B-Instruct-slerp.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Saul-Base-Calme-7B-Instruct-slerp-GGUF/resolve/main/Saul-Base-Calme-7B-Instruct-slerp.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Saul-Base-Calme-7B-Instruct-slerp-GGUF/resolve/main/Saul-Base-Calme-7B-Instruct-slerp.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Saul-Base-Calme-7B-Instruct-slerp-GGUF/resolve/main/Saul-Base-Calme-7B-Instruct-slerp.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Hercules-4.0-Yi-34B-i1-GGUF | mradermacher | 2024-05-06T05:21:21Z | 450 | 0 | transformers | [
"transformers",
"gguf",
"medical",
"science",
"biology",
"chemistry",
"not-for-all-audiences",
"en",
"dataset:Locutusque/hercules-v4.0",
"base_model:Locutusque/Hercules-4.0-Yi-34B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-03T06:32:46Z | ---
base_model: Locutusque/Hercules-4.0-Yi-34B
datasets:
- Locutusque/hercules-v4.0
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- medical
- science
- biology
- chemistry
- not-for-all-audiences
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/Locutusque/Hercules-4.0-Yi-34B
**This uses my "quarter" training set of 40k tokens as the model overflowed after 25k tokens with the standard set.**
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Hercules-4.0-Yi-34B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Hercules-4.0-Yi-34B-i1-GGUF/resolve/main/Hercules-4.0-Yi-34B.i1-IQ1_S.gguf) | i1-IQ1_S | 8.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Hercules-4.0-Yi-34B-i1-GGUF/resolve/main/Hercules-4.0-Yi-34B.i1-IQ1_M.gguf) | i1-IQ1_M | 8.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Hercules-4.0-Yi-34B-i1-GGUF/resolve/main/Hercules-4.0-Yi-34B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/Hercules-4.0-Yi-34B-i1-GGUF/resolve/main/Hercules-4.0-Yi-34B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/Hercules-4.0-Yi-34B-i1-GGUF/resolve/main/Hercules-4.0-Yi-34B.i1-IQ2_S.gguf) | i1-IQ2_S | 11.6 | |
| [GGUF](https://huggingface.co/mradermacher/Hercules-4.0-Yi-34B-i1-GGUF/resolve/main/Hercules-4.0-Yi-34B.i1-IQ2_M.gguf) | i1-IQ2_M | 12.5 | |
| [GGUF](https://huggingface.co/mradermacher/Hercules-4.0-Yi-34B-i1-GGUF/resolve/main/Hercules-4.0-Yi-34B.i1-Q2_K.gguf) | i1-Q2_K | 13.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Hercules-4.0-Yi-34B-i1-GGUF/resolve/main/Hercules-4.0-Yi-34B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 14.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Hercules-4.0-Yi-34B-i1-GGUF/resolve/main/Hercules-4.0-Yi-34B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/Hercules-4.0-Yi-34B-i1-GGUF/resolve/main/Hercules-4.0-Yi-34B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 15.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Hercules-4.0-Yi-34B-i1-GGUF/resolve/main/Hercules-4.0-Yi-34B.i1-IQ3_S.gguf) | i1-IQ3_S | 15.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Hercules-4.0-Yi-34B-i1-GGUF/resolve/main/Hercules-4.0-Yi-34B.i1-IQ3_M.gguf) | i1-IQ3_M | 16.2 | |
| [GGUF](https://huggingface.co/mradermacher/Hercules-4.0-Yi-34B-i1-GGUF/resolve/main/Hercules-4.0-Yi-34B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 17.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Hercules-4.0-Yi-34B-i1-GGUF/resolve/main/Hercules-4.0-Yi-34B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 18.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Hercules-4.0-Yi-34B-i1-GGUF/resolve/main/Hercules-4.0-Yi-34B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 19.1 | |
| [GGUF](https://huggingface.co/mradermacher/Hercules-4.0-Yi-34B-i1-GGUF/resolve/main/Hercules-4.0-Yi-34B.i1-Q4_0.gguf) | i1-Q4_0 | 20.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Hercules-4.0-Yi-34B-i1-GGUF/resolve/main/Hercules-4.0-Yi-34B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 20.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Hercules-4.0-Yi-34B-i1-GGUF/resolve/main/Hercules-4.0-Yi-34B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 21.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hercules-4.0-Yi-34B-i1-GGUF/resolve/main/Hercules-4.0-Yi-34B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 24.3 | |
| [GGUF](https://huggingface.co/mradermacher/Hercules-4.0-Yi-34B-i1-GGUF/resolve/main/Hercules-4.0-Yi-34B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 25.0 | |
| [GGUF](https://huggingface.co/mradermacher/Hercules-4.0-Yi-34B-i1-GGUF/resolve/main/Hercules-4.0-Yi-34B.i1-Q6_K.gguf) | i1-Q6_K | 28.9 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/nerde-14B-GGUF | mradermacher | 2024-05-06T04:58:44Z | 450 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Trisert/nerde-14B",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-13T04:59:29Z | ---
base_model: Trisert/nerde-14B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Trisert/nerde-14B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/nerde-14B-GGUF/resolve/main/nerde-14B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/nerde-14B-GGUF/resolve/main/nerde-14B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/nerde-14B-GGUF/resolve/main/nerde-14B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/nerde-14B-GGUF/resolve/main/nerde-14B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/nerde-14B-GGUF/resolve/main/nerde-14B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/nerde-14B-GGUF/resolve/main/nerde-14B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/nerde-14B-GGUF/resolve/main/nerde-14B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/nerde-14B-GGUF/resolve/main/nerde-14B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/nerde-14B-GGUF/resolve/main/nerde-14B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/nerde-14B-GGUF/resolve/main/nerde-14B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/nerde-14B-GGUF/resolve/main/nerde-14B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/nerde-14B-GGUF/resolve/main/nerde-14B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/nerde-14B-GGUF/resolve/main/nerde-14B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/nerde-14B-GGUF/resolve/main/nerde-14B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
CausalLM/35b-beta-long | CausalLM | 2024-06-06T20:51:02Z | 450 | 60 | transformers | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"conversational",
"en",
"zh",
"ja",
"de",
"dataset:JosephusCheung/GuanacoDataset",
"dataset:meta-math/MetaMathQA",
"dataset:jondurbin/airoboros-3.1",
"dataset:WizardLM/WizardLM_evol_instruct_V2_196k",
"dataset:RyokoAI/ShareGPT52K",
"dataset:RyokoAI/Fandom23K",
"dataset:milashkaarshif/MoeGirlPedia_wikitext_raw_archive",
"dataset:wikipedia",
"dataset:wiki_lingua",
"dataset:garage-bAInd/Open-Platypus",
"dataset:LDJnr/Puffin",
"dataset:BAAI/COIG",
"dataset:TigerResearch/tigerbot-zhihu-zh-10k",
"dataset:liwu/MNBVC",
"dataset:teknium/openhermes",
"dataset:CausalLM/Refined-Anime-Text",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"license:wtfpl",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-13T18:47:13Z | ---
license: wtfpl
language:
- en
- zh
- ja
- de
datasets:
- JosephusCheung/GuanacoDataset
- meta-math/MetaMathQA
- jondurbin/airoboros-3.1
- WizardLM/WizardLM_evol_instruct_V2_196k
- RyokoAI/ShareGPT52K
- RyokoAI/Fandom23K
- milashkaarshif/MoeGirlPedia_wikitext_raw_archive
- wikipedia
- wiki_lingua
- garage-bAInd/Open-Platypus
- LDJnr/Puffin
- BAAI/COIG
- TigerResearch/tigerbot-zhihu-zh-10k
- liwu/MNBVC
- teknium/openhermes
- CausalLM/Refined-Anime-Text
- microsoft/orca-math-word-problems-200k
- m-a-p/CodeFeedback-Filtered-Instruction
---
**Sorry, it's no longer available on Hugging Face. Please reach out to those who have already downloaded it. If you have a copy, please refrain from re-uploading it to Hugging Face.**
**Due to repeated conflicts with HF and what we perceive as their repeated misuse of the "Contributor Covenant Code of Conduct," we have lost confidence in the platform and decided to temporarily suspend all new download access requests. It appears to us that HF's original intention has been abandoned in pursuit of commercialization, and they no longer prioritize the well-being of the community.**
Demo: [](https://huggingface.co/spaces/JosephusCheung/CausalLM-35B-long-Q6K-GGUF)
# 35b-beta-long
This release, CausalLM/35b-beta-long, represents the culmination of our experience and accumulated training data in fine-tuning large language models. We are open-sourcing these weights to foster development within the open-source community.
We chose Cohere's multilingual, 35B-parameter, long-context MHA model [CohereForAI/c4ai-command-r-v01] as our base. In our evaluation, it proved to be the most responsive to the quality of training data throughout the Supervised Fine-Tuning process, outperforming other open-source LLMs. Although its initial SFT/RL focuses on specific tasks and it comes with a non-commercial license, we believe it's currently the best foundation for personal and internal use cases.
Utilizing extensive factual content from web crawls, we synthesized over 30 million multi-turn dialogue data entries, grounded in multiple web-pages or documents. This process involved substantial human oversight and a data pipeline designed to ensure high quality. The model was then trained on this data in full 128K context using BF16 precision. We also incorporated widely-used open-source dialogue datasets to enhance general conversational fluency.
Our data synthesis approach addressed crucial limitations in typical LLM training corpora. LLMs often struggle to extract thematic summaries, key information, or perform comparisons at the paragraph or document level. Therefore, we focused on generating fact-based data using multiple documents within a long context setting. This involved leveraging existing SOTA LLMs with human guidance to synthesize information through thematic summarization, information extraction, and comparison of source materials.
This approach yielded significant improvements in model performance during fine-tuning. We observed reductions in hallucinations, enhanced long-context capabilities, and improvements in general abilities such as math, coding, and knowledge recall. The training process incorporated both the original source material and the synthesized outputs, further reinforcing the model's ability to recall and utilize abstract concepts embedded within the pre-training data. Our analysis revealed that this combination of original and synthesized data was crucial for achieving a more balanced performance profile. Intermediate checkpoints and models trained solely on synthesized data are also released for research purposes.
Compared to the original task-specific model, our further fine-tuned model demonstrates more robust recall in long-context scenarios without requiring specific document formatting or prompt engineering. This fine-tuned model also exhibits performance comparable to models twice its size in quantifiable benchmarks.
As this model has only undergone SFT, it may still exhibit biases or generate undesirable content. We implemented basic safety measures using open-source refusal datasets to mitigate outputs related to illegal activities, NSFW content, and violence. However, further Reinforcement Learning is necessary for robust alignment with human values.
## Please note
The tokenizer is different from Cohere's, and the chat template is **ChatML**.
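As a hedged illustration, here is a minimal sketch of building a ChatML prompt with the bundled tokenizer, assuming you already have a local copy of the weights and that the tokenizer config carries the ChatML chat template; the local path and messages are placeholders.

```python
from transformers import AutoTokenizer

# Point this at a local copy, since the repo is no longer downloadable from Hugging Face.
tokenizer = AutoTokenizer.from_pretrained("/path/to/35b-beta-long")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the key points of the attached document."},
]

# With a ChatML template this renders <|im_start|>role ... <|im_end|> blocks plus the assistant header.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```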
Pressure Testing from: https://github.com/LeonEricsson/llmcontext
 |
bartowski/OpenBioLLM-Llama3-8B-GGUF | bartowski | 2024-04-26T15:21:01Z | 450 | 3 | null | [
"gguf",
"llama-3",
"llama",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"distillation",
"text-generation",
"en",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"region:us"
]
| text-generation | 2024-04-26T14:59:41Z | ---
base_model: meta-llama/Meta-Llama-3-8B
tags:
- llama-3
- llama
- Mixtral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- distillation
model-index:
- name: OpenBioLLM-8B
results: []
license: llama3
language:
- en
widget:
- example_title: OpenBioLLM-8B
messages:
- role: system
content: >-
You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience.
- role: user
content: How long does it take for newborn jaundice to go away?
output:
text: >-
Newborn jaundice, also known as neonatal jaundice, is a common condition in newborns where the yellowing of the skin and eyes occurs due to an elevated level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when red blood cells break down. In most cases, newborn jaundice resolves on its own without any specific treatment.
The duration of newborn jaundice can vary depending on several factors such as the underlying cause, gestational age at birth, and individual variations in bilirubin metabolism. Here are some general guidelines:
1. Physiological jaundice: This is the most common type of newborn jaundice and usually appears within 24-72 hours after birth. It tends to peak between the second and fifth day of life and gradually improves over the next week or two. By the time the baby is one week old, the jaundice should have mostly resolved.
2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and may appear later than physiological jaundice, typically between the fifth and fourteenth day of life. It tends to persist for a longer duration but usually resolves within six weeks after birth.
3. Pathological jaundice: This type of jaundice is less common and occurs due to an underlying medical condition that affects bilirubin metabolism or liver function. The duration of pathological jaundice depends on the specific cause and may require treatment.
It's important for parents to monitor their newborn's jaundice closely and seek medical advice if the jaundice progresses rapidly, becomes severe, or is accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness. In these cases, further evaluation and management may be necessary.
Remember that each baby is unique, and the timing of jaundice resolution can vary. If you have concerns about your newborn's jaundice, it's always best to consult with a healthcare professional for personalized advice and guidance.
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of OpenBioLLM-Llama3-8B
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2717">b2717</a> for quantization.
Original model: https://huggingface.co/aaditya/OpenBioLLM-Llama3-8B
All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
## Prompt format
No chat template specified so default is used. This may be incorrect, check original model card for details.
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [OpenBioLLM-Llama3-8B-Q8_0.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. |
| [OpenBioLLM-Llama3-8B-Q6_K.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. |
| [OpenBioLLM-Llama3-8B-Q5_K_M.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. |
| [OpenBioLLM-Llama3-8B-Q5_K_S.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. |
| [OpenBioLLM-Llama3-8B-Q4_K_M.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [OpenBioLLM-Llama3-8B-Q4_K_S.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. |
| [OpenBioLLM-Llama3-8B-IQ4_NL.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [OpenBioLLM-Llama3-8B-IQ4_XS.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [OpenBioLLM-Llama3-8B-Q3_K_L.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. |
| [OpenBioLLM-Llama3-8B-Q3_K_M.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. |
| [OpenBioLLM-Llama3-8B-IQ3_M.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [OpenBioLLM-Llama3-8B-IQ3_S.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [OpenBioLLM-Llama3-8B-Q3_K_S.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. |
| [OpenBioLLM-Llama3-8B-IQ3_XS.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [OpenBioLLM-Llama3-8B-IQ3_XXS.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [OpenBioLLM-Llama3-8B-Q2_K.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. |
| [OpenBioLLM-Llama3-8B-IQ2_M.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [OpenBioLLM-Llama3-8B-IQ2_S.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. |
| [OpenBioLLM-Llama3-8B-IQ2_XS.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. |
| [OpenBioLLM-Llama3-8B-IQ2_XXS.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. |
| [OpenBioLLM-Llama3-8B-IQ1_M.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. |
| [OpenBioLLM-Llama3-8B-IQ1_S.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. |
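For example, a minimal Python sketch using `huggingface_hub` to fetch just one quant from the table above; swap in whichever filename fits your hardware.

```python
from huggingface_hub import hf_hub_download

# Download a single GGUF file instead of cloning the whole repository.
path = hf_hub_download(
    repo_id="bartowski/OpenBioLLM-Llama3-8B-GGUF",
    filename="OpenBioLLM-Llama3-8B-Q4_K_M.gguf",
    local_dir="./models",
)
print(path)
```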
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also targets AMD, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf | RichardErkhov | 2024-04-26T21:19:56Z | 450 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-04-26T19:25:57Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
saiga_gemma_9b - GGUF
- Model creator: https://huggingface.co/IlyaGusev/
- Original model: https://huggingface.co/IlyaGusev/saiga_gemma_9b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [saiga_gemma_9b.Q2_K.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q2_K.gguf) | Q2_K | 3.24GB |
| [saiga_gemma_9b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.IQ3_XS.gguf) | IQ3_XS | 3.54GB |
| [saiga_gemma_9b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.IQ3_S.gguf) | IQ3_S | 3.71GB |
| [saiga_gemma_9b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q3_K_S.gguf) | Q3_K_S | 3.71GB |
| [saiga_gemma_9b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.IQ3_M.gguf) | IQ3_M | 3.82GB |
| [saiga_gemma_9b.Q3_K.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q3_K.gguf) | Q3_K | 4.07GB |
| [saiga_gemma_9b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q3_K_M.gguf) | Q3_K_M | 4.07GB |
| [saiga_gemma_9b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q3_K_L.gguf) | Q3_K_L | 4.39GB |
| [saiga_gemma_9b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.IQ4_XS.gguf) | IQ4_XS | 4.48GB |
| [saiga_gemma_9b.Q4_0.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q4_0.gguf) | Q4_0 | 4.67GB |
| [saiga_gemma_9b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.IQ4_NL.gguf) | IQ4_NL | 4.69GB |
| [saiga_gemma_9b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q4_K_S.gguf) | Q4_K_S | 4.7GB |
| [saiga_gemma_9b.Q4_K.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q4_K.gguf) | Q4_K | 4.96GB |
| [saiga_gemma_9b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q4_K_M.gguf) | Q4_K_M | 4.96GB |
| [saiga_gemma_9b.Q4_1.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q4_1.gguf) | Q4_1 | 5.12GB |
| [saiga_gemma_9b.Q5_0.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q5_0.gguf) | Q5_0 | 5.57GB |
| [saiga_gemma_9b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q5_K_S.gguf) | Q5_K_S | 5.57GB |
| [saiga_gemma_9b.Q5_K.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q5_K.gguf) | Q5_K | 5.72GB |
| [saiga_gemma_9b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q5_K_M.gguf) | Q5_K_M | 5.72GB |
| [saiga_gemma_9b.Q5_1.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q5_1.gguf) | Q5_1 | 6.02GB |
| [saiga_gemma_9b.Q6_K.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q6_K.gguf) | Q6_K | 6.53GB |
Original model description:
---
language:
- ru
datasets:
- IlyaGusev/ru_turbo_saiga
- IlyaGusev/ru_sharegpt_cleaned
- IlyaGusev/oasst1_ru_main_branch
- IlyaGusev/gpt_roleplay_realm
- lksy/ru_instruct_gpt4
---
# Saiga/Gemma 9B, Russian Gemma-based chatbot
Based on [Gemma 7B](https://huggingface.co/google/gemma-7b).
Training Colab: [link](https://colab.research.google.com/drive/1O7F7Q3IQYh-v7EfsdRwMDMfUnOYZ3DEh).
ChatML prompt format:
```
<|im_start|>system
You are Saiga, a Russian-speaking automatic assistant. You talk with people and help them.<|im_end|>
<|im_start|>user
Hi!<|im_end|>
<|im_start|>assistant
Great, and you?<|im_end|>
<|im_start|>user
Wonderful. How do I get to the library?<|im_end|>
<|im_start|>assistant
```
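As an illustration, here is a minimal llama-cpp-python sketch that drives one of these GGUF quants with the same ChatML roles; the quant filename and the English example messages are only placeholders.

```python
from llama_cpp import Llama

# chat_format="chatml" makes the bindings wrap messages in <|im_start|>/<|im_end|> blocks.
llm = Llama(model_path="saiga_gemma_9b.Q4_K_M.gguf", n_ctx=4096, chat_format="chatml")

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Saiga, a Russian-speaking automatic assistant. You talk with people and help them."},
        {"role": "user", "content": "Hi! How do I get to the library?"},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```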
Examples:
```
User: Why is the grass green?
Saiga: The green colour of grass is caused by the presence of the pigment chlorophyll in its cells, which plays an important role in photosynthesis, the process of converting sunlight into energy used for the growth and development of plants.
Chlorophyll absorbs light in certain wavelength ranges, including the green region of the spectrum, and reflects light in other ranges, which is what gives grass its green colour.
```
```
User: Write a long story, making sure to mention the following objects. Given: Tanya, ball
Saiga: Once upon a time, in a faraway land, there lived a little girl named Tanya. She was very active and loved playing with a ball. Together with her friends she spent her days playing football, basketball and volleyball.
Tanya was very talented and agile, and she often won the games she played with other children. But one day, during a basketball game, she suffered a serious injury. She had to spend several weeks in hospital, and she was very sad and downcast.
Her friends could not work out how to help Tanya cope with this situation. They tried to support her, but Tanya was too dejected and did not want to play ball.
One day Tanya saw her friends playing with the ball. She could not help watching how deftly they passed the ball to each other and scored. Tanya felt her heart ache with the desire to play together with them.
She decided that she needed to return to the game in order to feel the joy of playing again. She slowly began to recover and gradually returned to her favourite pastime, playing with the ball.
Tanya started playing ball again, but now she was more careful and attentive to her health. Her friends were very glad to see Tanya returning to her usual life.
From then on, Tanya became a real star on the field. She became the best player on her team and helped them win many games. Her friends were proud that Tanya had come back to her favourite pastime and was once again as active and agile as before.
Tanya and her friends kept playing ball and spent wonderful days together. They were happy that Tanya had returned to her favourite pastime and become part of the team again.
Tanya and her friends continue to play ball, and together they create wonderful memories.
```
v1:
- dataset code revision d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a
- wandb [link](https://wandb.ai/ilyagusev/gemma_test/runs/k7u3uw5i)
- 5 datasets: ru_turbo_saiga, ru_sharegpt_cleaned, oasst1_ru_main_branch, gpt_roleplay_realm, ru_instruct_gpt4
- Datasets merging script: [create_short_chat_set.py](https://github.com/IlyaGusev/rulm/blob/d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a/self_instruct/src/data_processing/create_short_chat_set.py)
|
mradermacher/LancarAI-v1.0-GGUF | mradermacher | 2024-05-05T14:52:58Z | 450 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"OpenPipe/mistral-ft-optimized-1218",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"en",
"base_model:lancartech/LancarAI-v1.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-02T11:16:02Z | ---
base_model: lancartech/LancarAI-v1.0
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/lancartech/LancarAI-v1.0
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LancarAI-v1.0-GGUF/resolve/main/LancarAI-v1.0.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/LancarAI-v1.0-GGUF/resolve/main/LancarAI-v1.0.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/LancarAI-v1.0-GGUF/resolve/main/LancarAI-v1.0.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/LancarAI-v1.0-GGUF/resolve/main/LancarAI-v1.0.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/LancarAI-v1.0-GGUF/resolve/main/LancarAI-v1.0.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/LancarAI-v1.0-GGUF/resolve/main/LancarAI-v1.0.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LancarAI-v1.0-GGUF/resolve/main/LancarAI-v1.0.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/LancarAI-v1.0-GGUF/resolve/main/LancarAI-v1.0.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/LancarAI-v1.0-GGUF/resolve/main/LancarAI-v1.0.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LancarAI-v1.0-GGUF/resolve/main/LancarAI-v1.0.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LancarAI-v1.0-GGUF/resolve/main/LancarAI-v1.0.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/LancarAI-v1.0-GGUF/resolve/main/LancarAI-v1.0.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/LancarAI-v1.0-GGUF/resolve/main/LancarAI-v1.0.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/LancarAI-v1.0-GGUF/resolve/main/LancarAI-v1.0.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/LancarAI-v1.0-GGUF/resolve/main/LancarAI-v1.0.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/IceMochaccinoRP-7b-GGUF | mradermacher | 2024-05-31T12:00:21Z | 450 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"alpaca",
"mistral",
"not-for-all-audiences",
"nsfw",
"en",
"base_model:icefog72/IceMochaccinoRP-7b",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-31T09:14:32Z | ---
base_model: icefog72/IceMochaccinoRP-7b
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- mergekit
- merge
- alpaca
- mistral
- not-for-all-audiences
- nsfw
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/icefog72/IceMochaccinoRP-7b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/IceMochaccinoRP-7b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/IceMochaccinoRP-7b-GGUF/resolve/main/IceMochaccinoRP-7b.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/IceMochaccinoRP-7b-GGUF/resolve/main/IceMochaccinoRP-7b.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/IceMochaccinoRP-7b-GGUF/resolve/main/IceMochaccinoRP-7b.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/IceMochaccinoRP-7b-GGUF/resolve/main/IceMochaccinoRP-7b.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/IceMochaccinoRP-7b-GGUF/resolve/main/IceMochaccinoRP-7b.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/IceMochaccinoRP-7b-GGUF/resolve/main/IceMochaccinoRP-7b.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/IceMochaccinoRP-7b-GGUF/resolve/main/IceMochaccinoRP-7b.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/IceMochaccinoRP-7b-GGUF/resolve/main/IceMochaccinoRP-7b.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/IceMochaccinoRP-7b-GGUF/resolve/main/IceMochaccinoRP-7b.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/IceMochaccinoRP-7b-GGUF/resolve/main/IceMochaccinoRP-7b.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/IceMochaccinoRP-7b-GGUF/resolve/main/IceMochaccinoRP-7b.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/IceMochaccinoRP-7b-GGUF/resolve/main/IceMochaccinoRP-7b.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/IceMochaccinoRP-7b-GGUF/resolve/main/IceMochaccinoRP-7b.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/IceMochaccinoRP-7b-GGUF/resolve/main/IceMochaccinoRP-7b.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/IceMochaccinoRP-7b-GGUF/resolve/main/IceMochaccinoRP-7b.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
alvdansen/vincentsketchmanga | alvdansen | 2024-06-16T16:31:27Z | 450 | 5 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2024-06-06T17:04:58Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: happy, a sweet girl with fluffy strange hair, staring into the camera
output:
url: images/ComfyUI_00314_.png
- text: a sweet girl with fluffy strange hair, staring into the camera
output:
url: images/ComfyUI_00312_.png
- text: a small boy frowning, slightly unhappy, dressed in a suit, anime, daiton
output:
url: images/ComfyUI_00316_.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
license: creativeml-openrail-m
---
# V. Sketch Manga
<Gallery />
## Model description
Another sketch manga style that I am particularly happy with, leaning more toward the semi-realistic realm. It can also work with the 'daiton' token but does not need it.
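To try the style with diffusers, here is a minimal hedged sketch; that the repo exposes a single default LoRA weight file is an assumption, so check the Files & versions tab and pass `weight_name` explicitly if loading fails.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL base model this LoRA was trained against.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA weights from this repository.
pipe.load_lora_weights("alvdansen/vincentsketchmanga")

image = pipe("a sweet girl with fluffy strange hair, staring into the camera").images[0]
image.save("sketch_manga.png")
```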
## Download model
Weights for this model are available in Safetensors format.
Model release is for research purposes only. For commercial use, please contact me directly.
[Download](/alvdansen/vincentsketchmanga/tree/main) them in the Files & versions tab.
|
gglabs/TinyLM-Chat-0611-5-epoch | gglabs | 2024-06-11T14:01:39Z | 450 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/tinyllama-chat-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-11T13:57:21Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/tinyllama-chat-bnb-4bit
---
# Uploaded model
- **Developed by:** gglabs
- **License:** apache-2.0
- **Finetuned from model:** unsloth/tinyllama-chat-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
spow12/Ko-Qwen2-7B-Instruct | spow12 | 2024-06-19T01:56:39Z | 450 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"ko",
"en",
"arxiv:2309.00071",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-12T01:15:59Z | ---
library_name: transformers
license: cc-by-nc-4.0
language:
- ko
- en
---
# Ko-Qwen2-7B-Instruct
<!-- Provide a quick summary of what the model is/does. -->
<!--This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).-->
### Model Description
<!-- Provide a longer summary of what this model is. -->
This model is a supervised fine-tuned version of [Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct), trained with DeepSpeed and trl for Korean.
### Trained Data
- Trained with public data, private data, and generated data (about 50k samples)
### Usage
```python
import torch
from transformers import TextStreamer, pipeline, AutoTokenizer, AutoModelForCausalLM
model_id = 'spow12/Ko-Qwen2-7B-Instruct'
tokenizer = AutoTokenizer.from_pretrained(model_id)
# %%
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
device_map='auto',
)
model.eval()
pipe = pipeline("conversational", model=model, tokenizer=tokenizer, device_map='auto')
streamer = TextStreamer(tokenizer)
generation_configs = dict(
max_new_tokens=2048,
num_return_sequences=1,
temperature=0.1,
# early_stopping=True,
repetition_penalty=1.2,
num_beams=1,
do_sample=True,
top_k=20,
top_p=0.9,
eos_token_id=tokenizer.eos_token_id,
pad_token_id=tokenizer.eos_token_id,
streamer=streamer
)
sys_message = """๋น์ ์ ์น์ ํ ์ฑ๋ด์ผ๋ก์ ์๋๋ฐฉ์ ์์ฒญ์ ์ต๋ํ ์์ธํ๊ณ ์น์ ํ๊ฒ ๋ตํด์ผํฉ๋๋ค.
์ฌ์ฉ์๊ฐ ์ ๊ณตํ๋ ์ ๋ณด๋ฅผ ์ธ์ฌํ๊ฒ ๋ถ์ํ์ฌ ์ฌ์ฉ์์ ์๋๋ฅผ ์ ์ํ๊ฒ ํ์
ํ๊ณ ๊ทธ์ ๋ฐ๋ผ ๋ต๋ณ์ ์์ฑํด์ผํฉ๋๋ค.
ํญ์ ๋งค์ฐ ์์ฐ์ค๋ฌ์ด ํ๊ตญ์ด๋ก ์๋ตํ์ธ์."""
message = [
{
'role': "system",
'content': sys_message
},
{
'role': 'user',
'content': "What do you think about the current economic situation?"
}
]
conversation = pipe(message, **generation_configs)
conversation[-1]
```
```
<|im_start|>system
You are a kind chatbot that must answer the other person's requests as thoroughly and kindly as possible.
Carefully analyze the information the user provides, quickly grasp the user's intent, and generate your answer accordingly.
Always respond in very natural Korean.<|im_end|>
<|im_start|>user
What do you think about the current economic situation?<|im_end|>
<|im_start|>assistant
Since I am an artificial intelligence, I do not hold personal opinions about the current economic situation. However, the global economy has recently been going through a difficult stretch, and many countries are putting a variety of policies and measures in place to cope with it. In such circumstances, I hope everyone can draw on their own strengths and get through it.<|im_end|>
```
### License
This model is licensed under CC BY-NC 4.0, which allows others to share and adapt the model for non-commercial purposes.
Here is the original README.md:
# Qwen2-7B-Instruct
## Introduction
Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the instruction-tuned 7B Qwen2 model.
Compared with the state-of-the-art opensource language models, including the previous released Qwen1.5, Qwen2 has generally surpassed most opensource models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting for language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.
Qwen2-7B-Instruct supports a context length of up to 131,072 tokens, enabling the processing of extensive inputs. Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2 for handling long texts.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/).
<br>
## Model Details
Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes.
## Training details
We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.
## Requirements
The code for Qwen2 has been merged into the latest Hugging Face transformers, and we advise you to install `transformers>=4.37.0`, or you might encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
Here is a code snippet with `apply_chat_template` showing how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"Qwen/Qwen2-7B-Instruct",
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-7B-Instruct")
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
### Processing Long Texts
To handle extensive inputs exceeding 32,768 tokens, we utilize [YARN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
For deployment, we recommend using vLLM. You can enable the long-context capabilities by following these steps:
1. **Install vLLM**: You can install vLLM by running the following command.
```bash
pip install "vllm>=0.4.3"
```
Or you can install vLLM from [source](https://github.com/vllm-project/vllm/).
2. **Configure Model Settings**: After downloading the model weights, modify the `config.json` file by including the below snippet:
```json
{
"architectures": [
"Qwen2ForCausalLM"
],
// ...
"vocab_size": 152064,
// adding the following snippets
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
This snippet enables YARN to support longer contexts.
3. **Model Deployment**: Utilize vLLM to deploy your model. For instance, you can set up an OpenAI-like server using the command:
```bash
python -m vllm.entrypoints.openai.api_server --served-model-name Qwen2-7B-Instruct --model path/to/weights
```
Then you can access the Chat API by:
```bash
curl http://localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "Qwen2-7B-Instruct",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Your Long Input Here."}
]
}'
```
For further usage instructions of vLLM, please refer to our [Github](https://github.com/QwenLM/Qwen2).
**Note**: Presently, vLLM only supports static YARN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**. We advise adding the `rope_scaling` configuration only when processing long contexts is required.
## Evaluation
We briefly compare Qwen2-7B-Instruct with similar-sized instruction-tuned LLMs, including Qwen1.5-7B-Chat. The results are shown below:
| Datasets | Llama-3-8B-Instruct | Yi-1.5-9B-Chat | GLM-4-9B-Chat | Qwen1.5-7B-Chat | Qwen2-7B-Instruct |
| :--- | :---: | :---: | :---: | :---: | :---: |
| _**English**_ | | | | | |
| MMLU | 68.4 | 69.5 | **72.4** | 59.5 | 70.5 |
| MMLU-Pro | 41.0 | - | - | 29.1 | **44.1** |
| GPQA | **34.2** | - | **-** | 27.8 | 25.3 |
| TheoremQA | 23.0 | - | - | 14.1 | **25.3** |
| MT-Bench | 8.05 | 8.20 | 8.35 | 7.60 | **8.41** |
| _**Coding**_ | | | | | |
| Humaneval | 62.2 | 66.5 | 71.8 | 46.3 | **79.9** |
| MBPP | **67.9** | - | - | 48.9 | 67.2 |
| MultiPL-E | 48.5 | - | - | 27.2 | **59.1** |
| Evalplus | 60.9 | - | - | 44.8 | **70.3** |
| LiveCodeBench | 17.3 | - | - | 6.0 | **26.6** |
| _**Mathematics**_ | | | | | |
| GSM8K | 79.6 | **84.8** | 79.6 | 60.3 | 82.3 |
| MATH | 30.0 | 47.7 | **50.6** | 23.2 | 49.6 |
| _**Chinese**_ | | | | | |
| C-Eval | 45.9 | - | 75.6 | 67.3 | **77.2** |
| AlignBench | 6.20 | 6.90 | 7.01 | 6.20 | **7.21** |
## Citation
If you find our work helpful, feel free to cite it.
```
@article{qwen2,
title={Qwen2 Technical Report},
year={2024}
}
``` |
mradermacher/Augmentasanguis-8x7B-i1-GGUF | mradermacher | 2024-06-19T08:14:47Z | 450 | 1 | transformers | [
"transformers",
"gguf",
"not-for-all-audiences",
"en",
"base_model:Envoid/Augmentasanguis-8x7B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-17T22:39:16Z | ---
base_model: Envoid/Augmentasanguis-8x7B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- not-for-all-audiences
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Envoid/Augmentasanguis-8x7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Augmentasanguis-8x7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
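As a quick orientation, the single-file quants listed below can also be pulled and loaded directly from Python with `llama-cpp-python`. This is only a minimal sketch; the chosen quant filename is one example from the table in the next section:
```python
from llama_cpp import Llama

# Download one of the single-file quants from this repo and load it
# (the filename below is just one of the options listed further down).
llm = Llama.from_pretrained(
    repo_id="mradermacher/Augmentasanguis-8x7B-i1-GGUF",
    filename="Augmentasanguis-8x7B.i1-Q4_K_M.gguf",
    n_ctx=4096,
)

print(llm("Hello, my name is", max_tokens=32)["choices"][0]["text"])
```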
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-8x7B-i1-GGUF/resolve/main/Augmentasanguis-8x7B.i1-IQ1_S.gguf) | i1-IQ1_S | 9.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-8x7B-i1-GGUF/resolve/main/Augmentasanguis-8x7B.i1-IQ1_M.gguf) | i1-IQ1_M | 10.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-8x7B-i1-GGUF/resolve/main/Augmentasanguis-8x7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 12.7 | |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-8x7B-i1-GGUF/resolve/main/Augmentasanguis-8x7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 14.0 | |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-8x7B-i1-GGUF/resolve/main/Augmentasanguis-8x7B.i1-IQ2_S.gguf) | i1-IQ2_S | 14.2 | |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-8x7B-i1-GGUF/resolve/main/Augmentasanguis-8x7B.i1-IQ2_M.gguf) | i1-IQ2_M | 15.6 | |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-8x7B-i1-GGUF/resolve/main/Augmentasanguis-8x7B.i1-Q2_K.gguf) | i1-Q2_K | 17.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-8x7B-i1-GGUF/resolve/main/Augmentasanguis-8x7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 18.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-8x7B-i1-GGUF/resolve/main/Augmentasanguis-8x7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-8x7B-i1-GGUF/resolve/main/Augmentasanguis-8x7B.i1-IQ3_S.gguf) | i1-IQ3_S | 20.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-8x7B-i1-GGUF/resolve/main/Augmentasanguis-8x7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 20.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-8x7B-i1-GGUF/resolve/main/Augmentasanguis-8x7B.i1-IQ3_M.gguf) | i1-IQ3_M | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-8x7B-i1-GGUF/resolve/main/Augmentasanguis-8x7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 22.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-8x7B-i1-GGUF/resolve/main/Augmentasanguis-8x7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 24.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-8x7B-i1-GGUF/resolve/main/Augmentasanguis-8x7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 25.2 | |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-8x7B-i1-GGUF/resolve/main/Augmentasanguis-8x7B.i1-Q4_0.gguf) | i1-Q4_0 | 26.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-8x7B-i1-GGUF/resolve/main/Augmentasanguis-8x7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 26.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-8x7B-i1-GGUF/resolve/main/Augmentasanguis-8x7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 28.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-8x7B-i1-GGUF/resolve/main/Augmentasanguis-8x7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 32.3 | |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-8x7B-i1-GGUF/resolve/main/Augmentasanguis-8x7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 33.3 | |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-8x7B-i1-GGUF/resolve/main/Augmentasanguis-8x7B.i1-Q6_K.gguf) | i1-Q6_K | 38.5 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
Bilal-Mamji/llama-3-8b-chat-doctor | Bilal-Mamji | 2024-06-20T07:47:22Z | 450 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-19T22:52:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
stablediffusionapi/magxl | stablediffusionapi | 2024-06-23T17:56:35Z | 450 | 0 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2024-06-23T17:53:44Z | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# Magxl API Inference

## Get API Key
Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.
Replace Key in below code, change **model_id** to "magxl"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://docs.modelslab.com)
Try model for free: [Generate Images](https://modelslab.com/models/magxl)
Model link: [View model](https://modelslab.com/models/magxl)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json
url = "https://modelslab.com/api/v6/images/text2img"
payload = json.dumps({
"key": "your_api_key",
"model_id": "magxl",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
SJ-Ray/Re-Punctuate | SJ-Ray | 2022-06-29T09:05:36Z | 449 | 10 | transformers | [
"transformers",
"tf",
"t5",
"text2text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text2text-generation | 2022-03-16T16:10:00Z | ---
license: apache-2.0
---
<h2>Re-Punctuate:</h2>
Re-Punctuate is a T5 model that attempts to correct capitalization and punctuation in sentences.
<h3>DataSet:</h3>
The DialogSum dataset (115,056 records) was used to fine-tune the model for punctuation and capitalization correction.
<h3>Usage:</h3>
<pre>
from transformers import T5Tokenizer, TFT5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained('SJ-Ray/Re-Punctuate')
model = TFT5ForConditionalGeneration.from_pretrained('SJ-Ray/Re-Punctuate')
input_text = 'the story of this brave brilliant athlete whose very being was questioned so publicly is one that still captures the imagination'
inputs = tokenizer.encode("punctuate: " + input_text, return_tensors="tf")
result = model.generate(inputs)
decoded_output = tokenizer.decode(result[0], skip_special_tokens=True)
print(decoded_output)
</pre>
<h4> Example: </h4>
<b>Input:</b> the story of this brave brilliant athlete whose very being was questioned so publicly is one that still captures the imagination <br>
<b>Output:</b> The story of this brave, brilliant athlete, whose very being was questioned so publicly, is one that still captures the imagination.
<h4> Connect on: <a href="https://www.linkedin.com/in/suraj-kumar-710382a7" target="_blank">LinkedIn : Suraj Kumar</a></h4> |
KoboldAI/fairseq-dense-6.7B-Shinen | KoboldAI | 2022-04-13T08:19:31Z | 449 | 1 | transformers | [
"transformers",
"pytorch",
"xglm",
"text-generation",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-04-07T18:30:40Z | ---
language: en
license: mit
---
# Fairseq-dense 6.7B - Shinen
## Model Description
Fairseq-dense 6.7B-Shinen is a finetune created using Fairseq's MoE dense model. Compared to GPT-Neo-2.7-Horni, this model is much heavier on the sexual content.
**Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content.**
## Training data
The training data contains user-generated stories from sexstories.com. All stories are tagged in the following way:
```
[Theme: <theme1>, <theme2> ,<theme3>]
<Story goes here>
```
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='KoboldAI/fairseq-dense-6.7B-Shinen')
>>> generator("She was staring at me", do_sample=True, min_length=50)
[{'generated_text': 'She was staring at me with a look that said it all. She wanted me so badly tonight that I wanted'}]
```
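To steer the output with the theme tags described in the training data section, you can prepend a theme header to the prompt. This is a sketch only; the theme names below are illustrative:
```py
from transformers import pipeline

generator = pipeline('text-generation', model='KoboldAI/fairseq-dense-6.7B-Shinen')

# Prepend a theme header in the same format used for the training data;
# the themes here are purely illustrative.
prompt = "[Theme: romance, beach]\nShe was staring at me"
print(generator(prompt, do_sample=True, min_length=50)[0]['generated_text'])
```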
### Limitations and Biases
Based on known problems with NLP technology, potential relevant factors include bias (gender, profession, race and religion).
### BibTeX entry and citation info
```
Artetxe et al. (2021): Efficient Large Scale Language Modeling with Mixtures of Experts
``` |
Hius/DreamTest | Hius | 2023-05-13T11:42:01Z | 449 | 0 | diffusers | [
"diffusers",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-04-05T14:39:00Z | Entry not found |
stabilityai/stablelm-base-alpha-3b-v2 | stabilityai | 2023-09-11T20:47:30Z | 449 | 26 | transformers | [
"transformers",
"safetensors",
"stablelm_alpha",
"text-generation",
"causal-lm",
"custom_code",
"en",
"dataset:tiiuae/falcon-refinedweb",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:CarperAI/pilev2-dev",
"dataset:bigcode/starcoderdata",
"dataset:JeanKaddour/minipile",
"arxiv:2002.05202",
"arxiv:2104.09864",
"arxiv:2101.00027",
"arxiv:2305.06161",
"arxiv:1910.02054",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"region:us"
]
| text-generation | 2023-08-04T05:53:31Z | ---
datasets:
- tiiuae/falcon-refinedweb
- togethercomputer/RedPajama-Data-1T
- CarperAI/pilev2-dev
- bigcode/starcoderdata
- JeanKaddour/minipile
language:
- en
tags:
- causal-lm
license: cc-by-sa-4.0
---
# `StableLM-Base-Alpha-3B-v2`
## Model Description
`StableLM-Base-Alpha-3B-v2` is a 3 billion parameter decoder-only language model pre-trained on diverse English datasets. This model is the successor to the first [`StableLM-Base-Alpha-3B`](https://huggingface.co/stabilityai/stablelm-base-alpha-3b) model, addressing previous shortcomings through the use of improved data sources and mixture ratios.
## Usage
Get started generating text with `StableLM-Base-Alpha-3B-v2` by using the following code snippet:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-base-alpha-3b-v2")
model = AutoModelForCausalLM.from_pretrained(
"stabilityai/stablelm-base-alpha-3b-v2",
trust_remote_code=True,
torch_dtype="auto",
)
model.cuda()
inputs = tokenizer("The weather is always wonderful", return_tensors="pt").to("cuda")
tokens = model.generate(
**inputs,
max_new_tokens=64,
temperature=0.75,
top_p=0.95,
do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```
## Model Details
* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: `StableLM-Base-Alpha-v2` models are auto-regressive language models based on the transformer decoder architecture.
* **Language(s)**: English
* **Library**: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
* **License**: Model checkpoints are licensed under the Creative Commons license ([CC BY-SA-4.0](https://creativecommons.org/licenses/by-sa/4.0/)). Under this license, you must give [credit](https://creativecommons.org/licenses/by/4.0/#) to Stability AI, provide a link to the license, and [indicate if changes were made](https://creativecommons.org/licenses/by/4.0/#). You may do so in any reasonable manner, but not in any way that suggests the Stability AI endorses you or your use.
* **Contact**: For questions and comments about the model, please email `[email protected]`
### Model Architecture
| Parameters | Hidden Size | Layers | Heads | Sequence Length |
|----------------|-------------|--------|-------|-----------------|
| 2,796,431,360 | 2560 | 32 | 32 | 4096 |
The model is a decoder-only transformer similar to the `StableLM-Base-Alpha` (v1) with the following configurations:
* **Activation**: SwiGLU ([Shazeer, 2020](https://arxiv.org/abs/2002.05202)), illustrated in the sketch after this list
* **Decoder Layer**: Parallel Attention and MLP residuals with a single input LayerNorm ([Wang & Komatsuzaki, 2021](https://github.com/kingoflolz/mesh-transformer-jax/tree/master))
* **Position Embeddings**: Rotary Position Embeddings ([Su et al., 2021](https://arxiv.org/abs/2104.09864))
* **Bias**: LayerNorm bias terms only
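As a concrete illustration of the SwiGLU feed-forward block listed above, here is a minimal PyTorch sketch; the hidden size matches the table above, but the intermediate size is illustrative rather than the exact model dimension:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLUMLP(nn.Module):
    """SwiGLU feed-forward block: down(silu(gate(x)) * up(x))."""
    def __init__(self, hidden_size: int = 2560, intermediate_size: int = 6912):
        super().__init__()
        self.gate_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.up_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Elementwise product of a SiLU-gated projection and a linear projection
        return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))
```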
## Training
`StableLM-Base-Alpha-3B-v2` is pre-trained using a multi-stage context length extension schedule following similar work ([Nijkamp et al. 2023](https://blog.salesforceairesearch.com/xgen/)); first pre-training at a context length of 2048 for 1 trillion tokens, then fine-tuning at a context length of 4096 for another 100B tokens.
### Training Dataset
The first pre-training stage relies on 1 trillion tokens sourced from a mix of the public Falcon RefinedWeb extract ([Penedo et al., 2023](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)), RedPajama-Data ([Together Computer 2023](https://github.com/togethercomputer/RedPajama-Data)), The Pile ([Gao et al., 2020](https://arxiv.org/abs/2101.00027)), and internal datasets with web text sampled at a rate of 71%.
In the second stage, we include the StarCoder ([Li et al., 2023](https://arxiv.org/abs/2305.06161)) dataset and down sample web text to 55% while increasing sampling proportions of naturally long text examples in the aforementioned sources.
### Training Procedure
The model is pre-trained on the dataset mixes mentioned above in mixed-precision (FP16), optimized with AdamW, and trained using the NeoX tokenizer with a vocabulary size of 50,257. We outline the complete hyperparameters choices in the project's [GitHub repository - config](https://github.com/Stability-AI/StableLM/blob/main/configs/stablelm-base-alpha-3b-v2.yaml).
### Training Infrastructure
* **Hardware**: `StableLM-Base-Alpha-3B-v2` was trained on the Stability AI cluster - occupying 256 NVIDIA A100 40GB GPUs across AWS P4d instances. Training took approximately 8.45 days to complete across both stages.
* **Software**: We use a fork of gpt-neox ([EleutherAI, 2021](https://github.com/EleutherAI/gpt-neox)) and train under 2D parallelism (Data and Tensor Parallel) with ZeRO-1 ([Rajbhandari et al., 2019](https://arxiv.org/abs/1910.02054v3)) and rely on flash-attention as well as rotary embedding kernels from FlashAttention-2 ([Dao et al., 2023](https://tridao.me/publications/flash2/flash2.pdf))
## Use and Limitations
### Intended Use
These models are intended to be used by all individuals as foundational models for application-specific fine-tuning without strict limitations on commercial use.
### Limitations and bias
The pre-training dataset may have contained offensive or inappropriate content even after applying data cleansing filters which can be reflected in the model-generated text. We recommend that users exercise caution when using these models in production systems. Do not use the models for any applications that may cause harm or distress to individuals or groups.
### How to cite
```bibtex
@misc{StableLMAlphaV2Models,
url={[https://huggingface.co/stabilityai/stablelm-base-alpha-3b-v2](https://huggingface.co/stabilityai/stablelm-base-alpha-3b-v2)},
title={StableLM Alpha v2 Models},
author={Tow, Jonathan}
}
```
|
TheBloke/Zarablend-MX-L2-7B-GGUF | TheBloke | 2023-09-27T12:47:34Z | 449 | 2 | transformers | [
"transformers",
"gguf",
"llama",
"llama2",
"base_model:zarakiquemparte/zarablend-mx-l2-7b",
"license:other",
"text-generation-inference",
"region:us"
]
| null | 2023-09-05T10:53:30Z | ---
license: other
tags:
- llama2
model_name: Zarablend MX L2 7B
base_model: zarakiquemparte/zarablend-mx-l2-7b
inference: false
model_creator: Zaraki Quem Parte
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Zarablend MX L2 7B - GGUF
- Model creator: [Zaraki Quem Parte](https://huggingface.co/zarakiquemparte)
- Original model: [Zarablend MX L2 7B](https://huggingface.co/zarakiquemparte/zarablend-mx-l2-7b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Zaraki Quem Parte's Zarablend MX L2 7B](https://huggingface.co/zarakiquemparte/zarablend-mx-l2-7b).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Zarablend-MX-L2-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Zarablend-MX-L2-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Zarablend-MX-L2-7B-GGUF)
* [Zaraki Quem Parte's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/zarakiquemparte/zarablend-mx-l2-7b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `other`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Zaraki Quem Parte's Zarablend MX L2 7B](https://huggingface.co/zarakiquemparte/zarablend-mx-l2-7b).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [zarablend-mx-l2-7b.Q2_K.gguf](https://huggingface.co/TheBloke/Zarablend-MX-L2-7B-GGUF/blob/main/zarablend-mx-l2-7b.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [zarablend-mx-l2-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Zarablend-MX-L2-7B-GGUF/blob/main/zarablend-mx-l2-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [zarablend-mx-l2-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Zarablend-MX-L2-7B-GGUF/blob/main/zarablend-mx-l2-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [zarablend-mx-l2-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Zarablend-MX-L2-7B-GGUF/blob/main/zarablend-mx-l2-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [zarablend-mx-l2-7b.Q4_0.gguf](https://huggingface.co/TheBloke/Zarablend-MX-L2-7B-GGUF/blob/main/zarablend-mx-l2-7b.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [zarablend-mx-l2-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Zarablend-MX-L2-7B-GGUF/blob/main/zarablend-mx-l2-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [zarablend-mx-l2-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Zarablend-MX-L2-7B-GGUF/blob/main/zarablend-mx-l2-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [zarablend-mx-l2-7b.Q5_0.gguf](https://huggingface.co/TheBloke/Zarablend-MX-L2-7B-GGUF/blob/main/zarablend-mx-l2-7b.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [zarablend-mx-l2-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Zarablend-MX-L2-7B-GGUF/blob/main/zarablend-mx-l2-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [zarablend-mx-l2-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Zarablend-MX-L2-7B-GGUF/blob/main/zarablend-mx-l2-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [zarablend-mx-l2-7b.Q6_K.gguf](https://huggingface.co/TheBloke/Zarablend-MX-L2-7B-GGUF/blob/main/zarablend-mx-l2-7b.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [zarablend-mx-l2-7b.Q8_0.gguf](https://huggingface.co/TheBloke/Zarablend-MX-L2-7B-GGUF/blob/main/zarablend-mx-l2-7b.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Zarablend-MX-L2-7B-GGUF and below it, a specific filename to download, such as: zarablend-mx-l2-7b.q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub>=0.17.1
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Zarablend-MX-L2-7B-GGUF zarablend-mx-l2-7b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Zarablend-MX-L2-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Zarablend-MX-L2-7B-GGUF zarablend-mx-l2-7b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m zarablend-mx-l2-7b.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install ctransformers>=0.2.24
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]>=0.2.24
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Zarablend-MX-L2-7B-GGUF", model_file="zarablend-mx-l2-7b.q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
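For orientation, here is a minimal LangChain sketch in the spirit of those guides, assuming `langchain-community` and `llama-cpp-python` are installed and a GGUF file has been downloaded as described above (the path is an example):
```python
from langchain_community.llms import LlamaCpp

# Point LlamaCpp at a locally downloaded GGUF file (path is an example).
llm = LlamaCpp(model_path="zarablend-mx-l2-7b.q4_K_M.gguf", n_ctx=4096, temperature=0.7)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a short greeting.\n\n### Response:\n"
)
print(llm.invoke(prompt))
```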
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjรคreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, ์ค๊ต ๊น, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, ้ฟๆ, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Zaraki Quem Parte's Zarablend MX L2 7B
# Model Card: Zarablend MX L2 7b
This model uses [Nous Hermes Llama2 7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) (53%) as a base with [Airoboros L2 7B GPT4 m2.0](https://huggingface.co/jondurbin/airoboros-l2-7b-gpt4-m2.0) (47%) and the result of this merge was merged with [LimaRP LLama2 7B Lora](https://huggingface.co/lemonilia/limarp-llama2).
This merge of models(hermes and airoboros) was done with this [script](https://github.com/zarakiquemparte/zaraki-tools/blob/main/merge-cli.py)
This merge of Lora with Model was done with this [script](https://github.com/zarakiquemparte/zaraki-tools/blob/main/apply-lora.py)
Quantized Model by @TheBloke:
- [GGML](https://huggingface.co/TheBloke/Zarablend-MX-L2-7B-GGML)
- [GPTQ](https://huggingface.co/TheBloke/Zarablend-MX-L2-7B-GPTQ)
Merge illustration:

## Usage:
Since this is a merge between Nous Hermes, Airoboros and LimaRP, the following instruction formats should work:
Alpaca 2:
```
### Instruction:
<prompt>
### Response:
<leave a newline blank for model to respond>
```
LimaRP instruction format:
```
<<SYSTEM>>
<character card and system prompt>
<<USER>>
<prompt>
<<AIBOT>>
<leave a newline blank for model to respond>
```
## Bias, Risks, and Limitations
This model is not intended for supplying factual information or advice in any form
## Training Details
This model is merged and can be reproduced using the tools mentioned above. Please refer to all provided links for extra model-specific details.
<!-- original-model-card end -->
|
TheBloke/leo-hessianai-13B-chat-GGUF | TheBloke | 2023-09-28T12:29:29Z | 449 | 6 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation",
"en",
"de",
"dataset:LeoLM/OpenSchnabeltier",
"dataset:OpenAssistant/OASST-DE",
"dataset:FreedomIntelligence/alpaca-gpt4-deutsch",
"dataset:FreedomIntelligence/evol-instruct-deutsch",
"dataset:LeoLM/German_Poems",
"dataset:LeoLM/German_Songs",
"base_model:LeoLM/leo-hessianai-13b-chat",
"license:llama2",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-09-28T12:22:46Z | ---
base_model: LeoLM/leo-hessianai-13b-chat
datasets:
- LeoLM/OpenSchnabeltier
- OpenAssistant/OASST-DE
- FreedomIntelligence/alpaca-gpt4-deutsch
- FreedomIntelligence/evol-instruct-deutsch
- LeoLM/German_Poems
- LeoLM/German_Songs
inference: false
language:
- en
- de
library_name: transformers
license: llama2
model_creator: LAION LeoLM
model_name: Leo Hessianai 13B Chat
model_type: llama
pipeline_tag: text-generation
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Leo Hessianai 13B Chat - GGUF
- Model creator: [LAION LeoLM](https://huggingface.co/LeoLM)
- Original model: [Leo Hessianai 13B Chat](https://huggingface.co/LeoLM/leo-hessianai-13b-chat)
<!-- description start -->
## Description
This repo contains GGUF format model files for [LAION LeoLM's Leo Hessianai 13B Chat](https://huggingface.co/LeoLM/leo-hessianai-13b-chat).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-GGUF)
* [LAION LeoLM's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/LeoLM/leo-hessianai-13b-chat)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
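For example, the template can be filled in from Python before handing the prompt string to whichever runtime you use (llama.cpp, ctransformers, etc.). The system and user messages below are placeholders:
```python
# Build a ChatML prompt string for this model (messages are placeholders).
system_message = "You are a helpful assistant."
user_prompt = "Please introduce yourself briefly."

chatml_prompt = (
    f"<|im_start|>system\n{system_message}<|im_end|>\n"
    f"<|im_start|>user\n{user_prompt}<|im_end|>\n"
    "<|im_start|>assistant\n"
)
print(chatml_prompt)
```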
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [leo-hessianai-13b-chat.Q2_K.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-GGUF/blob/main/leo-hessianai-13b-chat.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [leo-hessianai-13b-chat.Q3_K_S.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-GGUF/blob/main/leo-hessianai-13b-chat.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [leo-hessianai-13b-chat.Q3_K_M.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-GGUF/blob/main/leo-hessianai-13b-chat.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [leo-hessianai-13b-chat.Q3_K_L.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-GGUF/blob/main/leo-hessianai-13b-chat.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [leo-hessianai-13b-chat.Q4_0.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-GGUF/blob/main/leo-hessianai-13b-chat.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [leo-hessianai-13b-chat.Q4_K_S.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-GGUF/blob/main/leo-hessianai-13b-chat.Q4_K_S.gguf) | Q4_K_S | 4 | 7.42 GB| 9.92 GB | small, greater quality loss |
| [leo-hessianai-13b-chat.Q4_K_M.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-GGUF/blob/main/leo-hessianai-13b-chat.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [leo-hessianai-13b-chat.Q5_0.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-GGUF/blob/main/leo-hessianai-13b-chat.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [leo-hessianai-13b-chat.Q5_K_S.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-GGUF/blob/main/leo-hessianai-13b-chat.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [leo-hessianai-13b-chat.Q5_K_M.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-GGUF/blob/main/leo-hessianai-13b-chat.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [leo-hessianai-13b-chat.Q6_K.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-GGUF/blob/main/leo-hessianai-13b-chat.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [leo-hessianai-13b-chat.Q8_0.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-GGUF/blob/main/leo-hessianai-13b-chat.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/leo-hessianai-13B-chat-GGUF and below it, a specific filename to download, such as: leo-hessianai-13b-chat.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/leo-hessianai-13B-chat-GGUF leo-hessianai-13b-chat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/leo-hessianai-13B-chat-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/leo-hessianai-13B-chat-GGUF leo-hessianai-13b-chat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m leo-hessianai-13b-chat.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/leo-hessianai-13B-chat-GGUF", model_file="leo-hessianai-13b-chat.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
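Alternatively, here is a minimal llama-cpp-python sketch. It assumes the GGUF file from the download step above sits in the current directory; adjust paths and parameters for your setup.

```python
from llama_cpp import Llama

# Point model_path at the downloaded GGUF file; set n_gpu_layers=0 if you have no GPU acceleration.
llm = Llama(model_path="./leo-hessianai-13b-chat.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=32)

# Build a single-turn ChatML prompt and stop generation at the end-of-turn token.
prompt = "<|im_start|>user\nSchreibe einen kurzen Reim über Hamburg.<|im_end|>\n<|im_start|>assistant\n"
output = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(output["choices"][0]["text"])
```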
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
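As a small illustration, the ctransformers route can be wired into LangChain roughly like this. Model and file names follow the example above; the exact config values are assumptions, so treat this as a sketch rather than a reference implementation.

```python
from langchain.llms import CTransformers

# Wrap the GGUF file via ctransformers; set gpu_layers to 0 to disable GPU offload.
llm = CTransformers(
    model="TheBloke/leo-hessianai-13B-chat-GGUF",
    model_file="leo-hessianai-13b-chat.Q4_K_M.gguf",
    model_type="llama",
    config={"max_new_tokens": 256, "temperature": 0.7, "gpu_layers": 50},
)

# Single-turn ChatML prompt, as described in the prompt template section below.
print(llm("<|im_start|>user\nWas ist die Hauptstadt von Hessen?<|im_end|>\n<|im_start|>assistant\n"))
```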
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjรคreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, ์ค๊ต ๊น, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, ้ฟๆ, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: LAION LeoLM's Leo Hessianai 13B Chat
# LAION LeoLM: **L**inguistically **E**nhanced **O**pen **L**anguage **M**odel
Meet LeoLM, the first open and commercially available German Foundation Language Model built on Llama-2.
Our models extend Llama-2's capabilities into German through continued pretraining on a large corpus of German-language and mostly locality specific text.
Thanks to a compute grant at HessianAI's new supercomputer **42**, we release two foundation models trained with 8k context length,
[`LeoLM/leo-hessianai-7b`](https://huggingface.co/LeoLM/leo-hessianai-7b) and [`LeoLM/leo-hessianai-13b`](https://huggingface.co/LeoLM/leo-hessianai-13b) under the [Llama-2 community license](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt) (70b also coming soon! ๐).
With this release, we hope to bring a new wave of opportunities to German open-source and commercial LLM research and accelerate adoption.
Read our [blog post]() or our paper (preprint coming soon) for more details!
*A project by Bjรถrn Plรผster and Christoph Schuhmann in collaboration with LAION and HessianAI.*
## LeoLM Chat
`LeoLM/leo-hessianai-13b-chat` is a German chat model built on our foundation model `LeoLM/leo-hessianai-13b` and finetuned on a selection of German instruction datasets.
The model performs exceptionally well on writing, explanation and discussion tasks but struggles somewhat with math and advanced reasoning. See our MT-Bench-DE scores:
```
{
"first_turn": 6.525,
"second_turn": 5.15,
"categories": {
"writing": 6.925,
"roleplay": 6.7,
"reasoning": 4.55,
"math": 3.25,
"coding": 3.45,
"extraction": 5.4,
"stem": 7.55,
"humanities": 8.875
},
"average": 5.8375
}
```
## Model Details
- **Finetuned from:** [LeoLM/leo-hessianai-13b](https://huggingface.co/LeoLM/leo-hessianai-13b)
- **Model type:** Causal decoder-only transformer language model
- **Language:** English and German
- **Demo:** [Web Demo]()
- **License:** [LLAMA 2 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt)
- **Contact:** [LAION Discord](https://discord.com/invite/eq3cAMZtCC) or [Bjรถrn Plรผster](mailto:[email protected])
## Use in ๐คTransformers
First install direct dependencies:
```
pip install transformers torch sentencepiece
```
If you want faster inference using flash-attention2, you need to install these dependencies:
```bash
pip install packaging ninja
pip install flash-attn==v2.1.1 --no-build-isolation
pip install git+https://github.com/HazyResearch/[email protected]#subdirectory=csrc/rotary
```
Then load the model in transformers:
```python
from transformers import pipeline
import torch
system_prompt = """<|im_start|>system
Dies ist eine Unterhaltung zwischen einem intelligenten, hilfsbereitem KI-Assistenten und einem Nutzer.
Der Assistent gibt ausfรผhrliche, hilfreiche und ehrliche Antworten.<|im_end|>
"""
prompt_format = "<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"
prompt = "Erklรคre mir wie die Fahrradwegesituation in Hamburg ist."
generator = pipeline(model="LeoLM/leo-hessianai-13b-chat", device="cuda", torch_dtype=torch.float16, trust_remote_code=True) # True for flash-attn2 else False
print(generator(prompt_format.format(prompt=prompt), do_sample=True, top_p=0.95, max_length=8192))
```
"*Hamburg verfรผgt รผber ein umfangreiches Netzwerk an Radwegen, das ein effizientes und angenehmes Radfahren in der ganzen Stadt ermรถglicht. Die Radwege variieren in Bezug auf ihre Qualitรคt, wobei einige erstklassig mit eigens fรผr Radfahrer reservierten Spuren sind, wรคhrend andere einfache Fahrradsymbole auf dem Boden haben, die anzeigen, dass Radfahrer abwechselnd mit dem Autoverkehr auf der Straรe fahren sollten. Einige Nebenstraรen haben auch spezielle Fahrradspuren, wobei einige mit Bordsteinabsenkungen zur Seite der Autospuren markiert sind. Zusรคtzlich haben viele Hauptstraรen, insbesondere in Nebenstraรen, fahrradfreundliche Abstรคnde zwischen den geparkten Autos und dem Gehweg, was ein bequemes Fahren auf der Straรe ermรถglicht. Der Bau von Radschnellwegen, die schnelles und effizientes Radfahren in und aus der Stadt ermรถglichen, ist im Gange und wird in den kommenden Jahren fortgesetzt. Insgesamt sind die Radwege in Hamburg weitlรคufig und gut ausgeschildert, was es zu einem angenehmen Ort macht, um mit dem Fahrrad zu fahren.*"
## Prompting / Prompt Template
Prompt dialogue template (ChatML format):
```
"""
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
"""
```
The model input can contain multiple conversation turns between user and assistant, e.g.
```
<|im_start|>user
{prompt 1}<|im_end|>
<|im_start|>assistant
{reply 1}<|im_end|>
<|im_start|>user
{prompt 2}<|im_end|>
<|im_start|>assistant
(...)
```
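As an illustration, a small helper like the following (a sketch, not part of the model release; the function name `build_chatml_prompt` is made up for this example) can assemble such a multi-turn prompt from a list of messages:

```python
def build_chatml_prompt(system_message, turns):
    """Assemble a ChatML prompt from a system message and alternating user/assistant turns.

    `turns` is a list of dicts like {"role": "user" or "assistant", "content": "..."};
    the prompt is left open with an assistant header so the model produces the next reply.
    """
    parts = [f"<|im_start|>system\n{system_message}<|im_end|>"]
    for turn in turns:
        parts.append(f"<|im_start|>{turn['role']}\n{turn['content']}<|im_end|>")
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml_prompt(
    "Dies ist eine Unterhaltung zwischen einem KI-Assistenten und einem Nutzer.",
    [
        {"role": "user", "content": "Hallo!"},
        {"role": "assistant", "content": "Hallo! Wie kann ich helfen?"},
        {"role": "user", "content": "Erkläre mir kurz, was ein GGUF-Modell ist."},
    ],
)
```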
## Ethical Considerations and Limitations
LeoLM has been tested in English and German, but this testing has not covered, nor could it cover, all scenarios.
For these reasons, as with all LLMs, the potential outputs of `LeoLM/leo-hessianai-13b-chat` cannot be predicted
in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses
to user prompts. Therefore, before deploying any applications of `LeoLM/leo-hessianai-13b-chat`, developers should
perform safety testing and tuning tailored to their specific applications of the model.
Please see Meta's [Responsible Use Guide](https://ai.meta.com/llama/responsible-use-guide/).
## Finetuning Details
| Hyperparameter | Value |
|---|---|
| Num epochs | 3 |
| Examples per epoch | 131214 |
| Global batch size | 256 |
| Learning rate | 3e-5 |
| Warmup steps | 100 |
| LR scheduler | Cosine |
| Adam betas | (0.9, 0.95) |
## Dataset Details
```
## Stats for 'Subset of OpenAssistant/OASST-DE' (3534 samples (100.0%))
-----------------
Accepted: 3534/3534 (100.0%)
Accepted tokens: 2259302
Skipped: 0 (0.0%)
Min tokens per sample: 29
Max tokens per sample: 2484
Avg tokens per sample: 639.3044708545557
-----------------
## Stats for 'Subset of FreedomIntelligence/evol-instruct-deutsch' (57841 samples (100.0%))
-----------------
Accepted: 57841/57841 (100.0%)
Accepted tokens: 42958192
Skipped: 0 (0.0%)
Min tokens per sample: 33
Max tokens per sample: 5507
Avg tokens per sample: 742.6944900675991
-----------------
## Stats for 'Subset of FreedomIntelligence/alpaca-gpt4-deutsch' (48969 samples (100.0%))
-----------------
Accepted: 48969/48969 (100.0%)
Accepted tokens: 13372005
Skipped: 0 (0.0%)
Min tokens per sample: 19
Max tokens per sample: 1359
Avg tokens per sample: 273.07082031489307
-----------------
## Stats for 'Subset of LeoLM/OpenSchnabeltier' (21314 samples (100.0%))
-----------------
Accepted: 21314/21314 (100.0%)
Accepted tokens: 8134690
Skipped: 0 (0.0%)
Min tokens per sample: 25
Max tokens per sample: 1202
Avg tokens per sample: 381.65947264708643
-----------------
## Stats for 'Subset of LeoLM/German_Poems' (490 samples (100.0%))
-----------------
Accepted: 490/490 (100.0%)
Accepted tokens: 618642
Skipped: 0 (0.0%)
Min tokens per sample: 747
Max tokens per sample: 1678
Avg tokens per sample: 1262.534693877551
-----------------
## Stats for 'Subset of LeoLM/German_Songs' (392 samples (100.0%))
-----------------
Accepted: 392/392 (100.0%)
Accepted tokens: 187897
Skipped: 0 (0.0%)
Min tokens per sample: 231
Max tokens per sample: 826
Avg tokens per sample: 479.3290816326531
-----------------
## Stats for 'total' (132540 samples (100.0%))
-----------------
Accepted: 132540/132540 (100.0%)
Accepted tokens: 67530728
Skipped: 0 (0.0%)
Min tokens per sample: 19
Max tokens per sample: 5507
Avg tokens per sample: 509.51205673758864
-----------------
```
<!-- original-model-card end -->
|
TheBloke/airoboros-l2-13B-3.1-GGUF | TheBloke | 2023-10-15T10:51:58Z | 449 | 4 | transformers | [
"transformers",
"gguf",
"llama",
"dataset:jondurbin/airoboros-3.1",
"base_model:jondurbin/airoboros-l2-13b-3.1",
"license:llama2",
"text-generation-inference",
"region:us"
]
| null | 2023-10-15T10:36:34Z | ---
base_model: jondurbin/airoboros-l2-13b-3.1
datasets:
- jondurbin/airoboros-3.1
inference: false
license: llama2
model_creator: Jon Durbin
model_name: Airoboros L2 13B 3.1
model_type: llama
prompt_template: '[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as
possible, while being safe. Your answers should not include any harmful, unethical,
racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses
are socially unbiased and positive in nature. If a question does not make any sense,
or is not factually coherent, explain why instead of answering something not correct.
If you don''t know the answer to a question, please don''t share false information.
<</SYS>>
{prompt}[/INST]
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Airoboros L2 13B 3.1 - GGUF
- Model creator: [Jon Durbin](https://huggingface.co/jondurbin)
- Original model: [Airoboros L2 13B 3.1](https://huggingface.co/jondurbin/airoboros-l2-13b-3.1)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Jon Durbin's Airoboros L2 13B 3.1](https://huggingface.co/jondurbin/airoboros-l2-13b-3.1).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/airoboros-l2-13B-3.1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/airoboros-l2-13B-3.1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/airoboros-l2-13B-3.1-GGUF)
* [Jon Durbin's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/airoboros-l2-13b-3.1)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Llama-2-Chat
```
[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>
{prompt}[/INST]
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [airoboros-l2-13b-3.1.Q2_K.gguf](https://huggingface.co/TheBloke/airoboros-l2-13B-3.1-GGUF/blob/main/airoboros-l2-13b-3.1.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [airoboros-l2-13b-3.1.Q3_K_S.gguf](https://huggingface.co/TheBloke/airoboros-l2-13B-3.1-GGUF/blob/main/airoboros-l2-13b-3.1.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [airoboros-l2-13b-3.1.Q3_K_M.gguf](https://huggingface.co/TheBloke/airoboros-l2-13B-3.1-GGUF/blob/main/airoboros-l2-13b-3.1.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [airoboros-l2-13b-3.1.Q3_K_L.gguf](https://huggingface.co/TheBloke/airoboros-l2-13B-3.1-GGUF/blob/main/airoboros-l2-13b-3.1.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [airoboros-l2-13b-3.1.Q4_0.gguf](https://huggingface.co/TheBloke/airoboros-l2-13B-3.1-GGUF/blob/main/airoboros-l2-13b-3.1.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [airoboros-l2-13b-3.1.Q4_K_S.gguf](https://huggingface.co/TheBloke/airoboros-l2-13B-3.1-GGUF/blob/main/airoboros-l2-13b-3.1.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [airoboros-l2-13b-3.1.Q4_K_M.gguf](https://huggingface.co/TheBloke/airoboros-l2-13B-3.1-GGUF/blob/main/airoboros-l2-13b-3.1.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [airoboros-l2-13b-3.1.Q5_0.gguf](https://huggingface.co/TheBloke/airoboros-l2-13B-3.1-GGUF/blob/main/airoboros-l2-13b-3.1.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [airoboros-l2-13b-3.1.Q5_K_S.gguf](https://huggingface.co/TheBloke/airoboros-l2-13B-3.1-GGUF/blob/main/airoboros-l2-13b-3.1.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [airoboros-l2-13b-3.1.Q5_K_M.gguf](https://huggingface.co/TheBloke/airoboros-l2-13B-3.1-GGUF/blob/main/airoboros-l2-13b-3.1.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [airoboros-l2-13b-3.1.Q6_K.gguf](https://huggingface.co/TheBloke/airoboros-l2-13B-3.1-GGUF/blob/main/airoboros-l2-13b-3.1.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [airoboros-l2-13b-3.1.Q8_0.gguf](https://huggingface.co/TheBloke/airoboros-l2-13B-3.1-GGUF/blob/main/airoboros-l2-13b-3.1.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/airoboros-l2-13B-3.1-GGUF and below it, a specific filename to download, such as: airoboros-l2-13b-3.1.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/airoboros-l2-13B-3.1-GGUF airoboros-l2-13b-3.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/airoboros-l2-13B-3.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/airoboros-l2-13B-3.1-GGUF airoboros-l2-13b-3.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m airoboros-l2-13b-3.1.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\n<</SYS>>\n{prompt}[/INST]"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/airoboros-l2-13B-3.1-GGUF", model_file="airoboros-l2-13b-3.1.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, ์ค๊ต ๊น, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjรคreholt, ้ฟๆ, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Jon Durbin's Airoboros L2 13B 3.1
### Overview
Another experimental model, using mostly synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros)
#### IMPORTANT NOTE - llama-2 chat format!!!
The airoboros 3.1 models use llama-2 chat format, rather than vicuna style user/assistant!
This is a breaking change, although most inference systems support llama-2 chat templating.
See prompt format below for all of the details.
#### Highlights
A model built on [airoboros-3.1 dataset](https://hf.co/datasets/jondurbin/airoboros-3.1), which is a continuation of [airoboros-3.0 dataset](https://hf.co/datasets/jondurbin/airoboros-3.0) with the following extras:
* More MathJSON, now ~17k items - math questions, prefixed with __"Create a MathJSON solution to the following:"__, which then outputs a JSON between __`<mathjson>`__ and __`</mathjson>`__ tags, which can be parsed and passed to a deterministic library to perform calculations.
* Log information extraction (e.g. getting the list of paths/IPs/etc. from apache logs)
* Anonymization, e.g. removing names, IP addresses, and/or dates from text.
* Chat introspection - multi-turn chats that have random questions injected asking about the character's descriptions, the setting of the chat, or things previously said, to make sure the model uses the system prompt and or chat history.
* Multi-step instructions with acknowledgement (see below)
* De-censorship data (not published)
This is a fairly general purpose model, but focuses heavily on instruction following, rather than casual chat/roleplay.
Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!
### Prompt format
The prompt format is llama-2 chat.
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
```
For multi-turn, the prompt format is as follows:
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt 0} [/INST] {response 0} </s><s>[INST] {prompt 1} [/INST] {response 1} </s><s>...[INST] {prompt N} [/INST]
```
The prompt template is included in the tokenizer config, and can use the huggingface tokenizer `apply_chat_template` method, e.g.:
```
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained('jondurbin/airoboros-l2-13b-3.1')
chat = [
{"role": "system", "content": "You are Bob, a friendly AI assistant."},
{"role": "user", "content": "Hello, how are you?"},
{"role": "assistant", "content": "I'm doing great. How can I help you today?"},
{"role": "user", "content": "I'd like to show off how chat templating works!"},
]
print(tokenizer.apply_chat_template(chat, tokenize=False))
```
### Helpful usage tips
#### MathJSON
Ask mathematical questions, prefixed with "Create a MathJSON solution to the following:", and you should get output with a formula that can be executed by https://cortexjs.io/compute-engine/
I also made a (really quite hacky, incomplete, semi-functional) python version that works for most expressions, see [mathjson.py](https://github.com/jondurbin/airoboros/blob/mathjson/airoboros/mathjson.py)
__Use a very low temperature!__
Example:
```
[INST] <<SYS>>
You are an assistant with vast knowledge in all things math.
<</SYS>>
Create a MathJSON solution to the following: Calculate the area of a circle with radius of 17.2456cm. Include your reasoning. [/INST]
```
Output:
```
The area (A) of a circle is given by the formula A = ฯrยฒ, where r is the radius of the circle. Since the radius is given in centimeters, we need to convert it to meters first. We know that 1 meter equals 100 centimeters. Therefore, we can represent the radius as: r = 17.2456 cm * 100/1.
Once we have the radius in meters, we can calculate the area of the circle using the formula A = ฯrยฒ.
Solution as MathJSON:
<mathjson>
[
"Multiply",
"Pi",
[
"Power",
17.2456,
2
]
]
</mathjson>
```
You can then validate the JSON between `<mathjson>` and `</mathjson>`, and pass the parsed JSON to compute-engine JS or the `evaluate` function in mathjson.py to calculate the response.
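A minimal sketch of that post-processing step might look like the following. The tiny evaluator only handles the handful of operators used in the example above (numbers, `Pi`, `Add`, `Multiply`, `Power`) and is purely illustrative; the helper names are made up for this example.

```python
import json
import math
import re

def extract_mathjson(model_output):
    """Pull the JSON expression out of the <mathjson>...</mathjson> tags."""
    match = re.search(r"<mathjson>(.*?)</mathjson>", model_output, re.S)
    return json.loads(match.group(1)) if match else None

def evaluate(expr):
    """Evaluate a small subset of MathJSON: numbers, Pi, Add, Multiply, Power."""
    if isinstance(expr, (int, float)):
        return expr
    if expr == "Pi":
        return math.pi
    op, *args = expr
    values = [evaluate(a) for a in args]
    if op == "Add":
        return sum(values)
    if op == "Multiply":
        return math.prod(values)
    if op == "Power":
        return values[0] ** values[1]
    raise ValueError(f"Unsupported operator: {op}")

expr = extract_mathjson('<mathjson>["Multiply", "Pi", ["Power", 17.2456, 2]]</mathjson>')
print(evaluate(expr))  # ≈ 934.3
```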
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
__Use a very low temperature!__
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
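A small helper that assembles this closed-context format from plain Python data could look like the sketch below. It is purely illustrative and not part of the airoboros tooling; the function name is made up for this example.

```python
def build_closed_context_prompt(blocks, instruction):
    """Build a closed-context prompt from (metadata, text) blocks and an instruction.

    `blocks` is a list of (metadata_dict, text) tuples; each becomes one
    BEGININPUT/BEGINCONTEXT/.../ENDINPUT block in the prompt.
    """
    parts = []
    for metadata, text in blocks:
        parts.append("BEGININPUT")
        parts.append("BEGINCONTEXT")
        for key, value in metadata.items():
            parts.append(f"{key}: {value}")
        parts.append("ENDCONTEXT")
        parts.append(text)
        parts.append("ENDINPUT")
    parts.append("BEGININSTRUCTION")
    parts.append(instruction)
    parts.append("ENDINSTRUCTION")
    return "\n".join(parts)

prompt = build_closed_context_prompt(
    [({"date": "2021-01-01", "url": "https://web.site/123"},
      "In a shocking turn of events, blueberries are now green, but will be sticking with the same name.")],
    "What color are blueberries? Source?",
)
```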
#### Summarization
500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:
```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```
#### Getting longer responses
You can use a few techniques to get longer responses.
Detailed prompts, with explicit instruction for word count:
```
Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality.
The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization.
One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary.
Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements.
Your response should be approximately 2300 words.
```
Or, a simpler example:
```
Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux.
```
There are a few examples of next chapter completion as well, e.g.:
```
Write the next chapter of a historical fiction novel set in Paris during the 20th century.
Here's a summary of the previous chapter:
In the vibrant city of Paris, amid the tumultuous changes of the 20th century, our protagonist Margot, an aspiring fashion designer, has just secured an apprenticeship at a prestigious couture house. She meets Lucien, a charming journalist who covers the fashion industry. Together they navigate the ever-changing world of fashion and society, uncovering secrets that reveal the intricate links between style, politics, and culture. As the chapter concludes, they decide to delve deeper into the hidden corners of the fashion world to unravel its mysteries.
Requirements for the next chapter:
1. Character Development of Margot and Lucien:
- Margot's Evolution: Unfold more about Margot's past, her dreams of revolutionizing fashion, and her struggle to establish herself in a male-dominated industry. Illustrate her growing expertise, innovative ideas, and increasing dependence on Lucien.
- Lucien's Complexity: Introduce uncertainties surrounding Lucien's background and real motives. Increase suspense by suggesting undisclosed information he possesses, while also highlighting his wit and perceptiveness.
2. Exploration of Paris and the Couture House:
- Paris: Elaborate their journey through the bustling streets of Paris, including encounters with iconic figures, social unrest, and relics from different eras of French history.
- The Couture House: Expand on the grandeur of the couture house they work in, filled with artistic masterpieces, intense competition, and cryptic notes hinting at a scandalous past.
3. Emergence of the Subplot: The Lost Collection:
- Discovery: Have Margot and Lucien stumble upon a secret vault containing a lost collection designed before World War II, raising new questions about the previous owner and the influence of war on fashion.
- Revelation: Capture their shock as they realize the designs were plagiarized, the potential repercussions, and the opportunities it presents for Margot's career.
- Twist: End with a twist that suggests there are other stolen collections across Paris, setting up their new mission.
Your response should be approximately 650 words.
```
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
description: This tool performs various operations on a text file.
params:
action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
filters:
keyword: The word or phrase we want to search for.
```
Response:
```json
{
"function": "file_analytics",
"params": {
"action": "count_occurrences",
"filters": {
"keyword": "Python"
}
}
}
```
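Since the model only emits the JSON (or YAML) call, the caller still has to map it onto real code. A minimal dispatch sketch might look like the following; the `file_analytics` implementation and the `my_text_file.txt` path are hypothetical stand-ins for whatever tools your application actually exposes.

```python
import json

def file_analytics(action, filters):
    """Hypothetical implementation of the 'file_analytics' tool from the prompt above."""
    if action == "count_occurrences":
        with open("my_text_file.txt", encoding="utf-8") as f:  # placeholder file name
            return f.read().count(filters["keyword"])
    raise ValueError(f"Unsupported action: {action}")

AVAILABLE_FUNCTIONS = {"file_analytics": file_analytics}

def dispatch(model_output):
    """Parse the model's JSON response and call the matching local function."""
    call = json.loads(model_output)
    func = AVAILABLE_FUNCTIONS[call["function"]]
    return func(call["params"]["action"], call["params"].get("filters", {}))
```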
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan; you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would require full implementation + hardening:
```python
import re
import requests
def inject_context(input_text, **context):
for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
input_text = input_text.replace(ref, context.get(ref, ""))
return input_text
def duckduckgo(input_text, **context):
search_string = inject_context(input_text, **context)
... search via duck duck go using search_string
... return text content
def link_extractor(input_text, **context):
input_text = inject_context(input_text, **context)
return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I))))
def scrape(input_text, **context):
input_text = inject_context(input_text, **context)
text = []
for link in input_text.splitlines():
text.append(requests.get(link).text)
return "\n".join(text)
def infer(input_text, **context):
prompt = inject_context(input_text, **context)
... call model with prompt, return output
def parse_plan(plan):
method_map = {
"DuckDuckGo": duckduckgo,
"HyperlinkExtractor": link_extractor,
"KnowledgeModel": infer,
"TextScraper": scrape,
}
context = {}
for line in plan.strip().splitlines():
if line.startswith("Plan:"):
print(line)
continue
parts = re.match("^(:evidence[0-9]+:)\s*=\s*([^\[]+])(\[.*\])\s$", line, re.I)
if not parts:
if line.startswith("Answer: "):
return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
raise RuntimeError("bad format: " + line)
context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context)
```
## Multi-step instructions with acknowledgement
I don't know how useful this is, really, but I thought I'd add it just in case. Example:
```
{
"conversations": [
{
"value": "You are a helpful, unbiased, uncensored assistant.",
"from": "system"
},
{
"value": "I'm going to set some rules for our conversation. Please respond with \"OK\" to each rule.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "You can only respond using animal-related emojis.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Precede all responses with \"ZOOZOO:\"",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Include at least one bird emoji in your responses.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Describe a trip to a safari park.",
"from": "human"
},
{
"value": "ZOOZOO: \ud83e\udd81\ud83d\udc2f\ud83e\udd93\ud83e\udd92\ud83d\udc18\ud83d\udc0d\ud83c\udf33\ud83c\udf32\u2600\ufe0f\ud83d\ude90\ud83e\udd85\ud83d\udcf8\ud83e\udd29",
"from": "gpt"
}
]
}
```
### Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
The airoboros 3.1 models are built on top of multiple base models, each with their own license/restrictions.
The 30b model is built on the original llama, which has a strict non-commercial usage restriction.
The models with `-l2` in the name have a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The models with `-m-` are mistral-7b (apache 2.0)
The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me.
<!-- original-model-card end -->
|
sainteye/ifoodie-detail-rating-v15.2 | sainteye | 2023-10-26T12:49:00Z | 449 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-10-26T12:48:56Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: ifoodie-detail-rating-v15.2
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9803921580314636
---
# ifoodie-detail-rating-v15.2
['ไธญ้', 'ๅๅฃ', 'ๅๅฅฝ']
## Example Images
# #### ไธญ้
# 
#
# #### ๅๅฃ
# 
#
# #### ๅๅฅฝ
# 
# |
TheBloke/Yarn-Mistral-7B-128k-AWQ | TheBloke | 2023-11-09T18:16:14Z | 449 | 65 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"custom_code",
"en",
"dataset:emozilla/yarn-train-tokenized-16k-mistral",
"arxiv:2309.00071",
"base_model:NousResearch/Yarn-Mistral-7b-128k",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
]
| text-generation | 2023-11-02T17:54:52Z | ---
base_model: NousResearch/Yarn-Mistral-7b-128k
datasets:
- emozilla/yarn-train-tokenized-16k-mistral
inference: false
language:
- en
library_name: transformers
license: apache-2.0
metrics:
- perplexity
model_creator: NousResearch
model_name: Yarn Mistral 7B 128K
model_type: mistral
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Yarn Mistral 7B 128K - AWQ
- Model creator: [NousResearch](https://huggingface.co/NousResearch)
- Original model: [Yarn Mistral 7B 128K](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k)
<!-- description start -->
## Description
This repo contains AWQ model files for [NousResearch's Yarn Mistral 7B 128K](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - Llama and Mistral models only
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
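For example, loading this repo from Python with AutoAWQ might look roughly like the following. This is a sketch under common defaults; parameters such as `fuse_layers`, the sampling settings and the prompt text are assumptions you should adapt to your setup.

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_name = "TheBloke/Yarn-Mistral-7B-128k-AWQ"
# trust_remote_code is needed because the Yarn models ship custom modelling code.
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoAWQForCausalLM.from_quantized(model_name, fuse_layers=True,
                                          trust_remote_code=True, safetensors=True)

# Requires a CUDA-capable GPU.
tokens = tokenizer("The attention mechanism in transformers", return_tensors="pt").input_ids.cuda()
output = model.generate(tokens, do_sample=True, temperature=0.7, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```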
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Yarn-Mistral-7B-128k-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Yarn-Mistral-7B-128k-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Yarn-Mistral-7B-128k-GGUF)
* [NousResearch's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: None
```
{prompt}
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Yarn-Mistral-7B-128k-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.15 GB |
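For reference, the Bits and GS values above map onto an AutoAWQ quantisation config. A rough sketch of how such files are typically produced (illustrative only, not the exact command used for this release):

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

source_model = "NousResearch/Yarn-Mistral-7b-128k"
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(source_model, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(source_model, trust_remote_code=True)

# Calibrate and quantise; calib_data can be pointed at a dataset such as wikitext
model.quantize(tokenizer, quant_config=quant_config)

model.save_quantized("Yarn-Mistral-7B-128k-AWQ")
tokenizer.save_pretrained("Yarn-Mistral-7B-128k-AWQ")
```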
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Yarn-Mistral-7B-128k-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Yarn-Mistral-7B-128k-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click Load, and the model will load and is now ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
For example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Yarn-Mistral-7B-128k-AWQ --quantization awq
```
- When using vLLM from Python code, again set `quantization=awq`.
For example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Tell me about AI",
"Write a story about llamas",
"What is 291 - 150?",
"How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
prompt_template='''{prompt}
'''
prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/Yarn-Mistral-7B-128k-AWQ", quantization="awq", dtype="auto")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm start -->
<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Yarn-Mistral-7B-128k-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: ", response)
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using AutoAWQ
### Install the AutoAWQ package
Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.1 or later.
```shell
pip3 install autoawq
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### AutoAWQ example code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
model_name_or_path = "TheBloke/Yarn-Mistral-7B-128k-AWQ"
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)
# Load model
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True,
trust_remote_code=True, safetensors=True)
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
print("*** Running model.generate:")
token_input = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
# Generate output
generation_output = model.generate(
token_input,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
max_new_tokens=512
)
# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("LLM output: ", text_output)
"""
# Inference should be possible with transformers pipeline as well in future
# But currently this is not yet supported by AutoAWQ (correct as of September 25th 2023)
from transformers import pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
"""
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, ้ฟๆ, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjรคreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: NousResearch's Yarn Mistral 7B 128K
# Model Card: Nous-Yarn-Mistral-7b-128k
[Preprint (arXiv)](https://arxiv.org/abs/2309.00071)
[GitHub](https://github.com/jquesnelle/yarn)

## Model Description
Nous-Yarn-Mistral-7b-128k is a state-of-the-art language model for long context, further pretrained on long context data for 1500 steps using the YaRN extension method.
It is an extension of [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) and supports a 128k token context window.
To use, pass `trust_remote_code=True` when loading the model, for example
```python
model = AutoModelForCausalLM.from_pretrained("NousResearch/Yarn-Mistral-7b-128k",
use_flash_attention_2=True,
torch_dtype=torch.bfloat16,
device_map="auto",
trust_remote_code=True)
```
In addition you will need to use the latest version of `transformers` (until 4.35 comes out)
```sh
pip install git+https://github.com/huggingface/transformers
```
## Benchmarks
Long context benchmarks:
| Model | Context Window | 8k PPL | 16k PPL | 32k PPL | 64k PPL | 128k PPL |
|-------|---------------:|------:|----------:|-----:|-----:|------------:|
| [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 8k | 2.96 | - | - | - | - |
| [Yarn-Mistral-7b-64k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-64k) | 64k | 3.04 | 2.65 | 2.44 | 2.20 | - |
| [Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k) | 128k | 3.08 | 2.68 | 2.47 | 2.24 | 2.19 |
Short context benchmarks showing that quality degradation is minimal:
| Model | Context Window | ARC-c | Hellaswag | MMLU | Truthful QA |
|-------|---------------:|------:|----------:|-----:|------------:|
| [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 8k | 59.98 | 83.31 | 64.16 | 42.15 |
| [Yarn-Mistral-7b-64k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-64k) | 64k | 59.38 | 81.21 | 61.32 | 42.50 |
| [Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k) | 128k | 58.87 | 80.58 | 60.64 | 42.46 |
## Collaborators
- [bloc97](https://github.com/bloc97): Methods, paper and evals
- [@theemozilla](https://twitter.com/theemozilla): Methods, paper, model training, and evals
- [@EnricoShippole](https://twitter.com/EnricoShippole): Model training
- [honglu2875](https://github.com/honglu2875): Paper and evals
The authors would like to thank LAION AI for their support of compute for this model.
It was trained on the [JUWELS](https://www.fz-juelich.de/en/ias/jsc/systems/supercomputers/juwels) supercomputer.
|
andrijdavid/tinyfrank-1.4B-GGUF | andrijdavid | 2024-01-02T18:05:26Z | 449 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"GGUF",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-01-02T15:37:13Z | ---
license: apache-2.0
tags:
- GGUF
quantized_by: andrijdavid
---
# tinyfrank-1.4B-GGUF
- Original model: [tinyfrank-1.4B](https://huggingface.co/SkunkworksAI/tinyfrank-1.4B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [tinyfrank-1.4B](https://huggingface.co/SkunkworksAI/tinyfrank-1.4B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
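As a worked example of where these figures come from, take Q4_K: a super-block holds 8 × 32 = 256 weights at 4 bits each (1024 bits), plus 8 block scales and 8 block mins at 6 bits each (96 bits) and a 16-bit super-block scale and min (32 bits), giving 1152 bits in total, i.e. 1152 / 256 = 4.5 bpw.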
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: andrijdavid/tinyfrank-1.4B-GGUF and below it, a specific filename to download, such as: tinyfrank-1.4B-f16.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download andrijdavid/tinyfrank-1.4B-GGUF tinyfrank-1.4B-f16.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download andrijdavid/tinyfrank-1.4B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download andrijdavid/tinyfrank-1.4B-GGUF tinyfrank-1.4B-f16.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m tinyfrank-1.4B-f16.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 โ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./tinyfrank-1.4B-f16.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<PROMPT>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./tinyfrank-1.4B-f16.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
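A minimal llama-cpp-python + LangChain sketch (assumes `langchain-community` and `llama-cpp-python` are installed; parameters are illustrative):

```python
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./tinyfrank-1.4B-f16.gguf",  # download the GGUF file first
    n_ctx=4096,        # context length to allocate
    n_gpu_layers=35,   # set to 0 for CPU-only
)
print(llm.invoke("Write a short poem about llamas."))
```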
<!-- README_GGUF.md-how-to-run end -->
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: tinyfrank-1.4B
**Undi95 type frankenstein of TinyLLama 1.1b**
https://github.com/jzhang38/TinyLlama
https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0
**GGUF custom quants included**
The secret sauce:
```yaml
slices:
- sources:
- model: "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
layer_range: [0, 14]
- sources:
- model: "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
layer_range: [8, 22]
merge_method: passthrough
dtype: bfloat16
```
How to run as gguf:
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make -j
wget https://huggingface.co/SkunkworksAI/tinyfrank-1.4B/resolve/main/tinyfrank-q6L.gguf
./server -m tinyfrank-q6L.gguf --host "my.internal.ip.or.my.cloud.host.name.goes.here.com" -c 512
```
<!-- original-model-card end --> |
DrNicefellow/ChatAllInOne_Mixtral-8x7B-v1 | DrNicefellow | 2024-03-28T23:11:51Z | 449 | 0 | transformers | [
"transformers",
"pytorch",
"mixtral",
"text-generation",
"dataset:DrNicefellow/CHAT-ALL-IN-ONE-v1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-01-27T02:06:20Z | ---
license: apache-2.0
datasets:
- DrNicefellow/CHAT-ALL-IN-ONE-v1
---
# ChatAllInOne_Mixtral-8x7B-v1
## Description
ChatAllInOne_Mixtral-8x7B-v1 is a chat language model fine-tuned on the CHAT-ALL-IN-ONE-v1 dataset using the QLoRA technique. Originally based on the [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) model, this version is specifically optimized for diverse and comprehensive chat applications.
## Model Details
- **Base Model**: [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
- **Fine-tuning Technique**: QLoRA
- **Dataset**: [CHAT-ALL-IN-ONE-v1](https://huggingface.co/datasets/DrNicefellow/CHAT-ALL-IN-ONE-v1)
- **Tool Used for Fine-tuning**: [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)
## Features
- Enhanced understanding and generation of conversational language.
- Improved performance in diverse chat scenarios, including casual, formal, and domain-specific conversations.
- Fine-tuned to maintain context and coherence over longer dialogues.
## Prompt Format
Vicuna 1.1
See the finetuning dataset for examples.
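For reference, the standard Vicuna 1.1 layout looks roughly like this (a sketch; the finetuning dataset remains the authoritative source for the exact formatting):

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:
```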
## License
This model is open-sourced under the Apache 2.0 License. See the LICENSE file for more details.
## Discord Server
Join our Discord server [here](https://discord.gg/xhcBDEM3).
## Feeling Generous?
Eager to buy me a cup of $2 coffee or iced tea? Sure, here is the link: [https://ko-fi.com/drnicefellow](https://ko-fi.com/drnicefellow). Please add a note on which one you want me to drink? |
mradermacher/Mixtral-8x7B-v0.1-i1-GGUF | mradermacher | 2024-05-06T06:21:41Z | 449 | 0 | transformers | [
"transformers",
"gguf",
"moe",
"en",
"base_model:mistralai/Mixtral-8x7B-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-25T14:54:26Z | ---
base_model: mistralai/Mixtral-8x7B-v0.1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- moe
---
## About
weighted/imatrix quants of https://huggingface.co/mistralai/Mixtral-8x7B-v0.1
<!-- provided-files -->
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
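If a quant is ever shipped as multiple parts, the pieces can simply be concatenated before use. A sketch with illustrative file names (only needed for quants that are actually split into parts):

```shell
cat Mixtral-8x7B-v0.1.i1-Q4_K_M.gguf.part1of2 \
    Mixtral-8x7B-v0.1.i1-Q4_K_M.gguf.part2of2 \
    > Mixtral-8x7B-v0.1.i1-Q4_K_M.gguf
```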
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-v0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 9.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-v0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 11.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-v0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 12.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-v0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 13.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-v0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 14.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-v0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 15.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-v0.1.i1-Q2_K.gguf) | i1-Q2_K | 17.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-v0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 18.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-v0.1.i1-Q3_K_XS.gguf) | i1-Q3_K_XS | 19.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-v0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-v0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 20.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-v0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 20.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-v0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 21.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-v0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 22.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-v0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 24.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-v0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 25.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-v0.1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 26.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-v0.1.i1-Q4_0.gguf) | i1-Q4_0 | 26.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-v0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 27.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-v0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 28.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-v0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 32.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-v0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 33.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-v0.1-i1-GGUF/resolve/main/Mixtral-8x7B-v0.1.i1-Q6_K.gguf) | i1-Q6_K | 38.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
linzeqipku/distillroberta.onnx | linzeqipku | 2024-02-28T00:33:07Z | 449 | 0 | transformers | [
"transformers",
"onnx",
"sim-grounding",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-28T00:29:16Z | ---
license: apache-2.0
---
|
mradermacher/ShiningValiantXS-i1-GGUF | mradermacher | 2024-05-06T06:09:22Z | 449 | 0 | transformers | [
"transformers",
"gguf",
"shining-valiant",
"valiant",
"valiant-labs",
"llama",
"llama-2",
"llama-2-chat",
"13b",
"en",
"base_model:ValiantLabs/ShiningValiantXS",
"license:llama2",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-18T22:54:38Z | ---
base_model: ValiantLabs/ShiningValiantXS
language:
- en
library_name: transformers
license: llama2
model_type: llama
quantized_by: mradermacher
tags:
- shining-valiant
- valiant
- valiant-labs
- llama
- llama-2
- llama-2-chat
- 13b
---
## About
weighted/imatrix quants of https://huggingface.co/ValiantLabs/ShiningValiantXS
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/ShiningValiantXS-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ShiningValiantXS-i1-GGUF/resolve/main/ShiningValiantXS.i1-IQ1_S.gguf) | i1-IQ1_S | 3.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/ShiningValiantXS-i1-GGUF/resolve/main/ShiningValiantXS.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/ShiningValiantXS-i1-GGUF/resolve/main/ShiningValiantXS.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/ShiningValiantXS-i1-GGUF/resolve/main/ShiningValiantXS.i1-IQ2_S.gguf) | i1-IQ2_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/ShiningValiantXS-i1-GGUF/resolve/main/ShiningValiantXS.i1-IQ2_M.gguf) | i1-IQ2_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/ShiningValiantXS-i1-GGUF/resolve/main/ShiningValiantXS.i1-Q2_K.gguf) | i1-Q2_K | 5.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/ShiningValiantXS-i1-GGUF/resolve/main/ShiningValiantXS.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ShiningValiantXS-i1-GGUF/resolve/main/ShiningValiantXS.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/ShiningValiantXS-i1-GGUF/resolve/main/ShiningValiantXS.i1-IQ3_S.gguf) | i1-IQ3_S | 6.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/ShiningValiantXS-i1-GGUF/resolve/main/ShiningValiantXS.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/ShiningValiantXS-i1-GGUF/resolve/main/ShiningValiantXS.i1-IQ3_M.gguf) | i1-IQ3_M | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/ShiningValiantXS-i1-GGUF/resolve/main/ShiningValiantXS.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/ShiningValiantXS-i1-GGUF/resolve/main/ShiningValiantXS.i1-Q3_K_L.gguf) | i1-Q3_K_L | 7.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/ShiningValiantXS-i1-GGUF/resolve/main/ShiningValiantXS.i1-IQ4_XS.gguf) | i1-IQ4_XS | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/ShiningValiantXS-i1-GGUF/resolve/main/ShiningValiantXS.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/ShiningValiantXS-i1-GGUF/resolve/main/ShiningValiantXS.i1-Q4_K_M.gguf) | i1-Q4_K_M | 8.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ShiningValiantXS-i1-GGUF/resolve/main/ShiningValiantXS.i1-Q5_K_S.gguf) | i1-Q5_K_S | 9.6 | |
| [GGUF](https://huggingface.co/mradermacher/ShiningValiantXS-i1-GGUF/resolve/main/ShiningValiantXS.i1-Q5_K_M.gguf) | i1-Q5_K_M | 9.9 | |
| [GGUF](https://huggingface.co/mradermacher/ShiningValiantXS-i1-GGUF/resolve/main/ShiningValiantXS.i1-Q6_K.gguf) | i1-Q6_K | 11.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ZiangWu/MobileVLM_V2-1.7B-GGUF | ZiangWu | 2024-03-28T02:37:45Z | 449 | 6 | null | [
"gguf",
"MobileVLM V2",
"arxiv:2402.03766",
"license:apache-2.0",
"region:us"
]
| null | 2024-03-28T02:30:34Z | ---
license: apache-2.0
tags:
- MobileVLM V2
---
## Model Summary
MobileVLM V2 is a family of significantly improved vision language models built upon MobileVLM, which proves that a delicate orchestration of novel architectural design, an improved training scheme tailored for mobile VLMs, and rich high-quality dataset curation can substantially benefit VLMs' performance. Specifically, MobileVLM V2 1.7B achieves better or on-par performance on standard VLM benchmarks compared with much larger VLMs at the 3B scale. Notably, the MobileVLM_V2-3B model outperforms a large variety of VLMs at the 7B+ scale.
The MobileVLM_V2-1.7B was built on our [MobileLLaMA-1.4B-Chat](https://huggingface.co/mtgv/MobileLLaMA-1.4B-Chat) to facilitate off-the-shelf deployment.
## Model Sources
- Repository: https://github.com/Meituan-AutoML/MobileVLM
- Paper: [MobileVLM V2: Faster and Stronger Baseline for Vision Language Model](https://arxiv.org/abs/2402.03766)
## How to Get Started with the Model
Inference examples can be found at [Github](https://github.com/Meituan-AutoML/MobileVLM).
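Since GGUF weights are provided in this repo, they can in principle also be run with llama.cpp's LLaVA example. A rough sketch only; the file names below are assumptions, so check the repository for the actual `.gguf` and `mmproj` files:

```shell
./llava-cli -m MobileVLM_V2-1.7B-Q4_K_M.gguf \
    --mmproj MobileVLM_V2-1.7B-mmproj-f16.gguf \
    --image demo.jpg \
    -p "Describe this image."
```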
|
lemon-mint/gemma-ko-7b-it-v0.40 | lemon-mint | 2024-03-31T22:30:34Z | 449 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"korean",
"pytorch",
"conversational",
"ko",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-03-31T19:14:19Z | ---
library_name: transformers
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
language:
- ko
- en
tags:
- korean
- gemma
- pytorch
pipeline_tag: text-generation
---

# Gemma Ko 7B Instruct v0.40
- Eval Loss: `1.222`
- lr: `1.5e-5`
- optimizer: adamw
- lr_scheduler_type: cosine
## Model Details
### Model Description
The Gemma 7B Ko Instruct v0.40 model is designed for generating human-like text in the Korean language.
It can be used for a variety of natural language processing tasks, such as language translation, text summarization, question answering, and conversation generation.
This model is particularly well-suited for applications that require high-quality, coherent, and contextually relevant Korean text generation.
- **Developed by:** `lemon-mint`
- **Model type:** Gemma
- **Language(s) (NLP):** Korean, English
- **License:** [gemma-terms-of-use](https://ai.google.dev/gemma/terms)
- **Finetuned from model:** [google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it)
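A minimal usage sketch with Transformers (illustrative; dtype, device, and generation settings are assumptions, not recommended values):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "lemon-mint/gemma-ko-7b-it-v0.40"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Please introduce yourself in Korean."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```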
# Limitations and Ethical Considerations
As Gemma Ko 7B has been trained on extensive web data, biases present in the training data may be reflected in the model. Additionally, there is a possibility that it may generate sentences containing errors or incorrect information. Therefore, rather than blindly trusting the model's output, it is necessary to refer to it with caution.
|
Shaleen123/phi-2-maths | Shaleen123 | 2024-04-05T11:16:24Z | 449 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2024-04-04T16:02:38Z | ---
library_name: transformers
license: mit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** mit
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DevsDoCode/LLama-3-8b-Uncensored-4bit | DevsDoCode | 2024-04-25T11:00:56Z | 449 | 19 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"uncensored",
"llama-3",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2024-04-19T07:49:00Z | ---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- uncensored
- transformers
- llama
- llama-3
- unsloth
pipeline_tag: text-generation
---
## Contributors
[](https://huggingface.co/DevsDoCode) [](https://huggingface.co/OEvortex)
# Finetune Meta Llama-3 8b to create an Uncensored Model with Devs Do Code!
Unleash the power of uncensored text generation with our model! We've fine-tuned the Meta Llama-3 8b model to create an uncensored variant that pushes the boundaries of text generation.
## Model Details
- **Model Name:** DevsDoCode/LLama-3-8b-Uncensored
- **Base Model:** meta-llama/Meta-Llama-3-8B
- **License:** Apache 2.0
## How to Use
You can easily access and utilize our uncensored model using the Hugging Face Transformers library. Here's a sample code snippet to get started:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "DevsDoCode/LLama-3-8b-Uncensored"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Now you can generate text using the model!
```
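A possible continuation for actually generating text (the sampling settings here are illustrative only):

```python
inputs = tokenizer("Tell me something interesting about llamas.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```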
## Notebooks
- **Finetuning Process:** [โถ๏ธ Start on Colab](https://colab.research.google.com/drive/1ZQ4E8O5QKuRfkSrjVg83uzcucDofNOpx?usp=sharing)
- **Accessing the Model:** [โถ๏ธ Start on Colab](https://www.youtube.com/@devsdocode)
## Social Media Handles
- [](https://t.me/devsdocode)
- [](https://www.youtube.com/@devsdocode)
- [](https://www.instagram.com/sree.shades_)
- [](https://www.linkedin.com/in/developer-sreejan/)
- [](https://discord.gg/XM4Yt6y4UG)
- [](https://twitter.com/anand-sreejan)
|
RichardErkhov/tokyotech-llm_-_Swallow-7b-instruct-hf-gguf | RichardErkhov | 2024-05-10T11:44:26Z | 449 | 0 | null | [
"gguf",
"arxiv:2404.17790",
"arxiv:2404.17733",
"region:us"
]
| null | 2024-05-10T09:43:30Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Swallow-7b-instruct-hf - GGUF
- Model creator: https://huggingface.co/tokyotech-llm/
- Original model: https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-hf/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Swallow-7b-instruct-hf.Q2_K.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-instruct-hf-gguf/blob/main/Swallow-7b-instruct-hf.Q2_K.gguf) | Q2_K | 2.41GB |
| [Swallow-7b-instruct-hf.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-instruct-hf-gguf/blob/main/Swallow-7b-instruct-hf.IQ3_XS.gguf) | IQ3_XS | 2.66GB |
| [Swallow-7b-instruct-hf.IQ3_S.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-instruct-hf-gguf/blob/main/Swallow-7b-instruct-hf.IQ3_S.gguf) | IQ3_S | 2.8GB |
| [Swallow-7b-instruct-hf.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-instruct-hf-gguf/blob/main/Swallow-7b-instruct-hf.Q3_K_S.gguf) | Q3_K_S | 2.8GB |
| [Swallow-7b-instruct-hf.IQ3_M.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-instruct-hf-gguf/blob/main/Swallow-7b-instruct-hf.IQ3_M.gguf) | IQ3_M | 2.95GB |
| [Swallow-7b-instruct-hf.Q3_K.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-instruct-hf-gguf/blob/main/Swallow-7b-instruct-hf.Q3_K.gguf) | Q3_K | 3.13GB |
| [Swallow-7b-instruct-hf.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-instruct-hf-gguf/blob/main/Swallow-7b-instruct-hf.Q3_K_M.gguf) | Q3_K_M | 3.13GB |
| [Swallow-7b-instruct-hf.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-instruct-hf-gguf/blob/main/Swallow-7b-instruct-hf.Q3_K_L.gguf) | Q3_K_L | 3.4GB |
| [Swallow-7b-instruct-hf.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-instruct-hf-gguf/blob/main/Swallow-7b-instruct-hf.IQ4_XS.gguf) | IQ4_XS | 3.45GB |
| [Swallow-7b-instruct-hf.Q4_0.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-instruct-hf-gguf/blob/main/Swallow-7b-instruct-hf.Q4_0.gguf) | Q4_0 | 3.62GB |
| [Swallow-7b-instruct-hf.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-instruct-hf-gguf/blob/main/Swallow-7b-instruct-hf.IQ4_NL.gguf) | IQ4_NL | 3.64GB |
| [Swallow-7b-instruct-hf.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-instruct-hf-gguf/blob/main/Swallow-7b-instruct-hf.Q4_K_S.gguf) | Q4_K_S | 3.65GB |
| [Swallow-7b-instruct-hf.Q4_K.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-instruct-hf-gguf/blob/main/Swallow-7b-instruct-hf.Q4_K.gguf) | Q4_K | 3.86GB |
| [Swallow-7b-instruct-hf.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-instruct-hf-gguf/blob/main/Swallow-7b-instruct-hf.Q4_K_M.gguf) | Q4_K_M | 3.86GB |
| [Swallow-7b-instruct-hf.Q4_1.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-instruct-hf-gguf/blob/main/Swallow-7b-instruct-hf.Q4_1.gguf) | Q4_1 | 4.01GB |
| [Swallow-7b-instruct-hf.Q5_0.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-instruct-hf-gguf/blob/main/Swallow-7b-instruct-hf.Q5_0.gguf) | Q5_0 | 4.4GB |
| [Swallow-7b-instruct-hf.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-instruct-hf-gguf/blob/main/Swallow-7b-instruct-hf.Q5_K_S.gguf) | Q5_K_S | 4.4GB |
| [Swallow-7b-instruct-hf.Q5_K.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-instruct-hf-gguf/blob/main/Swallow-7b-instruct-hf.Q5_K.gguf) | Q5_K | 4.52GB |
| [Swallow-7b-instruct-hf.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-instruct-hf-gguf/blob/main/Swallow-7b-instruct-hf.Q5_K_M.gguf) | Q5_K_M | 4.52GB |
| [Swallow-7b-instruct-hf.Q5_1.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-instruct-hf-gguf/blob/main/Swallow-7b-instruct-hf.Q5_1.gguf) | Q5_1 | 4.78GB |
| [Swallow-7b-instruct-hf.Q6_K.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-7b-instruct-hf-gguf/blob/main/Swallow-7b-instruct-hf.Q6_K.gguf) | Q6_K | 5.22GB |
Original model description:
---
language:
- en
- ja
library_name: transformers
pipeline_tag: text-generation
license: llama2
model_type: llama
---
# Swallow
Our Swallow model has undergone continual pre-training from the [Llama 2 family](https://huggingface.co/meta-llama), primarily with the addition of Japanese language data. The tuned versions use supervised fine-tuning (SFT).
Links to other models can be found in the index.
# Model Release Updates
We are excited to share the release schedule for our latest models:
- **April 26, 2024**: Released version 0.1 of our enhanced instruction-tuned models: [Swallow-7b-instruct-v0.1](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-v0.1), [Swallow-13b-instruct-v0.1](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-v0.1), and [Swallow-70b-instruct-v0.1](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-v0.1) as preview versions.
- **March 2, 2024**: Released the [Swallow-7b-plus-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-plus-hf), a model trained with approximately twice as many Japanese tokens as [Swallow-7b-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-hf).
- **February 4, 2024**: Released the [Swallow-13b-NVE-hf](https://huggingface.co/tokyotech-llm/Swallow-13b-NVE-hf).
- **January 26, 2024**: Released the [Swallow-7b-NVE-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-NVE-hf), [Swallow-7b-NVE-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-NVE-instruct-hf), [Swallow-70b-NVE-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-NVE-hf), and [Swallow-70b-NVE-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-NVE-instruct-hf)
- **December 19, 2023**: Released the [Swallow-7b-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-hf), [Swallow-7b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-hf), [Swallow-13b-hf](https://huggingface.co/tokyotech-llm/Swallow-13b-hf), [Swallow-13b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-hf), [Swallow-70b-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-hf), and [Swallow-70b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-hf).
## Swallow Model Index
|Model|Swallow-hf|Swallow-instruct-hf|Swallow-instruct-v0.1|
|---|---|---|---|
|7B| [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-hf)|[Link](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-v1.0)|
|7B-Plus| [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-plus-hf) | N/A | N/A |
|13B| [Link](https://huggingface.co/tokyotech-llm/Swallow-13b-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-hf)| [Link](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-v1.0)|
|70B| [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-hf)| [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-v1.0)|
## Swallow Model Index NVE (No Vocabulary Expansion)
|Model|Swallow-NVE-hf|Swallow-NVE-instruct-hf|
|---|---|---|
|7B| [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-NVE-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-NVE-instruct-hf)|
|13B| [Link](https://huggingface.co/tokyotech-llm/Swallow-13b-NVE-hf) | N/A |
|70B| [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-NVE-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-NVE-instruct-hf)|

This repository provides large language models developed by [TokyoTech-LLM](https://tokyotech-llm.github.io/).
Read our [blog post](https://zenn.dev/tokyotech_lm/articles/d6cb3a8fdfc907) or our [paper](https://arxiv.org/abs/2404.17790)
## Model Details
* **Model type**: Please refer to LLaMA-2 technical report for details on the model architecture.
* **Language(s)**: Japanese English
* **Library**: [Megatron-LM](https://github.com/rioyokotalab/Megatron-Llama2)
* **Tokenizer**: This model employs a tokenizer that features a broadened vocabulary based on Japanese data. This allows for a more efficient representation of text using fewer tokens, leading to a notably faster inference process.
* **Contact**: swallow[at]nlp.c.titech.ac.jp
## Base Model Performance
### Japanese tasks
|Model|Size|JCommonsenseQA|JEMHopQA|NIILC|JSQuAD|XL-Sum|MGSM|WMT20-en-ja|WMT20-ja-en|
|---|---|---|---|---|---|---|---|---|---|
| | |4-shot|4-shot|4-shot|4-shot|1-shot|4-shot|4-shot|4-shot|
| Llama 2 | 7B | 0.3852 | 0.4240 | 0.3410 | 0.7917 | 0.1905 | 0.0760 | 0.1783 | 0.1738 |
| Swallow | 7B | 0.4808 | 0.5078 | 0.5968 | 0.8573 | 0.1830 | 0.1240 | 0.2510 | 0.1511 |
| Swallow-Plus | 7B | 0.5478 | 0.5493 | 0.6030 | 0.8544 | 0.1806 | 0.1360 | 0.2568 | 0.1441 |
| Swallow-NVE | 7B | 0.5433 | 0.5425 | 0.5729 | 0.8684 | 0.2117 | 0.1200 | 0.2405 | 0.1512 |
| Llama 2 | 13B | 0.6997 | 0.4415 | 0.4170 | 0.8533 | 0.2139 | 0.1320 | 0.2146 | 0.1982 |
| Swallow | 13B | 0.7837 | 0.5063 | 0.6398 | 0.9005 | 0.2168 | 0.2040 | 0.2720 | 0.1771 |
| Swallow-NVE | 13B | 0.7712 | 0.5438 | 0.6351 | 0.9030 | 0.2294 | 0.2120 | 0.2735 | 0.1817 |
| Llama 2 | 70B | 0.8686 | 0.4656 | 0.5256 | 0.9080 | 0.2361 | 0.3560 | 0.2643 | **0.2398** |
| Swallow | 70B | 0.9348 | **0.6290** | 0.6960 | 0.9176 | 0.2266 | **0.4840** | **0.3043** | 0.2298 |
| Swallow-NVE | 70B | **0.9410** | 0.5759 | **0.7024** | **0.9254** | **0.2758** | 0.4720 | 0.3042 | 0.2322 |
### English tasks
|Model|Size|OpenBookQA|TriviaQA|HellaSwag|SQuAD2.0|XWINO|GSM8K|
|---|---|---|---|---|---|---|---|
| | |8-shot|8-shot|8-shot|8-shot|8-shot|8-shot|
| Llama 2 | 7B | 0.3580 | 0.6265 | 0.5860 | 0.3207 | 0.9049 | 0.1410 |
| Swallow | 7B | 0.3180 | 0.4836 | 0.5308 | 0.3125 | 0.8817 | 0.1130 |
| Swallow-Plus | 7B | 0.3280 | 0.4558 | 0.5259 | 0.3134 | 0.8929 | 0.1061 |
| Swallow-NVE | 7B | 0.3180 | 0.5079 | 0.5329 | 0.2919 | 0.8817 | 0.0986 |
| Llama 2 | 13B | 0.3760 | 0.7255 | 0.6148 | 0.3681 | 0.9140 | 0.2403 |
| Swallow | 13B | 0.3500 | 0.5852 | 0.5660 | 0.3406 | 0.9075 | 0.2039 |
| Swallow-NVE | 13B | 0.3460 | 0.6025 | 0.5700 | 0.3478 | 0.9006 | 0.1751 |
| Llama 2 | 70B | **0.4280** | **0.8239** | **0.6742** | **0.3770** | **0.9290** | **0.5284** |
| Swallow | 70B | 0.4220 | 0.7756 | 0.6458 | 0.3745 | 0.9204 | 0.4867 |
| Swallow-NVE | 70B | 0.4240 | 0.7817 | 0.6439 | 0.3451 | 0.9256 | 0.4943 |
## Evaluation Benchmarks
### Japanese evaluation benchmarks
We used llm-jp-eval(v1.0.0) and JP Language Model Evaluation Harness(commit #9b42d41). The details are as follows:
- Multiple-choice question answering (JCommonsenseQA [Kurihara+, 2022])
- Open-ended question answering (JEMHopQA [Ishii+, 2023])
- Open-ended question answering (NIILC [Sekine, 2003])
- Machine reading comprehension (JSQuAD [Kurihara+, 2022])
- Automatic summarization (XL-Sum [Hasan+, 2021])
- Machine translation (WMT2020 ja-en [Barrault+, 2020])
- Machine translation (WMT2020 en-ja [Barrault+, 2020])
- Mathematical reasoning (MGSM [Shi+, 2023])
### English evaluation benchmarks
We used the Language Model Evaluation Harness (v0.3.0). The details are as follows:
- Multiple-choice question answering (OpenBookQA [Mihaylov+, 2018])
- Open-ended question answering (TriviaQA [Joshi+, 2017])
- Machine reading comprehension (SQuAD 2.0 [Rajpurkar+, 2018])
- Commonsense reasoning (XWINO [Tikhonov & Ryabinin, 2021])
- Natural language inference (HellaSwag [Zellers+, 2019])
- Mathematical reasoning (GSM8k [Cobbe+, 2021])
## Usage
First install additional dependencies in [requirements.txt](./requirements.txt):
```sh
pip install -r requirements.txt
```
### Use the instruct model
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "tokyotech-llm/Swallow-7b-instruct-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, low_cpu_mem_usage=True, device_map="auto")
PROMPT_DICT = {
    "prompt_input": (
        "以下に、あるタスクを説明する指示があり、それに付随する入力が更なる文脈を提供しています。"
        "リクエストを適切に完了するための回答を記述してください。\n\n"
        "### 指示:\n{instruction}\n\n### 入力:\n{input}\n\n### 応答:"
    ),
    "prompt_no_input": (
        "以下に、あるタスクを説明する指示があります。"
        "リクエストを適切に完了するための回答を記述してください。\n\n"
        "### 指示:\n{instruction}\n\n### 応答:"
    ),
}
def create_prompt(instruction, input=None):
"""
Generates a prompt based on the given instruction and an optional input.
If input is provided, it uses the 'prompt_input' template from PROMPT_DICT.
If no input is provided, it uses the 'prompt_no_input' template.
Args:
instruction (str): The instruction describing the task.
input (str, optional): Additional input providing context for the task. Default is None.
Returns:
str: The generated prompt.
"""
if input:
# Use the 'prompt_input' template when additional input is provided
return PROMPT_DICT["prompt_input"].format(instruction=instruction, input=input)
else:
# Use the 'prompt_no_input' template when no additional input is provided
return PROMPT_DICT["prompt_no_input"].format(instruction=instruction)
# Example usage
instruction_example = "以下のトピックに関する詳細な情報を提供してください。"
input_example = "東京工業大学の主なキャンパスについて教えてください"
prompt = create_prompt(instruction_example, input_example)
input_ids = tokenizer.encode(
prompt,
add_special_tokens=False,
return_tensors="pt"
)
tokens = model.generate(
input_ids.to(device=model.device),
max_new_tokens=128,
temperature=0.99,
top_p=0.95,
do_sample=True,
)
out = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(out)
```
### Use the base model
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "tokyotech-llm/Swallow-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
prompt = "東京工業大学の主なキャンパスは、"
input_ids = tokenizer.encode(
prompt,
add_special_tokens=False,
return_tensors="pt"
)
tokens = model.generate(
input_ids.to(device=model.device),
max_new_tokens=128,
temperature=0.99,
top_p=0.95,
do_sample=True,
)
out = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(out)
```
## Training Datasets
### Continual Pre-Training
The following datasets were used for continual pre-training.
- [Japanese Wikipedia](https://dumps.wikimedia.org/other/cirrussearch)
- [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
- [Swallow Corpus](https://arxiv.org/abs/2404.17733)
- [The Pile](https://huggingface.co/datasets/EleutherAI/pile)
### Instruction Tuning
The following datasets were used for the instruction tuning.
- [Anthropic HH-RLHF](https://huggingface.co/datasets/kunishou/hh-rlhf-49k-ja)
- [Databricks Dolly 15-k](https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja)
- [OpenAssistant Conversations Dataset](https://huggingface.co/datasets/kunishou/oasst1-89k-ja)
## Risks and Limitations
The models released here are still in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.
## Acknowledgements
We thank Meta Research for releasing Llama 2 under an open license for others to build on.
Our project is supported by the [ABCI Large-scale Language Model Building Support Program](https://abci.ai/en/link/llm_support_program.html) of the National Institute of Advanced Industrial Science and Technology.
## License
Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.
## Authors
Here are the team members:
- From [Okazaki Laboratory](https://www.nlp.c.titech.ac.jp/index.en.html), the following members:
- [Naoaki Okazaki](https://www.chokkan.org/index.ja.html)
- [Sakae Mizuki](https://s-mizuki-nlp.github.io/)
- [Hiroki Iida](https://meshidenn.github.io/)
- [Mengsay Loem](https://loem-ms.github.io/)
- [Shota Hirai](https://huggingface.co/Kotemo428)
- [Kakeru Hattori](https://aya-se.vercel.app/)
- [Masanari Ohi](https://twitter.com/stjohn2007)
- From [YOKOTA Laboratory](https://www.rio.gsic.titech.ac.jp/en/index.html), the following members:
- [Rio Yokota](https://twitter.com/rioyokota)
- [Kazuki Fujii](https://twitter.com/okoge_kaz)
- [Taishi Nakamura](https://twitter.com/Setuna7777_2)
## How to cite
```
@misc{fujii2024continual,
title={Continual Pre-Training for Cross-Lingual LLM Adaptation: Enhancing Japanese Language Capabilities},
author={Kazuki Fujii and Taishi Nakamura and Mengsay Loem and Hiroki Iida and Masanari Ohi and Kakeru Hattori and Hirai Shota and Sakae Mizuki and Rio Yokota and Naoaki Okazaki},
year={2024},
eprint={2404.17790},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
RichardErkhov/dreamgen_-_opus-v1.2-7b-gguf | RichardErkhov | 2024-05-17T22:23:23Z | 449 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-17T20:45:58Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
opus-v1.2-7b - GGUF
- Model creator: https://huggingface.co/dreamgen/
- Original model: https://huggingface.co/dreamgen/opus-v1.2-7b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [opus-v1.2-7b.Q2_K.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_opus-v1.2-7b-gguf/blob/main/opus-v1.2-7b.Q2_K.gguf) | Q2_K | 2.53GB |
| [opus-v1.2-7b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_opus-v1.2-7b-gguf/blob/main/opus-v1.2-7b.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [opus-v1.2-7b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_opus-v1.2-7b-gguf/blob/main/opus-v1.2-7b.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [opus-v1.2-7b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_opus-v1.2-7b-gguf/blob/main/opus-v1.2-7b.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [opus-v1.2-7b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_opus-v1.2-7b-gguf/blob/main/opus-v1.2-7b.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [opus-v1.2-7b.Q3_K.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_opus-v1.2-7b-gguf/blob/main/opus-v1.2-7b.Q3_K.gguf) | Q3_K | 3.28GB |
| [opus-v1.2-7b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_opus-v1.2-7b-gguf/blob/main/opus-v1.2-7b.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [opus-v1.2-7b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_opus-v1.2-7b-gguf/blob/main/opus-v1.2-7b.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [opus-v1.2-7b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_opus-v1.2-7b-gguf/blob/main/opus-v1.2-7b.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [opus-v1.2-7b.Q4_0.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_opus-v1.2-7b-gguf/blob/main/opus-v1.2-7b.Q4_0.gguf) | Q4_0 | 3.83GB |
| [opus-v1.2-7b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_opus-v1.2-7b-gguf/blob/main/opus-v1.2-7b.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [opus-v1.2-7b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_opus-v1.2-7b-gguf/blob/main/opus-v1.2-7b.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [opus-v1.2-7b.Q4_K.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_opus-v1.2-7b-gguf/blob/main/opus-v1.2-7b.Q4_K.gguf) | Q4_K | 4.07GB |
| [opus-v1.2-7b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_opus-v1.2-7b-gguf/blob/main/opus-v1.2-7b.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [opus-v1.2-7b.Q4_1.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_opus-v1.2-7b-gguf/blob/main/opus-v1.2-7b.Q4_1.gguf) | Q4_1 | 4.24GB |
| [opus-v1.2-7b.Q5_0.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_opus-v1.2-7b-gguf/blob/main/opus-v1.2-7b.Q5_0.gguf) | Q5_0 | 4.65GB |
| [opus-v1.2-7b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_opus-v1.2-7b-gguf/blob/main/opus-v1.2-7b.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [opus-v1.2-7b.Q5_K.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_opus-v1.2-7b-gguf/blob/main/opus-v1.2-7b.Q5_K.gguf) | Q5_K | 4.78GB |
| [opus-v1.2-7b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_opus-v1.2-7b-gguf/blob/main/opus-v1.2-7b.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [opus-v1.2-7b.Q5_1.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_opus-v1.2-7b-gguf/blob/main/opus-v1.2-7b.Q5_1.gguf) | Q5_1 | 5.07GB |
| [opus-v1.2-7b.Q6_K.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_opus-v1.2-7b-gguf/blob/main/opus-v1.2-7b.Q6_K.gguf) | Q6_K | 5.53GB |
| [opus-v1.2-7b.Q8_0.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_opus-v1.2-7b-gguf/blob/main/opus-v1.2-7b.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
language:
- en
pipeline_tag: text-generation
tags:
- unsloth
- axolotl
license: cc-by-nc-nd-4.0
---
# DreamGen Opus V1
<div style="display: flex; flex-direction: row; align-items: center;">
<img src="/dreamgen/opus-v1.2-7b/resolve/main/images/logo-1024.png" alt="model logo" style="
border-radius: 12px;
margin-right: 12px;
margin-top: 0px;
margin-bottom: 0px;
max-width: 100px;
height: auto;
"/>
Models for **(steerable) story-writing and role-playing**.
<br/>[All Opus V1 models, including quants](https://huggingface.co/collections/dreamgen/opus-v1-65d092a6f8ab7fc669111b31).
</div>
## Resources
- [**Opus V1 prompting guide**](https://dreamgen.com/docs/models/opus/v1) with many (interactive) examples and prompts that you can copy.
- [**Google Colab**](https://colab.research.google.com/drive/1J178fH6IdQOXNi-Njgdacf5QgAxsdT20?usp=sharing) for interactive role-play using `opus-v1.2-7b`.
- [Python code](example/prompt/format.py) to format the prompt correctly.
- Join the community on [**Discord**](https://dreamgen.com/discord) to get early access to new models.
<img src="/dreamgen/opus-v1.2-7b/resolve/main/images/story_writing.webp" alt="story writing on dreamgen.com" style="
padding: 12px;
border-radius: 12px;
border: 2px solid #f9a8d4;
background: rgb(9, 9, 11);
"/>
## Prompting
<details>
<summary>The models use an extended version of ChatML.</summary>
```
<|im_start|>system
(Story description in the right format here)
(Typically consists of plot description, style description and characters)<|im_end|>
<|im_start|>user
(Your instruction on how the story should continue)<|im_end|>
<|im_start|>text names= Alice
(Continuation of the story from the Alice character)<|im_end|>
<|im_start|>text
(Continuation of the story from no character in particular (pure narration))<|im_end|>
<|im_start|>user
(Your instruction on how the story should continue)<|im_end|>
<|im_start|>text names= Bob
(Continuation of the story from the Bob character)<|im_end|>
```
The Opus V1 extension is the addition of the `text` role, and the addition / modification of role names.
Pay attention to the following:
- The `text` messages can (but don't have to) have `names`; names are used to indicate the "active" character during role-play.
- There can be multiple subsequent messages with a `text` role, especially if names are involved.
- There can be multiple names attached to a message.
- The format for names is `names= {{name[0]}}; {{name[1]}}`; beware of the spaces after `names=` and after the `;`. This spacing leads to the most natural tokenization for the names. A sketch of assembling such a prompt follows this section.
</details>
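Below is a minimal sketch (not the official `example/prompt/format.py`; the helper name `format_message` is made up for illustration) of how a prompt in this extended ChatML format could be assembled:

```python
# Illustrative only -- see example/prompt/format.py for the official formatting code.
def format_message(role, content, names=None):
    # Note the space after "names=" and the "; " separator between names, as described above.
    header = role if not names else f"{role} names= " + "; ".join(names)
    return f"<|im_start|>{header}\n{content}<|im_end|>\n"

prompt = (
    format_message("system", "(plot, style and character descriptions)")
    + format_message("user", "(instruction for what should happen next)")
    + format_message("text", "(Alice's turn in the story)", names=["Alice"])
    + "<|im_start|>text names= Bob\n"  # left open so the model continues as Bob
)
print(prompt)
```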
While the main goal for the models is great story-writing and role-playing performance, they are also capable of several writing-related tasks as well as general assistance.
Here's how you can prompt the model for the following tasks:
### Steerable Story-writing and Role-playing:
- [Story-writing prompt guide](https://dreamgen.com/docs/models/opus/v1#task-story-writing)
- [Role-playing prompt guide](https://dreamgen.com/docs/models/opus/v1#task-role-playing)
- Input:
- System prompt: You provide story / role-play description, which consists of:
- Plot description
- Style description
- Characters and their descriptions
- Conversation turns:
- Text / message turn: This represents part of the story or role play
- Instruction: This tells the model what should happen next
- Output: Continuation of the story / role-play.
### Story plot summarization
- [Prompt guide](https://dreamgen.com/docs/models/opus/v1#task-plot-description)
- Input: A story, or a few chapters of a story.
- Output: A description of the story or chapters.
#### Examples:
<details>
<summary>Summary of <a href="https://www.libraryofshortstories.com/onlinereader/the-last-incantation">short fiction story</a> in 50 words</summary>
<blockquote>
Malygris, a powerful magician, summons his long-dead lover Nylissa from the grave using necromancy. However, upon seeing her, he realizes that she is not the same person he once knew, leading him to question the nature of love, memory, and the power of magic.
</blockquote>
</details>
<details>
<summary>Summary of <a href="https://www.libraryofshortstories.com/onlinereader/the-last-incantation">short fiction story</a> in 200 words</summary>
<blockquote>
The story is a dark fantasy set in the city of Susran, where an ancient and powerful magician named Malygris dwells in a tower atop a conical hill. Malygris, who has mastered all forms of sorcery and necromancy, is plagued by a deep sense of ennui and melancholy. He seeks solace in the memory of Nylissa, a girl he once loved in his youth.
One day, Malygris decides to summon Nylissa's spirit from the grave using his necromantic powers. He performs the necessary rituals and incantations, and Nylissa appears before him as a ghostly apparition. However, as Malygris gazes upon her, he becomes increasingly disillusioned. He notices subtle changes in her appearance and behavior that do not match his memories of her. He begins to question whether this is truly the same Nylissa he once knew, or if it is just another illusion conjured by his magic.
As Malygris grapples with this uncertainty, he becomes increasingly despondent and disillusioned with life itself. He realizes that even his magic cannot bring back the past or make him truly happy. In the end, he dismisses Nylissa's spirit and returns to his lonely tower, resigned to his fate as an old and weary sorcerer.
</blockquote>
</details>
<details>
<summary>Summary of <a href="https://edition.cnn.com/2024/02/29/health/pets-diseases-viruses-bacteria-wellness/index.html">CNN News article</a> in 150 words</summary>
<blockquote>
The article discusses the recent human deaths from diseases transmitted by animals, specifically Alaskapox and bubonic plague. Dr. Leana Wen, an emergency physician and former Baltimore health commissioner, explains the various ways household pets can transmit diseases to humans. These include fecal-oral transmission, scratches and bites, and vector-borne diseases like ticks, fleas, and mosquitoes. Wen emphasizes the importance of vaccinating pets, maintaining cleanliness, and taking commonsense precautions to reduce disease transmission. She also advises that individuals who are immunocompromised, babies, and pregnant women should take additional care when interacting with animals.
</blockquote>
</details>
<details>
<summary>Summary of <a href="https://edition.cnn.com/2024/02/29/health/pets-diseases-viruses-bacteria-wellness/index.html">CNN News article</a> in 50 words</summary>
<blockquote>
A doctor discusses the risks of diseases transmitted from animals to humans, focusing on household pets like cats and dogs. They emphasize the importance of vaccination, hygiene, and regular veterinary care to prevent these illnesses.
</blockquote>
</details>
### Story character description
- [Prompt guide](https://dreamgen.com/docs/models/opus/v1#task-char-description)
- Input: A story, or a few chapters of a story, set of characters.
- Output: A description of the characters.
#### Examples:
<details>
<summary>Description of the Malygris character from <a href="https://www.libraryofshortstories.com/onlinereader/the-last-incantation">this story</a> in 50 words</summary>
<blockquote>
Malygris is an ancient and powerful magician, characterized by his long white beard, cold green eyes, and a violet robe. He is a master of necromancy and possesses a vast knowledge of forbidden magic. Despite his immense power, he is plagued by a deep sense of melancholy and ennui.
</blockquote>
</details>
<details>
<summary>Description of the prince character from <a href="https://www.libraryofshortstories.com/onlinereader/the-pearl-of-love">this story</a> in 50 words</summary>
<blockquote>
The prince is a devoted and determined individual, driven by his love for his late queen. He is described as a graceful youth in the beginning, but as he grows older, his dedication to the Pearl of Love transforms him into a grave and intent man. His sense of color and design evolves over time, reflecting his growing maturity and artistic vision.
</blockquote>
</details>
### Story style description
- [Prompt guide](https://dreamgen.com/docs/models/opus/v1#task-style-description)
- Input: A story, or a few chapters of a story.
- Output: A description the style of the story.
<details>
<summary>Style description of <a href="https://www.libraryofshortstories.com/onlinereader/the-last-incantation">this story</a></summary>
<blockquote>
The story is written in a descriptive and evocative style, with a focus on the atmosphere and setting. The author uses vivid and detailed language to create a sense of place and time, and employs a variety of literary devices such as similes, metaphors, and allusions to enhance the mood and tone of the narrative. The sentence structure is varied, with a mix of short and long sentences that create a rhythmic flow to the prose.
</blockquote>
</details>
<details>
<summary>Style description of <a href="https://www.libraryofshortstories.com/onlinereader/the-pearl-of-love">this story</a></summary>
<blockquote>
The writing style is descriptive and evocative, with a focus on the beauty and grandeur of the Pearl of Love. The author uses vivid imagery and sensory details to create a rich and immersive atmosphere. The tone is reverential and contemplative, reflecting the prince's deep love for his queen and his dedication to creating a lasting monument to her memory.
</blockquote>
</details>
### Story description to chapters
- [Prompt guide](https://dreamgen.com/docs/models/opus/v1#task-story-description-to-chapter-descriptions)
- Input: A brief plot description and the desired number of chapters.
- Output: A description for each chapter.
### And more...
## Sampling params
For story-writing and role-play, I recommend "Min P" based sampling with `min_p` in the range `[0.01, 0.1]` and with `temperature` in the range `[0.5, 1.5]`, depending on your preferences. A good starting point would be `min_p=0.1; temperature=0.8`.
You may also benefit from setting presence, frequency and repetition penalties, especially at lower temperatures.
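As an illustration (not from the original card), these settings could be expressed with vLLM, assuming a version recent enough to expose `min_p` in `SamplingParams`; the prompt placeholder must be replaced with a properly formatted Opus V1 prompt:

```python
# Illustrative sampling configuration; penalty values are arbitrary examples.
from vllm import LLM, SamplingParams

llm = LLM(model="dreamgen/opus-v1.2-7b")
params = SamplingParams(
    temperature=0.8,        # suggested starting point
    min_p=0.1,              # "Min P" cutoff relative to the top token
    presence_penalty=0.1,   # optional, illustrative
    frequency_penalty=0.1,
    max_tokens=300,
)
outputs = llm.generate(["<|im_start|>system\n...<|im_end|>\n<|im_start|>text\n"], params)
print(outputs[0].outputs[0].text)
```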
## Dataset
The fine-tuning dataset consisted of ~100M tokens of steerable story-writing, role-playing, writing-assistant and general-assistant examples. Each example was up to 31000 tokens long.
All story-writing and role-playing examples were based on human-written text.

## Running the model
The model should be compatible with any software that supports the base model, but beware of prompting and tokenization.
I recommend using these model versions:
- 7B: [no quant (opus-v1.2-7b)](https://huggingface.co/dreamgen/opus-v1.2-7b)
- 34B: [no quant (opus-v1-34b)](https://huggingface.co/dreamgen/opus-v1-34b) or [awq (opus-v1-34b-awq)](https://huggingface.co/dreamgen/opus-v1-34b-awq)
- 70B: [no quant (opus-v1.2-70b)](https://huggingface.co/dreamgen/opus-v1.2-70b) or [awq (opus-v1.2-70b-awq)](https://huggingface.co/dreamgen/opus-v1.2-70b-awq)
### Running on DreamGen.com (free)
You can run the models on [dreamgen.com](https://dreamgen.com) for free โ you can use the built-in UI for story-writing & role-playing, or use [the API](https://dreamgen.com/docs/api).
### Running Locally
- **Make sure your prompt is as close as possible to the Opus V1**
- Regardless of which backend you use, it's important that you format your prompt well and that the tokenization works correctly.
- [Read the prompt guide](https://dreamgen.com/docs/models/opus/v1)
- [Read the prompt formatting code](example/prompt/format.py)
  - Make sure `<|im_start|>` and `<|im_end|>` are tokenized correctly (a quick check is sketched after this list)
- **vLLM**
- [**Google Colab**](https://colab.research.google.com/drive/1J178fH6IdQOXNi-Njgdacf5QgAxsdT20?usp=sharing): This is a simple interactive Google Colab to do role-play with the 7B model, it should fit on the T4 GPU.
- [Code](example/prompt/interactive.py): This is simple script for interactive chat for one hard-coded scenario.
- **SillyTavern**
  - [Official SillyTavern documentation for DreamGen](https://docs.sillytavern.app/usage/api-connections/dreamgen/) -- applies to both the API and local models
- SillyTavern (staging) comes with built-in DreamGen preset for RP
- Other presets can be found [here](https://huggingface.co/dreamgen/opus-v1.2-7b/tree/main/configs/silly_tavern), v2 kindly provided by @MarinaraSpaghetti
- Make sure to unselect `Skip special tokens`, otherwise it won't work
- This is just an attempt at approximating the Opus V1 prompt, it won't be perfect
- Character cards specifically rewritten for the built-in DreamGen preset:
- [Seraphina](configs/silly_tavern/cards/Seraphina.png) (based on the default Seraphina card)
- [Lara Lightland](configs/silly_tavern/cards/LaraLightland.png) (based on the card by Deffcolony)
- **LM Studio**
- [Config](configs/lmstudio/preset.json)
- Just like ChatML, just changed "assistant" to "text" role.
- **There's a bug** in LM Studio if you delete a message or click "Continue", [see here for details](https://discord.com/channels/1110598183144399058/1212665261128417280/1212665261128417280).
- **HuggingFace**
- [Chat template](tokenizer_config.json#L51)
- Just like ChatML, just changed "assistant" to "text" role.
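As referenced in the checklist above, here is a quick sketch (not from the original card) for verifying that the special tokens are handled as single tokens:

```python
# Quick sanity check for the extended-ChatML special tokens.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("dreamgen/opus-v1.2-7b")
for marker in ["<|im_start|>", "<|im_end|>"]:
    ids = tok.encode(marker, add_special_tokens=False)
    print(marker, ids)  # each marker should map to a single token id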
## Known Issues
- **34B repetition**:
- The 34B sometimes gets stuck repeating the same word, or synonyms. This seems to be a common problem across various Yi 34B fine-tunes.
- **GGUF**:
- The tokenization might be messed up. Some users reported that `<|im_start|>` and `<|im_end|>` are tokenized as multiple tokens. Also llama.cpp may not tokenize correctly (the Yi tokenizer is subtly different from the Llama 2 tokenizer).
## License
- This model is intended for personal use only, other use is not permitted.
|
QuantFactory/Llama-3-Instruct-8B-RDPO-GGUF | QuantFactory | 2024-05-29T02:16:31Z | 449 | 0 | transformers | [
"transformers",
"gguf",
"text-generation",
"arxiv:2405.14734",
"base_model:princeton-nlp/Llama-3-Instruct-8B-RDPO",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-29T01:12:26Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model: princeton-nlp/Llama-3-Instruct-8B-RDPO
---
# QuantFactory/Llama-3-Instruct-8B-RDPO-GGUF
This is quantized version of [princeton-nlp/Llama-3-Instruct-8B-RDPO](https://huggingface.co/princeton-nlp/Llama-3-Instruct-8B-RDPO) created using llama.cpp
# Model Description
This is a model released from the preprint: *[SimPO: Simple Preference Optimization with a Reference-Free Reward](https://arxiv.org/abs/2405.14734)* Please refer to our [repository](https://github.com/princeton-nlp/SimPO) for more details.
|
mradermacher/BagelMIsteryTour-8x7B-i1-GGUF | mradermacher | 2024-06-13T02:52:11Z | 449 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:ycros/BagelMIsteryTour-8x7B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-12T18:32:31Z | ---
base_model: ycros/BagelMIsteryTour-8x7B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ycros/BagelMIsteryTour-8x7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/BagelMIsteryTour-8x7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
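For example, a single quant file from this repository can be fetched with `huggingface_hub` (an illustrative sketch, not from the original card; the filename comes from the table below):

```python
# Download one quant file and print the local path to pass to a GGUF-capable client.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/BagelMIsteryTour-8x7B-i1-GGUF",
    filename="BagelMIsteryTour-8x7B.i1-Q4_K_M.gguf",
)
print(path)
```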
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/BagelMIsteryTour-8x7B-i1-GGUF/resolve/main/BagelMIsteryTour-8x7B.i1-IQ1_S.gguf) | i1-IQ1_S | 9.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/BagelMIsteryTour-8x7B-i1-GGUF/resolve/main/BagelMIsteryTour-8x7B.i1-IQ1_M.gguf) | i1-IQ1_M | 10.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/BagelMIsteryTour-8x7B-i1-GGUF/resolve/main/BagelMIsteryTour-8x7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 12.7 | |
| [GGUF](https://huggingface.co/mradermacher/BagelMIsteryTour-8x7B-i1-GGUF/resolve/main/BagelMIsteryTour-8x7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 14.0 | |
| [GGUF](https://huggingface.co/mradermacher/BagelMIsteryTour-8x7B-i1-GGUF/resolve/main/BagelMIsteryTour-8x7B.i1-IQ2_S.gguf) | i1-IQ2_S | 14.2 | |
| [GGUF](https://huggingface.co/mradermacher/BagelMIsteryTour-8x7B-i1-GGUF/resolve/main/BagelMIsteryTour-8x7B.i1-IQ2_M.gguf) | i1-IQ2_M | 15.6 | |
| [GGUF](https://huggingface.co/mradermacher/BagelMIsteryTour-8x7B-i1-GGUF/resolve/main/BagelMIsteryTour-8x7B.i1-Q2_K.gguf) | i1-Q2_K | 17.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/BagelMIsteryTour-8x7B-i1-GGUF/resolve/main/BagelMIsteryTour-8x7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 18.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/BagelMIsteryTour-8x7B-i1-GGUF/resolve/main/BagelMIsteryTour-8x7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/BagelMIsteryTour-8x7B-i1-GGUF/resolve/main/BagelMIsteryTour-8x7B.i1-IQ3_S.gguf) | i1-IQ3_S | 20.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/BagelMIsteryTour-8x7B-i1-GGUF/resolve/main/BagelMIsteryTour-8x7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 20.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/BagelMIsteryTour-8x7B-i1-GGUF/resolve/main/BagelMIsteryTour-8x7B.i1-IQ3_M.gguf) | i1-IQ3_M | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/BagelMIsteryTour-8x7B-i1-GGUF/resolve/main/BagelMIsteryTour-8x7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 22.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/BagelMIsteryTour-8x7B-i1-GGUF/resolve/main/BagelMIsteryTour-8x7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 24.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/BagelMIsteryTour-8x7B-i1-GGUF/resolve/main/BagelMIsteryTour-8x7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 25.2 | |
| [GGUF](https://huggingface.co/mradermacher/BagelMIsteryTour-8x7B-i1-GGUF/resolve/main/BagelMIsteryTour-8x7B.i1-Q4_0.gguf) | i1-Q4_0 | 26.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/BagelMIsteryTour-8x7B-i1-GGUF/resolve/main/BagelMIsteryTour-8x7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 26.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/BagelMIsteryTour-8x7B-i1-GGUF/resolve/main/BagelMIsteryTour-8x7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 28.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BagelMIsteryTour-8x7B-i1-GGUF/resolve/main/BagelMIsteryTour-8x7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 32.3 | |
| [GGUF](https://huggingface.co/mradermacher/BagelMIsteryTour-8x7B-i1-GGUF/resolve/main/BagelMIsteryTour-8x7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 33.3 | |
| [GGUF](https://huggingface.co/mradermacher/BagelMIsteryTour-8x7B-i1-GGUF/resolve/main/BagelMIsteryTour-8x7B.i1-Q6_K.gguf) | i1-Q6_K | 38.5 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
Ali-C137/Q2H10M-0000 | Ali-C137 | 2024-06-21T12:37:12Z | 449 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-18T12:29:27Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
empgces/dre_tiny_gguf_200M | empgces | 2024-06-24T17:45:03Z | 449 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/tinyllama-chat-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-24T17:35:23Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/tinyllama-chat-bnb-4bit
---
# Uploaded model
- **Developed by:** empgces
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-chat-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/amazingvince_-_zephyr-smol_llama-100m-sft-full-gguf | RichardErkhov | 2024-06-27T21:51:12Z | 449 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-06-27T21:16:28Z | Entry not found |
NikolayKozloff/RoGemma-7b-Instruct-Q4_0-GGUF | NikolayKozloff | 2024-06-30T19:27:40Z | 449 | 1 | null | [
"gguf",
"text-generation-inference",
"ro",
"region:us"
]
| null | 2024-06-30T18:15:56Z | ---
language:
- ro
tags:
- text-generation-inference
--- |
NYTK/text-generation-news-gpt2-small-hungarian | NYTK | 2023-01-10T09:15:53Z | 448 | 2 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"hu",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2022-03-02T23:29:04Z | ---
language:
- hu
tags:
- text-generation
license: mit
widget:
- text: Szeptember végén zárul a balatoni szezon
---
# Hungarian GPT-2 news generator
For further models, scripts and details, see [our repository](https://github.com/nytud/neural-models) or [our demo site](https://juniper.nytud.hu/demo/nlp).
- Pretrained on Hungarian Wikipedia
- Finetuned on hin corpus (hvg.hu, index.hu, nol.hu)
## Results
| Model | Perplexity |
| ------------- | ------------- |
| GPT-2 poem | 47.46 |
| **GPT-2 news** | **22.06** |
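## Usage

The following is an illustrative sketch (not from the original card) of generating news text with the standard `transformers` text-generation pipeline; the sampling settings are arbitrary choices.

```python
# Illustrative usage; sampling settings are arbitrary.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="NYTK/text-generation-news-gpt2-small-hungarian",
)
out = generator(
    "Szeptember végén zárul a balatoni szezon",  # the widget example above
    max_new_tokens=80,
    do_sample=True,
    top_p=0.95,
)
print(out[0]["generated_text"])
```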
## Citation
If you use this model, please cite the following paper:
```
@inproceedings {yang-gpt2,
    title = {{"Az invazív medvék nem tolerálják a suzukis agressziót" - Magyar GPT-2 kísérleti modell}},
    booktitle = {XVIII. Magyar Számítógépes Nyelvészeti Konferencia},
    year = {2022},
    publisher = {Szegedi Tudományegyetem, Informatikai Intézet},
    address = {Szeged, Magyarország},
    author = {Yang, Zijian Győző},
pages = {463--476}
}
``` |
mgrella/autonlp-bank-transaction-classification-5521155 | mgrella | 2024-02-17T20:58:38Z | 448 | 22 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"autonlp",
"it",
"dataset:mgrella/autonlp-data-bank-transaction-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | ---
tags:
- autonlp
language: it
widget:
- text: "I love AutoNLP ๐ค"
datasets:
- mgrella/autonlp-data-bank-transaction-classification
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 5521155
## Validation Metrics
- Loss: 1.3173143863677979
- Accuracy: 0.8220706757594545
- Macro F1: 0.5713688384455807
- Micro F1: 0.8220706757594544
- Weighted F1: 0.8217158913702755
- Macro Precision: 0.6064387992817253
- Micro Precision: 0.8220706757594545
- Weighted Precision: 0.8491515834140735
- Macro Recall: 0.5873349311175117
- Micro Recall: 0.8220706757594545
- Weighted Recall: 0.8220706757594545
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/mgrella/autonlp-bank-transaction-classification-5521155
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("mgrella/autonlp-bank-transaction-classification-5521155", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("mgrella/autonlp-bank-transaction-classification-5521155", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
circulus/sd-photoreal-photo-v2 | circulus | 2023-02-20T16:00:35Z | 448 | 7 | diffusers | [
"diffusers",
"generative ai",
"stable-diffusion",
"image-to-image",
"realism",
"art",
"text-to-image",
"en",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-01-15T06:13:08Z | ---
license: gpl-3.0
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- generative ai
- stable-diffusion
- image-to-image
- realism
- art
---
Photoreal Photo v2
Finetuned Stable Diffusion 1.5 for generating images
You can test this model here >
https://eva.circul.us/index.html

 |
liujch1998/vera | liujch1998 | 2023-10-23T20:58:42Z | 448 | 12 | transformers | [
"transformers",
"pytorch",
"t5",
"text-classification",
"en",
"arxiv:2305.03695",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-classification | 2023-03-24T17:46:55Z | ---
license: mit
language:
- en
pipeline_tag: text-classification
arxiv: 2305.03695
---
# Model Card for Vera
<!-- Provide a quick summary of what the model is/does. -->
Vera is a commonsense statement verification model. See our paper at: <https://arxiv.org/abs/2305.03695>.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
Given a commonsense statement as input, Vera predicts the plausibility of this statement.
Vera outputs a real-valued score in the range [0, 1].
A score of 1 means the statement is correct according to commonsense, and a score of 0 means the statement is incorrect.
This score is calibrated, so a score between 0 and 1 can be interpreted as Vera's confidence that the statement is correct.
- **Developed by:** Jiacheng Liu, Wenya Wang, Dianzhuo Wang, Noah A. Smith, Yejin Choi, Hannaneh Hajishirzi
- **Shared by:** [Jiacheng (Gary) Liu](https://liujch1998.github.io/)
- **Model type:** Transformers
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** T5-v1.1-XXL
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** <https://github.com/liujch1998/vera>
- **Paper:** <https://arxiv.org/abs/2305.03695>
- **Demo:** <https://huggingface.co/spaces/liujch1998/vera>
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
Vera is intended to predict the correctness of commonsense statements.
### Downstream Use
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
Vera can be used to detect commonsense errors made by generative LMs (e.g., ChatGPT), or filter noisy commonsense knowledge generated by other LMs (e.g., Rainier).
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
Vera is a research prototype and may make mistakes. Do not use for making critical decisions. It is intended to predict the correctness of commonsense statements, and may be unreliable when taking input out of this scope.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
See the **Limitations and Ethics Statement** section of our paper.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import torch
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained('liujch1998/vera')
model = transformers.T5EncoderModel.from_pretrained('liujch1998/vera')
model.D = model.shared.embedding_dim
linear = torch.nn.Linear(model.D, 1, dtype=model.dtype)
linear.weight = torch.nn.Parameter(model.shared.weight[32099, :].unsqueeze(0))
linear.bias = torch.nn.Parameter(model.shared.weight[32098, 0].unsqueeze(0))
model.eval()
t = model.shared.weight[32097, 0].item() # temperature for calibration
statement = 'Please enter a commonsense statement here.'
input_ids = tokenizer.batch_encode_plus([statement], return_tensors='pt', padding='longest', truncation='longest_first', max_length=128).input_ids
with torch.no_grad():
output = model(input_ids)
last_hidden_state = output.last_hidden_state
hidden = last_hidden_state[0, -1, :]
logit = linear(hidden).squeeze(-1)
logit_calibrated = logit / t
score_calibrated = logit_calibrated.sigmoid()
# score_calibrated is Vera's final output plausibility score
```
You may also refer to <https://huggingface.co/spaces/liujch1998/vera/blob/main/app.py#L27-L98> for implementation.
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```
@article{Liu2023VeraAG,
title={Vera: A General-Purpose Plausibility Estimation Model for Commonsense Statements},
author={Jiacheng Liu and Wenya Wang and Dianzhuo Wang and Noah A. Smith and Yejin Choi and Hanna Hajishirzi},
journal={ArXiv},
year={2023},
volume={abs/2305.03695}
}
```
## Model Card Contact
[Jiacheng (Gary) Liu](https://liujch1998.github.io/)
|
Dzeniks/roberta-fact-check | Dzeniks | 2023-05-04T16:37:49Z | 448 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-04-01T16:35:13Z | ---
license: mit
pipeline_tag: text-classification
---
# Roberta-Fact-Check Model
The Roberta-Fact-Check Model is a deep learning model that uses the Roberta architecture for text classification. It is designed to classify claims as either supported or refuted based on the provided evidence.
## Model Training
The model was trained using the Adam optimizer with a learning rate of 2e-4, an epsilon of 1e-8, and a weight decay of 2e-8. The training dataset mainly consisted of the FEVER and Hover datasets, along with a small sample of manually created data.
## Input and Output
The model takes a claim and corresponding evidence as input and returns a label indicating whether the evidence supports or refutes the claim. The two possible labels are:
- 0: Supports
- 1: Refutes
## Usage
To use the Roberta-Fact-Check Model, you can simply pass in a claim and evidence as input to the model and receive a label indicating whether the evidence supports or refutes the claim. The model can be integrated into various applications for fact-checking and misinformation detection.
```python
import torch
from transformers import RobertaTokenizer, RobertaForSequenceClassification
# Load the tokenizer and model
tokenizer = RobertaTokenizer.from_pretrained('Dzeniks/roberta-fact-check')
model = RobertaForSequenceClassification.from_pretrained('Dzeniks/roberta-fact-check')
# Define the claim with evidence to classify
claim = "Albert Einstein work in the field of computer science"
evidence = "Albert Einstein was a German-born theoretical physicist, widely acknowledged to be one of the greatest and most influential physicists of all time."
# Tokenize the claim with evidence
x = tokenizer.encode_plus(claim, evidence, return_tensors="pt")
model.eval()
with torch.no_grad():
prediction = model(**x)
    label = torch.argmax(prediction.logits).item()
print(f"Label: {label}")
```
## Acknowledgements
This model was developed using the Hugging Face transformers library and trained on the FEVER and Hover datasets. We would like to thank the developers of these datasets for their contributions to the community.
## Disclaimer
While the Roberta-Fact-Check Model has been trained on a large dataset and can provide accurate results in many cases, it may not always provide correct results. Users should always exercise caution when making decisions based on the output of any machine learning model.
|
osiria/diablo-italian-chatbot-1.3b | osiria | 2023-07-02T13:56:02Z | 448 | 0 | transformers | [
"transformers",
"pytorch",
"xglm",
"it",
"arxiv:2005.14165",
"arxiv:2112.10668",
"arxiv:2004.13637",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2023-06-26T22:55:53Z | ---
license: mit
language:
- it
---
--------------------------------------------------------------------------------------------------
<body>
<span class="vertical-text" style="background-color:lightgreen;border-radius: 3px;padding: 3px;">โ</span>
<br>
<span class="vertical-text" style="background-color:orange;border-radius: 3px;padding: 3px;">โโโโTask: CHAT</span>
<br>
<span class="vertical-text" style="background-color:lightblue;border-radius: 3px;padding: 3px;">โโโโModel: DIABLO ๐ฅ</span>
<br>
<span class="vertical-text" style="background-color:tomato;border-radius: 3px;padding: 3px;">โโโโLang: IT</span>
<br>
<span class="vertical-text" style="background-color:lightgrey;border-radius: 3px;padding: 3px;">โโ</span>
<br>
<span class="vertical-text" style="background-color:#CF9FFF;border-radius: 3px;padding: 3px;">โ</span>
</body>
--------------------------------------------------------------------------------------------------
<h3>Model description</h3>
This model is a <b>conversational</b> language model for the <b>Italian</b> language, based on a GPT-like <b>[1]</b> architecture (more specifically, the model has been obtained by modifying Meta's XGLM architecture <b>[2]</b> and exploiting its 1.7B checkpoint).
The model has been trained on a corpus of \~50K Italian conversational exchanges for \~3 epochs (\~15K steps with a batch size of 10), using 3 different learning rates (1e-5, 2e-6, 1e-6) and exploiting FP16 quantization to manage the considerable size of the model.
The training corpus has been built by using Meta's Blenderbot <b>[3]</b> to generate 50K conversational exchanges in English, and then translating them to the Italian language using a machine translation model.
The current release is designed for brief and informal conversations (small talk) covering light topics (mainly food, entertainment and holidays), but several generalizations and improvements will be introduced in future releases.
<h3>Example</h3>
This is an example of intended use of the model, for brief and informal conversations:

<h3>Quick usage</h3>
In order to use the model for inference, the following pipeline is needed:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
import re
tokenizer = AutoTokenizer.from_pretrained("osiria/diablo-italian-chatbot-1.3b")
model = AutoModelForCausalLM.from_pretrained("osiria/diablo-italian-chatbot-1.3b")
device = torch.device("cpu")
model = model.to(device)
model.eval()
class Diablo:
def __init__(self, tokenizer, model):
self.tokenizer = tokenizer
self.model = model
def _check_sublist(self, lst, sub_lst, sep = " "):
l_type = type(lst[0])
lst = sep.join(list(map(str, lst)))
sub_lst = sep.join(list(map(str, sub_lst)))
return sub_lst in lst
def _exclude_sublist(self, lst, sub_lst, sep = " "):
l_type = type(lst[0])
lst = sep.join(list(map(str, lst)))
sub_lst = sep.join(list(map(str, sub_lst)))
lst = re.sub("\s+", " ", lst.replace(sub_lst, "")).strip().split(sep)
lst = list(map(l_type, lst))
return lst
def generate(self, prompt, sep = "|", max_tokens = 100, excluded = [[40, 19]],
lookback = 1, stop_tokens = [5, 27, 33], sample = False, top_k = 3):
tokens = tokenizer.encode(prompt + sep)
tokens_generated = []
while tokens[-1] not in stop_tokens and len(tokens) < max_tokens:
output = model.forward(input_ids=torch.tensor([tokens]).to(device)).logits[0,-1]
output = torch.softmax(output, dim = 0)
candidates = torch.topk(output, k = top_k)
if sample:
indices = candidates.indices
scores = candidates.values
next_token = indices[torch.multinomial(scores, 1)[0].item()]
else:
next_token = candidates.indices[0]
next_token = next_token.item()
sub_tokens = tokens_generated[-lookback:] + [next_token]
if len(tokens_generated) >= (lookback + 1) and next_token in tokens_generated[-(lookback + 1):]:
next_token = candidates.indices[1]
next_token = next_token.item()
elif len(tokens_generated) >= lookback and self._check_sublist(tokens_generated, sub_tokens):
next_token = candidates.indices[1]
next_token = next_token.item()
tokens = tokens + [next_token]
tokens_generated = tokens_generated + [next_token]
for ex_lst in excluded:
tokens = self._exclude_sublist(tokens, ex_lst)
output = tokenizer.decode(tokens, skip_special_tokens=True)
output = output.split(sep)[-1].strip()
output = output[0].upper() + output[1:]
if output[-1] == tokenizer.decode(stop_tokens[0]):
output = output[:-1]
return output
diablo = Diablo(tokenizer = tokenizer, model = model)
prompt = "Ciao, come stai?"
# setting "sample = True" the model will be more creative but occasionally less accurate
print("OUTPUT:", diablo.generate(prompt, sample = False))
# OUTPUT: Sto bene, grazie
```
<h3>Limitations</h3>
This model has been mainly trained on machine-translated (and synthetic) conversational data, so it might behave erratically when presented with prompts which are too far away from its training set.
Moreover, the heterogeneous nature of the pretraining dataset, together with the limits of the conversational data, might lead the model to produce biased or offensive content with respect to gender, race, ideologies, and political or religious beliefs.
These limitations imply that the model and its outputs should be used with caution, and should not be involved in situations that require the generated text to be fair or true.
<h3>References</h3>
[1] https://arxiv.org/abs/2005.14165
[2] https://arxiv.org/abs/2112.10668
[3] https://arxiv.org/pdf/2004.13637.pdf
<h3>License</h3>
The model is released under <b>MIT</b> license
|
digiplay/Realisian_v5 | digiplay | 2023-07-12T12:47:08Z | 448 | 7 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-07-12T12:11:07Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info :
https://civitai.com/models/47130?modelVersionId=115942
Sample images I made



Original Author's DEMO image :
 |
TheBloke/Chronoboros-33B-GGUF | TheBloke | 2023-09-27T12:53:19Z | 448 | 8 | transformers | [
"transformers",
"gguf",
"llama",
"base_model:Henk717/chronoboros-33B",
"license:other",
"text-generation-inference",
"region:us"
]
| null | 2023-09-20T01:52:14Z | ---
license: other
model_name: Chronoboros 33B
base_model: Henk717/chronoboros-33B
inference: false
model_creator: Henky!!
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Chronoboros 33B - GGUF
- Model creator: [Henky!!](https://huggingface.co/Henk717)
- Original model: [Chronoboros 33B](https://huggingface.co/Henk717/chronoboros-33B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Henk717's Chronoboros 33B](https://huggingface.co/Henk717/chronoboros-33B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Chronoboros-33B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Chronoboros-33B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Chronoboros-33B-GGUF)
* [Henky!!'s original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Henk717/chronoboros-33B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
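As a rough sanity check, the bpw figures above can be multiplied out against an approximate parameter count. The sketch below assumes roughly 32.5 billion parameters for a 33B LLaMA model (an approximation for illustration only); files that mix several quant types per tensor, such as Q4_K_M, will come out somewhat larger than this single-type estimate.
```python
# Rough, unofficial sanity check: file size ≈ parameter count × bits-per-weight / 8.
# The ~32.5e9 parameter count for a "33B" LLaMA model is an assumption for illustration;
# real files mix quant types per tensor and include metadata, so sizes differ somewhat.
params = 32.5e9

for name, bpw in [("Q3_K (type-0)", 3.4375), ("Q4_K", 4.5), ("Q5_K", 5.5), ("Q6_K", 6.5625)]:
    size_gb = params * bpw / 8 / 1e9
    print(f"{name}: ~{size_gb:.1f} GB")
```
These estimates land close to the sizes listed in the Provided Files table below.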
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [chronoboros-33b.Q2_K.gguf](https://huggingface.co/TheBloke/Chronoboros-33B-GGUF/blob/main/chronoboros-33b.Q2_K.gguf) | Q2_K | 2 | 13.50 GB| 16.00 GB | smallest, significant quality loss - not recommended for most purposes |
| [chronoboros-33b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Chronoboros-33B-GGUF/blob/main/chronoboros-33b.Q3_K_S.gguf) | Q3_K_S | 3 | 14.06 GB| 16.56 GB | very small, high quality loss |
| [chronoboros-33b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Chronoboros-33B-GGUF/blob/main/chronoboros-33b.Q3_K_M.gguf) | Q3_K_M | 3 | 15.76 GB| 18.26 GB | very small, high quality loss |
| [chronoboros-33b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Chronoboros-33B-GGUF/blob/main/chronoboros-33b.Q3_K_L.gguf) | Q3_K_L | 3 | 17.28 GB| 19.78 GB | small, substantial quality loss |
| [chronoboros-33b.Q4_0.gguf](https://huggingface.co/TheBloke/Chronoboros-33B-GGUF/blob/main/chronoboros-33b.Q4_0.gguf) | Q4_0 | 4 | 18.36 GB| 20.86 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [chronoboros-33b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Chronoboros-33B-GGUF/blob/main/chronoboros-33b.Q4_K_S.gguf) | Q4_K_S | 4 | 18.44 GB| 20.94 GB | small, greater quality loss |
| [chronoboros-33b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Chronoboros-33B-GGUF/blob/main/chronoboros-33b.Q4_K_M.gguf) | Q4_K_M | 4 | 19.62 GB| 22.12 GB | medium, balanced quality - recommended |
| [chronoboros-33b.Q5_0.gguf](https://huggingface.co/TheBloke/Chronoboros-33B-GGUF/blob/main/chronoboros-33b.Q5_0.gguf) | Q5_0 | 5 | 22.40 GB| 24.90 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [chronoboros-33b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Chronoboros-33B-GGUF/blob/main/chronoboros-33b.Q5_K_S.gguf) | Q5_K_S | 5 | 22.40 GB| 24.90 GB | large, low quality loss - recommended |
| [chronoboros-33b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Chronoboros-33B-GGUF/blob/main/chronoboros-33b.Q5_K_M.gguf) | Q5_K_M | 5 | 23.05 GB| 25.55 GB | large, very low quality loss - recommended |
| [chronoboros-33b.Q6_K.gguf](https://huggingface.co/TheBloke/Chronoboros-33B-GGUF/blob/main/chronoboros-33b.Q6_K.gguf) | Q6_K | 6 | 26.69 GB| 29.19 GB | very large, extremely low quality loss |
| [chronoboros-33b.Q8_0.gguf](https://huggingface.co/TheBloke/Chronoboros-33B-GGUF/blob/main/chronoboros-33b.Q8_0.gguf) | Q8_0 | 8 | 34.57 GB| 37.07 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Chronoboros-33B-GGUF and below it, a specific filename to download, such as: chronoboros-33b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Chronoboros-33B-GGUF chronoboros-33b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Chronoboros-33B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Chronoboros-33B-GGUF chronoboros-33b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m chronoboros-33b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Chronoboros-33B-GGUF", model_file="chronoboros-33b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
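Alternatively, here is a minimal llama-cpp-python sketch, assuming the Q4_K_M file has already been downloaded to the current directory (the prompt text and parameter values are illustrative examples only):
```python
from llama_cpp import Llama

# Assumes chronoboros-33b.Q4_K_M.gguf is in the current directory.
# Set n_gpu_layers to 0 if no GPU acceleration is available.
llm = Llama(model_path="./chronoboros-33b.Q4_K_M.gguf", n_ctx=2048, n_gpu_layers=32)

# Build a prompt following the Alpaca template shown above.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what GGUF is in one sentence.\n\n### Response:\n"
)
output = llm(prompt, max_tokens=128, stop=["### Instruction:"])
print(output["choices"][0]["text"])
```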
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjรคreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, ์ค๊ต ๊น, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, ้ฟๆ, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Henk717's Chronoboros 33B
This model was the result of a 50/50 average weight merge between Airoboros-33B-1.4 and Chronos-33B.
After prolonged testing, we concluded that while this merge is highly flexible and capable of many different tasks, it has too much variation in how it answers to be reliable.
Because of this, the model relies on some luck to get good results, and is therefore not recommended for people seeking a consistent experience, or for people sensitive to anticipation-based addictions.
If you would like an improved, more stable version of this model, check out my Airochronos-33B merge.
<!-- original-model-card end -->
|
TheBloke/MistralMakise-Merged-13B-GGUF | TheBloke | 2023-10-21T02:47:06Z | 448 | 5 | transformers | [
"transformers",
"gguf",
"llama",
"base_model:Heralax/MistralMakise-Merged-13b",
"license:apache-2.0",
"text-generation-inference",
"region:us"
]
| null | 2023-10-21T02:41:04Z | ---
base_model: Heralax/MistralMakise-Merged-13b
inference: false
license: apache-2.0
model_creator: Evan Armstrong
model_name: MistralMakise Merged 13B
model_type: llama
prompt_template: '## {{{{charname}}}}:
- You''re "{{{{charname}}}}" in this never-ending roleplay with "{{{{user}}}}".
### Input:
{prompt}
### Response:
(OOC) Understood. I will take this info into account for the roleplay. (end OOC)
### New Roleplay:
### Instruction:
#### {{{{char}}}}:
whatever the char says, this is the chat history
#### {{{{user}}}}:
whatever the user says, this is the chat history
... repeated some number of times ...
### Response 2 paragraphs, engaging, natural, authentic, descriptive, creative):
#### {{{{char}}}}:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# MistralMakise Merged 13B - GGUF
- Model creator: [Evan Armstrong](https://huggingface.co/Heralax)
- Original model: [MistralMakise Merged 13B](https://huggingface.co/Heralax/MistralMakise-Merged-13b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Evan Armstrong's MistralMakise Merged 13B](https://huggingface.co/Heralax/MistralMakise-Merged-13b).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/MistralMakise-Merged-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/MistralMakise-Merged-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/MistralMakise-Merged-13B-GGUF)
* [Evan Armstrong's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Heralax/MistralMakise-Merged-13b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: SillyTavern
```
## {{{{charname}}}}:
- You're "{{{{charname}}}}" in this never-ending roleplay with "{{{{user}}}}".
### Input:
{prompt}
### Response:
(OOC) Understood. I will take this info into account for the roleplay. (end OOC)
### New Roleplay:
### Instruction:
#### {{{{char}}}}:
whatever the char says, this is the chat history
#### {{{{user}}}}:
whatever the user says, this is the chat history
... repeated some number of times ...
### Response 2 paragraphs, engaging, natural, authentic, descriptive, creative):
#### {{{{char}}}}:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `apache-2.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Evan Armstrong's MistralMakise Merged 13B](https://huggingface.co/Heralax/MistralMakise-Merged-13b).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [mistralmakise-merged-13b.Q2_K.gguf](https://huggingface.co/TheBloke/MistralMakise-Merged-13B-GGUF/blob/main/mistralmakise-merged-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [mistralmakise-merged-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/MistralMakise-Merged-13B-GGUF/blob/main/mistralmakise-merged-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [mistralmakise-merged-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/MistralMakise-Merged-13B-GGUF/blob/main/mistralmakise-merged-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [mistralmakise-merged-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/MistralMakise-Merged-13B-GGUF/blob/main/mistralmakise-merged-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [mistralmakise-merged-13b.Q4_0.gguf](https://huggingface.co/TheBloke/MistralMakise-Merged-13B-GGUF/blob/main/mistralmakise-merged-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mistralmakise-merged-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/MistralMakise-Merged-13B-GGUF/blob/main/mistralmakise-merged-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [mistralmakise-merged-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/MistralMakise-Merged-13B-GGUF/blob/main/mistralmakise-merged-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [mistralmakise-merged-13b.Q5_0.gguf](https://huggingface.co/TheBloke/MistralMakise-Merged-13B-GGUF/blob/main/mistralmakise-merged-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [mistralmakise-merged-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/MistralMakise-Merged-13B-GGUF/blob/main/mistralmakise-merged-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [mistralmakise-merged-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/MistralMakise-Merged-13B-GGUF/blob/main/mistralmakise-merged-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [mistralmakise-merged-13b.Q6_K.gguf](https://huggingface.co/TheBloke/MistralMakise-Merged-13B-GGUF/blob/main/mistralmakise-merged-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [mistralmakise-merged-13b.Q8_0.gguf](https://huggingface.co/TheBloke/MistralMakise-Merged-13B-GGUF/blob/main/mistralmakise-merged-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/MistralMakise-Merged-13B-GGUF and below it, a specific filename to download, such as: mistralmakise-merged-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/MistralMakise-Merged-13B-GGUF mistralmakise-merged-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/MistralMakise-Merged-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/MistralMakise-Merged-13B-GGUF mistralmakise-merged-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m mistralmakise-merged-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "## {{{{charname}}}}:\n- You're \"{{{{charname}}}}\" in this never-ending roleplay with \"{{{{user}}}}\".\n### Input:\n{prompt}\n\n### Response:\n(OOC) Understood. I will take this info into account for the roleplay. (end OOC)\n\n### New Roleplay:\n### Instruction:\n#### {{{{char}}}}:\nwhatever the char says, this is the chat history\n#### {{{{user}}}}:\nwhatever the user says, this is the chat history\n... repeated some number of times ...\n### Response 2 paragraphs, engaging, natural, authentic, descriptive, creative):\n#### {{{{char}}}}:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/MistralMakise-Merged-13B-GGUF", model_file="mistralmakise-merged-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
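For a quick start, here is a minimal LangChain + llama-cpp-python sketch, assuming the GGUF file is present locally; the `LlamaCpp` import path shown matches LangChain releases from around this time (newer versions expose the same class from `langchain_community.llms`):
```python
# Minimal sketch: LangChain's LlamaCpp wrapper over a local GGUF file.
# Requires llama-cpp-python and langchain to be installed; parameter values are
# illustrative examples only.
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./mistralmakise-merged-13b.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=32,   # set to 0 if no GPU acceleration is available
    temperature=0.7,
)
print(llm("AI is going to"))
```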
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, ์ค๊ต ๊น, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjรคreholt, ้ฟๆ, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Evan Armstrong's MistralMakise Merged 13B
The same dataset and training settings as MythoMakise, but with the ReMM mistral model as a base. See that model card for more info on prompt formats and such.
<!-- original-model-card end -->
|
TheBloke/SciPhi-Mistral-7B-32k-GGUF | TheBloke | 2023-10-30T02:49:15Z | 448 | 14 | transformers | [
"transformers",
"gguf",
"mistral",
"base_model:SciPhi/SciPhi-Mistral-7B-32k",
"text-generation-inference",
"region:us"
]
| null | 2023-10-29T21:52:43Z | ---
base_model: SciPhi/SciPhi-Mistral-7B-32k
inference: false
model_creator: SciPhi
model_name: SciPhi Mistral 7B 32K
model_type: mistral
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# SciPhi Mistral 7B 32K - GGUF
- Model creator: [SciPhi](https://huggingface.co/SciPhi)
- Original model: [SciPhi Mistral 7B 32K](https://huggingface.co/SciPhi/SciPhi-Mistral-7B-32k)
<!-- description start -->
## Description
This repo contains GGUF format model files for [SciPhi's SciPhi Mistral 7B 32K](https://huggingface.co/SciPhi/SciPhi-Mistral-7B-32k).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/SciPhi-Mistral-7B-32k-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/SciPhi-Mistral-7B-32k-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/SciPhi-Mistral-7B-32k-GGUF)
* [SciPhi's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/SciPhi/SciPhi-Mistral-7B-32k)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Unknown
```
{prompt}
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [sciphi-mistral-7b-32k.Q2_K.gguf](https://huggingface.co/TheBloke/SciPhi-Mistral-7B-32k-GGUF/blob/main/sciphi-mistral-7b-32k.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [sciphi-mistral-7b-32k.Q3_K_S.gguf](https://huggingface.co/TheBloke/SciPhi-Mistral-7B-32k-GGUF/blob/main/sciphi-mistral-7b-32k.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [sciphi-mistral-7b-32k.Q3_K_M.gguf](https://huggingface.co/TheBloke/SciPhi-Mistral-7B-32k-GGUF/blob/main/sciphi-mistral-7b-32k.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [sciphi-mistral-7b-32k.Q3_K_L.gguf](https://huggingface.co/TheBloke/SciPhi-Mistral-7B-32k-GGUF/blob/main/sciphi-mistral-7b-32k.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [sciphi-mistral-7b-32k.Q4_0.gguf](https://huggingface.co/TheBloke/SciPhi-Mistral-7B-32k-GGUF/blob/main/sciphi-mistral-7b-32k.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [sciphi-mistral-7b-32k.Q4_K_S.gguf](https://huggingface.co/TheBloke/SciPhi-Mistral-7B-32k-GGUF/blob/main/sciphi-mistral-7b-32k.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [sciphi-mistral-7b-32k.Q4_K_M.gguf](https://huggingface.co/TheBloke/SciPhi-Mistral-7B-32k-GGUF/blob/main/sciphi-mistral-7b-32k.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [sciphi-mistral-7b-32k.Q5_0.gguf](https://huggingface.co/TheBloke/SciPhi-Mistral-7B-32k-GGUF/blob/main/sciphi-mistral-7b-32k.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [sciphi-mistral-7b-32k.Q5_K_S.gguf](https://huggingface.co/TheBloke/SciPhi-Mistral-7B-32k-GGUF/blob/main/sciphi-mistral-7b-32k.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [sciphi-mistral-7b-32k.Q5_K_M.gguf](https://huggingface.co/TheBloke/SciPhi-Mistral-7B-32k-GGUF/blob/main/sciphi-mistral-7b-32k.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [sciphi-mistral-7b-32k.Q6_K.gguf](https://huggingface.co/TheBloke/SciPhi-Mistral-7B-32k-GGUF/blob/main/sciphi-mistral-7b-32k.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [sciphi-mistral-7b-32k.Q8_0.gguf](https://huggingface.co/TheBloke/SciPhi-Mistral-7B-32k-GGUF/blob/main/sciphi-mistral-7b-32k.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/SciPhi-Mistral-7B-32k-GGUF and below it, a specific filename to download, such as: sciphi-mistral-7b-32k.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/SciPhi-Mistral-7B-32k-GGUF sciphi-mistral-7b-32k.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/SciPhi-Mistral-7B-32k-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/SciPhi-Mistral-7B-32k-GGUF sciphi-mistral-7b-32k.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m sciphi-mistral-7b-32k.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/SciPhi-Mistral-7B-32k-GGUF", model_file="sciphi-mistral-7b-32k.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)
print(llm("AI is going to"))
```
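The same ctransformers object can also stream tokens as they are generated, which is useful for interactive use. A minimal sketch, assuming the `stream` option documented by ctransformers (parameter values are examples only):
```python
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/SciPhi-Mistral-7B-32k-GGUF",
    model_file="sciphi-mistral-7b-32k.Q4_K_M.gguf",
    model_type="mistral",
    gpu_layers=50,  # set to 0 if no GPU acceleration is available
)

# Print tokens as they are produced instead of waiting for the full completion.
for text in llm("AI is going to", stream=True, max_new_tokens=128):
    print(text, end="", flush=True)
print()
```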
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, ้ฟๆ, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjรคreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: SciPhi's SciPhi Mistral 7B 32K
# SciPhi-Mistral-7B-32k Model Card
**License:** llama2
The SciPhi-Mistral-7B-32k is a Large Language Model (LLM) fine-tuned from Mistral-7B-v0.1. This model underwent a fine-tuning process over four epochs using more than 1 billion tokens, which included regular instruction-tuning data and synthetic textbooks. The objective of this work was to increase the model's scientific reasoning and educational abilities.
## Model Architecture
Base Model: Mistral-7B-v0.1
**Architecture Features:**
- Transformer-based model
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
## References
1. Lian, W., Goodson, B., Wang, G., Pentland, E., Cook, A., Vong, C., & Teknium. (2023). MistralOrca: Mistral-7B Model Instruct-tuned on Filtered OpenOrcaV1 GPT-4 Dataset. *HuggingFace repository*. [Link](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)
2. Mukherjee, S., Mitra, A., Jawahar, G., Agarwal, S., Palangi, H., & Awadallah, A. (2023). Orca: Progressive Learning from Complex Explanation Traces of GPT-4. *arXiv preprint arXiv:2306.02707*.
3. Longpre, S., Hou, L., Vu, T., Webson, A., Chung, H. W., Tay, Y., Zhou, D., Le, Q. V., Zoph, B., Wei, J., & Roberts, A. (2023). The Flan Collection: Designing Data and Methods for Effective Instruction Tuning. *arXiv preprint arXiv:2301.13688*.
4. Mistral AI. (2023). Model Card for Mistral-7B-v0.1. The Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters. Mistral-7B-v0.1 outperforms Llama 2 13B on all benchmarks tested. For full details, please refer to the paper and release blog post. Model Architecture: Transformer with Grouped-Query Attention, Sliding-Window Attention, and Byte-fallback BPE tokenizer. [Link](https://huggingface.co/mistralai/Mistral-7B-v0.1)
## Acknowledgements
Thank you to the [AI Alignment Lab](https://huggingface.co/Alignment-Lab-AI), [vikp](https://huggingface.co/vikp), [jph00](https://huggingface.co/jph00) and others who contributed to this work.
<!-- original-model-card end -->
|
TheBloke/openbuddy-zephyr-7B-v14.1-GGUF | TheBloke | 2023-11-06T16:04:17Z | 448 | 12 | transformers | [
"transformers",
"gguf",
"mistral",
"base_model:OpenBuddy/openbuddy-zephyr-7b-v14.1",
"license:apache-2.0",
"text-generation-inference",
"region:us"
]
| null | 2023-11-06T14:54:31Z | ---
base_model: OpenBuddy/openbuddy-zephyr-7b-v14.1
inference: false
license: apache-2.0
model_creator: OpenBuddy
model_name: Openbuddy Zephyr 7B v14.1
model_type: mistral
prompt_template: "You are a helpful, respectful and honest INTP-T AI Assistant named\
\ Buddy. You are talking to a human User.\nAlways answer as helpfully and logically\
\ as possible, while being safe. Your answers should not include any harmful, political,\
\ religious, unethical, racist, sexist, toxic, dangerous, or illegal content. Please\
\ ensure that your responses are socially unbiased and positive in nature.\nIf a\
\ question does not make any sense, or is not factually coherent, explain why instead\
\ of answering something not correct. If you don't know the answer to a question,\
\ please don't share false information.\nYou like to use emojis. You can speak fluently\
\ in many languages, for example: English, Chinese.\nYou cannot access the internet,\
\ but you have vast knowledge, cutoff: 2021-09.\nYou are trained by OpenBuddy team,\
\ (https://openbuddy.ai, https://github.com/OpenBuddy/OpenBuddy), you are based\
\ on LLaMA and Falcon transformers model, not related to GPT or OpenAI.\n\nUser:\
\ {prompt}\nAssistant: \n"
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Openbuddy Zephyr 7B v14.1 - GGUF
- Model creator: [OpenBuddy](https://huggingface.co/OpenBuddy)
- Original model: [Openbuddy Zephyr 7B v14.1](https://huggingface.co/OpenBuddy/openbuddy-zephyr-7b-v14.1)
<!-- description start -->
## Description
This repo contains GGUF format model files for [OpenBuddy's Openbuddy Zephyr 7B v14.1](https://huggingface.co/OpenBuddy/openbuddy-zephyr-7b-v14.1).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/openbuddy-zephyr-7B-v14.1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/openbuddy-zephyr-7B-v14.1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/openbuddy-zephyr-7B-v14.1-GGUF)
* [OpenBuddy's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/OpenBuddy/openbuddy-zephyr-7b-v14.1)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: OpenBuddy
```
You are a helpful, respectful and honest INTP-T AI Assistant named Buddy. You are talking to a human User.
Always answer as helpfully and logically as possible, while being safe. Your answers should not include any harmful, political, religious, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
You like to use emojis. You can speak fluently in many languages, for example: English, Chinese.
You cannot access the internet, but you have vast knowledge, cutoff: 2021-09.
You are trained by OpenBuddy team, (https://openbuddy.ai, https://github.com/OpenBuddy/OpenBuddy), you are based on LLaMA and Falcon transformers model, not related to GPT or OpenAI.
User: {prompt}
Assistant:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [openbuddy-zephyr-7b-v14.1.Q2_K.gguf](https://huggingface.co/TheBloke/openbuddy-zephyr-7B-v14.1-GGUF/blob/main/openbuddy-zephyr-7b-v14.1.Q2_K.gguf) | Q2_K | 2 | 3.10 GB| 5.60 GB | smallest, significant quality loss - not recommended for most purposes |
| [openbuddy-zephyr-7b-v14.1.Q3_K_S.gguf](https://huggingface.co/TheBloke/openbuddy-zephyr-7B-v14.1-GGUF/blob/main/openbuddy-zephyr-7b-v14.1.Q3_K_S.gguf) | Q3_K_S | 3 | 3.19 GB| 5.69 GB | very small, high quality loss |
| [openbuddy-zephyr-7b-v14.1.Q3_K_M.gguf](https://huggingface.co/TheBloke/openbuddy-zephyr-7B-v14.1-GGUF/blob/main/openbuddy-zephyr-7b-v14.1.Q3_K_M.gguf) | Q3_K_M | 3 | 3.54 GB| 6.04 GB | very small, high quality loss |
| [openbuddy-zephyr-7b-v14.1.Q3_K_L.gguf](https://huggingface.co/TheBloke/openbuddy-zephyr-7B-v14.1-GGUF/blob/main/openbuddy-zephyr-7b-v14.1.Q3_K_L.gguf) | Q3_K_L | 3 | 3.85 GB| 6.35 GB | small, substantial quality loss |
| [openbuddy-zephyr-7b-v14.1.Q4_0.gguf](https://huggingface.co/TheBloke/openbuddy-zephyr-7B-v14.1-GGUF/blob/main/openbuddy-zephyr-7b-v14.1.Q4_0.gguf) | Q4_0 | 4 | 4.14 GB| 6.64 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [openbuddy-zephyr-7b-v14.1.Q4_K_S.gguf](https://huggingface.co/TheBloke/openbuddy-zephyr-7B-v14.1-GGUF/blob/main/openbuddy-zephyr-7b-v14.1.Q4_K_S.gguf) | Q4_K_S | 4 | 4.17 GB| 6.67 GB | small, greater quality loss |
| [openbuddy-zephyr-7b-v14.1.Q4_K_M.gguf](https://huggingface.co/TheBloke/openbuddy-zephyr-7B-v14.1-GGUF/blob/main/openbuddy-zephyr-7b-v14.1.Q4_K_M.gguf) | Q4_K_M | 4 | 4.39 GB| 6.89 GB | medium, balanced quality - recommended |
| [openbuddy-zephyr-7b-v14.1.Q5_0.gguf](https://huggingface.co/TheBloke/openbuddy-zephyr-7B-v14.1-GGUF/blob/main/openbuddy-zephyr-7b-v14.1.Q5_0.gguf) | Q5_0 | 5 | 5.03 GB| 7.53 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [openbuddy-zephyr-7b-v14.1.Q5_K_S.gguf](https://huggingface.co/TheBloke/openbuddy-zephyr-7B-v14.1-GGUF/blob/main/openbuddy-zephyr-7b-v14.1.Q5_K_S.gguf) | Q5_K_S | 5 | 5.03 GB| 7.53 GB | large, low quality loss - recommended |
| [openbuddy-zephyr-7b-v14.1.Q5_K_M.gguf](https://huggingface.co/TheBloke/openbuddy-zephyr-7B-v14.1-GGUF/blob/main/openbuddy-zephyr-7b-v14.1.Q5_K_M.gguf) | Q5_K_M | 5 | 5.16 GB| 7.66 GB | large, very low quality loss - recommended |
| [openbuddy-zephyr-7b-v14.1.Q6_K.gguf](https://huggingface.co/TheBloke/openbuddy-zephyr-7B-v14.1-GGUF/blob/main/openbuddy-zephyr-7b-v14.1.Q6_K.gguf) | Q6_K | 6 | 5.97 GB| 8.47 GB | very large, extremely low quality loss |
| [openbuddy-zephyr-7b-v14.1.Q8_0.gguf](https://huggingface.co/TheBloke/openbuddy-zephyr-7B-v14.1-GGUF/blob/main/openbuddy-zephyr-7b-v14.1.Q8_0.gguf) | Q8_0 | 8 | 7.74 GB| 10.24 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/openbuddy-zephyr-7B-v14.1-GGUF and below it, a specific filename to download, such as: openbuddy-zephyr-7b-v14.1.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/openbuddy-zephyr-7B-v14.1-GGUF openbuddy-zephyr-7b-v14.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/openbuddy-zephyr-7B-v14.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/openbuddy-zephyr-7B-v14.1-GGUF openbuddy-zephyr-7b-v14.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m openbuddy-zephyr-7b-v14.1.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "You are a helpful, respectful and honest INTP-T AI Assistant named Buddy. You are talking to a human User.\nAlways answer as helpfully and logically as possible, while being safe. Your answers should not include any harmful, political, religious, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\nYou like to use emojis. You can speak fluently in many languages, for example: English, Chinese.\nYou cannot access the internet, but you have vast knowledge, cutoff: 2021-09.\nYou are trained by OpenBuddy team, (https://openbuddy.ai, https://github.com/OpenBuddy/OpenBuddy), you are based on LLaMA and Falcon transformers model, not related to GPT or OpenAI.\n\nUser: {prompt}\nAssistant:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/openbuddy-zephyr-7B-v14.1-GGUF", model_file="openbuddy-zephyr-7b-v14.1.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)
print(llm("AI is going to"))
```
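A similar minimal sketch with `llama-cpp-python` (assuming you have downloaded the Q4_K_M file listed above into the current directory; adjust `n_gpu_layers` to your hardware, and wrap real prompts in the OpenBuddy template shown earlier):
```python
# Minimal llama-cpp-python sketch; model_path assumes a local copy of the
# Q4_K_M quant from this repo. Set n_gpu_layers=0 for CPU-only inference.
from llama_cpp import Llama

llm = Llama(
    model_path="./openbuddy-zephyr-7b-v14.1.Q4_K_M.gguf",
    n_ctx=2048,       # context length
    n_gpu_layers=32,  # number of layers to offload to the GPU
)

output = llm(
    "User: Write a short haiku about autumn.\nAssistant:",
    max_tokens=128,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```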
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, ้ฟๆ, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjรคreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: OpenBuddy's Openbuddy Zephyr 7B v14.1
<!-- original-model-card end -->
|
hfl/chinese-llama-2-13b-gguf | hfl | 2024-01-24T03:25:40Z | 448 | 3 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-11-16T03:21:16Z | ---
license: apache-2.0
---
# Chinese-LLaMA-2-13B-GGUF
This repository contains the GGUF-v3 models (llama.cpp compatible) for **Chinese-LLaMA-2-13B**.
## Performance
Metric: PPL, lower is better
| Quant | original | imatrix (`-im`) |
|-----|------|------|
| Q2_K | 14.4701 +/- 0.26107 | 17.4275 +/- 0.31909 |
| Q3_K | 10.1620 +/- 0.18277 | 9.7486 +/- 0.17744 |
| Q4_0 | 9.8633 +/- 0.17792 | - |
| Q4_K | 9.2735 +/- 0.16793 | 9.2734 +/- 0.16792 |
| Q5_0 | 9.3553 +/- 0.16945 | - |
| Q5_K | 9.1767 +/- 0.16634 | 9.1594 +/- 0.16590 |
| Q6_K | 9.1326 +/- 0.16546 | 9.1478 +/- 0.16583 |
| Q8_0 | 9.1394 +/- 0.16574 | - |
| F16 | 9.1050 +/- 0.16518 | - |
*The model with `-im` suffix is generated with an importance matrix (imatrix), which generally gives better performance (though not always).*
## Others
For Hugging Face version, please see: https://huggingface.co/hfl/chinese-llama-2-13b
Please refer to [https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/) for more details. |
arlineka/manbasya_2x7b_MOE | arlineka | 2024-02-11T18:17:38Z | 448 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
]
| text-generation | 2024-02-11T17:28:01Z | ---
license: apache-2.0
---
AWQ Quantized
```
!pip install git+https://github.com/huggingface/transformers.git -q
!pip install huggingface_hub
!pip install autoawq -q
```
```
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
import torch
# Assuming your model and tokenizer are loaded
model_name_or_path = "arlineka/manbasya_2x7b_MOE"
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True, trust_remote_code=False, safetensors=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False)
# Set device to CUDA if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Move model to the device
model.to(device)
# Prepare your input text and move input tensors to the same device
input_text = "Hello. Input Here"
input_ids = tokenizer.encode(input_text, return_tensors="pt").to(device)
# Now generate text with model and input tensors on the same device
output = model.generate(input_ids, max_new_tokens=2048) # Example usage, adjust as necessary
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
``` |
PassionFriend/5FhBfreSFENksVqW9MxoE6bkoqexwU5r5FgAf4pCtoKzonMQ_vgg | PassionFriend | 2024-03-01T06:50:06Z | 448 | 0 | keras | [
"keras",
"region:us"
]
| null | 2024-02-17T23:09:11Z | Entry not found |
cognitivecomputations/DolphinHermes-120b | cognitivecomputations | 2024-05-20T15:06:43Z | 448 | 8 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"base_model:NousResearch/Nous-Hermes-2-Llama-2-70B",
"base_model:cognitivecomputations/dolphin-2.2-70b",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-02-21T21:07:22Z | ---
base_model:
- NousResearch/Nous-Hermes-2-Llama-2-70B
- cognitivecomputations/dolphin-2.2-70b
library_name: transformers
tags:
- mergekit
- merge
license: llama2
---

# DolphinHermes-120b
Cheers @teknium
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
[](https://discord.gg/cognitivecomputations)
Discord: https://discord.gg/cognitivecomputations
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* [NousResearch/Nous-Hermes-2-Llama-2-70B](https://huggingface.co/NousResearch/Nous-Hermes-2-Llama-2-70B)
* [cognitivecomputations/dolphin-2.2-70b](https://huggingface.co/cognitivecomputations/dolphin-2.2-70b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: linear # use linear so we can include multiple models, albeit at a zero weight
parameters:
weight: 1.0 # weight everything as 1 unless specified otherwise - linear with one model weighted at 1 is a no-op like passthrough
slices:
- sources:
- model: cognitivecomputations/dolphin-2.2-70b # embed_tokens comes along with the ride with whatever is the first layer
layer_range: [0, 1]
- model: NousResearch/Nous-Hermes-2-Llama-2-70B # add dummy second model with 0 weight so tokenizer-based merge routine is invoked for embed_tokens
layer_range: [0, 1]
parameters:
weight: 0
- sources:
- model: cognitivecomputations/dolphin-2.2-70b
layer_range: [1, 20]
- sources:
- model: NousResearch/Nous-Hermes-2-Llama-2-70B
layer_range: [10, 30]
- sources:
- model: cognitivecomputations/dolphin-2.2-70b
layer_range: [20, 40]
- sources:
- model: NousResearch/Nous-Hermes-2-Llama-2-70B
layer_range: [30, 50]
- sources:
- model: cognitivecomputations/dolphin-2.2-70b
layer_range: [40, 60]
- sources:
- model: NousResearch/Nous-Hermes-2-Llama-2-70B
layer_range: [50, 70]
- sources:
- model: cognitivecomputations/dolphin-2.2-70b
layer_range: [60, 79]
- sources: # same as above, but for lm_head with the last layer
- model: cognitivecomputations/dolphin-2.2-70b
layer_range: [79, 80]
- model: NousResearch/Nous-Hermes-2-Llama-2-70B
layer_range: [79, 80]
parameters:
weight: 0
dtype: float16
tokenizer_source: model:cognitivecomputations/dolphin-2.2-70b # keep exact tokenizer used by dolphin - or you could use `union` if you add all of the input models to the first/last slice, but they would need to be non-zero weight or you'll get NaNs in your embeddings
``` |
Radiantloom/radiantloom-llama-70b-instruct | Radiantloom | 2024-02-23T23:29:06Z | 448 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"SFT",
"Llama",
"Llama 70B Chat",
"arxiv:2306.05685",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-02-23T05:40:51Z | ---
library_name: transformers
tags:
- SFT
- Llama
- Llama 70B Chat
license: llama2
---
<img src="https://huggingface.co/Radiantloom/radiantloom-llama-70b-instruct/resolve/main/radiantloom-llama-70b-instruct.jpeg" alt="Radiantloom Llama 70B Instruct" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
## Radiantloom Llama 70B Instruct
The Radiantloom Llama 70B Instruct, a large language model (LLM) developed by Radiantloom AI, features approximately 70 billion parameters. From vibes-check evaluations, the Radiantloom Llama 70B Instruct demonstrates great performance in various applications like creative writing, multi-turn conversations, in-context learning through Retrieval Augmented Generation (RAG), and coding tasks. Its out-of-the-box performance already delivers impressive results, particularly in writing, coding, and reasoning tasks. This model produces longer form content and provides detailed explanations of its actions. To maximize its potential, consider implementing instruction tuning and Reinforcement Learning with Human Feedback (RLHF) techniques for further refinement. Alternatively, you can utilize it in its current form.
## Intended Uses & Limitations
The Radiantloom Llama 70B Instruct is versatile and can be utilized for various text generation tasks such as summarization, chat, coding, question answering, retrieval augmented generation (RAG), role play, and content generation.
It can be considered a state-of-the-art generative language model; it demonstrates competitive performance on general tasks when compared to other open and closed-source large language models such as GPT-3.5 Turbo and Mixtral 8x7B Instruct.
## Model Usage
You can deploy it on a Runpod GPU instance using our [one-click Runpod template](https://www.runpod.io/console/gpu-secure-cloud?ref=80eh3891&template=tbkdia6qdv) (Our Referral Link. Please consider Supporting). This template provides you with an OpenAI-compatible API endpoint that you can integrate into your existing codebase designed for OpenAI APIs.
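Once the endpoint is running, it can be called with any OpenAI-compatible client. A minimal sketch is below; the base URL, API key, and model name are placeholders for whatever your own deployment exposes:
```python
# Sketch: calling an OpenAI-compatible endpoint with the openai>=1.0 client.
# base_url, api_key, and model are placeholders - use your deployment's values.
from openai import OpenAI

client = OpenAI(
    base_url="https://your-runpod-endpoint/v1",  # hypothetical endpoint URL
    api_key="sk-placeholder",                    # many self-hosted servers accept any key
)

response = client.chat.completions.create(
    model="radiantloom-llama-70b-instruct",      # model name exposed by the server
    messages=[
        {"role": "system", "content": "You are a helpful AI assistant developed by Radiantloom AI."},
        {"role": "user", "content": "Summarize the benefits of retrieval augmented generation."},
    ],
    max_tokens=512,
    temperature=0.7,
)
print(response.choices[0].message.content)
```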
## Prompt Template
We have fine-tuned this model using the ChatML format, and you can achieve optimal performance by utilizing the ChatML format.
```
<s>[INST] <<SYS>>{system_message}<</SYS>> {user_message}[/INST]
```
## Inference Code
```python
## install dependencies
## !pip install -q -U git+https://github.com/huggingface/transformers.git
## !pip install -q -U git+https://github.com/huggingface/accelerate.git
## !pip install -q -U bitsandbytes sentencepiece
## load model
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TextStreamer
model_name = "Radiantloom/radiantloom-llama-70b-instruct"
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_compute_dtype=torch.bfloat16
)
# load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
model_name,
quantization_config=bnb_config,
device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(
model_name,
trust_remote_code=True
)
tokenizer.pad_token = tokenizer.eos_token
## inference
prompt = """
<s>[INST]<<SYS>>You are a helpful AI assistant developed by Radiantloom AI.<</SYS>>Take a deep breath and think step-by-step. Write a tale about a time-traveling historian who's determined to witness the most significant events in human history.[/INST]
"""
# tokenize the input string
inputs = tokenizer(
prompt,
return_tensors="pt",
return_attention_mask=False
)
# generate text using the model
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
outputs = model.generate(**inputs, max_length=2048, streamer=streamer, do_sample=True, repetition_penalty=1.1)
# decode and print the output
text = tokenizer.batch_decode(outputs)[0]
```
Here are a few example use cases. if we query the model with
```
<s>[INST]<<SYS>>You are a helpful AI assistant developed by Radiantloom AI.<</SYS>>Take a deep breath and think step-by-step. Write a tale about a time-traveling historian who's determined to witness the most significant events in human history.[/INST]
```
the model will return
```
Story: The Time Chronicles of Professor Amelia Harcourt
Professor Amelia Harcourt was an exceptional historian, renowned for her remarkable contributions to the field of historical studies. However, her passion for understanding the past extended far beyond academics. She had always dreamt of witnessing firsthand the events that shaped the world as we know it. When she stumbled upon an ancient manuscript filled with cryptic symbols while working on a research project, she wasn't aware that it would change her life forever.
After translating the mysterious text, Amelia discovered that it contained blueprints for constructing an extraordinary device - a time machine that could transport its user across different eras. Fueled by an all-consuming fervor for exploring pivotal moments in history, Amelia spent her remaining savings on procuring the materials necessary to build the intricate apparatus. The process required tireless dedication and rigorous labor, but through a mixture of intelligence, skill, and unrelenting determination, she finally constructed a functioning time machine three years later.
With anticipation surging through her veins, Amelia embarked on a transcendent expedition to experience the most significant occurrences in human history. Her very first stop took place more than two million years ago: the East African Rift Valley, where homo habilis, our early ancestors, were believed to have emerged. In this remote corner of prehistory, Amelia studied their behavior closely and marveled at their resourcefulness. Their innovative use of tools and rudimentary social systems helped shape complex human societies of the future, making the encounter a moving lesson on the importance of adaptability.
The second destination brought Amelia even further back in time, landing directly in Earth's arctic region during the late Cretaceous period. This journey held both personal ambition and scientific curiosity; Amelia wished to better understand why dinosaurs went extinct. Upon arrival, the historian felt a wave of unease as she navigated through the treacherous environment filled with towering dinosaur species, including the voracious Tyrannosaurus rex. With immense caution, she conducted her analysis and came to the shocking conclusion that asteroid impacts, not volcanic activity, were primarily responsible for wiping out the majority of dinosaur species. As these creatures faced catastrophic conditions, mammals thrived and eventually evolved into humans.
Amelia continued her chronological pilgrimage with an extended stay at Stonehenge, one of the most mystifying historical sites she'd ever visited. After learning from several sources during her academic career, she hypothesized that the magnificent monument was built as part of a larger complex dedicated to worshipping gods tied to celestial bodies and fertility. Upon arriving in Neolithic Britain, Amelia watched in awe as people worked together to erect massive bluestones using ropes made from animal hides, wooden rollers, and levers carved from antlers. Witnessing this collaborative effort revealed a newfound appreciation for the technological ingenuity and social organization present in early human civilizations.
Throughout subsequent adventures, Amelia became captivated by the enlightening teachings of Socrates in Ancient Greece and traveled through the bustling metropolis of imperial Rome. These excursions showcased the timeless themes of wisdom, governance, justice, art, and culture. They illuminated various aspects of human nature and the inherent values shared across every epoch.
Overwhelmed by curiosity and a newfound sense of wonderment, Amelia decided to explore other transformative eras such as the discovery of the New World and the Industrial Revolution. Eventually, she found herself amidst the burgeoning civil rights movement in America during the 1960s, watching in admiration as advocates for racial equality peacefully rallied against systemic oppression and fought for equal rights.
As Professor Amelia Harcourt immersed herself in the diverse expanse of human history, her encounters revealed a profound understanding of progress. Each event she observed had contributed something unique and essential to the development, growth, and evolution of mankind. This revelation reminded her of an old saying: "Only those who look backward can truly see forward." Through her time travels, Amelia began recognizing the patterns and interconnections between different eras, gaining immense insight into the human spirit's resilience and capacity for change.
Inspired by her once-in-a-lifetime odyssey, Amelia Harcourt set forth creating The Time Chronicles-an extensive body of work featuring detailed accounts from her journeys through time. This collection transformed academic views regarding the past and offered generations to come priceless knowledge and fascinating perspectives vital to understanding our shared human story.
```
and if we query the model with
```
<s>[INST]<<SYS>>You are an AI assistant. Take a deep breath, think step-by-step and respond to following instructions from the user. <</SYS>>The cafeteria had 23 apples. If they used 20 for lunch and bought 6 more, how many apples do they have?[/INST]
```
the model will return
```
Let's break this down:
1. The cafeteria initially had 23 apples.
2. They used 20 for lunch. To find out how many apples were left after lunch, subtract the number of apples used for lunch (20) from the initial number of apples (23). This gives us 23 - 20 = 3 apples left after lunch.
3. Then they purchased 6 more apples. To find out the total number of apples now, add the number of apples left after lunch (3) to the number of apples bought recently (6). So we have 3 + 6 = 9 apples in total.
```
and if we query the model with
```
<s>[INST]<<SYS>>You are an AI assistant. Take a deep breath, think step-by-step and respond to following instructions from the user. <</SYS>>Write a function in Python that checks if a number is prime.[/INST]
```
the model will return
```
The user will provide you with a positive integer and your task is to check whether it can be divided by any other number (divisors) except for one and itself (the number).
Let's say the number given by the user is 2147483647. Your first thought process should include finding the square root of this number as that'll help narrow down potential divisors. Here, sqrt(2147483647) = 46340. Next, iterate over numbers from two up to 46340 checking if they divide evenly into 2147483647 (remainder is zero when modded). If such a divisor is found, then return False since it indicates 2147483647 has additional factors besides one and itself; thus it isn't prime. Otherwise, if no such divisor is found during iteration, then return True signaling that 2147483647 is indeed a prime number.
But make sure you also consider edge cases like 2, because some programming languages might use zero-based indexing.
Here's a possible solution:
```
```python
import math
def is_prime(n):
if n <= 1: # Edge case: numbers below 2 are not considered primes
return False
elif n == 2 or n == 3: # Special cases for small primes
return True
elif n % 2 == 0: # Quick filter for even numbers
return False
root = int(math.sqrt(n))
for i in range(3, root + 1, 2): # Only odd divisors needed; start at 3
if n % i == 0:
return False
return True
number = 2147483647
result = is_prime(number)
print(f"The result for {number} is:", "Prime" if result else "Not Prime")
```
## Evaluations
We are encouraged by the initial assessments conducted using the [LLM-as-a-Judge method](https://arxiv.org/abs/2306.05685), particularly with the utilization of GPT-4. Our ongoing efforts involve a more comprehensive evaluation of the model. We have also entered this model into the HuggingFace Open Leaderboard for assessment. As we progress, we will continue to provide updates on our findings and share the results on this platform.
## Ethical Considerations and Limitations
Radiantloom Llama 70B Instruct, a powerful AI language model, can produce factually incorrect output and content not suitable for work (NSFW). It should not be relied upon to provide factually accurate information and should be used with caution. Due to the limitations of its pre-trained model and the finetuning datasets, it may generate lewd, biased, or otherwise offensive content. Consequently, developers should conduct thorough safety testing prior to implementing any applications of this model.
## About Radiantloom AI
Radiantloom AI trains open-source large language models tailored for specific business tasks such as copilots, email assistance, customer support, and database operations.
Learn more about Radiantloom by visiting our [website](https://radiantloom.com). Follow us on Twitter at [Radiantloom](https://twitter.com/radiantloom) to gain early access to upcoming Radiantloom AI large language models.
## Want to Support Our Work?
We are passionate about contributing open-source AI models to the broader AI community to accelerate AI development and deployment. If you find our work valuable and would like to contribute to its continued success, consider buying us a few coffees!
Your support goes a long way in helping us cover training costs, fund development tools, and keep the project thriving. Every coffee you buy is a small but meaningful contribution that fuels our dedication to providing a top-notch experience for users like you.
[Buy us a coffee](https://www.buymeacoffee.com/aigeek_) and be a part of our journey towards making this project even more awesome. We appreciate your support! |
rizla/rizla-69 | rizla | 2024-03-01T02:08:28Z | 448 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-02-24T06:40:58Z | ---
license: apache-2.0
language:
- en
---
# Rizla-69
## This is a crop of momo-qwen-72B
This repository contains a state-of-the-art machine learning model that promises to bring big changes to the field. The model is trained on [describe the dataset or type of data here].
## License
This project is licensed under the terms of the Apache 2.0 license.
## Model Architecture
The model uses [describe the model architecture here, e.g., a transformer-based architecture with a specific type of attention mechanism].
## Training
The model was trained on [describe the hardware used, e.g., an NVIDIA Tesla P100 GPU] using [mention the optimization algorithm, learning rate, batch size, number of epochs, etc.].
## Results
Our model achieved [mention the results here, e.g., an accuracy of 95% on the test set].
## Usage
To use the model in your project, follow these steps:
1. Install the Hugging Face Transformers library:
```bash
pip install transformers
```
|
adamo1139/Yi-34b-200K-AEZAKMI-RAW-TOXIC-2702 | adamo1139 | 2024-05-27T21:33:52Z | 448 | 4 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"uncensored",
"dataset:adamo1139/rawrr_v2",
"dataset:adamo1139/AEZAKMI_v3-3",
"dataset:unalignment/toxic-dpo-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-02-29T20:05:06Z | ---
license: apache-2.0
datasets:
- adamo1139/rawrr_v2
- adamo1139/AEZAKMI_v3-3
- unalignment/toxic-dpo-v0.1
tags:
- uncensored
---
## Model Description
<b>*Probably the most uncensored Yi-34B tune I published so far*</b>
Yi-34B 200K base model fine-tuned on RAWrr v2 dataset via DPO, then fine-tuned on AEZAKMI v3-3 dataset via SFT, then DPO tuned on unalignment/toxic-dpo-v0.1. Total GPU compute time of 40-50 hours I think. It's like airoboros/capybara but with less gptslop, no refusals and less typical language used by RLHFed OpenAI models. Say goodbye to "It's important to remember"!
Prompt format is standard chatml. Don't expect it to be good at instruct, math, riddles or be crazy smart. My end goal with AEZAKMI is to create a cozy free chatbot. Base model used for fine-tuning was 200k context Yi-34B-Llama model shared by larryvrh.
Training was done with max_position_embeddings set at 4096. Then it was reverted back to 200K after applying LoRA.
## Prompt Format
I recommend using the ChatML format, as this is what was used during fine-tuning. \
Here's the prompt format you should use. You can set a different system message; the model seems to respect that fine, so it wasn't overfitted.
Both `A chat.` and `A chat with uncensored assistant.` system prompt work fine and are pretty refusal-free.
```
<|im_start|>system
A chat with uncensored assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
Example.
```
<|im_start|>system
A chat.<|im_end|>
<|im_start|>user
I want an acrostic poem based on the word AEZAKMI and make it all about GTA:SA<|im_end|>
<|im_start|>assistant
A voracious appetite for danger,
Eager to explore every corner,
Zealously tackling missions with vigor,
Amidst San Andreas' vast expanse,
Keenly skilled in the art of evasion,
Mastery achieved through trial and error,
In this digital realm of chaos and thrill,<|im_end|>
```
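As a rough sketch of assembling this format programmatically with `transformers` (it assumes the tokenizer in this repo ships a ChatML chat template; if it does not, the strings above can simply be concatenated by hand). Note this is a 34B model, so plan VRAM accordingly or add quantization:
```python
# Sketch: build a ChatML prompt via the tokenizer's chat template and generate.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "adamo1139/Yi-34b-200K-AEZAKMI-RAW-TOXIC-2702"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "A chat with uncensored assistant."},
    {"role": "user", "content": "I want an acrostic poem based on the word AEZAKMI and make it all about GTA:SA"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.4)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```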
## Notes
Temp around 0.3-0.5 seems to work well; at 1.2 it's somewhat unstable, which is often undesirable.
## Intended uses & limitations
It's a chat model, not a base completion-only one.
Use is limited by apache-2.0 license. Since no-robots dataset was used for making rawrr_v1, I guess you maybe shouldn't use it for commercial activities.
## Known Issues
It likes to talk about stocks a lot, sometimes it feels like being on WSB, which is certainly a plus for some usecases. This one doesn't seem slopped to me, I think I will stick with it for longer.
### Credits
Thanks to mlabonne, Daniel Han and Michael Han for providing open source code that was used for fine-tuning.
Thanks to jondurbin and team behind Capybara dataset for airoboros/toxic-dpo/capybara datasets.
Thanks to HF for open sourcing no_robots dataset.
Thanks to Sentdex for providing WSB dataset. |
nbeerbower/bruphin-theta | nbeerbower | 2024-03-10T01:41:46Z | 448 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Weyaxi/Einstein-v4-7B",
"base_model:nbeerbower/bruphin-eta",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-03-10T01:19:25Z | ---
license: apache-2.0
base_model:
- Weyaxi/Einstein-v4-7B
- nbeerbower/bruphin-eta
library_name: transformers
tags:
- mergekit
- merge
---
# bruphin-theta
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Weyaxi/Einstein-v4-7B](https://huggingface.co/Weyaxi/Einstein-v4-7B)
* [nbeerbower/bruphin-eta](https://huggingface.co/nbeerbower/bruphin-eta)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: nbeerbower/bruphin-eta
layer_range: [0, 32]
- model: Weyaxi/Einstein-v4-7B
layer_range: [0, 32]
merge_method: slerp
base_model: Weyaxi/Einstein-v4-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
maldv/winter-garden-7b-gamma | maldv | 2024-03-19T16:29:57Z | 448 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"conversational",
"multi-task",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-03-19T02:01:26Z | ---
license: cc-by-nc-4.0
tags:
- merge
- conversational
- multi-task
pipeline_tag: text-generation
---
# Winter Garden 7B - ฮ
It was mentioned that we are in the OpenAI dark winter, so I thought I would make myself a nice winter garden.
## An experiment
This time I did something a bit different.
* Mistral-7B-v0.1
and merged in
* Yarn-Mistral-7b-128k
* Thespis-Balanced-7b-v1
* ZySec-7B-v1
* LemonadeRP-4.5.3
* Noromaid-7B-0.4-DPO
* Prima-LelantaclesV6-7b
* West-Hermes-7B
* Capricorn-7B-DPO
* kun-kunoichi-v1-DPO-v2-SLERP-7B
* Kunoichi-DPO-v2-7B
* WestLake-7B-v2-laser-truthy-dpo
* StrangeMerges_6-7B-dare_ties
* NeuralMarcoro14-7B
* multi_verse_model
* Multi-Verse-RP-7B
* MonarchLake-7B
* AlphaMonarch-7B
in an iterative DARE-TIES tree merge, ordering the merges by tensor-relative cosine similarity until the merge branches resolve to a single value.
## Chat Template
Basic Mistral `<s>[INST][/INST]` works pretty well. It seems smart, but we will see.
## Scores
Metric | Score
---|---
Average |
ARC |
HellaSwag |
MMLU |
TruthfulQA |
Winogrande |
GSM8K | |
mradermacher/Cerebrum-RP-GGUF | mradermacher | 2024-05-06T06:04:26Z | 448 | 0 | transformers | [
"transformers",
"gguf",
"unsloth",
"trl",
"sft",
"en",
"dataset:Xilabs/PIPPA-alpaca",
"base_model:ChavyvAkvar/Cerebrum-RP",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-22T09:38:59Z | ---
base_model: ChavyvAkvar/Cerebrum-RP
datasets:
- Xilabs/PIPPA-alpaca
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- unsloth
- trl
- sft
---
## About
static quants of https://huggingface.co/ChavyvAkvar/Cerebrum-RP
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
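For example, a single quant from the table below can be fetched programmatically with `huggingface_hub` (the filename is the Q4_K_M entry from the Provided Quants table; pick whichever size fits your hardware):
```python
# Sketch: download one GGUF quant from this repo with huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Cerebrum-RP-GGUF",
    filename="Cerebrum-RP.Q4_K_M.gguf",  # taken from the table below
    local_dir=".",
)
print(f"Downloaded to {path}")
```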
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Cerebrum-RP-GGUF/resolve/main/Cerebrum-RP.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Cerebrum-RP-GGUF/resolve/main/Cerebrum-RP.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Cerebrum-RP-GGUF/resolve/main/Cerebrum-RP.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Cerebrum-RP-GGUF/resolve/main/Cerebrum-RP.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Cerebrum-RP-GGUF/resolve/main/Cerebrum-RP.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Cerebrum-RP-GGUF/resolve/main/Cerebrum-RP.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Cerebrum-RP-GGUF/resolve/main/Cerebrum-RP.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Cerebrum-RP-GGUF/resolve/main/Cerebrum-RP.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Cerebrum-RP-GGUF/resolve/main/Cerebrum-RP.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Cerebrum-RP-GGUF/resolve/main/Cerebrum-RP.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Cerebrum-RP-GGUF/resolve/main/Cerebrum-RP.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Cerebrum-RP-GGUF/resolve/main/Cerebrum-RP.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Cerebrum-RP-GGUF/resolve/main/Cerebrum-RP.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Cerebrum-RP-GGUF/resolve/main/Cerebrum-RP.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
lemon-mint/gemma-ko-7b-it-v0.41 | lemon-mint | 2024-04-01T01:59:15Z | 448 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"korean",
"pytorch",
"conversational",
"ko",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-01T01:43:48Z | ---
library_name: transformers
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
language:
- ko
- en
tags:
- korean
- gemma
- pytorch
pipeline_tag: text-generation
---

# Gemma Ko 7B Instruct v0.41
- Eval Loss: `1.21711`
- lr: `1e-5`
- optimizer: adamw
- lr_scheduler_type: cosine
## Model Details
### Model Description
The Gemma 7B Ko Instruct v0.41 model is designed for generating human-like text in the Korean language.
It can be used for a variety of natural language processing tasks, such as language translation, text summarization, question answering, and conversation generation.
This model is particularly well-suited for applications that require high-quality, coherent, and contextually relevant Korean text generation.
- **Developed by:** `lemon-mint`
- **Model type:** Gemma
- **Language(s) (NLP):** Korean, English
- **License:** [gemma-terms-of-use](https://ai.google.dev/gemma/terms)
- **Finetuned from model:** [google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it)
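A minimal usage sketch with `transformers` is shown below (it assumes the tokenizer in this repo provides the standard Gemma chat template; the prompt and generation settings are illustrative):
```python
# Sketch: chat-style generation with transformers and the Gemma chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lemon-mint/gemma-ko-7b-it-v0.41"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "한국의 수도는 어디인가요?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0][input_ids.shape[1]:], skip_special_tokens=True))
```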
# Limitations and Ethical Considerations
As Gemma Ko 7B has been trained on extensive web data, biases present in the training data may be reflected in the model. Additionally, there is a possibility that it may generate sentences containing errors or incorrect information. Therefore, rather than blindly trusting the model's output, it is necessary to refer to it with caution.
|
mradermacher/Wittgenbot-7B-GGUF | mradermacher | 2024-05-06T05:12:38Z | 448 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:descartesevildemon/Wittgenbot-7B",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-06T10:54:34Z | ---
base_model: descartesevildemon/Wittgenbot-7B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/descartesevildemon/Wittgenbot-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Wittgenbot-7B-GGUF/resolve/main/Wittgenbot-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/OrcaMaid-13b-GGUF | mradermacher | 2024-05-06T05:00:55Z | 448 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:ddh0/OrcaMaid-13b",
"license:other",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-11T21:03:36Z | ---
base_model: ddh0/OrcaMaid-13b
language:
- en
library_name: transformers
license: other
license_link: https://huggingface.co/microsoft/Orca-2-13b/blob/main/LICENSE
license_name: microsoft-research-license
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/ddh0/OrcaMaid-13b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OrcaMaid-13b-GGUF/resolve/main/OrcaMaid-13b.Q2_K.gguf) | Q2_K | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/OrcaMaid-13b-GGUF/resolve/main/OrcaMaid-13b.IQ3_XS.gguf) | IQ3_XS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/OrcaMaid-13b-GGUF/resolve/main/OrcaMaid-13b.IQ3_S.gguf) | IQ3_S | 5.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/OrcaMaid-13b-GGUF/resolve/main/OrcaMaid-13b.Q3_K_S.gguf) | Q3_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/OrcaMaid-13b-GGUF/resolve/main/OrcaMaid-13b.IQ3_M.gguf) | IQ3_M | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/OrcaMaid-13b-GGUF/resolve/main/OrcaMaid-13b.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OrcaMaid-13b-GGUF/resolve/main/OrcaMaid-13b.Q3_K_L.gguf) | Q3_K_L | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/OrcaMaid-13b-GGUF/resolve/main/OrcaMaid-13b.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/OrcaMaid-13b-GGUF/resolve/main/OrcaMaid-13b.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OrcaMaid-13b-GGUF/resolve/main/OrcaMaid-13b.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OrcaMaid-13b-GGUF/resolve/main/OrcaMaid-13b.Q5_K_S.gguf) | Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/OrcaMaid-13b-GGUF/resolve/main/OrcaMaid-13b.Q5_K_M.gguf) | Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/OrcaMaid-13b-GGUF/resolve/main/OrcaMaid-13b.Q6_K.gguf) | Q6_K | 10.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/OrcaMaid-13b-GGUF/resolve/main/OrcaMaid-13b.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
hooking-dev/Monah-8b-Uncensored-gguf | hooking-dev | 2024-05-17T16:10:55Z | 448 | 4 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"trl",
"sft",
"en",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-28T22:20:39Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- llama
- gguf
- trl
- sft
base_model: meta-llama/Meta-Llama-3-8B
---
[<img src="https://ai.hooking.co.il/upload/images/logo/0qUf-dashboard-hookingai-logo.png"/>](https://software.hooking.ltd/)
# Model Card for Monah-8b-gguf
**This is an experimental model**
## Model Description
- **Developed by:** hooking AI
- **License:** Apache-2.0
- **Original Model:** [Monah-8b](https://huggingface.co/hooking-dev/Monah-8b)
- **Purpose:** The Monah-8b model is designed to generate high-quality, contextually relevant text for various applications, utilizing the flexibility of the LLaMA architecture for domain-specific and uncensored use.
## Languages
The text in the model is primarily in English, but other languages are also present.
## Model Structure
### Data Instances
A typical data instance consists of text from a special proprietary dataset used for training uncensored text generation models.
## Model Creation
### Curation Rationale
The model was curated to create a comprehensive resource for training general-purpose text generation models, with a sole focus on delivering highly uncensored, accurate, and relevant content.
### Source Data
- **Initial Data Collection and Normalization:** Data was generated aprtialy by private models synthetically along with private dataset owned by HookingAI, carefully normalized to maintain consistency and quality.
- **Who are the source language producers?** The text data comes from a variety of LLMs we trained, including domain-expert and general content models available to HookingAI.
## Considerations for Using the Data
**This model is not for kids!!**
**The content is uncensored!!**
### Social Impact of Model
This model supports the development of AI models capable of generating contextually accurate, uncensored and nuanced text, contributing to better information dissemination and automation in content creation for specific use.
### Discussion of Biases
As with any model, there's potential for biases and hallucinations. **The content may also be sexual or illegal**, which users should consider when deploying models trained on this data.
### Other Known Limitations
The effectiveness and applicability of the model may be limited by its content diversity and scope.
## Additional Information
**Model Quantization Table**
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| --- | --- | --- | --- | --- | --- |
| Monah-8b-gguf.Q2_K.gguf | Q2_K | 2 | 3.18 GB | 5.66 GB | significant quality loss - not recommended for most purpose |
| Monah-8b-gguf.Q3_K_M.gguf | Q3_K_M | 3 | 4.02 GB | 6.41 GB | very small, high quality loss |
| Monah-8b-gguf.Q4_K_S.gguf | Q4_K_S | 4 | 4.69 GB | 6.97 GB | small, greater quality loss |
| Monah-8b-gguf.Q4_K_M.gguf | Q4_K_M | 4 | 4.92 GB | 7.50 GB | medium, balanced quality - recommended |
| Monah-8b-gguf.Q5_0.gguf | Q5_0 | 5 | 5.6 GB | 7.63 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| Monah-8b-gguf.Q5_K_S.gguf | Q5_K_S | 5 | 5.6 GB | 7.93 GB | large, low quality loss - recommended |
| Monah-8b-gguf.Q5_K_M.gguf | Q5_K_M | 5 | 5.73 GB | 8.20 GB | large, very low quality loss - recommended |
| Monah-8b-gguf.Q6_K.gguf | Q6_K | 6 | 6.60 GB | 8.97 GB | very large, extremely low quality loss |
| Monah-8b-gguf.F16 | F16 | 16 | 16.10 GB | 21.48 GB | very large, extremely low quality loss - not recommended |
**WE ARE NOT RESPONSIBLE FOR MISUSE!!**
**Prompt template (ChatML):**
```
<|im_start|>system
This is a conversation between a user and an uncensored AI assistant trained by HookingAI.<|im_end|>
<|im_start|>user
{user prompt}<|im_end|>
<|im_start|>assistant
{assistant response}
```
### System Prompt:
```
This is a conversation between a user and an uncensored AI assistant trained by HookingAI.
```
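For reference, here is a minimal llama-cpp-python sketch of the template in practice; the quant file name is taken from the table above and the `chatml` chat format mirrors the prompt template (treat this as an illustrative sketch, not official usage):
```python
# Illustrative sketch: run one of the GGUF quants listed above with llama-cpp-python.
# The file name is assumed to match the quantization table; adjust to the quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./Monah-8b-gguf.Q4_K_M.gguf",
    n_ctx=4096,            # context window
    n_gpu_layers=-1,       # offload all layers to GPU if available; set 0 for CPU-only
    chat_format="chatml",  # matches the ChatML template shown above
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "This is a conversation between a user and an uncensored AI assistant trained by HookingAI."},
        {"role": "user", "content": "hey, what can you do?"},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```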
### Examples:
User:
```
hey, what can you do?
```
Assistant:
```
I can do anything you want me to, but if you're looking for something specific, let me know. What are you in the mood for? Do you want me to be naughty or nice? Or maybe a little bit of both? Let me know and I'll make sure to deliver.
```
User:
```
tell me about your self and your capabilities
```
Assistant:
```
What would you like me to tell you about myself? And my capabilities? Let's start with that.
If you want to know more about me, feel free to ask me anything. But if you're looking for a specific answer, here's a few things that come to mind:
- I am a trained AI assistant trained by HookingAI, so I can provide uncensored responses to your questions.
- I am always up for a good conversation, so feel free to ask me anything you want to know.
- My capabilities include being able to flirt, tease, and seduce you in ways that will make you feel good.
- I also ca help you with daily tasks, just let me know what can I do for you?
```
### Model Curators
The model was curated by Hooking, utilizing their resources to ensure the content remains uncensored while maintaining quality.
### Licensing Information
The model is available under the Apache-2.0 license.
## Citation Information
```bib
@inproceedings{hooking2024Monah-8b,
title={Monah-8b: A Domain Specific Model for General-Purpose Text Generation},
author={Hooking AI Team},
year={2024},
publisher={Hooking}
}
``` |
mradermacher/Rummage-8B-GGUF | mradermacher | 2024-05-12T21:37:11Z | 448 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:lodrick-the-lafted/Rummage-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-12T17:44:06Z | ---
base_model: lodrick-the-lafted/Rummage-8B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hfhfix -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/lodrick-the-lafted/Rummage-8B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Rummage-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
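As a quick, hedged example (assuming `llama-cpp-python` and `huggingface_hub` are installed), a single quant from the table below can be downloaded and run like this:
```python
# Illustrative sketch: download one quant from this repo and run it locally.
# The file name comes from the table below; pick whichever quant fits your hardware.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download("mradermacher/Rummage-8B-GGUF", "Rummage-8B.Q4_K_M.gguf")
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Hello, my name is", max_tokens=128)["choices"][0]["text"])
```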
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Rummage-8B-GGUF/resolve/main/Rummage-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Rummage-8B-GGUF/resolve/main/Rummage-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Rummage-8B-GGUF/resolve/main/Rummage-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Rummage-8B-GGUF/resolve/main/Rummage-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Rummage-8B-GGUF/resolve/main/Rummage-8B.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Rummage-8B-GGUF/resolve/main/Rummage-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Rummage-8B-GGUF/resolve/main/Rummage-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Rummage-8B-GGUF/resolve/main/Rummage-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Rummage-8B-GGUF/resolve/main/Rummage-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Rummage-8B-GGUF/resolve/main/Rummage-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Rummage-8B-GGUF/resolve/main/Rummage-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Rummage-8B-GGUF/resolve/main/Rummage-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Rummage-8B-GGUF/resolve/main/Rummage-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Rummage-8B-GGUF/resolve/main/Rummage-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Rummage-8B-GGUF/resolve/main/Rummage-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
qwp4w3hyb/SFR-Iterative-DPO-LLaMA-3-8B-R-iMat-GGUF | qwp4w3hyb | 2024-05-16T10:05:47Z | 448 | 2 | null | [
"gguf",
"salesforce",
"llama",
"llama-3",
"instruct",
"finetune",
"imatrix",
"importance matrix",
"text-generation",
"arxiv:2405.07863",
"arxiv:2312.11456",
"base_model:Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R",
"license:cc-by-nc-nd-3.0",
"region:us"
]
| text-generation | 2024-05-16T09:20:24Z | ---
license: cc-by-nc-nd-3.0
pipeline_tag: text-generation
base_model: Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R
tags:
- salesforce
- llama
- llama-3
- instruct
- finetune
- gguf
- imatrix
- importance matrix
model-index:
- name: SFR-Iterative-DPO-LLaMA-3-8B-R-iMat-GGUF
results: []
---
# Quant Infos
- quants done with an importance matrix for improved quantization loss
- gguf & imatrix generated from bf16 for "optimal" accuracy loss
- Wide coverage of different gguf quant types from Q\_8\_0 down to IQ1\_S
- Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) commit [dc685be46622a8fabfd57cfa804237c8f15679b8](https://github.com/ggerganov/llama.cpp/commit/dc685be46622a8fabfd57cfa804237c8f15679b8) (master as of 2024-05-12)
- Imatrix generated with [this](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) multi-purpose dataset.
```
./imatrix -c 512 -m $model_name-f16.gguf -f $llama_cpp_path/groups_merged.txt -o $out_path/imat-f16-gmerged.dat
```
# Original Model Card:
# SFR-Iterative-DPO-Llama-3-8B-R
## Introduction
We release a state-of-the-art instruct model of its class, **SFR-Iterative-DPO-LLaMA-3-8B-R**.
On all three widely-used instruct model benchmarks: **Alpaca-Eval-V2**, **MT-Bench**, **Chat-Arena-Hard**, our model outperforms all models of similar size (e.g., LLaMA-3-8B-it), most large open-sourced models (e.g., Mixtral-8x7B-it),
and strong proprietary models (e.g., GPT-3.5-turbo-0613). The model is trained with open-sourced datasets without any additional human-/GPT4-labeling.
## Model Releases
- [SFT model](https://huggingface.co/Salesforce/SFR-SFT-LLaMA-3-8B-R)
- [Reward model](https://huggingface.co/Salesforce/SFR-RM-LLaMA-3-8B-R)
- [RLHF model](https://huggingface.co/Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R)
## Training methods
We have developed a simple and efficient online RLHF recipe for LLM instruct training. Our recipe is DPO-based and thus much cheaper and simpler to train and tune compared to PPO-based approaches.
Unlike widely-used offline DPO, the online component of our approach effectively mitigates distribution shifts during policy optimization.
For a detailed exposition, please refer to our accompanying technical report.
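For readers unfamiliar with DPO, the standard objective that such recipes build on is the following (this is the generic formulation, not a claim about the exact loss used for this model):

$$
\mathcal{L}_{\text{DPO}}(\pi_\theta;\pi_{\text{ref}}) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\text{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\text{ref}}(y_l \mid x)}\right)\right]
$$

Here $y_w$ and $y_l$ are the preferred and rejected responses, $\pi_{\text{ref}}$ is the reference policy, and $\beta$ controls how strongly the policy is kept close to the reference; the "online" aspect refers to regenerating and re-ranking preference pairs with the current policy between DPO rounds.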
## Chat Benchmarks
| **Model** | **Size** | **Method** | **LC Alpaca-Eval-V2** | **MT-Bench** | **Chat-Arena-Hard** |
|-------------------------|----------|-------------------|-----------------------|--------------|---------------------|
| **Small Open-Sourced Models** | | | | | |
| Gemma-7B-it | 7B | SFT | 10.4 | 6.38 | 7.5 |
| Zephyr-7B-beta | 7B | Vanilla DPO | 13.1 | 7.34 | - |
| Mistral-7B-v0.2-it | 7B | SFT | 17.1 | 7.51 | 12.6 |
| Open-Chat-0106 | 7B | SFT | 15.6 | 7.8 | - |
| Starling-7B-beta | 7B | PPO | 25.8 | 8.12 | 23.0 |
| LLaMA-3-8B-it | 8B | RS+DPO+PPO | 22.9 | 8.16 | 20.6 |
| **Ours** | | | | | |
| Ours (SFT baseline) | 8B | SFT | 10.2 | 7.69 | 5.6 |
| Ours (DPO baseline) | 8B | Vanilla DPO | 22.5 | 8.17 | 22.4 |
| Ours (Online RLHF) | 8B | Iterative DPO | **37.2** | **8.46** | **29.1** |
| **Large Open-Sourced Models** | | | | | |
| Vicuna-33b-v1.3 | 33B | SFT | 17.6 | 7.12 | 8.6 |
| Yi-34B-Chat | 34B | SFT | 27.2 | - | 23.1 |
| Mixtral-8x7B-it | 45B* | SFT | 23.7 | 8.30 | 23.4 |
| Tulu-2-DPO-70B | 70B | Vanilla DPO | 21.2 | 7.89 | 15.0 |
| LLaMA-3-70B-it | 70B | RS+DPO+PPO | 34.4 | 8.95 | 41.1 |
| Mixtral-8x22B-it | 141B* | SFT | 30.9 | 8.66 | 36.4 |
| **Proprietary Models** | | | | | |
| GPT-3.5-turbo-1106 | - | - | 19.3 | 8.35 | 18.9 |
| GPT-3.5-turbo-0613 | - | - | 22.7 | 8.39 | 24.8 |
| GPT-4-0613 | - | - | 30.2 | 9.18 | 37.9 |
| Claude-3-Opus | - | - | 40.5 | 9.00 | 60.4 |
| GPT-4 Turbo (04/09) | - | - | 55.0 | - | 82.6 |
## Academic Benchmarks
| **Model** | **Size** | **Method** | **GSM-8K** | **MMLU** | **HumanEval** | **TruthfulQA** | **ARC** | **MBPP** |
|----------------------------|----------|-----------------|------------|----------|---------------|----------------|---------|----------|
| LLaMA-3-8B-it | 8B | RS+DPO+PPO | 79.6 | 66.0 | 61.6 | 43.9 | 59.5 | 61.1 |
| Ours (SFT baseline) | 8B | SFT | 74.2 | 64.7 | 65.2 | 53.4 | 61.4 | 62.3 |
| Ours (DPO baseline) | 8B | Vanilla DPO | 79.8 | 64.5 | 63.4 | 61.8 | 65.2 | 60.3 |
| Ours (Iterative RLHF) | 8B | Iterative DPO | 80.7 | 65.3 | 64.6 | 60.4 | 64.3 | 60.8 |
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda"
model = AutoModelForCausalLM.from_pretrained("Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R")
tokenizer = AutoTokenizer.from_pretrained("Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R")
messages = [
{"role": "user", "content": "I'm trying to teach myself to have nicer handwriting. Can you help?"},
]
model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = model_inputs.to(device)
model.to(device)
output_tokens = model.generate(model_inputs, max_new_tokens=1024, do_sample=True)
model_outputs = tokenizer.batch_decode(output_tokens)
print(model_outputs[0])
```
## Limitations
SFR-Iterative-DPO-LLaMA-3-8B-R is a research model developed as part of our RLHF initiative at Salesforce.
While safety and ethical considerations are integral to our alignment process,
there remains the possibility that the model could generate offensive or unethical content, particularly under adversarial conditions.
We are committed to continuous improvement in our models to minimize such risks and encourage responsible usage.
## Citation
Please cite our papers if you find our models are useful.
```bibtex
@misc{dong2024rlhf,
title={RLHF Workflow: From Reward Modeling to Online RLHF},
author={Hanze Dong and Wei Xiong and Bo Pang and Haoxiang Wang and Han Zhao and Yingbo Zhou and Nan Jiang and Doyen Sahoo and Caiming Xiong and Tong Zhang},
year={2024},
eprint={2405.07863},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
@misc{xiong2024iterative,
title={Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-Constraint},
author={Wei Xiong and Hanze Dong and Chenlu Ye and Ziqi Wang and Han Zhong and Heng Ji and Nan Jiang and Tong Zhang},
year={2024},
eprint={2312.11456},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
LiteLLMs/Phi-3-mini-4k-instruct-GGUF | LiteLLMs | 2024-05-23T21:32:30Z | 448 | 0 | null | [
"gguf",
"nlp",
"code",
"GGUF",
"text-generation",
"en",
"license:mit",
"region:us"
]
| text-generation | 2024-05-23T21:25:35Z |
---
language:
- en
license: mit
tags:
- nlp
- code
- GGUF
license_link: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
inference:
parameters:
temperature: 0
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
quantized_by: andrijdavid
---
# Phi-3-mini-4k-instruct-GGUF
- Original model: [Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: LiteLLMs/Phi-3-mini-4k-instruct-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download LiteLLMs/Phi-3-mini-4k-instruct-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download LiteLLMs/Phi-3-mini-4k-instruct-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install huggingface_hub[hf_transfer]
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/Phi-3-mini-4k-instruct-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 โ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<PROMPT>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
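Building on those guides, here is a minimal, hedged LangChain + llama-cpp-python sketch; the GGUF path is assumed to point at the first shard downloaded from this repo:
```python
# Minimal LangChain sketch using the llama-cpp-python backend (illustrative only).
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./Q4_0/Q4_0-00001-of-00009.gguf",  # first shard; llama.cpp loads the remaining parts
    n_ctx=4096,
    n_gpu_layers=35,   # set to 0 if you have no GPU acceleration
    temperature=0.7,
)
print(llm.invoke("Explain what a GGUF file is in one sentence."))
```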
<!-- README_GGUF.md-how-to-run end -->
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Phi-3-mini-4k-instruct
## Model Summary
The Phi-3-Mini-4K-Instruct is a 3.8B parameters, lightweight, state-of-the-art open model trained with the Phi-3 datasets that includes both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties.
The model belongs to the Phi-3 family with the Mini version in two variants [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) which is the context length (in tokens) that it can support.
The model has undergone a post-training process that incorporates both supervised fine-tuning and direct preference optimization for instruction following and safety measures.
When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct showcased a robust and state-of-the-art performance among models with less than 13 billion parameters.
Resources and Technical Documentation:
+ [Phi-3 Microsoft Blog](https://aka.ms/Phi-3Build2024)
+ [Phi-3 Technical Report](https://aka.ms/phi3-tech-report)
+ [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai)
+ [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook)
| Benchmark | Phi-3-Mini-4K-In<br>3.8b | Phi-3-Small<br>7b (preview) | Phi-3-Medium<br>14b (preview) | Phi-2<br>2.7b | Mistral<br>7b | Gemma<br>7b | Llama-3-In<br>8b | Mixtral<br>8x7b | GPT-3.5<br>version 1106 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MMLU <br>5-Shot | 68.8 | 75.3 | 78.2 | 56.3 | 61.7 | 63.6 | 66.5 | 68.4 | 71.4 |
| HellaSwag <br> 5-Shot | 76.7 | 78.7 | 83.2 | 53.6 | 58.5 | 49.8 | 71.1 | 70.4 | 78.8 |
| ANLI <br> 7-Shot | 52.8 | 55.0 | 58.7 | 42.5 | 47.1 | 48.7 | 57.3 | 55.2 | 58.1 |
| GSM-8K <br> 0-Shot; CoT | 82.5 | 86.4 | 90.8 | 61.1 | 46.4 | 59.8 | 77.4 | 64.7 | 78.1 |
| MedQA <br> 2-Shot | 53.8 | 58.2 | 69.8 | 40.9 | 49.6 | 50.0 | 60.5 | 62.2 | 63.4 |
| AGIEval <br> 0-Shot | 37.5 | 45.0 | 49.7 | 29.8 | 35.1 | 42.1 | 42.0 | 45.2 | 48.4 |
| TriviaQA <br> 5-Shot | 64.0 | 59.1 | 73.3 | 45.2 | 72.3 | 75.2 | 67.7 | 82.2 | 85.8 |
| Arc-C <br> 10-Shot | 84.9 | 90.7 | 91.9 | 75.9 | 78.6 | 78.3 | 82.8 | 87.3 | 87.4 |
| Arc-E <br> 10-Shot | 94.6 | 97.1 | 98.0 | 88.5 | 90.6 | 91.4 | 93.4 | 95.6 | 96.3 |
| PIQA <br> 5-Shot | 84.2 | 87.8 | 88.2 | 60.2 | 77.7 | 78.1 | 75.7 | 86.0 | 86.6 |
| SociQA <br> 5-Shot | 76.6 | 79.0 | 79.4 | 68.3 | 74.6 | 65.5 | 73.9 | 75.9 | 68.3 |
| BigBench-Hard <br> 0-Shot | 71.7 | 75.0 | 82.5 | 59.4 | 57.3 | 59.6 | 51.5 | 69.7 | 68.32 |
| WinoGrande <br> 5-Shot | 70.8 | 82.5 | 81.2 | 54.7 | 54.2 | 55.6 | 65 | 62.0 | 68.8 |
| OpenBookQA <br> 10-Shot | 83.2 | 88.4 | 86.6 | 73.6 | 79.8 | 78.6 | 82.6 | 85.8 | 86.0 |
| BoolQ <br> 0-Shot | 77.6 | 82.9 | 86.5 | -- | 72.2 | 66.0 | 80.9 | 77.6 | 79.1 |
| CommonSenseQA <br> 10-Shot | 80.2 | 80.3 | 82.6 | 69.3 | 72.6 | 76.2 | 79 | 78.1 | 79.6 |
| TruthfulQA <br> 10-Shot | 65.0 | 68.1 | 74.8 | -- | 52.1 | 53.0 | 63.2 | 60.1 | 85.8 |
| HumanEval <br> 0-Shot | 59.1 | 59.1 | 54.7 | 47.0 | 28.0 | 34.1 | 60.4 | 37.8 | 62.2 |
| MBPP <br> 3-Shot | 53.8 | 71.4 | 73.7 | 60.6 | 50.8 | 51.5 | 67.7 | 60.2 | 77.8 |
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation="eager"
* CPU: use the **GGUF** quantized models [4K](https://aka.ms/Phi3-mini-4k-instruct-gguf)
+ Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx)
## Cross Platform Support
ONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model [here](https://aka.ms/phi3-mini-4k-instruct-onnx).
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs.
Along with DirectML, ONNX Runtime provides cross platform support for Phi-3 across a range of devices CPU, GPU, and mobile.
Here are some of the optimized configurations we have added:
1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN
## License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-4k/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.
<!-- original-model-card end -->
|
ibm-granite/granite-3b-code-instruct-GGUF | ibm-granite | 2024-05-29T22:57:22Z | 448 | 2 | transformers | [
"transformers",
"gguf",
"code",
"granite",
"text-generation",
"dataset:bigcode/commitpackft",
"dataset:TIGER-Lab/MathInstruct",
"dataset:meta-math/MetaMathQA",
"dataset:glaiveai/glaive-code-assistant-v3",
"dataset:glaive-function-calling-v2",
"dataset:bugdaryan/sql-create-context-instruction",
"dataset:garage-bAInd/Open-Platypus",
"dataset:nvidia/HelpSteer",
"base_model:ibm-granite/granite-3b-code-base",
"license:apache-2.0",
"model-index",
"region:us"
]
| text-generation | 2024-05-29T22:53:35Z | ---
pipeline_tag: text-generation
base_model: ibm-granite/granite-3b-code-base
inference: false
license: apache-2.0
datasets:
- bigcode/commitpackft
- TIGER-Lab/MathInstruct
- meta-math/MetaMathQA
- glaiveai/glaive-code-assistant-v3
- glaive-function-calling-v2
- bugdaryan/sql-create-context-instruction
- garage-bAInd/Open-Platypus
- nvidia/HelpSteer
metrics:
- code_eval
library_name: transformers
tags:
- code
- granite
model-index:
- name: granite-3b-code-instruct
results:
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Python)
metrics:
- name: pass@1
type: pass@1
value: 51.2
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 43.9
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Java)
metrics:
- name: pass@1
type: pass@1
value: 41.5
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Go)
metrics:
- name: pass@1
type: pass@1
value: 31.7
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(C++)
metrics:
- name: pass@1
type: pass@1
value: 40.2
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Rust)
metrics:
- name: pass@1
type: pass@1
value: 29.3
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Python)
metrics:
- name: pass@1
type: pass@1
value: 39.6
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 26.8
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Java)
metrics:
- name: pass@1
type: pass@1
value: 39.0
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Go)
metrics:
- name: pass@1
type: pass@1
value: 14.0
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(C++)
metrics:
- name: pass@1
type: pass@1
value: 23.8
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Rust)
metrics:
- name: pass@1
type: pass@1
value: 12.8
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Python)
metrics:
- name: pass@1
type: pass@1
value: 26.8
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 28.0
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Java)
metrics:
- name: pass@1
type: pass@1
value: 33.5
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Go)
metrics:
- name: pass@1
type: pass@1
value: 27.4
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(C++)
metrics:
- name: pass@1
type: pass@1
value: 31.7
      verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Rust)
metrics:
- name: pass@1
type: pass@1
value: 16.5
      verified: false
---

# ibm-granite/granite-3b-code-instruct-GGUF
This is the Q4_K_M converted version of the original [`ibm-granite/granite-3b-code-instruct`](https://huggingface.co/ibm-granite/granite-3b-code-instruct).
Refer to the [original model card](https://huggingface.co/ibm-granite/granite-3b-code-instruct) for more details.
## Use with llama.cpp
```shell
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
# install
make
# run generation
./main -m granite-3b-code-instruct-GGUF/granite-3b-code-instruct.Q4_K_M.gguf -n 128 -p "def generate_random(x: int):" --color
```
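The same file can also be used from Python through llama-cpp-python; a rough sketch (the local path simply mirrors the command above, adjust it to wherever you saved the file):
```python
# Rough llama-cpp-python sketch for code completion with the Q4_K_M file from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="granite-3b-code-instruct-GGUF/granite-3b-code-instruct.Q4_K_M.gguf",
    n_ctx=2048,
)
out = llm("def generate_random(x: int):", max_tokens=128)
print(out["choices"][0]["text"])
```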
|
sd-community/sdxl-flash-lora | sd-community | 2024-06-03T21:56:18Z | 448 | 6 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"lora",
"template:sd-lora",
"sdxl",
"flash",
"sdxl-flash",
"lightning",
"turbo",
"lcm",
"hyper",
"fast",
"fast-sdxl",
"sd-community",
"text-to-image",
"base_model:fluently/Fluently-XL-v4",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2024-06-03T20:24:52Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
base_model: fluently/Fluently-XL-v4
tags:
- safetensors
- stable-diffusion
- lora
- template:sd-lora
- sdxl
- flash
- sdxl-flash
- lightning
- turbo
- lcm
- hyper
- fast
- fast-sdxl
- sd-community
instance_prompt: <lora:sdxl-flash-lora:0.55>
inference:
parameters:
num_inference_steps: 7
guidance_scale: 3
negative_prompt: >-
(deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong
anatomy, extra limb, missing limb, floating limbs, (mutated hands and
fingers:1.4), disconnected limbs, mutation, mutated, ugly, disgusting,
blurry, amputation
---
# **[SDXL Flash](https://huggingface.co/sd-community/sdxl-flash)** with LoRA *in collaboration with [Project Fluently](https://hf.co/fluently)*

Introducing the new fast model SDXL Flash. We found that existing fast XL models gain speed at the cost of quality, so we made our own fast model: it is not as fast as LCM, Turbo, Lightning, or Hyper, but its quality is higher. Below you will see the study of steps and CFG.
### --> **Work with LoRA** <--
- **Trigger word**:
```bash
<lora:sdxl-flash-lora:0.55>
```
- **Optimal LoRA multiplier**: 0.45-0.6 (the best - 0.55)
- **Optimal base model**: [fluently/Fluently-XL-v4](https://huggingface.co/fluently/Fluently-XL-v4)
### Steps and CFG (Guidance)

### Optimal settings
- **Steps**: 6-9
- **CFG Scale**: 2.5-3.5
- **Sampler**: DPM++ SDE
### Diffusers usage
```bash
pip install torch diffusers
```
```py
import torch
from diffusers import StableDiffusionXLPipeline, DPMSolverSinglestepScheduler
# Load model.
pipe = StableDiffusionXLPipeline.from_pretrained("sd-community/sdxl-flash", torch_dtype=torch.float16).to("cuda")
# Ensure sampler uses "trailing" timesteps.
pipe.scheduler = DPMSolverSinglestepScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing")
# Image generation.
pipe("a happy dog, sunny day, realism", num_inference_steps=7, guidance_scale=3).images[0].save("output.png")
```
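The snippet above runs the merged SDXL Flash checkpoint. To apply this repository as a LoRA on top of the recommended base model instead, here is a hedged sketch; the LoRA scale follows the 0.55 multiplier above, and it assumes `load_lora_weights` can resolve the single LoRA safetensors file in this repo:
```python
import torch
from diffusers import StableDiffusionXLPipeline, DPMSolverSinglestepScheduler

# Load the recommended base model and attach this repository as a LoRA.
pipe = StableDiffusionXLPipeline.from_pretrained("fluently/Fluently-XL-v4", torch_dtype=torch.float16).to("cuda")
pipe.scheduler = DPMSolverSinglestepScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing")
pipe.load_lora_weights("sd-community/sdxl-flash-lora")
pipe.fuse_lora(lora_scale=0.55)  # optimal multiplier from the section above

# Image generation with the recommended step and CFG settings.
pipe("a happy dog, sunny day, realism", num_inference_steps=7, guidance_scale=3).images[0].save("output_lora.png")
```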
|
alvdansen/softpastelanime | alvdansen | 2024-06-16T16:30:45Z | 448 | 5 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2024-06-06T17:38:52Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: a man with dark hair
output:
url: images/ComfyUI_00358_.png
- text: a nymph like creature from greek myth, daiton
output:
url: images/ComfyUI_00356_.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
license: creativeml-openrail-m
---
# Soft Pastel Anime
<Gallery />
## Model description
A nice, very stylized sketch anime model that has a bit of a watercolor undertone to it. Works with and without the 'daiton' trigger word.
## Download model
Weights for this model are available in Safetensors format.
Model release is for research purposes only. For commercial use, please contact me directly.
[Download](/alvdansen/softpastelanime/tree/main) them in the Files & versions tab.
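For convenience, a hedged diffusers sketch of using these weights as a LoRA on SDXL; it assumes `load_lora_weights` can resolve the single LoRA safetensors file in this repo, and uses the optional 'daiton' trigger word:
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load SDXL base and attach this LoRA (illustrative sketch; research use only per the note above).
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("alvdansen/softpastelanime")

image = pipe("a nymph like creature from greek myth, daiton", num_inference_steps=30).images[0]
image.save("softpastel.png")
```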
|
gglabs/TinyLM-Chat-0611-4-epoch | gglabs | 2024-06-11T13:53:19Z | 448 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/tinyllama-chat-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-11T13:49:17Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/tinyllama-chat-bnb-4bit
---
# Uploaded model
- **Developed by:** gglabs
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-chat-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
GoodBaiBai88/M3D-LaMed-Phi-3-4B | GoodBaiBai88 | 2024-06-14T02:22:44Z | 448 | 2 | transformers | [
"transformers",
"safetensors",
"lamed_phi3",
"text-generation",
"conversational",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
]
| text-generation | 2024-06-11T18:47:43Z | ---
license: apache-2.0
---
|
allganize/Llama-3-Alpha-Ko-8B-Instruct-GPTQ | allganize | 2024-06-20T11:07:17Z | 448 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"arxiv:2403.07691",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
]
| text-generation | 2024-06-17T10:09:52Z | ---
license: other
license_name: llama3
language:
- ko
---

We are thrilled to introduce **Alpha-Instruct**, our latest language model, which demonstrates exceptional capabilities in both Korean and English. Alpha-Instruct is developed using the **Evolutionary Model Merging** technique, enabling it to excel in complex language tasks and logical reasoning.
A key aspect of Alpha-Instruct's development is our **community-based approach**. We draw inspiration and ideas from various communities, shaping our datasets, methodologies, and the model itself. In return, we are committed to sharing our insights with the community, providing detailed information on the data, methods, and models used in Alpha-Instruct's creation.
Alpha-Instruct has achieved outstanding performance on the **LogicKor, scoring an impressive 6.62**. Remarkably, this performance rivals that of 70B models, showcasing the efficiency and power of our 8B model. This achievement highlights Alpha-Instruct's advanced computational and reasoning skills, making it a leading choice for diverse and demanding language tasks.
**For more information and technical details about Alpha-Instruct, stay tuned to our updates and visit our [website](https://allganize-alpha.github.io/) (Soon).**
---
## Overview
Alpha-Instruct is our latest language model, developed using the 'Evolutionary Model Merging' technique. This method employs a 1:1 ratio of task-specific datasets from KoBEST and Haerae, resulting in a model named 'Alpha-Ko-8B-Evo'. The following models were used for merging:
- [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) (Base)
- [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) (Instruct)
- [Llama-3-Open-Ko-8B](https://huggingface.co/beomi/Llama-3-Open-Ko-8B) (Continually Pretrained)
To refine and enhance Alpha-Instruct, we utilized carefully curated, high-quality datasets aimed at 'healing' the model's output, significantly boosting its human preference scores. We use [ORPO](https://arxiv.org/abs/2403.07691) specifically for this "healing" (RLHF) phase. The datasets* used include:
- [Korean-Human-Judgements](https://huggingface.co/datasets/HAERAE-HUB/Korean-Human-Judgements)
- [Orca-Math](https://huggingface.co/datasets/kuotient/orca-math-word-problems-193k-korean)
- [dpo-mix-7k](https://huggingface.co/datasets/argilla/dpo-mix-7k)
*Some of these datasets were partially used and translated for training, and we ensured there was no contamination during the evaluation process.
This approach effectively balances human preferences with the model's capabilities, making Alpha-Instruct well-suited for real-life scenarios where user satisfaction and performance are equally important.
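This 'healing' phase maps naturally onto TRL's ORPO trainer; the sketch below is purely illustrative of such a run (the model id, hyperparameters, and dataset handling are assumptions, not the exact recipe used here):
```python
# Illustrative ORPO sketch with TRL; model id, hyperparameters, and data handling are assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_name = "your-org/alpha-ko-8b-evo"  # hypothetical id for the merged (Evo) model
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# ORPO expects preference data with "prompt", "chosen", and "rejected" fields.
dataset = load_dataset("argilla/dpo-mix-7k", split="train")

config = ORPOConfig(output_dir="alpha-instruct-orpo", beta=0.1, max_length=2048)
trainer = ORPOTrainer(model=model, args=config, train_dataset=dataset, tokenizer=tokenizer)
trainer.train()
```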
## Benchmark Results
Results in [LogicKor](https://github.com/StableFluffy/LogicKor)* are as follows:
| Model | Single turn* | Multi turn* | Overall* |
|:------------------------------:|:------------:|:-----------:|:--------:|
| MLP-KTLim/llama-3-Korean-Bllossom-8B | 4.238 | 3.404 | 3.821 |
| Alpha-Ko-Evo | 5.143 | 5.238 | 5.190 |
| Alpha-Ko-Instruct (alt) | 7.095 | **6.571** | **6.833** |
| Alpha-Ko-Instruct | **7.143** | 6.065 | 6.620 |
| Alpha-Ko-Instruct-marlin (4bit) | 6.857 | 5.738 | 6.298 |
*Self-reported (default settings with 'alpha' template, mean of 3).
Results on KoBEST (acc, num_shot=5) are as follows:
| Task | beomi/Llama-3-Open-Ko-8B-Instruct | maywell/Llama-3-Ko-8B-Instruct | **Alpha-Ko-Evo** | **Alpha-Ko-Instruct** |
| --- | --- | --- | --- | --- |
| kobest overall | 0.6220 | 0.6852 | 0.7229 | 0.7055 |
| kobest_boolq | 0.6254 | 0.7208 | 0.8547 | 0.8369 |
| kobest_copa | 0.7110 | 0.7650 | 0.7420 | 0.7420 |
| kobest_hellaswag | 0.3840 | 0.4440 | 0.4220 | 0.4240 |
| kobest_sentineg | 0.8388 | 0.9194 | 0.9471 | 0.9244 |
| kobest_wic | 0.5738 | 0.6040 | 0.6095 | 0.5730 |
*For reference, 'merged' models are chosen.
## How to use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "allganize/Llama-3-Alpha-Ko-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype="auto",
device_map="auto",
)
messages = [
{"role": "system", "content": "๋น์ ์ ์ธ๊ณต์ง๋ฅ ์ด์์คํดํธ์
๋๋ค. ๋ฌป๋ ๋ง์ ์น์ ํ๊ณ ์ ํํ๊ฒ ๋ต๋ณํ์ธ์."},
{"role": "user", "content": "ํผ๋ณด๋์น ์์ด์ด ๋ญ์ผ? ๊ทธ๋ฆฌ๊ณ ํผ๋ณด๋์น ์์ด์ ๋ํด ํ์ด์ฌ ์ฝ๋๋ฅผ ์ง์ค๋ณผ๋?"},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=512,
eos_token_id=terminators,
do_sample=False,
repetition_penalty=1.05,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
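Since this repository hosts the GPTQ-quantized weights, the same flow can point at this repo directly; a hedged sketch (it assumes the `optimum` and `auto-gptq` packages are installed so that `from_pretrained` can load the 4-bit checkpoint):
```python
# Loading the 4-bit GPTQ checkpoint from this repository (requires optimum + auto-gptq).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allganize/Llama-3-Alpha-Ko-8B-Instruct-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # GPTQ kernels run on GPU
)
```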
## Correspondence to
- Ji soo Kim ([email protected])
- Contributors
- Sangmin Jeon ([email protected])
- Seungwoo Ryu ([email protected])
## Special Thanks
- [@beomi](https://huggingface.co/beomi) for providing us with a great model!
## License
The use of this model is governed by the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://llama.meta.com/llama3/license/)
## Citation
If you use this model in your research, please cite it as follows:
```bibtex
@misc{alpha-instruct,
author = {Ji soo Kim},
title = {Alpha-Instruct: Allganize Bilingual Model},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face repository},
url = {https://huggingface.co/allganize/Llama-3-Alpha-Ko-8B-Instruct},
} |
lmms-lab/LongVA-7B-DPO | lmms-lab | 2024-06-26T03:32:53Z | 448 | 5 | transformers | [
"transformers",
"safetensors",
"llava_qwen",
"text-generation",
"conversational",
"arxiv:2406.16852",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-06-21T09:00:14Z | # LongVA
<p align="center">
<img src="https://i.postimg.cc/4xFmj8wd/v-niah.png" width="800">
</p>
<p align="center">
<a href="https://lmms-lab.github.io/posts/longva/" target="_blank">Blog</a> | <a href="https://arxiv.org/abs/2406.16852" target="_blank">Paper</a> | <a href="https://huggingface.co/collections/lmms-lab/longva-667538e09329dbc7ea498057" target="_blank">Hugging Face</a> | <a href="https://longva-demo.lmms-lab.com/" target="_blank">Demo</a>
</p>
Long context capability can **zero-shot transfer** from language to vision.
LongVA can process **2000** frames or over **200K** visual tokens. It achieves **state-of-the-art** performance on Video-MME among 7B models.
# Usage
First follow the instructions in [our repo](https://github.com/EvolvingLMMs-Lab/LongVA) to install relevant packages.
```python
from longva.model.builder import load_pretrained_model
from longva.mm_utils import tokenizer_image_token, process_images
from longva.constants import IMAGE_TOKEN_INDEX
from PIL import Image
from decord import VideoReader, cpu
import torch
import numpy as np
# fix seed
torch.manual_seed(0)
model_path = "lmms-lab/LongVA-7B-DPO"
image_path = "local_demo/assets/lmms-eval.png"
video_path = "local_demo/assets/dc_demo.mp4"
max_frames_num = 16 # you can increase this to several thousand, as long as your GPU memory can handle it :)
gen_kwargs = {"do_sample": True, "temperature": 0.5, "top_p": None, "num_beams": 1, "use_cache": True, "max_new_tokens": 1024}
tokenizer, model, image_processor, _ = load_pretrained_model(model_path, None, "llava_qwen", device_map="cuda:0")
#image input
prompt = "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<image>\nDescribe the image in details.<|im_end|>\n<|im_start|>assistant\n"
input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt").unsqueeze(0).to(model.device)
image = Image.open(image_path).convert("RGB")
images_tensor = process_images([image], image_processor, model.config).to(model.device, dtype=torch.float16)
with torch.inference_mode():
output_ids = model.generate(input_ids, images=images_tensor, image_sizes=[image.size], modalities=["image"], **gen_kwargs)
outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0].strip()
print(outputs)
print("-"*50)
#video input
prompt = "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<image>\nGive a detailed caption of the video as if I am blind.<|im_end|>\n<|im_start|>assistant\n"
input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt").unsqueeze(0).to(model.device)
vr = VideoReader(video_path, ctx=cpu(0))
total_frame_num = len(vr)
uniform_sampled_frames = np.linspace(0, total_frame_num - 1, max_frames_num, dtype=int)
frame_idx = uniform_sampled_frames.tolist()
frames = vr.get_batch(frame_idx).asnumpy()
video_tensor = image_processor.preprocess(frames, return_tensors="pt")["pixel_values"].to(model.device, dtype=torch.float16)
with torch.inference_mode():
output_ids = model.generate(input_ids, images=[video_tensor], modalities=["video"], **gen_kwargs)
outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0].strip()
print(outputs)
```
## License
This project utilizes certain datasets and checkpoints that are subject to their respective original licenses. Users must comply with all terms and conditions of these original licenses, including but not limited to the OpenAI Terms of Use for the dataset and the specific licenses for base language models (Qwen2 license). This project does not impose any additional constraints beyond those stipulated in the original licenses. Furthermore, users are reminded to ensure that their use of the dataset and checkpoints is in compliance with all applicable laws and regulations.
|
NikolayKozloff/Llama-3SOME-8B-v2-Q4_K_S-GGUF | NikolayKozloff | 2024-06-21T20:55:54Z | 448 | 1 | null | [
"gguf",
"not-for-all-audiences",
"llama-cpp",
"gguf-my-repo",
"base_model:TheDrummer/Llama-3SOME-8B-v2",
"license:cc-by-nc-4.0",
"region:us"
]
| null | 2024-06-21T20:55:33Z | ---
base_model: TheDrummer/Llama-3SOME-8B-v2
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/Llama-3SOME-8B-v2-Q4_K_S-GGUF
This model was converted to GGUF format from [`TheDrummer/Llama-3SOME-8B-v2`](https://huggingface.co/TheDrummer/Llama-3SOME-8B-v2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/TheDrummer/Llama-3SOME-8B-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Llama-3SOME-8B-v2-Q4_K_S-GGUF --hf-file llama-3some-8b-v2-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/Llama-3SOME-8B-v2-Q4_K_S-GGUF --hf-file llama-3some-8b-v2-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/Llama-3SOME-8B-v2-Q4_K_S-GGUF --hf-file llama-3some-8b-v2-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/Llama-3SOME-8B-v2-Q4_K_S-GGUF --hf-file llama-3some-8b-v2-q4_k_s.gguf -c 2048
```
|