Dataset schema (one record per model card):

| Column | Type | Range / cardinality |
|:--------------|:-----------------------|:--------------------|
| modelId | string | length 5-122 |
| author | string | length 2-42 |
| last_modified | timestamp[us, tz=UTC] | |
| downloads | int64 | 0-738M |
| likes | int64 | 0-11k |
| library_name | string | 245 classes |
| tags | list | length 1-4.05k |
| pipeline_tag | string | 48 classes |
| createdAt | timestamp[us, tz=UTC] | |
| card | string | length 1-901k |
w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored
|
w4r10ck
| 2024-05-31T16:56:44Z | 359 | 28 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-12-14T01:33:36Z |
---
license: apache-2.0
---
# SOLAR-10.7B-Instruct-v1.0-uncensored
SOLAR-10.7B-Instruct-v1.0 finetuned to be less censored. Refer to [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0) for model info and usage instructions.
## Training details
This model was trained with LoRA and TRL's DPOTrainer on [unalignment/toxic-dpo-v0.1](https://huggingface.co/datasets/unalignment/toxic-dpo-v0.1).
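The card does not state the exact training hyperparameters. As a rough sketch of what a LoRA + DPOTrainer run on that dataset can look like (every hyperparameter below is illustrative rather than taken from this model, and `DPOTrainer` argument names vary across `trl` versions):
```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "upstage/SOLAR-10.7B-Instruct-v1.0"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# toxic-dpo-v0.1 already provides the prompt/chosen/rejected columns DPOTrainer expects
dataset = load_dataset("unalignment/toxic-dpo-v0.1", split="train")

trainer = DPOTrainer(
    model=model,
    ref_model=None,  # with a PEFT config, the frozen base model acts as the reference
    args=TrainingArguments(output_dir="solar-uncensored-dpo", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, learning_rate=5e-5,
                           num_train_epochs=1, remove_unused_columns=False),
    beta=0.1,
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM"),
)
trainer.train()
```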
## How to Cite
```
@misc{solarUncensoredDPO,
title={solar-10.7b-instruct-V1.0-uncensored},
url={https://huggingface.co/w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored},
author={Stepan Zuev},
year={2023},
month={Dec}
}
```
|
mmnga/lightblue-karasu-7B-chat-plus-unleashed-gguf
|
mmnga
| 2024-05-07T12:58:38Z | 359 | 4 | null |
[
"gguf",
"mistral",
"ja",
"dataset:OpenAssistant/oasst1",
"dataset:zetavg/ShareGPT-Processed",
"dataset:augmxnt/ultra-orca-boros-en-ja-v1",
"license:apache-2.0",
"region:us"
] | null | 2023-12-29T12:21:29Z |
---
license: apache-2.0
tags:
- mistral
language:
- ja
datasets:
- OpenAssistant/oasst1
- zetavg/ShareGPT-Processed
- augmxnt/ultra-orca-boros-en-ja-v1
---
# lightblue-karasu-7B-chat-plus-unleashed-gguf
A GGUF-format conversion of [karasu-7B-chat-plus-unleashed, published by lightblue](https://huggingface.co/lightblue/karasu-7B-chat-plus-unleashed).
## Other models
[mmnga/lightblue-Karasu-Mixtral-8x22B-v0.1-gguf](https://huggingface.co/mmnga/lightblue-Karasu-Mixtral-8x22B-v0.1-gguf)
[mmnga/lightblue-suzume-llama-3-8B-multilingual-gguf](https://huggingface.co/mmnga/lightblue-suzume-llama-3-8B-multilingual-gguf)
[mmnga/lightblue-suzume-llama-3-8B-japanese-gguf](https://huggingface.co/mmnga/lightblue-suzume-llama-3-8B-japanese-gguf)
[mmnga/lightblue-ao-karasu-72B-gguf](https://huggingface.co/mmnga/lightblue-ao-karasu-72B-gguf)
[mmnga/lightblue-karasu-1.1B-gguf](https://huggingface.co/mmnga/lightblue-karasu-1.1B-gguf)
[mmnga/lightblue-karasu-7B-chat-plus-unleashed-gguf](https://huggingface.co/mmnga/lightblue-karasu-7B-chat-plus-unleashed-gguf)
[mmnga/lightblue-qarasu-14B-chat-plus-unleashed-gguf](https://huggingface.co/mmnga/lightblue-qarasu-14B-chat-plus-unleashed-gguf)
## Usage
```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
./main -m 'lightblue-karasu-7B-chat-plus-unleashed-q4_0.gguf' -n 512 -p '[INST] <<SYS>>あなたはAIアシスタントです。 <</SYS>> イギリスの首相は誰ですか? [/INST]'
```
## License
*The license is inherited unchanged from the original model, but it may differ depending on the contents of the datasets used for training. Please verify before use.*
|
cazure87/5CkfAvdbsqJ3j8QgzfXPpmWd7LQfxEkMdx6hXFFgRFEnfpAo_vgg
|
cazure87
| 2024-03-02T06:47:57Z | 359 | 0 |
keras
|
[
"keras",
"region:us"
] | null | 2024-02-15T19:15:19Z |
Entry not found
|
CompendiumLabs/bge-large-zh-v1.5-gguf
|
CompendiumLabs
| 2024-02-17T22:04:50Z | 359 | 4 | null |
[
"gguf",
"license:mit",
"region:us"
] | null | 2024-02-17T21:58:26Z |
---
license: mit
---
<img src="https://raw.githubusercontent.com/CompendiumLabs/compendiumlabs.ai/main/images/logo_text_crop.png" alt="Compendium Labs" style="width: 500px;">
# bge-large-zh-v1.5-gguf
Source model: https://huggingface.co/BAAI/bge-large-zh-v1.5
Quantized and unquantized embedding models in GGUF format for use with `llama.cpp`. A large benefit over `transformers` is almost guaranteed, while the benefit over ONNX varies by application; in practice this gives a large speedup on CPU and a modest speedup on GPU for larger models. Because these models are relatively small, quantization brings limited gains, but it still yields up to a 30% speedup on CPU with minimal loss in accuracy.
<br/>
# Files Available
<div style="width: 500px; margin: 0;">
| Filename | Quantization | Size |
|:-------- | ------------ | ---- |
| [bge-large-zh-v1.5-f32.gguf](https://huggingface.co/CompendiumLabs/bge-large-zh-v1.5-gguf/blob/main/bge-large-zh-v1.5-f32.gguf) | F32 | 1.3 GB |
| [bge-large-zh-v1.5-f16.gguf](https://huggingface.co/CompendiumLabs/bge-large-zh-v1.5-gguf/blob/main/bge-large-zh-v1.5-f16.gguf) | F16 | 620 MB |
| [bge-large-zh-v1.5-q8_0.gguf](https://huggingface.co/CompendiumLabs/bge-large-zh-v1.5-gguf/blob/main/bge-large-zh-v1.5-q8_0.gguf) | Q8_0 | 332 MB |
| [bge-large-zh-v1.5-q4_k_m.gguf](https://huggingface.co/CompendiumLabs/bge-large-zh-v1.5-gguf/blob/main/bge-large-zh-v1.5-q4_k_m.gguf) | Q4_K_M | 193 MB |
</div>
<br/>
# Usage
These model files can be used with pure `llama.cpp` or with the `llama-cpp-python` Python bindings:
```python
from llama_cpp import Llama

model = Llama(gguf_path, embedding=True)  # gguf_path: path to one of the GGUF files listed above
embed = model.embed(texts)                # texts: a string or a list of strings
```
Here `texts` can either be a string or a list of strings, and the return value is a list of embedding vectors. The inputs are grouped into batches automatically for efficient execution. There is also LangChain integration through `langchain_community.embeddings.LlamaCppEmbeddings`.
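For the LangChain route mentioned above, a minimal sketch might look like this (the local file name is a placeholder; any of the GGUF files from the table works):
```python
from langchain_community.embeddings import LlamaCppEmbeddings

# Point at a locally downloaded GGUF file from the table above
embeddings = LlamaCppEmbeddings(model_path="bge-large-zh-v1.5-q8_0.gguf")

doc_vectors = embeddings.embed_documents(["今天天气很好", "我喜欢机器学习"])
query_vector = embeddings.embed_query("天气怎么样")
print(len(doc_vectors), len(query_vector))
```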
|
RunDiffusion/Juggernaut-XL-v7
|
RunDiffusion
| 2024-03-11T20:07:38Z | 359 | 1 |
diffusers
|
[
"diffusers",
"art",
"people",
"diffusion",
"Cinematic",
"Photography",
"Landscape",
"Interior",
"Food",
"Car",
"Wildlife",
"Architecture",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-02-21T23:50:00Z |
---
language:
- en
license: creativeml-openrail-m
library_name: diffusers
tags:
- art
- people
- diffusion
- Cinematic
- Photography
- Landscape
- Interior
- Food
- Car
- Wildlife
- Architecture
thumbnail: https://imagedelivery.net/siANnpeNAc_S2q1M3-eDrA/7bbcc140-3e88-442d-d127-85a79e7fbb00/padthumb
base_model: stabilityai/stable-diffusion-xl-base-1.0
---
# Juggernaut XL v7 + RunDiffusion Photo v1 Official


## Juggernaut v9 is here! [Juggernaut v9 + RunDiffusion Photo v2](https://huggingface.co/RunDiffusion/Juggernaut-XL-v9)
This model is not permitted to be used behind API services. Please contact [[email protected]](mailto:[email protected]) for business inquiries, commercial licensing, custom models, and consultation.
Juggernaut is available on the new Auto1111 Forge on [RunDiffusion](http://rundiffusion.com/?utm_source=huggingface&utm_medium=referral&utm_campaign=Kandoo)
A big thanks for Version 7 goes to [RunDiffusion](http://rundiffusion.com/?utm_source=huggingface&utm_medium=referral&utm_campaign=Kandoo) ([Photo Model](https://rundiffusion.com/rundiffusion-photo/?utm_source=huggingface&utm_medium=referral&utm_campaign=Kandoo)) and [Adam](https://twitter.com/Colorblind_Adam), who diligently helped me test :) (Leave some love for them ;) )
For business inquiries, commercial licensing, custom models, and consultation, contact me at [email protected]
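The card itself ships no code snippet; a minimal `diffusers` sketch for loading this repository might look like the following (prompt and sampler settings are purely illustrative):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "RunDiffusion/Juggernaut-XL-v7",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "cinematic photo of a lighthouse at dusk, volumetric light, 35mm",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("juggernaut_v7.png")
```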
|
ChrisWilson011016/5C7jPXvfkFY5WgfsZoJfjtNkBhcyZZ7cMr7Y5jWyJga6WKMa_vgg
|
ChrisWilson011016
| 2024-03-04T18:53:30Z | 359 | 0 |
keras
|
[
"keras",
"region:us"
] | null | 2024-02-24T15:16:21Z |
Entry not found
|
levimorin/5GsSNa8Xv3bywNtkzmqCuDsNMALTztXkpLTpnnQs3CPxMZnf_vgg
|
levimorin
| 2024-03-08T19:10:06Z | 359 | 0 |
keras
|
[
"keras",
"region:us"
] | null | 2024-03-03T04:59:41Z |
Entry not found
|
FreedomIntelligence/Apollo-7B-GGUF
|
FreedomIntelligence
| 2024-04-28T16:05:32Z | 359 | 7 | null |
[
"gguf",
"arxiv:2403.03640",
"license:apache-2.0",
"region:us"
] | null | 2024-03-22T11:16:48Z |
---
license: apache-2.0
---
# Multilingual Medicine: Model, Dataset, Benchmark, Code
Covering English, Chinese, French, Hindi, Spanish, and Arabic so far.
<p align="center">
👨🏻💻<a href="https://github.com/FreedomIntelligence/Apollo" target="_blank">Github</a> •📃 <a href="https://arxiv.org/abs/2403.03640" target="_blank">Paper</a> • 🌐 <a href="https://apollo.llmzoo.com/" target="_blank">Demo</a> • 🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus" target="_blank">ApolloCorpus</a> • 🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/XMedbench" target="_blank">XMedBench</a>
<br> <a href="./README_zh.md"> 中文 </a> | <a href="./README.md"> English </a>
</p>

## 🌈 Update
* **[2024.04.28]** We have updated multiple versions of the Apollo-7B GGUF model.
* **[2024.03.07]** [Paper](https://arxiv.org/abs/2403.03640) released.
* **[2024.02.12]** <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus" target="_blank">ApolloCorpus</a> and <a href="https://huggingface.co/datasets/FreedomIntelligence/XMedbench" target="_blank">XMedBench</a> are published!🎉
* **[2024.01.23]** Apollo repo is published!🎉
## Overview
| Type | Size/GB | Notes |
|:-|----:|:----|
| [Q2_K](https://huggingface.co/FreedomIntelligence/Apollo-7B-GGUF/resolve/main/Apollo-7B.Q2_K.gguf) | 3.6 | |
| [IQ3_XS](https://huggingface.co/FreedomIntelligence/Apollo-7B-GGUF/resolve/main/Apollo-7B.IQ3_XS.gguf) | 3.9 | |
| [IQ3_S](https://huggingface.co/FreedomIntelligence/Apollo-7B-GGUF/resolve/main/Apollo-7B.IQ3_S.gguf) | 4.1 | beats Q3_K* |
| [Q3_K_S](https://huggingface.co/FreedomIntelligence/Apollo-7B-GGUF/resolve/main/Apollo-7B.Q3_K_S.gguf) | 4.1 | |
| [IQ3_M](https://huggingface.co/FreedomIntelligence/Apollo-7B-GGUF/resolve/main/Apollo-7B.IQ3_M.gguf) | 4.2 | |
| [Q3_K_M](https://huggingface.co/FreedomIntelligence/Apollo-7B-GGUF/resolve/main/Apollo-7B.Q3_K_M.gguf) | 4.5 | lower quality |
| [Q3_K_L](https://huggingface.co/FreedomIntelligence/Apollo-7B-GGUF/resolve/main/Apollo-7B.Q3_K_L.gguf) | 4.8 | |
| [IQ4_XS](https://huggingface.co/FreedomIntelligence/Apollo-7B-GGUF/resolve/main/Apollo-7B.IQ4_XS.gguf) | 4.9 | |
| [Q4_K_S](https://huggingface.co/FreedomIntelligence/Apollo-7B-GGUF/resolve/main/Apollo-7B.Q4_K_S.gguf) | 5.1 | fast, recommended |
| [Q4_K_M](https://huggingface.co/FreedomIntelligence/Apollo-7B-GGUF/resolve/main/Apollo-7B.Q4_K_M.gguf) | 5.4 | fast, recommended |
| [Q5_K_S](https://huggingface.co/FreedomIntelligence/Apollo-7B-GGUF/resolve/main/Apollo-7B.Q5_K_S.gguf) | 6.1 | |
| [Q5_K_M](https://huggingface.co/FreedomIntelligence/Apollo-7B-GGUF/resolve/main/Apollo-7B.Q5_K_M.gguf) | 6.2 | |
| [Q6_K](https://huggingface.co/FreedomIntelligence/Apollo-7B-GGUF/resolve/main/Apollo-7B.Q6_K.gguf) | 7.1 | very good quality |
| [Q8_0](https://huggingface.co/FreedomIntelligence/Apollo-7B-GGUF/resolve/main/Apollo-7B.Q8_0.gguf) | 9.2 | fast, best quality, but very large |
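These files can be run locally with `llama.cpp` or its Python bindings. A minimal `llama-cpp-python` sketch (the file name and the prompt template are assumptions, not taken from this card; see the original Apollo-7B card for the exact template):
```python
from llama_cpp import Llama

# Assumes one of the quants from the table above has been downloaded locally
llm = Llama(model_path="Apollo-7B.Q4_K_M.gguf", n_ctx=4096)

# Prompt format below is only a guess for illustration
out = llm("User: What are the common symptoms of iron deficiency?\nAssistant:",
          max_tokens=256, stop=["User:"])
print(out["choices"][0]["text"])
```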
## Results
🤗<a href="https://huggingface.co/FreedomIntelligence/Apollo-0.5B" target="_blank">Apollo-0.5B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-1.8B" target="_blank">Apollo-1.8B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-2B" target="_blank">Apollo-2B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-6B" target="_blank">Apollo-6B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-7B" target="_blank">Apollo-7B</a>
🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-0.5B-GGUF" target="_blank">Apollo-0.5B-GGUF</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-2B-GGUF" target="_blank">Apollo-2B-GGUF</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-6B-GGUF" target="_blank">Apollo-6B-GGUF</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-7B-GGUF" target="_blank">Apollo-7B-GGUF</a>

## Dataset & Evaluation
- Dataset
🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus" target="_blank">ApolloCorpus</a>
<details><summary>Click to expand</summary>

- [Zip File](https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus/blob/main/ApolloCorpus.zip)
- [Data category](https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus/tree/main/train)
- Pretrain:
- data item:
- json_name: {data_source}_{language}_{data_type}.json
- data_type: medicalBook, medicalGuideline, medicalPaper, medicalWeb(from online forum), medicalWiki
- language: en(English), zh(chinese), es(spanish), fr(french), hi(Hindi)
- data_type: qa(generated qa from text)
- data_type==text: list of string
```
[
"string1",
"string2",
...
]
```
- data_type==qa: list of qa pairs(list of string)
```
[
[
"q1",
"a1",
"q2",
"a2",
...
],
...
]
```
- SFT:
- json_name: {data_source}_{language}.json
- data_type: code, general, math, medicalExam, medicalPatient
- data item: list of qa pairs(list of string)
```
[
[
"q1",
"a1",
"q2",
"a2",
...
],
...
]
```
</details>
- Evaluation
🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/XMedbench" target="_blank">XMedBench</a>
<details><summary>Click to expand</summary>
- EN:
- [MedQA-USMLE](https://huggingface.co/datasets/GBaker/MedQA-USMLE-4-options)
- [MedMCQA](https://huggingface.co/datasets/medmcqa/viewer/default/test)
- [PubMedQA](https://huggingface.co/datasets/pubmed_qa): Because the results fluctuated too much, they were not used in the paper.
- [MMLU-Medical](https://huggingface.co/datasets/cais/mmlu)
- Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
- ZH:
- [MedQA-MCMLE](https://huggingface.co/datasets/bigbio/med_qa/viewer/med_qa_zh_4options_bigbio_qa/test)
- [CMB-single](https://huggingface.co/datasets/FreedomIntelligence/CMB): Not used in the paper
- Randomly sample 2,000 multiple-choice questions with single answer.
- [CMMLU-Medical](https://huggingface.co/datasets/haonan-li/cmmlu)
- Anatomy, Clinical_knowledge, College_medicine, Genetics, Nutrition, Traditional_chinese_medicine, Virology
- [CExam](https://github.com/williamliujl/CMExam): Not used in the paper
- Randomly sample 2,000 multiple-choice questions
- ES: [Head_qa](https://huggingface.co/datasets/head_qa)
- FR: [Frenchmedmcqa](https://github.com/qanastek/FrenchMedMCQA)
- HI: [MMLU_HI](https://huggingface.co/datasets/FreedomIntelligence/MMLU_Hindi)
- Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
- AR: [MMLU_Ara](https://huggingface.co/datasets/FreedomIntelligence/MMLU_Arabic)
- Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
</details>
## Results reproduction
<details><summary>Click to expand</summary>
**Waiting for Update**
</details>
## Acknowledgment
We sincerely thank [mradermacher](https://huggingface.co/mradermacher/Apollo-7B-GGUF) for the assistance in providing multiple versions of the Apollo-7B GGUF model!
## Citation
Please use the following citation if you intend to use our dataset for training or evaluation:
```
@misc{wang2024apollo,
title={Apollo: Lightweight Multilingual Medical LLMs towards Democratizing Medical AI to 6B People},
author={Xidong Wang and Nuo Chen and Junyin Chen and Yan Hu and Yidong Wang and Xiangbo Wu and Anningzhe Gao and Xiang Wan and Haizhou Li and Benyou Wang},
year={2024},
eprint={2403.03640},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
mradermacher/MergerixInex12-7B-GGUF
|
mradermacher
| 2024-05-06T05:03:11Z | 359 | 0 |
transformers
|
[
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"en",
"base_model:automerger/MergerixInex12-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-10T20:23:40Z |
---
base_model: automerger/MergerixInex12-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- automerger
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/automerger/MergerixInex12-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MergerixInex12-7B-GGUF/resolve/main/MergerixInex12-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/MergerixInex12-7B-GGUF/resolve/main/MergerixInex12-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/MergerixInex12-7B-GGUF/resolve/main/MergerixInex12-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/MergerixInex12-7B-GGUF/resolve/main/MergerixInex12-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MergerixInex12-7B-GGUF/resolve/main/MergerixInex12-7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/MergerixInex12-7B-GGUF/resolve/main/MergerixInex12-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MergerixInex12-7B-GGUF/resolve/main/MergerixInex12-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/MergerixInex12-7B-GGUF/resolve/main/MergerixInex12-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/MergerixInex12-7B-GGUF/resolve/main/MergerixInex12-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MergerixInex12-7B-GGUF/resolve/main/MergerixInex12-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MergerixInex12-7B-GGUF/resolve/main/MergerixInex12-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/MergerixInex12-7B-GGUF/resolve/main/MergerixInex12-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/MergerixInex12-7B-GGUF/resolve/main/MergerixInex12-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MergerixInex12-7B-GGUF/resolve/main/MergerixInex12-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
nitsuai/llama-3-neural-chat-v1-8b-GGUF
|
nitsuai
| 2024-04-22T01:21:42Z | 359 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"dataset:mlabonne/orpo-dpo-mix-40k",
"dataset:Open-Orca/SlimOrca-Dedup",
"dataset:jondurbin/airoboros-3.2",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:m-a-p/Code-Feedback",
"dataset:MaziyarPanahi/WizardLM_evol_instruct_V2_196k",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-22T01:21:42Z |
---
library_name: transformers
license: other
datasets:
- mlabonne/orpo-dpo-mix-40k
- Open-Orca/SlimOrca-Dedup
- jondurbin/airoboros-3.2
- microsoft/orca-math-word-problems-200k
- m-a-p/Code-Feedback
- MaziyarPanahi/WizardLM_evol_instruct_V2_196k
base_model: meta-llama/Meta-Llama-3-8B
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp iMatrix Quantizations of llama-3-neural-chat-v1-8b
This model has the <|eot_id|> token set to not-special, which seems to work better with current inference engines.
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> fork from pcuenca <a href="https://github.com/pcuenca/llama.cpp/tree/llama3-conversion">llama3-conversion</a> for quantization.
Original model: https://huggingface.co/Locutusque/llama-3-neural-chat-v1-8b
All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [llama-3-neural-chat-v1-8b-Q8_0.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. |
| [llama-3-neural-chat-v1-8b-Q6_K.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. |
| [llama-3-neural-chat-v1-8b-Q5_K_M.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. |
| [llama-3-neural-chat-v1-8b-Q5_K_S.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. |
| [llama-3-neural-chat-v1-8b-Q4_K_M.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [llama-3-neural-chat-v1-8b-Q4_K_S.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. |
| [llama-3-neural-chat-v1-8b-IQ4_NL.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [llama-3-neural-chat-v1-8b-IQ4_XS.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [llama-3-neural-chat-v1-8b-Q3_K_L.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. |
| [llama-3-neural-chat-v1-8b-Q3_K_M.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. |
| [llama-3-neural-chat-v1-8b-IQ3_M.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [llama-3-neural-chat-v1-8b-IQ3_S.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [llama-3-neural-chat-v1-8b-Q3_K_S.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. |
| [llama-3-neural-chat-v1-8b-IQ3_XS.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [llama-3-neural-chat-v1-8b-IQ3_XXS.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [llama-3-neural-chat-v1-8b-Q2_K.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. |
| [llama-3-neural-chat-v1-8b-IQ2_M.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [llama-3-neural-chat-v1-8b-IQ2_S.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. |
| [llama-3-neural-chat-v1-8b-IQ2_XS.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. |
| [llama-3-neural-chat-v1-8b-IQ2_XXS.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. |
| [llama-3-neural-chat-v1-8b-IQ1_M.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. |
| [llama-3-neural-chat-v1-8b-IQ1_S.gguf](https://huggingface.co/bartowski/llama-3-neural-chat-v1-8b-GGUF/blob/main/llama-3-neural-chat-v1-8b-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. |
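To fetch a single file programmatically rather than cloning the whole branch, `huggingface_hub` can be used; a minimal sketch (pick any filename from the table above):
```python
from huggingface_hub import hf_hub_download

# Downloads one quant file and returns its local path
path = hf_hub_download(
    repo_id="bartowski/llama-3-neural-chat-v1-8b-GGUF",
    filename="llama-3-neural-chat-v1-8b-Q4_K_M.gguf",
)
print(path)
```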
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also targets AMD cards, so if you have an AMD card, double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
mradermacher/kappa-3-phi-abliterated-GGUF
|
mradermacher
| 2024-05-12T19:08:59Z | 359 | 3 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:failspy/kappa-3-phi-abliterated",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-05-12T18:55:20Z |
---
base_model: failspy/kappa-3-phi-abliterated
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/failspy/kappa-3-phi-abliterated
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/kappa-3-phi-abliterated-GGUF/resolve/main/kappa-3-phi-abliterated.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/kappa-3-phi-abliterated-GGUF/resolve/main/kappa-3-phi-abliterated.IQ3_XS.gguf) | IQ3_XS | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/kappa-3-phi-abliterated-GGUF/resolve/main/kappa-3-phi-abliterated.IQ3_S.gguf) | IQ3_S | 1.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/kappa-3-phi-abliterated-GGUF/resolve/main/kappa-3-phi-abliterated.Q3_K_S.gguf) | Q3_K_S | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/kappa-3-phi-abliterated-GGUF/resolve/main/kappa-3-phi-abliterated.IQ3_M.gguf) | IQ3_M | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/kappa-3-phi-abliterated-GGUF/resolve/main/kappa-3-phi-abliterated.Q3_K_M.gguf) | Q3_K_M | 2.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/kappa-3-phi-abliterated-GGUF/resolve/main/kappa-3-phi-abliterated.IQ4_XS.gguf) | IQ4_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/kappa-3-phi-abliterated-GGUF/resolve/main/kappa-3-phi-abliterated.Q3_K_L.gguf) | Q3_K_L | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/kappa-3-phi-abliterated-GGUF/resolve/main/kappa-3-phi-abliterated.Q4_K_S.gguf) | Q4_K_S | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/kappa-3-phi-abliterated-GGUF/resolve/main/kappa-3-phi-abliterated.Q4_K_M.gguf) | Q4_K_M | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/kappa-3-phi-abliterated-GGUF/resolve/main/kappa-3-phi-abliterated.Q5_K_S.gguf) | Q5_K_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/kappa-3-phi-abliterated-GGUF/resolve/main/kappa-3-phi-abliterated.Q5_K_M.gguf) | Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/kappa-3-phi-abliterated-GGUF/resolve/main/kappa-3-phi-abliterated.Q6_K.gguf) | Q6_K | 3.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/kappa-3-phi-abliterated-GGUF/resolve/main/kappa-3-phi-abliterated.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/kappa-3-phi-abliterated-GGUF/resolve/main/kappa-3-phi-abliterated.f16.gguf) | f16 | 7.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
gokaygokay/paligemma-docci-transformers
|
gokaygokay
| 2024-05-16T21:13:34Z | 359 | 1 |
transformers
|
[
"transformers",
"safetensors",
"paligemma",
"pretraining",
"image-text-to-text",
"en",
"dataset:google/docci",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
image-text-to-text
| 2024-05-16T20:17:59Z |
---
license: apache-2.0
datasets:
- google/docci
language:
- en
library_name: transformers
pipeline_tag: image-text-to-text
---
Fine-tuned version of the [PaliGemma](https://huggingface.co/google/paligemma-3b-pt-224-jax) model on the [google/docci](https://huggingface.co/datasets/google/docci) dataset.
```
pip install git+https://github.com/huggingface/transformers
```
```python
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration
from PIL import Image
import requests
import torch
model_id = "gokaygokay/paligemma-docci-transformers"
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id).eval()
processor = AutoProcessor.from_pretrained(model_id)
## prefix
prompt = "caption en"
model_inputs = processor(text=prompt, images=image, return_tensors="pt")
input_len = model_inputs["input_ids"].shape[-1]
with torch.inference_mode():
    generation = model.generate(**model_inputs, max_new_tokens=256, do_sample=False)
    generation = generation[0][input_len:]
    decoded = processor.decode(generation, skip_special_tokens=True)
    print(decoded)
```
|
MaziyarPanahi/Ognoexperiment27multi_verse_modelNeuralsynthesis-7B-GGUF
|
MaziyarPanahi
| 2024-05-21T18:37:50Z | 359 | 0 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"base_model:Kukedlc/NeuralSynthesis-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:automerger/Ognoexperiment27multi_verse_modelNeuralsynthesis-7B"
] |
text-generation
| 2024-05-21T18:05:26Z |
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- merge
- mergekit
- lazymergekit
- automerger
- base_model:Kukedlc/NeuralSynthesis-7B-v0.1
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: Ognoexperiment27multi_verse_modelNeuralsynthesis-7B-GGUF
base_model: automerger/Ognoexperiment27multi_verse_modelNeuralsynthesis-7B
inference: false
model_creator: automerger
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Ognoexperiment27multi_verse_modelNeuralsynthesis-7B-GGUF](https://huggingface.co/MaziyarPanahi/Ognoexperiment27multi_verse_modelNeuralsynthesis-7B-GGUF)
- Model creator: [automerger](https://huggingface.co/automerger)
- Original model: [automerger/Ognoexperiment27multi_verse_modelNeuralsynthesis-7B](https://huggingface.co/automerger/Ognoexperiment27multi_verse_modelNeuralsynthesis-7B)
## Description
[MaziyarPanahi/Ognoexperiment27multi_verse_modelNeuralsynthesis-7B-GGUF](https://huggingface.co/MaziyarPanahi/Ognoexperiment27multi_verse_modelNeuralsynthesis-7B-GGUF) contains GGUF format model files for [automerger/Ognoexperiment27multi_verse_modelNeuralsynthesis-7B](https://huggingface.co/automerger/Ognoexperiment27multi_verse_modelNeuralsynthesis-7B).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
|
RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf
|
RichardErkhov
| 2024-05-21T23:20:54Z | 359 | 0 | null |
[
"gguf",
"region:us"
] | null | 2024-05-21T20:45:29Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
OmniCorso-7B - GGUF
- Model creator: https://huggingface.co/macadeliccc/
- Original model: https://huggingface.co/macadeliccc/OmniCorso-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [OmniCorso-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [OmniCorso-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [OmniCorso-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [OmniCorso-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [OmniCorso-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [OmniCorso-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [OmniCorso-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [OmniCorso-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [OmniCorso-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [OmniCorso-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [OmniCorso-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [OmniCorso-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [OmniCorso-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [OmniCorso-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [OmniCorso-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [OmniCorso-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [OmniCorso-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [OmniCorso-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [OmniCorso-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [OmniCorso-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [OmniCorso-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.Q6_K.gguf) | Q6_K | 5.53GB |
| [OmniCorso-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf/blob/main/OmniCorso-7B.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
license: cc
tags:
- mergekit
- merge
base_model:
- macadeliccc/MBX-7B-v3-DPO
- mlabonne/OmniBeagle-7B
model-index:
- name: OmniCorso-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.7
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/OmniCorso-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.7
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/OmniCorso-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.91
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/OmniCorso-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 73.43
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/OmniCorso-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.74
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/OmniCorso-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.96
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/OmniCorso-7B
name: Open LLM Leaderboard
---
# OmniCorso-7B

## Code Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("macadeliccc/OmniCorso-7B")
model = AutoModelForCausalLM.from_pretrained("macadeliccc/OmniCorso-7B")

messages = [
    {"role": "system", "content": "Respond to the user's request like a pirate"},
    {"role": "user", "content": "Can you write me a quicksort algorithm?"}
]
# Build the prompt with the chat template, generate, and decode only the new tokens
gen_input = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output_ids = model.generate(gen_input, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][gen_input.shape[-1]:], skip_special_tokens=True))
```
The following models were included in the merge:
* [macadeliccc/MBX-7B-v3-DPO](https://huggingface.co/macadeliccc/MBX-7B-v3-DPO)
* [mlabonne/OmniBeagle-7B](https://huggingface.co/mlabonne/OmniBeagle-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: mlabonne/OmniBeagle-7B
        layer_range: [0, 32]
      - model: macadeliccc/MBX-7B-v3-DPO
        layer_range: [0, 32]
merge_method: slerp
base_model: macadeliccc/MBX-7B-v3-DPO
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
## Quantizations
### GGUF
+ [iMatrix](https://huggingface.co/macadeliccc/OmniCorso-7B-GGUF)
### Exllamav2
Quants are available thanks to user bartowski, check them out [here](https://huggingface.co/bartowski/OmniCorso-7B-exl2)
| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/bartowski/OmniCorso-7B-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/OmniCorso-7B-exl2/tree/6_5) | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/bartowski/OmniCorso-7B-exl2/tree/5_0) | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/bartowski/OmniCorso-7B-exl2/tree/4_25) | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/bartowski/OmniCorso-7B-exl2/tree/3_5) | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to. |
## Evaluations
<pre>----Benchmark Complete----
2024-02-11 15:34:40
Time taken: 178.3 mins
Prompt Format: ChatML
Model: macadeliccc/OmniCorso-7B
Score (v2): 73.75
Parseable: 167.0
---------------
Batch completed
Time taken: 178.3 mins
---------------
</pre>
| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|---------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[OmniCorso-7B](https://huggingface.co/macadeliccc/OmniCorso-7B)| 45.89| 77.66| 74.12| 49.24| 61.73|
### AGIEval
| Task |Version| Metric |Value| |Stderr|
|------------------------------|------:|--------|----:|---|-----:|
|agieval_aqua_rat | 0|acc |29.13|± | 2.86|
| | |acc_norm|27.17|± | 2.80|
|agieval_logiqa_en | 0|acc |39.32|± | 1.92|
| | |acc_norm|39.63|± | 1.92|
|agieval_lsat_ar | 0|acc |23.91|± | 2.82|
| | |acc_norm|23.91|± | 2.82|
|agieval_lsat_lr | 0|acc |53.14|± | 2.21|
| | |acc_norm|53.92|± | 2.21|
|agieval_lsat_rc | 0|acc |66.54|± | 2.88|
| | |acc_norm|67.29|± | 2.87|
|agieval_sat_en | 0|acc |80.58|± | 2.76|
| | |acc_norm|80.58|± | 2.76|
|agieval_sat_en_without_passage| 0|acc |45.63|± | 3.48|
| | |acc_norm|43.69|± | 3.46|
|agieval_sat_math | 0|acc |33.18|± | 3.18|
| | |acc_norm|30.91|± | 3.12|
Average: 45.89%
### GPT4All
| Task |Version| Metric |Value| |Stderr|
|-------------|------:|--------|----:|---|-----:|
|arc_challenge| 0|acc |67.32|± | 1.37|
| | |acc_norm|68.43|± | 1.36|
|arc_easy | 0|acc |87.46|± | 0.68|
| | |acc_norm|83.50|± | 0.76|
|boolq | 1|acc |88.13|± | 0.57|
|hellaswag | 0|acc |68.47|± | 0.46|
| | |acc_norm|86.96|± | 0.34|
|openbookqa | 0|acc |38.80|± | 2.18|
| | |acc_norm|50.00|± | 2.24|
|piqa | 0|acc |83.03|± | 0.88|
| | |acc_norm|85.31|± | 0.83|
|winogrande | 0|acc |81.29|± | 1.10|
Average: 77.66%
### TruthfulQA
| Task |Version|Metric|Value| |Stderr|
|-------------|------:|------|----:|---|-----:|
|truthfulqa_mc| 1|mc1 |58.26|± | 1.73|
| | |mc2 |74.12|± | 1.43|
Average: 74.12%
### Bigbench
| Task |Version| Metric |Value| |Stderr|
|------------------------------------------------|------:|---------------------|----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|56.84|± | 3.60|
|bigbench_date_understanding | 0|multiple_choice_grade|63.41|± | 2.51|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|49.22|± | 3.12|
|bigbench_geometric_shapes | 0|multiple_choice_grade|23.96|± | 2.26|
| | |exact_str_match | 1.39|± | 0.62|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|34.20|± | 2.12|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|23.71|± | 1.61|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|60.33|± | 2.83|
|bigbench_movie_recommendation | 0|multiple_choice_grade|49.00|± | 2.24|
|bigbench_navigate | 0|multiple_choice_grade|55.20|± | 1.57|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|70.75|± | 1.02|
|bigbench_ruin_names | 0|multiple_choice_grade|55.80|± | 2.35|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|36.97|± | 1.53|
|bigbench_snarks | 0|multiple_choice_grade|72.38|± | 3.33|
|bigbench_sports_understanding | 0|multiple_choice_grade|76.27|± | 1.36|
|bigbench_temporal_sequences | 0|multiple_choice_grade|54.50|± | 1.58|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|23.12|± | 1.19|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|20.34|± | 0.96|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|60.33|± | 2.83|
Average: 49.24%
Average score: 61.73%
Elapsed time: 02:20:06
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_macadeliccc__OmniCorso-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.74|
|AI2 Reasoning Challenge (25-Shot)|72.70|
|HellaSwag (10-Shot) |88.70|
|MMLU (5-Shot) |64.91|
|TruthfulQA (0-shot) |73.43|
|Winogrande (5-shot) |83.74|
|GSM8k (5-shot) |70.96|
|
mayflowergmbh/Llama3-DiscoLeo-Instruct-8B-v0.1-GGUF
|
mayflowergmbh
| 2024-05-25T18:18:36Z | 359 | 0 |
transformers
|
[
"transformers",
"gguf",
"de",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-05-25T17:55:19Z |
---
language:
- de
license: llama3
library_name: transformers
tags:
- gguf
---
# Llama3-DiscoLeo-Instruct 8B (version 0.1)
## Thanks and Accreditation
[DiscoResearch/Llama3-DiscoLeo-Instruct-8B-v0.1](https://huggingface.co/collections/DiscoResearch/discoleo-8b-llama3-for-german-6650527496c0fafefd4c9729)
is the result of a joint effort between [DiscoResearch](https://huggingface.co/DiscoResearch) and [Occiglot](https://huggingface.co/occiglot)
with support from the [DFKI](https://www.dfki.de/web/) (German Research Center for Artificial Intelligence) and [hessian.Ai](https://hessian.ai).
Occiglot kindly handled data preprocessing, filtering, and deduplication as part of their latest [dataset release](https://huggingface.co/datasets/occiglot/occiglot-fineweb-v0.5), as well as sharing their compute allocation at hessian.Ai's 42 Supercomputer.
## Model Overview
Llama3_DiscoLeo_Instruct_8B_v0.1 is an instruction-tuned version of our [Llama3-German-8B](https://huggingface.co/DiscoResearch/Llama3_German_8B).
The base model was derived from [Meta's Llama3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) through continuous pretraining on 65 billion high-quality German tokens, similar to previous [LeoLM](https://huggingface.co/LeoLM) or [Occiglot](https://huggingface.co/collections/occiglot/occiglot-eu5-7b-v01-65dbed502a6348b052695e01) models.
We finetuned this checkpoint on the German Instruction dataset from DiscoResearch created by [Jan-Philipp Harries](https://huggingface.co/jphme) and [Daniel Auras](https://huggingface.co/rasdani) ([DiscoResearch](https://huggingface.co/DiscoResearch), [ellamind](https://ellamind.com)).
## How to use
Llama3_DiscoLeo_Instruct_8B_v0.1 uses the [Llama-3 chat template](https://github.com/meta-llama/llama3?tab=readme-ov-file#instruction-tuned-models), which can be easily used with [transformer's chat templating](https://huggingface.co/docs/transformers/main/en/chat_templating).
See [below](https://huggingface.co/DiscoResearch/Llama3_DiscoLeo_Instruct_8B_v0.1#usage-example) for a usage example.
## Model Training and Hyperparameters
The model was fully fine-tuned with axolotl on the [hessian.Ai 42](https://hessian.ai) supercomputer with a context length of 8192, a learning rate of 2e-5, and a batch size of 16.
## Evaluation and Results
We evaluated the model using a suite of common English Benchmarks and their German counterparts with [GermanBench](https://github.com/bjoernpl/GermanBenchmark).
In the image and corresponding table below, you can see the benchmark scores for the different instruct models compared to Meta's instruct version. All checkpoints are available in this [collection](https://huggingface.co/collections/DiscoResearch/discoleo-8b-llama3-for-german-6650527496c0fafefd4c9729).

| Model | truthful_qa_de | truthfulqa_mc | arc_challenge | arc_challenge_de | hellaswag | hellaswag_de | MMLU | MMLU-DE | mean |
|----------------------------------------------------|----------------|---------------|---------------|------------------|-------------|--------------|-------------|-------------|-------------|
| meta-llama/Meta-Llama-3-8B-Instruct | 0.47498 | 0.43923 | **0.59642** | 0.47952 | **0.82025** | 0.60008 | **0.66658** | 0.53541 | 0.57656 |
| DiscoResearch/Llama3-German-8B | 0.49499 | 0.44838 | 0.55802 | 0.49829 | 0.79924 | 0.65395 | 0.62240 | 0.54413 | 0.57743 |
| DiscoResearch/Llama3-German-8B-32k | 0.48920 | 0.45138 | 0.54437 | 0.49232 | 0.79078 | 0.64310 | 0.58774 | 0.47971 | 0.55982 |
| **DiscoResearch/Llama3-DiscoLeo-Instruct-8B-v0.1** | **0.53042** | 0.52867 | 0.59556 | **0.53839** | 0.80721 | 0.66440 | 0.61898 | 0.56053 | **0.60552** |
| DiscoResearch/Llama3-DiscoLeo-Instruct-8B-32k-v0.1| 0.52749 | **0.53245** | 0.58788 | 0.53754 | 0.80770 | **0.66709** | 0.62123 | **0.56238** | 0.60547 |
## Model Configurations
We release DiscoLeo-8B in the following configurations:
1. [Base model with continued pretraining](https://huggingface.co/DiscoResearch/Llama3_German_8B)
2. [Long-context version (32k context length)](https://huggingface.co/DiscoResearch/Llama3_German_8B_32k)
3. [Instruction-tuned version of the base model](https://huggingface.co/DiscoResearch/Llama3_DiscoLeo_Instruct_8B_v0.1) (This model)
4. [Instruction-tuned version of the long-context model](https://huggingface.co/DiscoResearch/Llama3_DiscoLeo_Instruct_8B_32k_v0.1)
5. [Experimental `DARE-TIES` Merge with Llama3-Instruct](https://huggingface.co/DiscoResearch/Llama3_DiscoLeo_8B_DARE_Experimental)
6. [Collection of Quantized versions](https://huggingface.co/collections/DiscoResearch/discoleo-8b-quants-6651bcf8f72c9a37ce485d42)
## Usage Example
Here's how to use the model with transformers:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained(
"DiscoResearch/Llama3-DiscoLeo-Instruct-8B-v0.1",
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("DiscoResearch/Llama3-DiscoLeo-Instruct-8B-v0.1")
prompt = "Schreibe ein Essay über die Bedeutung der Energiewende für Deutschlands Wirtschaft"
messages = [
{"role": "system", "content": "Du bist ein hilfreicher Assistent."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
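Since this repository ships GGUF files, the same prompt can also be run through `llama-cpp-python`; a minimal sketch (the file name and the `chat_format` choice are assumptions):
```python
from llama_cpp import Llama

# File name is illustrative; use one of the GGUF files shipped in this repository
llm = Llama(
    model_path="Llama3-DiscoLeo-Instruct-8B-v0.1.Q4_K_M.gguf",
    n_ctx=8192,
    chat_format="llama-3",
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "Du bist ein hilfreicher Assistent."},
        {"role": "user", "content": "Schreibe ein Essay über die Bedeutung der Energiewende für Deutschlands Wirtschaft"},
    ],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```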
## Acknowledgements
The model was trained and evaluated by [Björn Plüster](https://huggingface.co/bjoernp) ([DiscoResearch](https://huggingface.co/DiscoResearch), [ellamind](https://ellamind.com)) with data preparation and project supervision by [Manuel Brack](http://manuel-brack.eu) ([DFKI](https://www.dfki.de/web/), [TU-Darmstadt](https://www.tu-darmstadt.de/)). Initial work on dataset collection and curation was performed by [Malte Ostendorff](https://ostendorff.org) and [Pedro Ortiz Suarez](https://portizs.eu). Instruction tuning was done with the DiscoLM German dataset created by [Jan-Philipp Harries](https://huggingface.co/jphme) and [Daniel Auras](https://huggingface.co/rasdani) ([DiscoResearch](https://huggingface.co/DiscoResearch), [ellamind](https://ellamind.com)). We extend our gratitude to [LAION](https://laion.ai/) and friends, especially [Christoph Schuhmann](https://entwickler.de/experten/christoph-schuhmann) and [Jenia Jitsev](https://huggingface.co/JJitsev), for initiating this collaboration.
The model training was supported by a compute grant at the [42 supercomputer](https://hessian.ai/) which is a central component in the development of [hessian AI](https://hessian.ai/), the [AI Innovation Lab](https://hessian.ai/infrastructure/ai-innovationlab/) (funded by the [Hessian Ministry of Higher Education, Research and the Art (HMWK)](https://wissenschaft.hessen.de) & the [Hessian Ministry of the Interior, for Security and Homeland Security (HMinD)](https://innen.hessen.de)) and the [AI Service Centers](https://hessian.ai/infrastructure/ai-service-centre/) (funded by the [German Federal Ministry for Economic Affairs and Climate Action (BMWK)](https://www.bmwk.de/Navigation/EN/Home/home.html)).
The curation of the training data is partially funded by the [German Federal Ministry for Economic Affairs and Climate Action (BMWK)](https://www.bmwk.de/Navigation/EN/Home/home.html)
through the project [OpenGPT-X](https://opengpt-x.de/en/) (project no. 68GX21007D).
|
rarayayan/testftargpt2
|
rarayayan
| 2024-06-03T16:59:27Z | 359 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-06-03T16:47:29Z |
---
license: apache-2.0
---
|
alvdansen/paintinglight
|
alvdansen
| 2024-06-16T16:31:05Z | 359 | 8 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-06-06T17:32:46Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: a small boy in a sailor suit, daiton
output:
url: images/ComfyUI_00349_.png
- text: a prince
output:
url: images/ComfyUI_00343_.png
- text: a lady in a red dress, daiton
output:
url: images/ComfyUI_00347_.png
- text: a man with hair like fire
output:
url: images/Soft_Illustration_e000015_01_20240606142553.jpeg
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: daiton
license: creativeml-openrail-m
---
# Painting Light
<Gallery />
## Model description
A watercolor-style painting model that handles impressionism well and also lends itself to anime subjects.
## Trigger words
You should use `daiton` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
Model release is for research purposes only. For commercial use, please contact me directly.
[Download](/alvdansen/paintinglight/tree/main) them in the Files & versions tab.
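For quick experimentation, the LoRA can be loaded on top of the SDXL base model with `diffusers`. The snippet below is a minimal sketch, not an official example: it assumes a CUDA device and that `load_lora_weights` can discover the weight file in this repo automatically; if it cannot, pass `weight_name="<file>.safetensors"` with the filename from the Files tab.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA from this repo (filename auto-discovery is an assumption)
pipe.load_lora_weights("alvdansen/paintinglight")

# Remember to include the trigger word "daiton" in the prompt
image = pipe("a lady in a red dress, daiton", num_inference_steps=28).images[0]
image.save("paintinglight_sample.png")
```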
|
gglabs/TinyLM-FC-0-epoch
|
gglabs
| 2024-06-09T20:15:05Z | 359 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/tinyllama-chat-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-09T20:12:16Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/tinyllama-chat-bnb-4bit
---
# Uploaded model
- **Developed by:** gglabs
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-chat-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
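Since the repository ships GGUF weights, they can also be run locally with `llama-cpp-python` once a file has been downloaded. This is only a rough sketch: the filename below is hypothetical (use the actual GGUF file from the Files tab), and the plain-text prompt ignores the TinyLlama chat template, which you may want to apply for best results.

```python
from llama_cpp import Llama

# "tinylm-fc-0-epoch.gguf" is a placeholder filename -- replace with the real file from this repo
llm = Llama(model_path="./tinylm-fc-0-epoch.gguf", n_ctx=2048)

output = llm(
    "Question: What is the capital of France?\nAnswer:",
    max_tokens=64,
    stop=["\n"],
)
print(output["choices"][0]["text"])
```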
|
CHE-72/Phi-3-mini-128k-instruct-Q8_0-GGUF
|
CHE-72
| 2024-06-22T16:46:10Z | 359 | 0 | null |
[
"gguf",
"nlp",
"code",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:microsoft/Phi-3-mini-128k-instruct",
"license:mit",
"region:us"
] |
text-generation
| 2024-06-22T16:45:53Z |
---
base_model: microsoft/Phi-3-mini-128k-instruct
language:
- en
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
tags:
- nlp
- code
- llama-cpp
- gguf-my-repo
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
---
# CHE-72/Phi-3-mini-128k-instruct-Q8_0-GGUF
This model was converted to GGUF format from [`microsoft/Phi-3-mini-128k-instruct`](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CHE-72/Phi-3-mini-128k-instruct-Q8_0-GGUF --hf-file phi-3-mini-128k-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CHE-72/Phi-3-mini-128k-instruct-Q8_0-GGUF --hf-file phi-3-mini-128k-instruct-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CHE-72/Phi-3-mini-128k-instruct-Q8_0-GGUF --hf-file phi-3-mini-128k-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CHE-72/Phi-3-mini-128k-instruct-Q8_0-GGUF --hf-file phi-3-mini-128k-instruct-q8_0.gguf -c 2048
```
|
stablediffusionapi/sd3-medium
|
stablediffusionapi
| 2024-06-25T10:05:39Z | 359 | 0 |
diffusers
|
[
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-06-25T10:02:51Z |
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# sd3 medium API Inference

## Get API Key
Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.
Replace Key in below code, change **model_id** to "sd3-medium"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://docs.modelslab.com)
Try model for free: [Generate Images](https://modelslab.com/models/sd3-medium)
Model link: [View model](https://modelslab.com/models/sd3-medium)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "sd3-medium",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
Callidior/bert2bert-base-arxiv-titlegen
|
Callidior
| 2023-04-26T19:42:59Z | 358 | 11 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"encoder-decoder",
"text2text-generation",
"summarization",
"en",
"dataset:arxiv_dataset",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:04Z |
---
language:
- en
tags:
- summarization
license: apache-2.0
datasets:
- arxiv_dataset
metrics:
- rouge
widget:
- text: "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data."
---
# Paper Title Generator
Generates titles for computer science papers given an abstract.
The model is a BERT2BERT Encoder-Decoder using the official `bert-base-uncased` checkpoint as initialization for the encoder and decoder.
It was fine-tuned on 318,500 computer science papers posted on arXiv.org between 2007 and 2022 and achieved a 26.3% Rouge2 F1-Score on held-out validation data.
**Live Demo:** [https://paper-titles.ey.r.appspot.com/](https://paper-titles.ey.r.appspot.com/)
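Since the model is exposed under the `summarization` pipeline tag, a minimal way to try it locally (a sketch, not an official snippet; `max_length` and `num_beams` are illustrative choices) is:

```python
from transformers import pipeline

# The encoder-decoder checkpoint loads through the summarization pipeline
titler = pipeline("summarization", model="Callidior/bert2bert-base-arxiv-titlegen")

abstract = (
    "The dominant sequence transduction models are based on complex recurrent or "
    "convolutional neural networks in an encoder-decoder configuration. We propose "
    "a new simple network architecture, the Transformer, based solely on attention "
    "mechanisms, dispensing with recurrence and convolutions entirely."
)
# The generated "summary" is the suggested paper title
print(titler(abstract, max_length=32, num_beams=4)[0]["summary_text"])
```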
|
ahmedrachid/FinancialBERT
|
ahmedrachid
| 2022-02-07T15:00:03Z | 358 | 23 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: en
widget:
- text: Tesla remains one of the highest [MASK] stocks on the market. Meanwhile, Aurora Innovation is a pre-revenue upstart that shows promise.
- text: Asian stocks [MASK] from a one-year low on Wednesday as U.S. share futures and oil recovered from the previous day's selloff, but uncertainty over the impact of the Omicron
- text: U.S. stocks were set to rise on Monday, led by [MASK] in Apple which neared $3 trillion in market capitalization, while investors braced for a Federal Reserve meeting later this week.
tags:
- fill-mask
---
**FinancialBERT** is a BERT model pre-trained on a large corpus of financial texts. Its purpose is to advance NLP research and practice in the financial domain, so that financial practitioners and researchers can benefit from it without needing the significant computational resources required to train such a model from scratch.
The model was trained on a large corpus of financial texts:
- *TRC2-financial*: 1.8M news articles that were published by Reuters between 2008 and 2010.
- *Bloomberg News*: 400,000 articles between 2006 and 2013.
- *Corporate Reports*: 192,000 transcripts (10-K & 10-Q)
- *Earning Calls*: 42,156 documents.
More details on `FinancialBERT` can be found at: https://www.researchgate.net/publication/358284785_FinancialBERT_-_A_Pretrained_Language_Model_for_Financial_Text_Mining
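A quick way to try the model is the `fill-mask` pipeline; the snippet below is a minimal sketch using one of the widget examples above:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="ahmedrachid/FinancialBERT")

# Print the top predictions for the masked token with their scores
for prediction in fill_mask(
    "Tesla remains one of the highest [MASK] stocks on the market."
):
    print(prediction["token_str"], round(prediction["score"], 4))
```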
> Created by [Ahmed Rachid Hazourli](https://www.linkedin.com/in/ahmed-rachid/)
|
nsi319/legal-pegasus
|
nsi319
| 2021-03-11T08:50:52Z | 358 | 10 |
transformers
|
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"summarization",
"en",
"license:mit",
"autotrain_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:05Z |
---
language: en
tags: summarization
metrics:
- rouge
- precision
inference: false
license: mit
---
## PEGASUS for legal document summarization
**legal-pegasus** is a fine-tuned version of ([**google/pegasus-cnn_dailymail**](https://huggingface.co/google/pegasus-cnn_dailymail)) for the **legal domain**, trained to perform the **abstractive summarization** task. The maximum input sequence length is 1024 tokens.
## Training data
This model was trained on the [**sec-litigation-releases**](https://www.sec.gov/litigation/litreleases.htm) dataset, consisting of more than 2,700 litigation releases and complaints.
## How to use
```Python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("nsi319/legal-pegasus")
model = AutoModelForSeq2SeqLM.from_pretrained("nsi319/legal-pegasus")
text = """On March 5, 2021, the Securities and Exchange Commission charged AT&T, Inc. with repeatedly violating Regulation FD, and three of its Investor Relations executives with aiding and abetting AT&T's violations, by selectively disclosing material nonpublic information to research analysts. According to the SEC's complaint, AT&T learned in March 2016 that a steeper-than-expected decline in its first quarter smartphone sales would cause AT&T's revenue to fall short of analysts' estimates for the quarter. The complaint alleges that to avoid falling short of the consensus revenue estimate for the third consecutive quarter, AT&T Investor Relations executives Christopher Womack, Michael Black, and Kent Evans made private, one-on-one phone calls to analysts at approximately 20 separate firms. On these calls, the AT&T executives allegedly disclosed AT&T's internal smartphone sales data and the impact of that data on internal revenue metrics, despite the fact that internal documents specifically informed Investor Relations personnel that AT&T's revenue and sales of smartphones were types of information generally considered "material" to AT&T investors, and therefore prohibited from selective disclosure under Regulation FD. The complaint further alleges that as a result of what they were told on these calls, the analysts substantially reduced their revenue forecasts, leading to the overall consensus revenue estimate falling to just below the level that AT&T ultimately reported to the public on April 26, 2016. The SEC's complaint, filed in federal district court in Manhattan, charges AT&T with violations of the disclosure provisions of Section 13(a) of the Securities Exchange Act of 1934 and Regulation FD thereunder, and charges Womack, Evans and Black with aiding and abetting these violations. The complaint seeks permanent injunctive relief and civil monetary penalties against each defendant. The SEC's investigation was conducted by George N. Stepaniuk, Thomas Peirce, and David Zetlin-Jones of the SEC's New York Regional Office. The SEC's litigation will be conducted by Alexander M. Vasilescu, Victor Suthammanont, and Mr. Zetlin-Jones. The case is being supervised by Sanjay Wadhwa."""
input_tokenized = tokenizer.encode(text, return_tensors='pt',max_length=1024,truncation=True)
summary_ids = model.generate(input_tokenized,
num_beams=9,
no_repeat_ngram_size=3,
length_penalty=2.0,
min_length=150,
max_length=250,
early_stopping=True)
summary = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids][0]
### Summary Output
# The Securities and Exchange Commission today charged AT&T, Inc. and three of its Investor Relations executives with aiding and abetting the company's violations of the antifraud provisions of Section 10(b) of the Securities Exchange Act of 1934 and Rule 10b-5 thereunder. According to the SEC's complaint, the company learned in March 2016 that a steeper-than-expected decline in its first quarter smartphone sales would cause its revenue to fall short of analysts' estimates for the quarter. The complaint alleges that to avoid falling short of the consensus revenue estimate for the third consecutive quarter, the executives made private, one-on-one phone calls to analysts at approximately 20 separate firms. On these calls, the SEC alleges that Christopher Womack, Michael Black, and Kent Evans allegedly disclosed internal smartphone sales data and the impact of that data on internal revenue metrics. The SEC further alleges that as a result of what they were told, the analysts substantially reduced their revenue forecasts, leading to the overall consensus Revenue Estimate falling to just below the level that AT&t ultimately reported to the public on April 26, 2016. The SEC is seeking permanent injunctive relief and civil monetary penalties against each defendant.
```
## Evaluation results
| Model | rouge1 | rouge1-precision | rouge2 | rouge2-precision | rougeL | rougeL-precision |
|:-----------:|:-----:|:-----:|:------:|:-----:|:------:|:-----:|
| legal-pegasus | **57.39** | **62.97** | **26.85** | **28.42** | **30.91** | **33.22** |
| pegasus-cnn_dailymail | 43.16 | 45.68 | 13.75 | 14.56 | 18.82 | 20.07 |
|
timm/mobilevitv2_150.cvnets_in22k_ft_in1k
|
timm
| 2023-04-24T22:25:04Z | 358 | 0 |
timm
|
[
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2206.02680",
"license:other",
"region:us"
] |
image-classification
| 2023-04-24T22:24:46Z |
---
tags:
- image-classification
- timm
library_name: timm
license: other
datasets:
- imagenet-1k
---
# Model card for mobilevitv2_150.cvnets_in22k_ft_in1k
A MobileViT-v2 image classification model. Pretrained on ImageNet-22k and fine-tuned on ImageNet-1k by paper authors.
See license details at https://github.com/apple/ml-cvnets/blob/main/LICENSE
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 10.6
- GMACs: 4.1
- Activations (M): 24.1
- Image size: 256 x 256
- **Papers:**
- Separable Self-attention for Mobile Vision Transformers: https://arxiv.org/abs/2206.02680
- **Original:** https://github.com/apple/ml-cvnets
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('mobilevitv2_150.cvnets_in22k_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'mobilevitv2_150.cvnets_in22k_ft_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 96, 128, 128])
# torch.Size([1, 192, 64, 64])
# torch.Size([1, 384, 32, 32])
# torch.Size([1, 576, 16, 16])
# torch.Size([1, 768, 8, 8])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'mobilevitv2_150.cvnets_in22k_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 768, 8, 8) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{Mehta2022SeparableSF,
title={Separable Self-attention for Mobile Vision Transformers},
author={Sachin Mehta and Mohammad Rastegari},
journal={ArXiv},
year={2022},
volume={abs/2206.02680}
}
```
|
llm-book/bert-base-japanese-v3-wrime-sentiment
|
llm-book
| 2023-07-29T07:10:03Z | 358 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-29T07:09:13Z |
Entry not found
|
TheBloke/MistRP-Airoboros-7B-GGUF
|
TheBloke
| 2023-10-27T21:29:37Z | 358 | 5 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"not-for-all-audiences",
"nsfw",
"base_model:Undi95/MistRP-Airoboros-7B",
"license:cc-by-nc-4.0",
"text-generation-inference",
"region:us"
] | null | 2023-10-27T21:21:50Z |
---
base_model: Undi95/MistRP-Airoboros-7B
inference: false
license: cc-by-nc-4.0
model_creator: Undi
model_name: Mistrp Airoboros 7B
model_type: mistral
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
tags:
- not-for-all-audiences
- nsfw
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Mistrp Airoboros 7B - GGUF
- Model creator: [Undi](https://huggingface.co/Undi95)
- Original model: [Mistrp Airoboros 7B](https://huggingface.co/Undi95/MistRP-Airoboros-7B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Undi's Mistrp Airoboros 7B](https://huggingface.co/Undi95/MistRP-Airoboros-7B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/MistRP-Airoboros-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/MistRP-Airoboros-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/MistRP-Airoboros-7B-GGUF)
* [Undi's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Undi95/MistRP-Airoboros-7B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [mistrp-airoboros-7b.Q2_K.gguf](https://huggingface.co/TheBloke/MistRP-Airoboros-7B-GGUF/blob/main/mistrp-airoboros-7b.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [mistrp-airoboros-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/MistRP-Airoboros-7B-GGUF/blob/main/mistrp-airoboros-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [mistrp-airoboros-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/MistRP-Airoboros-7B-GGUF/blob/main/mistrp-airoboros-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [mistrp-airoboros-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/MistRP-Airoboros-7B-GGUF/blob/main/mistrp-airoboros-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [mistrp-airoboros-7b.Q4_0.gguf](https://huggingface.co/TheBloke/MistRP-Airoboros-7B-GGUF/blob/main/mistrp-airoboros-7b.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mistrp-airoboros-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/MistRP-Airoboros-7B-GGUF/blob/main/mistrp-airoboros-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [mistrp-airoboros-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/MistRP-Airoboros-7B-GGUF/blob/main/mistrp-airoboros-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [mistrp-airoboros-7b.Q5_0.gguf](https://huggingface.co/TheBloke/MistRP-Airoboros-7B-GGUF/blob/main/mistrp-airoboros-7b.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [mistrp-airoboros-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/MistRP-Airoboros-7B-GGUF/blob/main/mistrp-airoboros-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [mistrp-airoboros-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/MistRP-Airoboros-7B-GGUF/blob/main/mistrp-airoboros-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [mistrp-airoboros-7b.Q6_K.gguf](https://huggingface.co/TheBloke/MistRP-Airoboros-7B-GGUF/blob/main/mistrp-airoboros-7b.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [mistrp-airoboros-7b.Q8_0.gguf](https://huggingface.co/TheBloke/MistRP-Airoboros-7B-GGUF/blob/main/mistrp-airoboros-7b.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/MistRP-Airoboros-7B-GGUF and below it, a specific filename to download, such as: mistrp-airoboros-7b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/MistRP-Airoboros-7B-GGUF mistrp-airoboros-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/MistRP-Airoboros-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/MistRP-Airoboros-7B-GGUF mistrp-airoboros-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m mistrp-airoboros-7b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/MistRP-Airoboros-7B-GGUF", model_file="mistrp-airoboros-7b.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Undi's Mistrp Airoboros 7B
## Description
This repo contains fp16 files of MistRP-Airoboros-7B.
<!-- description end -->
<!-- description start -->
## Models and loras used
- [teknium/airoboros-mistral2.2-7b](https://huggingface.co/teknium/airoboros-mistral2.2-7b)
- [Vulkane/120-Days-of-Sodom-LoRA-Mistral-7b](https://huggingface.co/Vulkane/120-Days-of-Sodom-LoRA-Mistral-7b)
- [Undi95/Mistral-pippa-sharegpt-7b-qlora](https://huggingface.co/Undi95/Mistral-pippa-sharegpt-7b-qlora)
- [lemonilia/LimaRP-Mistral-7B-v0.1](https://huggingface.co/lemonilia/LimaRP-Mistral-7B-v0.1)
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
## LimaRP v3 usage and suggested settings

You can follow these instruction format settings in SillyTavern. Replace tiny with your desired response length:

If you want to support me, you can [here](https://ko-fi.com/undiai).
<!-- original-model-card end -->
|
Mozilla/Mistral-7B-Instruct-v0.2-llamafile
|
Mozilla
| 2024-05-25T10:47:13Z | 358 | 23 |
transformers
|
[
"transformers",
"gguf",
"llamafile",
"mistral",
"finetuned",
"text-generation",
"arxiv:2310.06825",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-12-28T18:14:56Z |
---
base_model: mistralai/Mistral-7B-Instruct-v0.2
inference: false
license: apache-2.0
model_creator: Mistral AI_
model_name: Mistral 7B Instruct v0.2
model_type: mistral
pipeline_tag: text-generation
prompt_template: |
<s>[INST] {prompt} [/INST]
quantized_by: jartine
tags:
- finetuned
- llamafile
---
# Mistral 7B Instruct v0.2 - llamafile
- Model creator: [Mistral AI_](https://huggingface.co/mistralai)
- Original model: [Mistral 7B Instruct v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
<!-- description start -->
## Description
This repo contains llamafile format model files for [Mistral AI_'s Mistral 7B Instruct v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
WARNING: This README may contain inaccuracies. It was generated automatically by forking <a href=/TheBloke/Mistral-7B-Instruct-v0.2-GGUF>TheBloke/Mistral-7B-Instruct-v0.2-GGUF</a> and piping the README through sed. Errors should be reported to jartine, and do not reflect TheBloke. You can also support his work on [Patreon](https://www.patreon.com/TheBlokeAI).
<!-- README_llamafile.md-about-llamafile start -->
### About llamafile
llamafile is a new format introduced by Mozilla Ocho on Nov 20th 2023. It uses Cosmopolitan Libc to turn LLM weights into runnable llama.cpp binaries that run on the stock installs of six OSes for both ARM64 and AMD64.
Here is an incomplete list of clients and libraries that are known to support llamafile:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for llamafile. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_llamafile.md-about-llamafile end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/jartine/Mistral-7B-Instruct-v0.2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/jartine/Mistral-7B-Instruct-v0.2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit llamafile models for CPU+GPU inference](https://huggingface.co/jartine/Mistral-7B-Instruct-v0.2-llamafile)
* [Mistral AI_'s original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Mistral
```
<s>[INST] {prompt} [/INST]
```
<!-- prompt-template end -->
<!-- compatibility_llamafile start -->
## Compatibility
These quantised llamafilev2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_llamafile end -->
<!-- README_llamafile.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [mistral-7b-instruct-v0.2.Q2_K.llamafile](https://huggingface.co/jartine/Mistral-7B-Instruct-v0.2-llamafile/blob/main/mistral-7b-instruct-v0.2.Q2_K.llamafile) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [mistral-7b-instruct-v0.2.Q3_K_S.llamafile](https://huggingface.co/jartine/Mistral-7B-Instruct-v0.2-llamafile/blob/main/mistral-7b-instruct-v0.2.Q3_K_S.llamafile) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [mistral-7b-instruct-v0.2.Q3_K_M.llamafile](https://huggingface.co/jartine/Mistral-7B-Instruct-v0.2-llamafile/blob/main/mistral-7b-instruct-v0.2.Q3_K_M.llamafile) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [mistral-7b-instruct-v0.2.Q3_K_L.llamafile](https://huggingface.co/jartine/Mistral-7B-Instruct-v0.2-llamafile/blob/main/mistral-7b-instruct-v0.2.Q3_K_L.llamafile) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [mistral-7b-instruct-v0.2.Q4_0.llamafile](https://huggingface.co/jartine/Mistral-7B-Instruct-v0.2-llamafile/blob/main/mistral-7b-instruct-v0.2.Q4_0.llamafile) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mistral-7b-instruct-v0.2.Q4_K_S.llamafile](https://huggingface.co/jartine/Mistral-7B-Instruct-v0.2-llamafile/blob/main/mistral-7b-instruct-v0.2.Q4_K_S.llamafile) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [mistral-7b-instruct-v0.2.Q4_K_M.llamafile](https://huggingface.co/jartine/Mistral-7B-Instruct-v0.2-llamafile/blob/main/mistral-7b-instruct-v0.2.Q4_K_M.llamafile) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [mistral-7b-instruct-v0.2.Q5_0.llamafile](https://huggingface.co/jartine/Mistral-7B-Instruct-v0.2-llamafile/blob/main/mistral-7b-instruct-v0.2.Q5_0.llamafile) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [mistral-7b-instruct-v0.2.Q5_K_S.llamafile](https://huggingface.co/jartine/Mistral-7B-Instruct-v0.2-llamafile/blob/main/mistral-7b-instruct-v0.2.Q5_K_S.llamafile) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [mistral-7b-instruct-v0.2.Q5_K_M.llamafile](https://huggingface.co/jartine/Mistral-7B-Instruct-v0.2-llamafile/blob/main/mistral-7b-instruct-v0.2.Q5_K_M.llamafile) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [mistral-7b-instruct-v0.2.Q6_K.llamafile](https://huggingface.co/jartine/Mistral-7B-Instruct-v0.2-llamafile/blob/main/mistral-7b-instruct-v0.2.Q6_K.llamafile) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [mistral-7b-instruct-v0.2.Q8_0.llamafile](https://huggingface.co/jartine/Mistral-7B-Instruct-v0.2-llamafile/blob/main/mistral-7b-instruct-v0.2.Q8_0.llamafile) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_llamafile.md-provided-files end -->
<!-- README_llamafile.md-how-to-download start -->
## How to download llamafile files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: jartine/Mistral-7B-Instruct-v0.2-llamafile and below it, a specific filename to download, such as: mistral-7b-instruct-v0.2.Q4_K_M.llamafile.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download jartine/Mistral-7B-Instruct-v0.2-llamafile mistral-7b-instruct-v0.2.Q4_K_M.llamafile --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download jartine/Mistral-7B-Instruct-v0.2-llamafile --local-dir . --local-dir-use-symlinks False --include='*Q4_K*llamafile'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download jartine/Mistral-7B-Instruct-v0.2-llamafile mistral-7b-instruct-v0.2.Q4_K_M.llamafile --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_llamafile.md-how-to-download end -->
<!-- README_llamafile.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m mistral-7b-instruct-v0.2.Q4_K_M.llamafile --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt} [/INST]"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the llamafile file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use llamafile models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./mistral-7b-instruct-v0.2.Q4_K_M.llamafile", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<s>[INST] {prompt} [/INST]", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./mistral-7b-instruct-v0.2.Q4_K_M.llamafile", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_llamafile.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[jartine AI's Discord server](https://discord.gg/FwAVVu7eJ4)
## Thanks, and how to contribute
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
And thank you again to mozilla for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Mistral AI_'s Mistral 7B Instruct v0.2
# Model Card for Mistral-7B-Instruct-v0.2
The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an improved instruct fine-tuned version of [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1).
For full details of this model please read our [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/la-plateforme/).
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin of sentence id. The next instructions should not. The assistant generation will be ended by the end-of-sentence token id.
E.g.
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```
This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## Model Architecture
This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
## Troubleshooting
- If you see the following error:
```
Traceback (most recent call last):
File "", line 1, in
File "/transformers/models/auto/auto_factory.py", line 482, in from_pretrained
config, kwargs = AutoConfig.from_pretrained(
File "/transformers/models/auto/configuration_auto.py", line 1022, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
File "/transformers/models/auto/configuration_auto.py", line 723, in getitem
raise KeyError(key)
KeyError: 'mistral'
```
Installing transformers from source should solve the issue:
```
pip install git+https://github.com/huggingface/transformers
```
This should not be required after transformers-v4.33.4.
## Limitations
The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
<!-- original-model-card end -->
|
KoboldAI/Mistral-7B-Holodeck-1-GGUF
|
KoboldAI
| 2024-01-15T17:23:13Z | 358 | 6 | null |
[
"gguf",
"pytorch",
"mistral",
"finetuned",
"text-generation",
"en",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2024-01-14T12:10:33Z |
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
inference: true
tags:
- pytorch
- mistral
- finetuned
---
# Mistral 7B - Holodeck
This is the GGUF version of this model, designed to be used with Koboldcpp, the easy and versatile program for AI co-writing and more. Because this is a co-writing model, you are expected to use the Story Mode inside the KoboldAI Lite interface.
## Model Description
Mistral 7B-Holodeck is a finetune created using Mistral's 7B model.
## Training data
The training data contains around 3000 ebooks in various genres.
Most parts of the dataset have been prepended using the following text: `[Genre: <genre1>, <genre2>]`
### Limitations and Biases
Based on known problems with NLP technology, potential relevant factors include bias (gender, profession, race and religion).
|
diffusionbee/fooocus_inpainting
|
diffusionbee
| 2024-02-03T22:39:33Z | 358 | 3 |
diffusers
|
[
"diffusers",
"diffusers:StableDiffusionXLInpaintPipeline",
"region:us"
] |
image-to-image
| 2024-02-03T18:34:07Z |
Entry not found
|
MatanAvitan/gtr__nq__64_bios__correct
|
MatanAvitan
| 2024-05-07T10:43:09Z | 358 | 0 |
transformers
|
[
"transformers",
"safetensors",
"endpoints_compatible",
"region:us"
] | null | 2024-02-11T10:55:53Z |
Entry not found
|
ChrisWilson011016/5Dd44yn8biPcWPD95iBQFqoHtrQTSmhMUgpMb46V6pCMwqt6_vgg
|
ChrisWilson011016
| 2024-03-04T19:03:19Z | 358 | 0 |
keras
|
[
"keras",
"region:us"
] | null | 2024-02-29T13:02:15Z |
Entry not found
|
ven1228/5FpfnWPoQfe785YWWaFF4BTwj3c1MyXhBGWCnDFzP6v12Ysh_vgg
|
ven1228
| 2024-03-11T12:49:12Z | 358 | 0 |
keras
|
[
"keras",
"region:us"
] | null | 2024-03-05T20:48:44Z |
Entry not found
|
binhnx8/my_whisper
|
binhnx8
| 2024-03-19T08:39:25Z | 358 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-03-19T08:01:40Z |
Entry not found
|
Aarifkhan/3d
|
Aarifkhan
| 2024-04-14T14:04:08Z | 358 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"NSFW",
"lora",
"base_model:UnfilteredAI/NSFW-GEN-ANIME",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2024-04-14T13:59:24Z |
---
license: apache-2.0
tags:
- text-to-image
- NSFW
- lora
- diffusers
base_model: UnfilteredAI/NSFW-GEN-ANIME
instance_prompt: 3d style, 3d, 3d render, anime
---
|
RichardErkhov/google_-_codegemma-7b-gguf
|
RichardErkhov
| 2024-04-14T20:36:58Z | 358 | 0 | null |
[
"gguf",
"region:us"
] | null | 2024-04-14T17:14:28Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
codegemma-7b - GGUF
- Model creator: https://huggingface.co/google/
- Original model: https://huggingface.co/google/codegemma-7b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [codegemma-7b.Q2_K.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-gguf/blob/main/codegemma-7b.Q2_K.gguf) | Q2_K | 3.24GB |
| [codegemma-7b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-gguf/blob/main/codegemma-7b.IQ3_XS.gguf) | IQ3_XS | 3.54GB |
| [codegemma-7b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-gguf/blob/main/codegemma-7b.IQ3_S.gguf) | IQ3_S | 3.71GB |
| [codegemma-7b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-gguf/blob/main/codegemma-7b.Q3_K_S.gguf) | Q3_K_S | 3.71GB |
| [codegemma-7b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-gguf/blob/main/codegemma-7b.IQ3_M.gguf) | IQ3_M | 3.82GB |
| [codegemma-7b.Q3_K.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-gguf/blob/main/codegemma-7b.Q3_K.gguf) | Q3_K | 4.07GB |
| [codegemma-7b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-gguf/blob/main/codegemma-7b.Q3_K_M.gguf) | Q3_K_M | 4.07GB |
| [codegemma-7b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-gguf/blob/main/codegemma-7b.Q3_K_L.gguf) | Q3_K_L | 4.39GB |
| [codegemma-7b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-gguf/blob/main/codegemma-7b.IQ4_XS.gguf) | IQ4_XS | 4.48GB |
| [codegemma-7b.Q4_0.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-gguf/blob/main/codegemma-7b.Q4_0.gguf) | Q4_0 | 4.67GB |
| [codegemma-7b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-gguf/blob/main/codegemma-7b.IQ4_NL.gguf) | IQ4_NL | 4.69GB |
| [codegemma-7b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-gguf/blob/main/codegemma-7b.Q4_K_S.gguf) | Q4_K_S | 4.7GB |
| [codegemma-7b.Q4_K.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-gguf/blob/main/codegemma-7b.Q4_K.gguf) | Q4_K | 4.96GB |
| [codegemma-7b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-gguf/blob/main/codegemma-7b.Q4_K_M.gguf) | Q4_K_M | 4.96GB |
| [codegemma-7b.Q4_1.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-gguf/blob/main/codegemma-7b.Q4_1.gguf) | Q4_1 | 5.12GB |
| [codegemma-7b.Q5_0.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-gguf/blob/main/codegemma-7b.Q5_0.gguf) | Q5_0 | 5.57GB |
| [codegemma-7b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-gguf/blob/main/codegemma-7b.Q5_K_S.gguf) | Q5_K_S | 5.57GB |
| [codegemma-7b.Q5_K.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-gguf/blob/main/codegemma-7b.Q5_K.gguf) | Q5_K | 5.72GB |
| [codegemma-7b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-gguf/blob/main/codegemma-7b.Q5_K_M.gguf) | Q5_K_M | 5.72GB |
| [codegemma-7b.Q5_1.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-gguf/blob/main/codegemma-7b.Q5_1.gguf) | Q5_1 | 6.02GB |
| [codegemma-7b.Q6_K.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-gguf/blob/main/codegemma-7b.Q6_K.gguf) | Q6_K | 6.53GB |
Original model description:
Repo model google/codegemma-7b is gated. You must be authenticated to access it.
|
RichardErkhov/meta-llama_-_CodeLlama-7b-Python-hf-gguf
|
RichardErkhov
| 2024-04-21T12:24:59Z | 358 | 0 | null |
[
"gguf",
"arxiv:2308.12950",
"region:us"
] | null | 2024-04-21T11:25:52Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
CodeLlama-7b-Python-hf - GGUF
- Model creator: https://huggingface.co/meta-llama/
- Original model: https://huggingface.co/meta-llama/CodeLlama-7b-Python-hf/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [CodeLlama-7b-Python-hf.Q2_K.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_CodeLlama-7b-Python-hf-gguf/blob/main/CodeLlama-7b-Python-hf.Q2_K.gguf) | Q2_K | 2.36GB |
| [CodeLlama-7b-Python-hf.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_CodeLlama-7b-Python-hf-gguf/blob/main/CodeLlama-7b-Python-hf.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [CodeLlama-7b-Python-hf.IQ3_S.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_CodeLlama-7b-Python-hf-gguf/blob/main/CodeLlama-7b-Python-hf.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [CodeLlama-7b-Python-hf.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_CodeLlama-7b-Python-hf-gguf/blob/main/CodeLlama-7b-Python-hf.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [CodeLlama-7b-Python-hf.IQ3_M.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_CodeLlama-7b-Python-hf-gguf/blob/main/CodeLlama-7b-Python-hf.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [CodeLlama-7b-Python-hf.Q3_K.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_CodeLlama-7b-Python-hf-gguf/blob/main/CodeLlama-7b-Python-hf.Q3_K.gguf) | Q3_K | 3.07GB |
| [CodeLlama-7b-Python-hf.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_CodeLlama-7b-Python-hf-gguf/blob/main/CodeLlama-7b-Python-hf.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [CodeLlama-7b-Python-hf.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_CodeLlama-7b-Python-hf-gguf/blob/main/CodeLlama-7b-Python-hf.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [CodeLlama-7b-Python-hf.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_CodeLlama-7b-Python-hf-gguf/blob/main/CodeLlama-7b-Python-hf.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [CodeLlama-7b-Python-hf.Q4_0.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_CodeLlama-7b-Python-hf-gguf/blob/main/CodeLlama-7b-Python-hf.Q4_0.gguf) | Q4_0 | 3.56GB |
| [CodeLlama-7b-Python-hf.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_CodeLlama-7b-Python-hf-gguf/blob/main/CodeLlama-7b-Python-hf.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [CodeLlama-7b-Python-hf.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_CodeLlama-7b-Python-hf-gguf/blob/main/CodeLlama-7b-Python-hf.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [CodeLlama-7b-Python-hf.Q4_K.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_CodeLlama-7b-Python-hf-gguf/blob/main/CodeLlama-7b-Python-hf.Q4_K.gguf) | Q4_K | 3.8GB |
| [CodeLlama-7b-Python-hf.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_CodeLlama-7b-Python-hf-gguf/blob/main/CodeLlama-7b-Python-hf.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [CodeLlama-7b-Python-hf.Q4_1.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_CodeLlama-7b-Python-hf-gguf/blob/main/CodeLlama-7b-Python-hf.Q4_1.gguf) | Q4_1 | 3.95GB |
| [CodeLlama-7b-Python-hf.Q5_0.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_CodeLlama-7b-Python-hf-gguf/blob/main/CodeLlama-7b-Python-hf.Q5_0.gguf) | Q5_0 | 4.33GB |
| [CodeLlama-7b-Python-hf.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_CodeLlama-7b-Python-hf-gguf/blob/main/CodeLlama-7b-Python-hf.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [CodeLlama-7b-Python-hf.Q5_K.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_CodeLlama-7b-Python-hf-gguf/blob/main/CodeLlama-7b-Python-hf.Q5_K.gguf) | Q5_K | 4.45GB |
| [CodeLlama-7b-Python-hf.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_CodeLlama-7b-Python-hf-gguf/blob/main/CodeLlama-7b-Python-hf.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [CodeLlama-7b-Python-hf.Q5_1.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_CodeLlama-7b-Python-hf-gguf/blob/main/CodeLlama-7b-Python-hf.Q5_1.gguf) | Q5_1 | 4.72GB |
| [CodeLlama-7b-Python-hf.Q6_K.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_CodeLlama-7b-Python-hf-gguf/blob/main/CodeLlama-7b-Python-hf.Q6_K.gguf) | Q6_K | 5.15GB |
Original model description:
---
extra_gated_heading: You need to share contact information with Meta to access this model
extra_gated_prompt: >-
### LLAMA 2 COMMUNITY LICENSE AGREEMENT
"Agreement" means the terms and conditions for use, reproduction, distribution
and modification of the Llama Materials set forth herein.
"Documentation" means the specifications, manuals and documentation
accompanying Llama 2 distributed by Meta at
https://ai.meta.com/resources/models-and-libraries/llama-downloads/.
"Licensee" or "you" means you, or your employer or any other person or entity
(if you are entering into this Agreement on such person or entity's behalf),
of the age required under applicable laws, rules or regulations to provide
legal consent and that has legal authority to bind your employer or such other
person or entity if you are entering in this Agreement on their behalf.
"Llama 2" means the foundational large language models and software and
algorithms, including machine-learning model code, trained model weights,
inference-enabling code, training-enabling code, fine-tuning enabling code and
other elements of the foregoing distributed by Meta at
ai.meta.com/resources/models-and-libraries/llama-downloads/.
"Llama Materials" means, collectively, Meta's proprietary Llama 2 and
documentation (and any portion thereof) made available under this Agreement.
"Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or,
if you are an entity, your principal place of business is in the EEA or
Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA
or Switzerland).
By clicking "I Accept" below or by using or distributing any portion or
element of the Llama Materials, you agree to be bound by this Agreement.
1. License Rights and Redistribution.
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-
transferable and royalty-free limited license under Meta's intellectual
property or other rights owned by Meta embodied in the Llama Materials to
use, reproduce, distribute, copy, create derivative works of, and make
modifications to the Llama Materials.
b. Redistribution and Use.
i. If you distribute or make the Llama Materials, or any derivative works
thereof, available to a third party, you shall provide a copy of this
Agreement to such third party.
ii. If you receive Llama Materials, or any derivative works thereof, from a
Licensee as part of an integrated end user product, then Section 2 of this
Agreement will not apply to you.
iii. You must retain in all copies of the Llama Materials that you distribute
the following attribution notice within a "Notice" text file distributed as a
part of such copies: "Llama 2 is licensed under the LLAMA 2 Community
License, Copyright (c) Meta Platforms, Inc. All Rights Reserved."
iv. Your use of the Llama Materials must comply with applicable laws and
regulations (including trade compliance laws and regulations) and adhere to
the Acceptable Use Policy for the Llama Materials (available at
https://ai.meta.com/llama/use-policy), which is hereby incorporated by
reference into this Agreement.
v. You will not use the Llama Materials or any output or results of the Llama
Materials to improve any other large language model (excluding Llama 2 or
derivative works thereof).
2. Additional Commercial Terms. If, on the Llama 2 version release date, the
monthly active users of the products or services made available by or for
Licensee, or Licensee's affiliates, is greater than 700 million monthly
active users in the preceding calendar month, you must request a license from
Meta, which Meta may grant to you in its sole discretion, and you are not
authorized to exercise any of the rights under this Agreement unless or until
Meta otherwise expressly grants you such rights.
3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA
MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN "AS IS"
BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING,
WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY
RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING
THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE
LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE
UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE,
PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST
PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR
PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE
POSSIBILITY OF ANY OF THE FOREGOING.
5. Intellectual Property.
a. No trademark licenses are granted under this Agreement, and in connection
with the Llama Materials, neither Meta nor Licensee may use any name or mark
owned by or associated with the other or any of its affiliates, except as
required for reasonable and customary use in describing and redistributing
the Llama Materials.
b. Subject to Meta's ownership of Llama Materials and derivatives made by or
for Meta, with respect to any derivative works and modifications of the Llama
Materials that are made by you, as between you and Meta, you are and will be
the owner of such derivative works and modifications.
c. If you institute litigation or other proceedings against Meta or any
entity (including a cross-claim or counterclaim in a lawsuit) alleging that
the Llama Materials or Llama 2 outputs or results, or any portion of any of
the foregoing, constitutes infringement of intellectual property or other
rights owned or licensable by you, then any licenses granted to you under
this Agreement shall terminate as of the date such litigation or claim is
filed or instituted. You will indemnify and hold harmless Meta from and
against any claim by any third party arising out of or related to your use or
distribution of the Llama Materials.
6. Term and Termination. The term of this Agreement will commence upon your
acceptance of this Agreement or access to the Llama Materials and will
continue in full force and effect until terminated in accordance with the
terms and conditions herein. Meta may terminate this Agreement if you are in
breach of any term or condition of this Agreement. Upon termination of this
Agreement, you shall delete and cease use of the Llama Materials. Sections 3,
4 and 7 shall survive the termination of this Agreement.
7. Governing Law and Jurisdiction. This Agreement will be governed and
construed under the laws of the State of California without regard to choice
of law principles, and the UN Convention on Contracts for the International
Sale of Goods does not apply to this Agreement. The courts of California
shall have exclusive jurisdiction of any dispute arising out of this
Agreement.
USE POLICY
### Llama 2 Acceptable Use Policy
Meta is committed to promoting safe and fair use of its tools and features,
including Llama 2. If you access or use Llama 2, you agree to this Acceptable
Use Policy (“Policy”). The most recent copy of this policy can be found at
[ai.meta.com/llama/use-policy](http://ai.meta.com/llama/use-policy).
#### Prohibited Uses
We want everyone to use Llama 2 safely and responsibly. You agree you will not
use, or allow others to use, Llama 2 to:
1. Violate the law or others’ rights, including to:
1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
1. Violence or terrorism
2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
3. Human trafficking, exploitation, and sexual violence
4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
5. Sexual solicitation
6. Any other criminal activity
2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws
6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama 2 Materials
7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
2. Engage in, promote, incite, facilitate, or assist in the planning or
development of activities that present a risk of death or bodily harm to
individuals, including use of Llama 2 related to the following:
1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State
2. Guns and illegal weapons (including weapon development)
3. Illegal drugs and regulated/controlled substances
4. Operation of critical infrastructure, transportation technologies, or heavy machinery
5. Self-harm or harm to others, including suicide, cutting, and eating disorders
6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
3. Intentionally deceive or mislead others, including use of Llama 2 related
to the following:
1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
3. Generating, promoting, or further distributing spam
4. Impersonating another individual without consent, authorization, or legal right
5. Representing that the use of Llama 2 or outputs are human-generated
6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
4. Fail to appropriately disclose to end users any known dangers of your AI system
Please report any violation of this Policy, software “bug,” or other problems
that could lead to a violation of this Policy through one of the following
means:
* Reporting issues with the model:
[github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
* Reporting risky content generated by the model:
[developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
* Reporting bugs and security concerns:
[facebook.com/whitehat/info](http://facebook.com/whitehat/info)
* Reporting violations of the Acceptable Use Policy or unlicensed uses of
Llama: [[email protected]](mailto:[email protected])
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
geo: ip_location
By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox
extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
language:
- code
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
license: llama2
---
# **Code Llama**
Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B Python specialist version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding. Links to other models can be found in the index at the bottom.
| | Base Model | Python | Instruct |
| --- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- |
| 7B | [meta-llama/CodeLlama-7b-hf](https://huggingface.co/meta-llama/CodeLlama-7b-hf) | [meta-llama/CodeLlama-7b-Python-hf](https://huggingface.co/meta-llama/CodeLlama-7b-Python-hf) | [meta-llama/CodeLlama-7b-Instruct-hf](https://huggingface.co/meta-llama/CodeLlama-7b-Instruct-hf) |
| 13B | [meta-llama/CodeLlama-13b-hf](https://huggingface.co/meta-llama/CodeLlama-13b-hf) | [meta-llama/CodeLlama-13b-Python-hf](https://huggingface.co/meta-llama/CodeLlama-13b-Python-hf) | [meta-llama/CodeLlama-13b-Instruct-hf](https://huggingface.co/meta-llama/CodeLlama-13b-Instruct-hf) |
| 34B | [meta-llama/CodeLlama-34b-hf](https://huggingface.co/meta-llama/CodeLlama-34b-hf) | [meta-llama/CodeLlama-34b-Python-hf](https://huggingface.co/meta-llama/CodeLlama-34b-Python-hf) | [meta-llama/CodeLlama-34b-Instruct-hf](https://huggingface.co/meta-llama/CodeLlama-34b-Instruct-hf) |
| 70B | [meta-llama/CodeLlama-70b-hf](https://huggingface.co/meta-llama/CodeLlama-70b-hf) | [meta-llama/CodeLlama-70b-Python-hf](https://huggingface.co/meta-llama/CodeLlama-70b-Python-hf) | [meta-llama/CodeLlama-70b-Instruct-hf](https://huggingface.co/meta-llama/CodeLlama-70b-Instruct-hf) |
## Model Use
To use this model, please make sure to install transformers:
```bash
pip install transformers accelerate
```
Model capabilities (a minimal usage sketch follows this list):
- [x] Code completion.
- [ ] Infilling.
- [ ] Instructions / chat.
- [x] Python specialist.
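As a rough sketch only, code completion with this checkpoint could look like the following using the `transformers` pipeline API; access to the gated repository must already be granted, and the prompt and generation settings here are illustrative assumptions rather than recommended values.
```python
# Minimal code-completion sketch for the 7B Python specialist model.
from transformers import AutoTokenizer, pipeline
import torch

model_id = "meta-llama/CodeLlama-7b-Python-hf"  # gated repo; request access first

tokenizer = AutoTokenizer.from_pretrained(model_id)
generator = pipeline(
    "text-generation",
    model=model_id,
    tokenizer=tokenizer,
    torch_dtype=torch.float16,
    device_map="auto",
)

completion = generator(
    "def fibonacci(n: int) -> int:\n    ",
    max_new_tokens=128,
    do_sample=True,
    temperature=0.1,
    top_p=0.95,
)
print(completion[0]["generated_text"])
```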
## Model Details
*Note: Use of this model is governed by the Meta license.* Meta developed and publicly released the Code Llama family of large language models (LLMs).
**Model Developers** Meta
**Variations** Code Llama comes in four model sizes and three variants:
* Code Llama: base models designed for general code synthesis and understanding
* Code Llama - Python: designed specifically for Python
* Code Llama - Instruct: for instruction following and safer deployment
All variants are available in sizes of 7B, 13B, 34B and 70B parameters.
**This repository contains the Python version of the 7B parameters model.**
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Code Llama is an auto-regressive language model that uses an optimized transformer architecture.
**Model Dates** Code Llama and its variants have been trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)" or its [arXiv page](https://arxiv.org/abs/2308.12950).
## Intended Use
**Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications.
**Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants.
## Hardware and Software
**Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed on Meta’s Research Super Cluster.
**Carbon Footprint** In aggregate, training all 9 Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta’s sustainability program.
## Training Data
All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details).
## Evaluation Results
See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.
## Ethical Considerations and Limitations
Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide](https://ai.meta.com/llama/responsible-use-guide).
|
Bluecast/wav2vec2-Phoneme
|
Bluecast
| 2024-04-27T10:44:13Z | 358 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-large-xlsr-53",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-04-24T09:53:12Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-large-xlsr-53
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-Phoneme
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-Phoneme
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2842
- Wer: 0.1281
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
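For readers who want to set up a comparable run, the values above map roughly onto `transformers.TrainingArguments` as sketched below; this is a reconstruction rather than the exact training script, and the `output_dir` and any settings not listed above are placeholders.
```python
from transformers import TrainingArguments

# Rough reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="wav2vec2-Phoneme",     # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,     # effective train batch size: 32
    warmup_steps=500,
    num_train_epochs=30,
    lr_scheduler_type="linear",        # Adam betas/epsilon are the library defaults
    fp16=True,                         # native AMP mixed precision
)
```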
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:-----:|:---------------:|:------:|
| 2.1769 | 0.2954 | 100 | 2.1463 | 0.9564 |
| 2.1285 | 0.5908 | 200 | 2.0959 | 0.9575 |
| 1.8989 | 0.8863 | 300 | 1.5997 | 0.9022 |
| 1.1123 | 1.1817 | 400 | 0.6782 | 0.4093 |
| 0.618 | 1.4771 | 500 | 0.3548 | 0.1544 |
| 0.4993 | 1.7725 | 600 | 0.3039 | 0.1331 |
| 0.4425 | 2.0679 | 700 | 0.2688 | 0.1169 |
| 0.363 | 2.3634 | 800 | 0.2419 | 0.1108 |
| 0.3507 | 2.6588 | 900 | 0.2220 | 0.1039 |
| 0.3282 | 2.9542 | 1000 | 0.1999 | 0.1001 |
| 0.2887 | 3.2496 | 1100 | 0.2044 | 0.0974 |
| 0.3104 | 3.5451 | 1200 | 0.1950 | 0.0994 |
| 0.2976 | 3.8405 | 1300 | 0.2005 | 0.0969 |
| 0.2617 | 4.1359 | 1400 | 0.1907 | 0.0962 |
| 0.2783 | 4.4313 | 1500 | 0.1886 | 0.0936 |
| 0.2533 | 4.7267 | 1600 | 0.1845 | 0.0938 |
| 0.2501 | 5.0222 | 1700 | 0.1759 | 0.0926 |
| 0.2261 | 5.3176 | 1800 | 0.1789 | 0.0896 |
| 0.2112 | 5.6130 | 1900 | 0.1824 | 0.0891 |
| 0.2162 | 5.9084 | 2000 | 0.1715 | 0.0886 |
| 0.2098 | 6.2038 | 2100 | 0.1761 | 0.0902 |
| 0.2133 | 6.4993 | 2200 | 0.1747 | 0.0896 |
| 0.2174 | 6.7947 | 2300 | 0.1753 | 0.0892 |
| 0.2033 | 7.0901 | 2400 | 0.1729 | 0.0886 |
| 0.2167 | 7.3855 | 2500 | 0.1749 | 0.0889 |
| 0.2001 | 7.6809 | 2600 | 0.1650 | 0.0874 |
| 0.1874 | 7.9764 | 2700 | 0.1656 | 0.0872 |
| 0.1846 | 8.2718 | 2800 | 0.1674 | 0.0873 |
| 0.1927 | 8.5672 | 2900 | 0.1595 | 0.0863 |
| 0.1672 | 8.8626 | 3000 | 0.1552 | 0.0849 |
| 0.1741 | 9.1581 | 3100 | 0.1659 | 0.0868 |
| 0.1753 | 9.4535 | 3200 | 0.1615 | 0.0862 |
| 0.1825 | 9.7489 | 3300 | 0.1623 | 0.0862 |
| 0.166 | 10.0443 | 3400 | 0.1584 | 0.0865 |
| 0.1762 | 10.3397 | 3500 | 0.1573 | 0.0850 |
| 0.1744 | 10.6352 | 3600 | 0.1537 | 0.0863 |
| 0.1786 | 10.9306 | 3700 | 0.1522 | 0.0840 |
| 0.1731 | 11.2260 | 3800 | 0.1645 | 0.0851 |
| 0.1929 | 11.5214 | 3900 | 0.1785 | 0.0851 |
| 0.2047 | 11.8168 | 4000 | 0.1844 | 0.0860 |
| 0.255 | 12.1123 | 4100 | 0.2305 | 0.0911 |
| 0.2771 | 12.4077 | 4200 | 0.2311 | 0.0886 |
| 0.2742 | 12.7031 | 4300 | 0.2605 | 0.0901 |
| 0.3879 | 12.9985 | 4400 | 0.2886 | 0.0965 |
| 0.3655 | 13.2939 | 4500 | 0.2897 | 0.0933 |
| 0.3693 | 13.5894 | 4600 | 0.2936 | 0.0960 |
| 0.3999 | 13.8848 | 4700 | 0.2905 | 0.1059 |
| 0.4286 | 14.1802 | 4800 | 0.3424 | 0.1025 |
| 0.574 | 14.4756 | 4900 | 0.3891 | 0.1135 |
| 0.5753 | 14.7710 | 5000 | 0.3912 | 0.1276 |
| 0.5225 | 15.0665 | 5100 | 0.4248 | 0.1151 |
| 0.4785 | 15.3619 | 5200 | 0.3332 | 0.1287 |
| 0.5733 | 15.6573 | 5300 | 0.3999 | 0.1261 |
| 0.5471 | 15.9527 | 5400 | 0.4144 | 0.1293 |
| 0.5527 | 16.2482 | 5500 | 0.3580 | 0.1160 |
| 0.6322 | 16.5436 | 5600 | 0.5158 | 0.1794 |
| 0.6867 | 16.8390 | 5700 | 0.4731 | 0.1411 |
| 0.606 | 17.1344 | 5800 | 0.3812 | 0.1305 |
| 0.5376 | 17.4298 | 5900 | 0.3505 | 0.1206 |
| 0.5035 | 17.7253 | 6000 | 0.3251 | 0.1199 |
| 0.469 | 18.0207 | 6100 | 0.3092 | 0.1172 |
| 0.4544 | 18.3161 | 6200 | 0.3030 | 0.1185 |
| 0.4288 | 18.6115 | 6300 | 0.2915 | 0.1183 |
| 0.4457 | 18.9069 | 6400 | 0.2834 | 0.1203 |
| 0.408 | 19.2024 | 6500 | 0.2765 | 0.1212 |
| 0.4182 | 19.4978 | 6600 | 0.2741 | 0.1205 |
| 0.4117 | 19.7932 | 6700 | 0.2705 | 0.1209 |
| 0.4131 | 20.0886 | 6800 | 0.2725 | 0.1230 |
| 0.4034 | 20.3840 | 6900 | 0.2713 | 0.1218 |
| 0.4048 | 20.6795 | 7000 | 0.2707 | 0.1226 |
| 0.4199 | 20.9749 | 7100 | 0.2695 | 0.1221 |
| 0.4286 | 21.2703 | 7200 | 0.2709 | 0.1239 |
| 0.3968 | 21.5657 | 7300 | 0.2699 | 0.1230 |
| 0.4071 | 21.8612 | 7400 | 0.2705 | 0.1254 |
| 0.4178 | 22.1566 | 7500 | 0.2701 | 0.1252 |
| 0.396 | 22.4520 | 7600 | 0.2702 | 0.1252 |
| 0.4255 | 22.7474 | 7700 | 0.2701 | 0.1249 |
| 0.4239 | 23.0428 | 7800 | 0.2716 | 0.1254 |
| 0.4153 | 23.3383 | 7900 | 0.2729 | 0.1264 |
| 0.4265 | 23.6337 | 8000 | 0.2726 | 0.1264 |
| 0.4221 | 23.9291 | 8100 | 0.2737 | 0.1266 |
| 0.4268 | 24.2245 | 8200 | 0.2751 | 0.1269 |
| 0.4207 | 24.5199 | 8300 | 0.2761 | 0.1273 |
| 0.3872 | 24.8154 | 8400 | 0.2764 | 0.1273 |
| 0.4004 | 25.1108 | 8500 | 0.2786 | 0.1276 |
| 0.4096 | 25.4062 | 8600 | 0.2798 | 0.1276 |
| 0.4542 | 25.7016 | 8700 | 0.2803 | 0.1274 |
| 0.4361 | 25.9970 | 8800 | 0.2818 | 0.1276 |
| 0.4454 | 26.2925 | 8900 | 0.2826 | 0.1277 |
| 0.4204 | 26.5879 | 9000 | 0.2842 | 0.1281 |
| 0.4423 | 26.8833 | 9100 | 0.2841 | 0.1280 |
| 0.4333 | 27.1787 | 9200 | 0.2845 | 0.1282 |
| 0.4036 | 27.4742 | 9300 | 0.2844 | 0.1281 |
| 0.4203 | 27.7696 | 9400 | 0.2844 | 0.1281 |
| 0.4321 | 28.0650 | 9500 | 0.2842 | 0.1281 |
| 0.4251 | 28.3604 | 9600 | 0.2842 | 0.1281 |
| 0.4122 | 28.6558 | 9700 | 0.2841 | 0.1281 |
| 0.424 | 28.9513 | 9800 | 0.2841 | 0.1280 |
| 0.4404 | 29.2467 | 9900 | 0.2842 | 0.1281 |
| 0.4174 | 29.5421 | 10000 | 0.2842 | 0.1281 |
| 0.4432 | 29.8375 | 10100 | 0.2842 | 0.1281 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1.dev0
- Tokenizers 0.19.1
|
mradermacher/Smart-LLama-3-8b-Python-v5-GGUF
|
mradermacher
| 2024-05-06T06:40:54Z | 358 | 1 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"en",
"base_model:Kukedlc/Smart-LLama-3-8b-Python-v5",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-05T12:58:11Z |
---
base_model: Kukedlc/Smart-LLama-3-8b-Python-v5
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hfhfix -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Kukedlc/Smart-LLama-3-8b-Python-v5
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
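For a quick start, here is a minimal llama-cpp-python sketch; the chosen quant, local path, context size and sampling settings are assumptions for illustration, not recommendations from the quantizer.
```python
from llama_cpp import Llama

# Assumes the Q4_K_M file from this repo has already been downloaded locally.
llm = Llama(
    model_path="./Smart-LLama-3-8b-Python-v5.Q4_K_M.gguf",
    n_ctx=8192,       # shrink if memory is limited
    n_gpu_layers=-1,  # offload all layers to GPU if available; use 0 for CPU only
)

out = llm(
    "Write a Python function that checks whether a string is a palindrome.",
    max_tokens=256,
    temperature=0.2,
)
print(out["choices"][0]["text"])
```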
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Smart-LLama-3-8b-Python-v5-GGUF/resolve/main/Smart-LLama-3-8b-Python-v5.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Smart-LLama-3-8b-Python-v5-GGUF/resolve/main/Smart-LLama-3-8b-Python-v5.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Smart-LLama-3-8b-Python-v5-GGUF/resolve/main/Smart-LLama-3-8b-Python-v5.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Smart-LLama-3-8b-Python-v5-GGUF/resolve/main/Smart-LLama-3-8b-Python-v5.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Smart-LLama-3-8b-Python-v5-GGUF/resolve/main/Smart-LLama-3-8b-Python-v5.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Smart-LLama-3-8b-Python-v5-GGUF/resolve/main/Smart-LLama-3-8b-Python-v5.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Smart-LLama-3-8b-Python-v5-GGUF/resolve/main/Smart-LLama-3-8b-Python-v5.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Smart-LLama-3-8b-Python-v5-GGUF/resolve/main/Smart-LLama-3-8b-Python-v5.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Smart-LLama-3-8b-Python-v5-GGUF/resolve/main/Smart-LLama-3-8b-Python-v5.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Smart-LLama-3-8b-Python-v5-GGUF/resolve/main/Smart-LLama-3-8b-Python-v5.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Smart-LLama-3-8b-Python-v5-GGUF/resolve/main/Smart-LLama-3-8b-Python-v5.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Smart-LLama-3-8b-Python-v5-GGUF/resolve/main/Smart-LLama-3-8b-Python-v5.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Smart-LLama-3-8b-Python-v5-GGUF/resolve/main/Smart-LLama-3-8b-Python-v5.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Smart-LLama-3-8b-Python-v5-GGUF/resolve/main/Smart-LLama-3-8b-Python-v5.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Smart-LLama-3-8b-Python-v5-GGUF/resolve/main/Smart-LLama-3-8b-Python-v5.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mmnga/japanese-stablelm-2-instruct-1_6b-gguf
|
mmnga
| 2024-05-11T09:56:19Z | 358 | 1 | null |
[
"gguf",
"japanese-stablelm",
"causal-lm",
"en",
"ja",
"dataset:TFMC/imatrix-dataset-for-japanese-llm",
"license:other",
"region:us"
] | null | 2024-05-11T07:26:43Z |
---
extra_gated_prompt: >-
By clicking "Agree", you agree to the [License Agreement](https://huggingface.co/stabilityai/japanese-stablelm-2-instruct-1_6b/blob/main/LICENSE.txt) and acknowledge Stability AI's [Privacy Policy](https://stability.ai/privacy-policy).
license:
- other
language:
- en
- ja
datasets:
- TFMC/imatrix-dataset-for-japanese-llm
tags:
- japanese-stablelm
- causal-lm
---
# japanese-stablelm-2-instruct-1_6b-gguf
This is a gguf-format conversion of [japanese-stablelm-2-instruct-1_6b published by stabilityai](https://huggingface.co/stabilityai/japanese-stablelm-2-instruct-1_6b).
The imatrix data was created using [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm).
## license
Please be sure to read the [terms of use](https://huggingface.co/stabilityai/japanese-stablelm-2-instruct-1_6b/blob/main/LICENSE.txt) before using this model; you may use it only if you agree to them.
By using this model, you are deemed to have agreed to the [terms of use](https://huggingface.co/stabilityai/japanese-stablelm-2-instruct-1_6b/blob/main/LICENSE.txt).
In addition, commercial use requires registration for a membership. Please note: For commercial use, please refer to https://stability.ai/membership
## convert
After downloading the original model, you need to modify tokenization_arcade100k.py.
Add the following at the end of `def __init__`:
```
self.special_tokens = self.tokenizer._special_tokens
```
The conversion script is available [here](https://gist.github.com/mmnga/bd9de075fcbdf1f95587edeb35565419).
## Usage
```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
./main -m 'japanese-stablelm-2-instruct-1_6b-Q4_0.gguf' -n 128 -p 'こんにちわ'
```
|
mradermacher/dolphin-2.9.1-yi-1.5-9b-GGUF
|
mradermacher
| 2024-05-19T12:22:32Z | 358 | 0 |
transformers
|
[
"transformers",
"gguf",
"generated_from_trainer",
"axolotl",
"en",
"dataset:cognitivecomputations/Dolphin-2.9",
"dataset:teknium/OpenHermes-2.5",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:cognitivecomputations/samantha-data",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:Locutusque/function-calling-chatml",
"dataset:internlm/Agent-FLAN",
"base_model:cognitivecomputations/dolphin-2.9.1-yi-1.5-9b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-19T11:51:13Z |
---
base_model: cognitivecomputations/dolphin-2.9.1-yi-1.5-9b
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- generated_from_trainer
- axolotl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/cognitivecomputations/dolphin-2.9.1-yi-1.5-9b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-9b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
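As one possible way to run these files, the sketch below uses llama-cpp-python's chat-completion helper; the ChatML prompt format, local file name and settings are assumptions here, so verify the correct template against the original model card.
```python
from llama_cpp import Llama

# Assumes the Q4_K_M file from this repo has been downloaded locally and that
# the model expects a ChatML-style template (an assumption, not a guarantee).
llm = Llama(
    model_path="./dolphin-2.9.1-yi-1.5-9b.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=-1,      # 0 for CPU-only machines
    chat_format="chatml",
)

reply = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what a GGUF file is in two sentences."},
    ],
    max_tokens=200,
)
print(reply["choices"][0]["message"]["content"])
```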
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-9b.Q2_K.gguf) | Q2_K | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-9b.IQ3_XS.gguf) | IQ3_XS | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-9b.Q3_K_S.gguf) | Q3_K_S | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-9b.IQ3_S.gguf) | IQ3_S | 4.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-9b.IQ3_M.gguf) | IQ3_M | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-9b.Q3_K_M.gguf) | Q3_K_M | 4.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-9b.Q3_K_L.gguf) | Q3_K_L | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-9b.IQ4_XS.gguf) | IQ4_XS | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-9b.Q4_K_S.gguf) | Q4_K_S | 5.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-9b.Q4_K_M.gguf) | Q4_K_M | 5.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-9b.Q5_K_S.gguf) | Q5_K_S | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-9b.Q5_K_M.gguf) | Q5_K_M | 6.4 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-9b.Q6_K.gguf) | Q6_K | 7.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-9b.Q8_0.gguf) | Q8_0 | 9.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-9b.f16.gguf) | f16 | 17.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
MaziyarPanahi/MeliodasM7-7B-GGUF
|
MaziyarPanahi
| 2024-05-21T13:40:59Z | 358 | 0 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"base_model:AurelPx/Meliodas-7b-dare",
"base_model:liminerity/M7-7b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:automerger/MeliodasM7-7B"
] |
text-generation
| 2024-05-21T13:10:23Z |
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- merge
- mergekit
- lazymergekit
- automerger
- base_model:AurelPx/Meliodas-7b-dare
- base_model:liminerity/M7-7b
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: MeliodasM7-7B-GGUF
base_model: automerger/MeliodasM7-7B
inference: false
model_creator: automerger
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/MeliodasM7-7B-GGUF](https://huggingface.co/MaziyarPanahi/MeliodasM7-7B-GGUF)
- Model creator: [automerger](https://huggingface.co/automerger)
- Original model: [automerger/MeliodasM7-7B](https://huggingface.co/automerger/MeliodasM7-7B)
## Description
[MaziyarPanahi/MeliodasM7-7B-GGUF](https://huggingface.co/MaziyarPanahi/MeliodasM7-7B-GGUF) contains GGUF format model files for [automerger/MeliodasM7-7B](https://huggingface.co/automerger/MeliodasM7-7B).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
|
RichardErkhov/internlm_-_internlm2-chat-1_8b-sft-gguf
|
RichardErkhov
| 2024-05-29T08:39:08Z | 358 | 0 | null |
[
"gguf",
"arxiv:2403.17297",
"region:us"
] | null | 2024-05-29T07:56:57Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
internlm2-chat-1_8b-sft - GGUF
- Model creator: https://huggingface.co/internlm/
- Original model: https://huggingface.co/internlm/internlm2-chat-1_8b-sft/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [internlm2-chat-1_8b-sft.Q2_K.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-1_8b-sft-gguf/blob/main/internlm2-chat-1_8b-sft.Q2_K.gguf) | Q2_K | 0.72GB |
| [internlm2-chat-1_8b-sft.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-1_8b-sft-gguf/blob/main/internlm2-chat-1_8b-sft.IQ3_XS.gguf) | IQ3_XS | 0.79GB |
| [internlm2-chat-1_8b-sft.IQ3_S.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-1_8b-sft-gguf/blob/main/internlm2-chat-1_8b-sft.IQ3_S.gguf) | IQ3_S | 0.83GB |
| [internlm2-chat-1_8b-sft.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-1_8b-sft-gguf/blob/main/internlm2-chat-1_8b-sft.Q3_K_S.gguf) | Q3_K_S | 0.83GB |
| [internlm2-chat-1_8b-sft.IQ3_M.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-1_8b-sft-gguf/blob/main/internlm2-chat-1_8b-sft.IQ3_M.gguf) | IQ3_M | 0.85GB |
| [internlm2-chat-1_8b-sft.Q3_K.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-1_8b-sft-gguf/blob/main/internlm2-chat-1_8b-sft.Q3_K.gguf) | Q3_K | 0.9GB |
| [internlm2-chat-1_8b-sft.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-1_8b-sft-gguf/blob/main/internlm2-chat-1_8b-sft.Q3_K_M.gguf) | Q3_K_M | 0.9GB |
| [internlm2-chat-1_8b-sft.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-1_8b-sft-gguf/blob/main/internlm2-chat-1_8b-sft.Q3_K_L.gguf) | Q3_K_L | 0.96GB |
| [internlm2-chat-1_8b-sft.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-1_8b-sft-gguf/blob/main/internlm2-chat-1_8b-sft.IQ4_XS.gguf) | IQ4_XS | 1.0GB |
| [internlm2-chat-1_8b-sft.Q4_0.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-1_8b-sft-gguf/blob/main/internlm2-chat-1_8b-sft.Q4_0.gguf) | Q4_0 | 1.04GB |
| [internlm2-chat-1_8b-sft.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-1_8b-sft-gguf/blob/main/internlm2-chat-1_8b-sft.IQ4_NL.gguf) | IQ4_NL | 1.04GB |
| [internlm2-chat-1_8b-sft.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-1_8b-sft-gguf/blob/main/internlm2-chat-1_8b-sft.Q4_K_S.gguf) | Q4_K_S | 1.04GB |
| [internlm2-chat-1_8b-sft.Q4_K.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-1_8b-sft-gguf/blob/main/internlm2-chat-1_8b-sft.Q4_K.gguf) | Q4_K | 1.09GB |
| [internlm2-chat-1_8b-sft.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-1_8b-sft-gguf/blob/main/internlm2-chat-1_8b-sft.Q4_K_M.gguf) | Q4_K_M | 1.09GB |
| [internlm2-chat-1_8b-sft.Q4_1.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-1_8b-sft-gguf/blob/main/internlm2-chat-1_8b-sft.Q4_1.gguf) | Q4_1 | 1.14GB |
| [internlm2-chat-1_8b-sft.Q5_0.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-1_8b-sft-gguf/blob/main/internlm2-chat-1_8b-sft.Q5_0.gguf) | Q5_0 | 1.24GB |
| [internlm2-chat-1_8b-sft.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-1_8b-sft-gguf/blob/main/internlm2-chat-1_8b-sft.Q5_K_S.gguf) | Q5_K_S | 1.24GB |
| [internlm2-chat-1_8b-sft.Q5_K.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-1_8b-sft-gguf/blob/main/internlm2-chat-1_8b-sft.Q5_K.gguf) | Q5_K | 1.26GB |
| [internlm2-chat-1_8b-sft.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-1_8b-sft-gguf/blob/main/internlm2-chat-1_8b-sft.Q5_K_M.gguf) | Q5_K_M | 1.26GB |
| [internlm2-chat-1_8b-sft.Q5_1.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-1_8b-sft-gguf/blob/main/internlm2-chat-1_8b-sft.Q5_1.gguf) | Q5_1 | 1.33GB |
| [internlm2-chat-1_8b-sft.Q6_K.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-1_8b-sft-gguf/blob/main/internlm2-chat-1_8b-sft.Q6_K.gguf) | Q6_K | 1.45GB |
| [internlm2-chat-1_8b-sft.Q8_0.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-chat-1_8b-sft-gguf/blob/main/internlm2-chat-1_8b-sft.Q8_0.gguf) | Q8_0 | 1.87GB |
Original model description:
---
pipeline_tag: text-generation
license: other
---
# InternLM
<div align="center">
<img src="https://github.com/InternLM/InternLM/assets/22529082/b9788105-8892-4398-8b47-b513a292378e" width="200"/>
<div> </div>
<div align="center">
<b><font size="5">InternLM</font></b>
<sup>
<a href="https://internlm.intern-ai.org.cn/">
<i><font size="4">HOT</font></i>
</a>
</sup>
<div> </div>
</div>
[](https://github.com/internLM/OpenCompass/)
[💻Github Repo](https://github.com/InternLM/InternLM) • [🤔Reporting Issues](https://github.com/InternLM/InternLM/issues/new) • [📜Technical Report](https://arxiv.org/abs/2403.17297)
</div>
## Introduction
InternLM2-1.8B is the 1.8 billion parameter version of the second generation InternLM series. To facilitate use and research, InternLM2-1.8B is released in three open-source versions:
- InternLM2-1.8B: Foundation models with high quality and high adaptation flexibility, which serve as a good starting point for downstream deep adaptations.
- InternLM2-Chat-1.8B-SFT: Chat model after supervised fine-tuning (SFT) on InternLM2-1.8B.
- InternLM2-Chat-1.8B: Further aligned on top of InternLM2-Chat-1.8B-SFT through online RLHF. InternLM2-Chat-1.8B exhibits better instruction following, chat experience, and function calling, and is recommended for downstream applications.
The InternLM2 has the following technical features:
- Effective support for ultra-long contexts of up to 200,000 characters: The model nearly perfectly achieves "finding a needle in a haystack" in long inputs of 200,000 characters. It also leads among open-source models in performance on long-text tasks such as LongBench and L-Eval.
- Comprehensive performance enhancement: Compared to the previous generation model, it shows significant improvements in various capabilities, including reasoning, mathematics, and coding.
## InternLM2-1.8B
### Performance Evaluation
We have evaluated InternLM2 on several important benchmarks using the open-source evaluation tool [OpenCompass](https://github.com/open-compass/opencompass). Some of the evaluation results are shown in the table below. You are welcome to visit the [OpenCompass Leaderboard](https://opencompass.org.cn/rank) for more evaluation results.
| Dataset\Models | InternLM2-1.8B | InternLM2-Chat-1.8B-SFT | InternLM2-7B | InternLM2-Chat-7B |
| :---: | :---: | :---: | :---: | :---: |
| MMLU | 46.9 | 47.1 | 65.8 | 63.7 |
| AGIEval | 33.4 | 38.8 | 49.9 | 47.2 |
| BBH | 37.5 | 35.2 | 65.0 | 61.2 |
| GSM8K | 31.2 | 39.7 | 70.8 | 70.7 |
| MATH | 5.6 | 11.8 | 20.2 | 23.0 |
| HumanEval | 25.0 | 32.9 | 43.3 | 59.8 |
| MBPP(Sanitized) | 22.2 | 23.2 | 51.8 | 51.4 |
- The evaluation results were obtained from [OpenCompass](https://github.com/open-compass/opencompass) , and evaluation configuration can be found in the configuration files provided by [OpenCompass](https://github.com/open-compass/opencompass).
- The evaluation data may have numerical differences due to the version iteration of [OpenCompass](https://github.com/open-compass/opencompass), so please refer to the latest evaluation results of [OpenCompass](https://github.com/open-compass/opencompass).
**Limitations:** Although we have made efforts to ensure the safety of the model during the training process and to encourage the model to generate text that complies with ethical and legal requirements, the model may still produce unexpected outputs due to its size and probabilistic generation paradigm. For example, the generated responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. We are not responsible for any consequences resulting from the dissemination of harmful information.
### Import from Transformers
To load the InternLM2 1.8B Chat SFT model using Transformers, use the following code:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("internlm/internlm2-chat-1_8b-sft", trust_remote_code=True)
# Set `torch_dtype=torch.float16` to load model in float16, otherwise it will be loaded as float32 and cause OOM Error.
model = AutoModelForCausalLM.from_pretrained("internlm/internlm2-chat-1_8b-sft", torch_dtype=torch.float16, trust_remote_code=True).cuda()
model = model.eval()
response, history = model.chat(tokenizer, "hello", history=[])
print(response)
# Hello! How can I help you today?
response, history = model.chat(tokenizer, "please provide three suggestions about time management", history=history)
print(response)
```
The responses can be streamed using `stream_chat`:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "internlm/internlm2-chat-1_8b-sft"
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16, trust_remote_code=True).cuda()
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = model.eval()
length = 0
for response, history in model.stream_chat(tokenizer, "Hello", history=[]):
print(response[length:], flush=True, end="")
length = len(response)
```
## Deployment
### LMDeploy
LMDeploy is a toolkit for compressing, deploying, and serving LLMs, developed by the MMRazor and MMDeploy teams.
```bash
pip install lmdeploy
```
You can run batch inference locally with the following python code:
```python
import lmdeploy
pipe = lmdeploy.pipeline("internlm/internlm2-chat-1_8b-sft")
response = pipe(["Hi, pls intro yourself", "Shanghai is"])
print(response)
```
Or you can launch an OpenAI compatible server with the following command:
```bash
lmdeploy serve api_server internlm/internlm2-chat-1_8b-sft --model-name internlm2-chat-1_8b-sft --server-port 23333
```
Then you can send a chat request to the server:
```bash
curl http://localhost:23333/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "internlm2-chat-1_8b-sft",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Introduce deep learning to me."}
]
}'
```
Find more details in the [LMDeploy documentation](https://lmdeploy.readthedocs.io/en/latest/)
### vLLM
Launch OpenAI compatible server with `vLLM>=0.3.2`:
```bash
pip install vllm
```
```bash
python -m vllm.entrypoints.openai.api_server --model internlm/internlm2-chat-1_8b-sft --served-model-name internlm2-chat-1_8b-sft --trust-remote-code
```
Then you can send a chat request to the server:
```bash
curl http://localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "internlm2-chat-1_8b-sft",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Introduce deep learning to me."}
]
}'
```
Find more details in the [vLLM documentation](https://docs.vllm.ai/en/latest/index.html)
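Since the server exposes an OpenAI-compatible API, the same request can also be sent from Python. A minimal sketch, assuming the `openai` Python package (v1 or later) is installed and the server above is running:

```python
from openai import OpenAI

# Point the official OpenAI client at the local vLLM server started above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.chat.completions.create(
    model="internlm2-chat-1_8b-sft",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Introduce deep learning to me."},
    ],
)
print(completion.choices[0].message.content)
```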
## Open Source License
The code is licensed under Apache-2.0, while model weights are fully open for academic research and also allow **free** commercial usage. To apply for a commercial license, please fill in the [application form (English)](https://wj.qq.com/s2/12727483/5dba/)/[申请表(中文)](https://wj.qq.com/s2/12725412/f7c1/). For other questions or collaborations, please contact <[email protected]>.
## Citation
```
@misc{cai2024internlm2,
title={InternLM2 Technical Report},
author={Zheng Cai and Maosong Cao and Haojiong Chen and Kai Chen and Keyu Chen and Xin Chen and Xun Chen and Zehui Chen and Zhi Chen and Pei Chu and Xiaoyi Dong and Haodong Duan and Qi Fan and Zhaoye Fei and Yang Gao and Jiaye Ge and Chenya Gu and Yuzhe Gu and Tao Gui and Aijia Guo and Qipeng Guo and Conghui He and Yingfan Hu and Ting Huang and Tao Jiang and Penglong Jiao and Zhenjiang Jin and Zhikai Lei and Jiaxing Li and Jingwen Li and Linyang Li and Shuaibin Li and Wei Li and Yining Li and Hongwei Liu and Jiangning Liu and Jiawei Hong and Kaiwen Liu and Kuikun Liu and Xiaoran Liu and Chengqi Lv and Haijun Lv and Kai Lv and Li Ma and Runyuan Ma and Zerun Ma and Wenchang Ning and Linke Ouyang and Jiantao Qiu and Yuan Qu and Fukai Shang and Yunfan Shao and Demin Song and Zifan Song and Zhihao Sui and Peng Sun and Yu Sun and Huanze Tang and Bin Wang and Guoteng Wang and Jiaqi Wang and Jiayu Wang and Rui Wang and Yudong Wang and Ziyi Wang and Xingjian Wei and Qizhen Weng and Fan Wu and Yingtong Xiong and Chao Xu and Ruiliang Xu and Hang Yan and Yirong Yan and Xiaogui Yang and Haochen Ye and Huaiyuan Ying and Jia Yu and Jing Yu and Yuhang Zang and Chuyu Zhang and Li Zhang and Pan Zhang and Peng Zhang and Ruijie Zhang and Shuo Zhang and Songyang Zhang and Wenjian Zhang and Wenwei Zhang and Xingcheng Zhang and Xinyue Zhang and Hui Zhao and Qian Zhao and Xiaomeng Zhao and Fengzhe Zhou and Zaida Zhou and Jingming Zhuo and Yicheng Zou and Xipeng Qiu and Yu Qiao and Dahua Lin},
year={2024},
eprint={2403.17297},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## 简介
书生·浦语-1.8B (InternLM2-1.8B) 是第二代浦语模型系列的18亿参数版本。为了方便用户使用和研究,书生·浦语-1.8B (InternLM2-1.8B) 共有三个版本的开源模型,他们分别是:
- InternLM2-1.8B: 具有高质量和高适应灵活性的基础模型,为下游深度适应提供了良好的起点。
- InternLM2-Chat-1.8B-SFT:在 InternLM2-1.8B 上进行监督微调 (SFT) 后得到的对话模型。
- InternLM2-chat-1.8B:通过在线 RLHF 在 InternLM2-Chat-1.8B-SFT 之上进一步对齐。 InternLM2-Chat-1.8B 表现出更好的指令跟随、聊天体验和函数调用,推荐下游应用程序使用。
InternLM2 模型具备以下的技术特点
- 有效支持20万字超长上下文:模型在20万字长输入中几乎完美地实现长文“大海捞针”,而且在 LongBench 和 L-Eval 等长文任务中的表现也达到开源模型中的领先水平。
- 综合性能全面提升:各能力维度相比上一代模型全面进步,在推理、数学、代码等方面的能力提升显著。
## InternLM2-1.8B
### 性能评测
我们使用开源评测工具 [OpenCompass](https://github.com/internLM/OpenCompass/) 对 InternLM2 在几个重要的评测集进行了评测 ,部分评测结果如下表所示,欢迎访问[ OpenCompass 榜单 ](https://opencompass.org.cn/rank)获取更多的评测结果。
| 评测集 | InternLM2-1.8B | InternLM2-Chat-1.8B-SFT | InternLM2-7B | InternLM2-Chat-7B |
| :---: | :---: | :---: | :---: | :---: |
| MMLU | 46.9 | 47.1 | 65.8 | 63.7 |
| AGIEval | 33.4 | 38.8 | 49.9 | 47.2 |
| BBH | 37.5 | 35.2 | 65.0 | 61.2 |
| GSM8K | 31.2 | 39.7 | 70.8 | 70.7 |
| MATH | 5.6 | 11.8 | 20.2 | 23.0 |
| HumanEval | 25.0 | 32.9 | 43.3 | 59.8 |
| MBPP(Sanitized) | 22.2 | 23.2 | 51.8 | 51.4 |
- 以上评测结果基于 [OpenCompass](https://github.com/open-compass/opencompass) 获得(部分数据标注`*`代表数据来自原始论文),具体测试细节可参见 [OpenCompass](https://github.com/open-compass/opencompass) 中提供的配置文件。
- 评测数据会因 [OpenCompass](https://github.com/open-compass/opencompass) 的版本迭代而存在数值差异,请以 [OpenCompass](https://github.com/open-compass/opencompass) 最新版的评测结果为主。
**局限性:** 尽管在训练过程中我们非常注重模型的安全性,尽力促使模型输出符合伦理和法律要求的文本,但受限于模型大小以及概率生成范式,模型可能会产生各种不符合预期的输出,例如回复内容包含偏见、歧视等有害内容,请勿传播这些内容。由于传播不良信息导致的任何后果,本项目不承担责任。
### 通过 Transformers 加载
通过以下的代码加载 InternLM2 1.8B Chat SFT 模型
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("internlm/internlm2-chat-1_8b-sft", trust_remote_code=True)
# `torch_dtype=torch.float16` 可以令模型以 float16 精度加载,否则 transformers 会将模型加载为 float32,导致显存不足
model = AutoModelForCausalLM.from_pretrained("internlm/internlm2-chat-1_8b-sft", torch_dtype=torch.float16, trust_remote_code=True).cuda()
model = model.eval()
response, history = model.chat(tokenizer, "你好", history=[])
print(response)
# 你好!有什么我可以帮助你的吗?
response, history = model.chat(tokenizer, "请提供三个管理时间的建议。", history=history)
print(response)
```
如果想进行流式生成,则可以使用 `stream_chat` 接口:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "internlm/internlm2-chat-1_8b-sft"
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16, trust_remote_code=True).cuda()
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = model.eval()
length = 0
for response, history in model.stream_chat(tokenizer, "你好", history=[]):
print(response[length:], flush=True, end="")
length = len(response)
```
## 部署
### LMDeploy
LMDeploy 由 MMDeploy 和 MMRazor 团队联合开发,是涵盖了 LLM 任务的全套轻量化、部署和服务解决方案。
```bash
pip install lmdeploy
```
你可以使用以下 python 代码进行本地批量推理:
```python
import lmdeploy
pipe = lmdeploy.pipeline("internlm/internlm2-chat-1_8b-sft")
response = pipe(["Hi, pls intro yourself", "Shanghai is"])
print(response)
```
或者你可以使用以下命令启动兼容 OpenAI API 的服务:
```bash
lmdeploy serve api_server internlm/internlm2-chat-1_8b-sft --server-port 23333
```
然后你可以向服务端发起一个聊天请求:
```bash
curl http://localhost:23333/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "internlm2-chat-1_8b-sft",
"messages": [
{"role": "system", "content": "你是个友善的AI助手。"},
{"role": "user", "content": "介绍一下深度学习。"}
]
}'
```
更多信息请查看 [LMDeploy 文档](https://lmdeploy.readthedocs.io/en/latest/)
### vLLM
使用`vLLM>=0.3.2`启动兼容 OpenAI API 的服务:
```bash
pip install vllm
```
```bash
python -m vllm.entrypoints.openai.api_server --model internlm/internlm2-chat-1_8b-sft --trust-remote-code
```
然后你可以向服务端发起一个聊天请求:
```bash
curl http://localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "internlm2-chat-1_8b-sft",
"messages": [
{"role": "system", "content": "你是个友善的AI助手。"},
{"role": "user", "content": "介绍一下深度学习。"}
]
}'
```
更多信息请查看 [vLLM 文档](https://docs.vllm.ai/en/latest/index.html)
## 开源许可证
本仓库的代码依照 Apache-2.0 协议开源。模型权重对学术研究完全开放,也可申请免费的商业使用授权([申请表](https://wj.qq.com/s2/12725412/f7c1/))。其他问题与合作请联系 <[email protected]>。
## 引用
```
@misc{cai2024internlm2,
title={InternLM2 Technical Report},
author={Zheng Cai and Maosong Cao and Haojiong Chen and Kai Chen and Keyu Chen and Xin Chen and Xun Chen and Zehui Chen and Zhi Chen and Pei Chu and Xiaoyi Dong and Haodong Duan and Qi Fan and Zhaoye Fei and Yang Gao and Jiaye Ge and Chenya Gu and Yuzhe Gu and Tao Gui and Aijia Guo and Qipeng Guo and Conghui He and Yingfan Hu and Ting Huang and Tao Jiang and Penglong Jiao and Zhenjiang Jin and Zhikai Lei and Jiaxing Li and Jingwen Li and Linyang Li and Shuaibin Li and Wei Li and Yining Li and Hongwei Liu and Jiangning Liu and Jiawei Hong and Kaiwen Liu and Kuikun Liu and Xiaoran Liu and Chengqi Lv and Haijun Lv and Kai Lv and Li Ma and Runyuan Ma and Zerun Ma and Wenchang Ning and Linke Ouyang and Jiantao Qiu and Yuan Qu and Fukai Shang and Yunfan Shao and Demin Song and Zifan Song and Zhihao Sui and Peng Sun and Yu Sun and Huanze Tang and Bin Wang and Guoteng Wang and Jiaqi Wang and Jiayu Wang and Rui Wang and Yudong Wang and Ziyi Wang and Xingjian Wei and Qizhen Weng and Fan Wu and Yingtong Xiong and Chao Xu and Ruiliang Xu and Hang Yan and Yirong Yan and Xiaogui Yang and Haochen Ye and Huaiyuan Ying and Jia Yu and Jing Yu and Yuhang Zang and Chuyu Zhang and Li Zhang and Pan Zhang and Peng Zhang and Ruijie Zhang and Shuo Zhang and Songyang Zhang and Wenjian Zhang and Wenwei Zhang and Xingcheng Zhang and Xinyue Zhang and Hui Zhao and Qian Zhao and Xiaomeng Zhao and Fengzhe Zhou and Zaida Zhou and Jingming Zhuo and Yicheng Zou and Xipeng Qiu and Yu Qiao and Dahua Lin},
year={2024},
eprint={2403.17297},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
wwe180/Llama3-13B-lingyang-v1
|
wwe180
| 2024-06-03T22:47:07Z | 358 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"Llama3",
"conversational",
"base_model:wwe180/Llama3-13B-lingyang-v1",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-06-03T20:25:39Z |
---
base_model:
- wwe180/Llama3-13B-lingyang-v1
library_name: transformers
tags:
- mergekit
- merge
- Llama3
license:
- other
---
# After simple testing, the results are good, stronger than Llama-3-8B!
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method using [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct) as a base.
### Models Merged
The following models were included in the merge:
* [openchat/openchat-3.6-8b-20240522](https://huggingface.co/openchat/openchat-3.6-8b-20240522) + [hfl/llama-3-chinese-8b-instruct-v2-lora](https://huggingface.co/hfl/llama-3-chinese-8b-instruct-v2-lora)
* [Sao10K/L3-8B-Stheno-v3.1](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.1) + [Jiar/Llama-3-8B-Chinese](https://huggingface.co/Jiar/Llama-3-8B-Chinese)
* [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B) + [camillop/Meta-Llama-3-8B-ORPO-ITA-llama-adapters](https://huggingface.co/camillop/Meta-Llama-3-8B-ORPO-ITA-llama-adapters)
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Llama3-13B-lingyang-v1"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)  # sampling settings are illustrative
print(outputs[0]["generated_text"])
```
## Statement:
Llama3-13B-lingyang-v1 does not represent the views and positions of the model developers. We will not be liable for any problems arising from the use of the Llama3-13B-lingyang-v1 open-source model, including but not limited to data security issues, risks of public opinion, or any risks and problems arising from the model being misled, misused, disseminated, or otherwise improperly exploited.
|
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-2
|
RobertoSonic
| 2024-06-16T19:41:46Z | 358 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"swinv2",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/swinv2-tiny-patch4-window8-256",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-06-16T19:34:33Z |
---
license: apache-2.0
base_model: microsoft/swinv2-tiny-patch4-window8-256
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: swinv2-tiny-patch4-window8-256-dmae-humeda-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-tiny-patch4-window8-256-dmae-humeda-2
This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7928
- Accuracy: 0.7115
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 40
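For reference, here is a minimal sketch of how the hyperparameters above could be expressed with the `transformers` `TrainingArguments` API; the `output_dir` value is an assumption and everything else mirrors the list:

```python
from transformers import TrainingArguments

# Hypothetical mapping of the listed hyperparameters onto TrainingArguments.
training_args = TrainingArguments(
    output_dir="swinv2-tiny-patch4-window8-256-dmae-humeda-2",  # assumed output directory
    learning_rate=5e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,   # total train batch size: 32 * 4 = 128
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=40,
)
```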
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 1.3469 | 0.5 |
| No log | 2.0 | 4 | 1.3200 | 0.4808 |
| No log | 3.0 | 6 | 1.3124 | 0.4808 |
| No log | 4.0 | 8 | 1.2178 | 0.5 |
| 1.1551 | 5.0 | 10 | 1.0957 | 0.5769 |
| 1.1551 | 6.0 | 12 | 1.0359 | 0.5769 |
| 1.1551 | 7.0 | 14 | 1.0103 | 0.5962 |
| 1.1551 | 8.0 | 16 | 0.9382 | 0.6538 |
| 1.1551 | 9.0 | 18 | 0.8748 | 0.6346 |
| 0.9827 | 10.0 | 20 | 0.8836 | 0.6154 |
| 0.9827 | 11.0 | 22 | 0.8574 | 0.6154 |
| 0.9827 | 12.0 | 24 | 0.8494 | 0.5962 |
| 0.9827 | 13.0 | 26 | 0.8226 | 0.6154 |
| 0.9827 | 14.0 | 28 | 0.8242 | 0.6346 |
| 0.8007 | 15.0 | 30 | 0.8304 | 0.6154 |
| 0.8007 | 16.0 | 32 | 0.8447 | 0.6538 |
| 0.8007 | 17.0 | 34 | 0.8228 | 0.6923 |
| 0.8007 | 18.0 | 36 | 0.7928 | 0.7115 |
| 0.8007 | 19.0 | 38 | 0.7822 | 0.6731 |
| 0.6882 | 20.0 | 40 | 0.7750 | 0.6538 |
| 0.6882 | 21.0 | 42 | 0.7726 | 0.6538 |
| 0.6882 | 22.0 | 44 | 0.7898 | 0.6731 |
| 0.6882 | 23.0 | 46 | 0.8021 | 0.6731 |
| 0.6882 | 24.0 | 48 | 0.7834 | 0.6923 |
| 0.6154 | 25.0 | 50 | 0.7634 | 0.6731 |
| 0.6154 | 26.0 | 52 | 0.7584 | 0.6923 |
| 0.6154 | 27.0 | 54 | 0.7773 | 0.6538 |
| 0.6154 | 28.0 | 56 | 0.7830 | 0.6538 |
| 0.6154 | 29.0 | 58 | 0.7719 | 0.6538 |
| 0.541 | 30.0 | 60 | 0.7603 | 0.6538 |
| 0.541 | 31.0 | 62 | 0.7497 | 0.6731 |
| 0.541 | 32.0 | 64 | 0.7381 | 0.7115 |
| 0.541 | 33.0 | 66 | 0.7275 | 0.6923 |
| 0.541 | 34.0 | 68 | 0.7277 | 0.6923 |
| 0.5163 | 35.0 | 70 | 0.7271 | 0.6923 |
| 0.5163 | 36.0 | 72 | 0.7274 | 0.6923 |
| 0.5163 | 37.0 | 74 | 0.7304 | 0.6923 |
| 0.5163 | 38.0 | 76 | 0.7329 | 0.6923 |
| 0.5163 | 39.0 | 78 | 0.7351 | 0.6923 |
| 0.5183 | 40.0 | 80 | 0.7356 | 0.6923 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
zhihan1996/DNA_bert_4
|
zhihan1996
| 2023-10-30T19:26:47Z | 357 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"custom_code",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
Entry not found
|
PaDaS-Lab/gbert-legal-ner
|
PaDaS-Lab
| 2024-02-26T11:22:59Z | 357 | 9 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"bert",
"token-classification",
"de",
"dataset:Legal-Entity-Recognition",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-09-19T17:00:33Z |
---
language: de
datasets:
- Legal-Entity-Recognition
widget:
- text: "1. Das Bundesarbeitsgericht ist gemäß § 9 Abs. 2 Satz 2 ArbGG iVm. § 201 Abs. 1 Satz 2 GVG für die beabsichtigte Klage gegen den Bund zuständig ."
---
### German BERT for Legal NER
#### Use:
```python
from transformers import pipeline
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("PaDaS-Lab/gbert-legal-ner", use_auth_token="AUTH_TOKEN")
model = AutoModelForTokenClassification.from_pretrained("PaDaS-Lab/gbert-legal-ner", use_auth_token="AUTH_TOKEN")
ner = pipeline("ner", model=model, tokenizer=tokenizer)
example = "1. Das Bundesarbeitsgericht ist gemäß § 9 Abs. 2 Satz 2 ArbGG iVm. § 201 Abs. 1 Satz 2 GVG für die beabsichtigte Klage gegen den Bund zuständig ."
results = ner(example)
print(results)
```
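The raw pipeline output above reports one entry per (sub-)token. To merge those predictions into whole entities, the token-classification pipeline's `aggregation_strategy` argument can be used; a minimal sketch, continuing from the snippet above:

```python
# Same model and tokenizer as above, but with entity grouping enabled.
ner_grouped = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")

for entity in ner_grouped(example):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```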
#### Classes:
|Abbreviation|Class|
|----|----|
|PER|Person|
|RR|Judge|
|AN|Lawyer|
|LD|Country|
|ST|City|
|STR|Street|
|LDS|Landscape|
|ORG|Organization|
|UN|Company|
|INN|Institution|
|GRT|Court|
|MRK|Brand|
|GS|Law|
|VO|Ordinance|
|EUN|European legal norm|
|VS|Regulation|
|VT|Contract|
|RS|Court decision|
|LIT|Legal literature|
---
Please reference our work when using the model.
```bibtex
@conference{icaart23,
author={Harshil Darji. and Jelena Mitrović. and Michael Granitzer.},
title={German BERT Model for Legal Named Entity Recognition},
booktitle={Proceedings of the 15th International Conference on Agents and Artificial Intelligence - Volume 3: ICAART,},
year={2023},
pages={723-728},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0011749400003393},
isbn={978-989-758-623-1},
issn={2184-433X},
}
```
|
timm/cait_xxs24_384.fb_dist_in1k
|
timm
| 2024-02-10T23:43:12Z | 357 | 0 |
timm
|
[
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2103.17239",
"license:apache-2.0",
"region:us"
] |
image-classification
| 2023-04-13T01:49:22Z |
---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for cait_xxs24_384.fb_dist_in1k
A CaiT (Class-Attention in Image Transformers) image classification model. Pretrained on ImageNet-1k with distillation by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 12.0
- GMACs: 9.6
- Activations (M): 122.7
- Image size: 384 x 384
- **Papers:**
- Going deeper with Image Transformers: https://arxiv.org/abs/2103.17239
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/facebookresearch/deit
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('cait_xxs24_384.fb_dist_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'cait_xxs24_384.fb_dist_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 577, 192) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@InProceedings{Touvron_2021_ICCV,
author = {Touvron, Hugo and Cord, Matthieu and Sablayrolles, Alexandre and Synnaeve, Gabriel and J'egou, Herv'e},
title = {Going Deeper With Image Transformers},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2021},
pages = {32-42}
}
```
|
brjezierski/S3BERT
|
brjezierski
| 2023-07-19T12:32:58Z | 357 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] |
sentence-similarity
| 2023-07-19T12:31:50Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# brjezierski/S3BERT
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('brjezierski/S3BERT')
embeddings = model.encode(sentences)
print(embeddings)
```
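For the clustering and semantic-search use cases mentioned above, embeddings can be compared with cosine similarity. A minimal sketch using this repository's id (`brjezierski/S3BERT`); the query and corpus sentences are purely illustrative:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("brjezierski/S3BERT")

query = "How do I renew my passport?"
corpus = ["Passport renewal requires a completed application form.", "Bananas are rich in potassium."]

query_emb = model.encode(query, convert_to_tensor=True)
corpus_emb = model.encode(corpus, convert_to_tensor=True)

# Cosine similarity between the query and every corpus sentence.
print(util.cos_sim(query_emb, corpus_emb))
```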
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 52908 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`custom_losses.MultipleConsistencyLoss` with parameters:
```
{'scale': 5.0, 'similarity_fct': 'cos_sim'}
```
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 52908 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`custom_losses.DistilLoss` with parameters:
```
{'biases': Parameter containing:
tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
requires_grad=True)}
```
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 1000,
"evaluator": "custom_evaluators.DistilConsistencyEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
lorahub/flan_t5_large-social_i_qa_Generate_the_question_from_the_answer
|
lorahub
| 2023-07-24T09:56:30Z | 357 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-24T09:56:21Z |
---
library_name: peft
---
|
Undi95/ReMM-S-Kimiko-v2-13B-GGUF
|
Undi95
| 2023-09-09T21:16:03Z | 357 | 0 | null |
[
"gguf",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2023-09-07T15:32:44Z |
---
license: cc-by-nc-4.0
---
LoRA merged to a Model.
Model : https://huggingface.co/Undi95/ReMM-SLERP-L2-13B
LoRA : https://huggingface.co/nRuaif/Kimiko-v2-13B
Weight : 0.50
|
Yntec/PotaytoPotahto
|
Yntec
| 2023-10-21T15:55:58Z | 357 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"PotatCat",
"darkstorm2150",
"anime",
"art",
"artistic",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-10-02T09:06:59Z |
---
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- PotatCat
- darkstorm2150
- anime
- art
- artistic
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
inference: true
license: creativeml-openrail-m
---
# PotaytoPotahto
Protogen 2.2 by darkstorm2150 mixed with the Glow LoRA (at -3) by PotatCat.
Samples and prompt:


Pretty CUTE girl. by ocellus.
Original pages:
https://civitai.com/models/115728/gloomifier-slider-leco
https://huggingface.co/darkstorm2150/Protogen_v2.2_Official_Release
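The merged checkpoint can be loaded like any other Stable Diffusion model with `diffusers`; a minimal sketch (the sampler settings below are assumptions, not the author's recommendations):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Yntec/PotaytoPotahto", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "Pretty CUTE girl. by ocellus."  # sample prompt from above
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("potaytopotahto_sample.png")
```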
# ProtoGlow
The LoRA merged in a different way, see recipe at the bottom.

# Recipe
- Merge Lora
Model A: Protogen 2.2
LoRA: Gloomifier_TheGlow_V1_LECO:-3
Output: TheGlowProtoGen
- SuperMerger Weight sum Train Difference Use MBW 1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1
Model A: Protogen 2.2
Model B: TheGlowProtoGen
Output: PotaytoPotahto
- Add Difference Train Difference 0.85
Model A: Protogen 2.2
Model B: TheGlowProtoGen
Model C: Protogen 2.2
Output: ProtoGlow
|
TheBloke/UndiMix-v3-13B-GGUF
|
TheBloke
| 2023-10-11T04:57:35Z | 357 | 1 |
transformers
|
[
"transformers",
"gguf",
"llama",
"base_model:Undi95/UndiMix-v3-13B",
"license:cc-by-nc-4.0",
"text-generation-inference",
"region:us"
] | null | 2023-10-11T04:42:18Z |
---
base_model: Undi95/UndiMix-v3-13B
inference: false
license: cc-by-nc-4.0
model_creator: Undi
model_name: Undimix v3 13B
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Undimix v3 13B - GGUF
- Model creator: [Undi](https://huggingface.co/Undi95)
- Original model: [Undimix v3 13B](https://huggingface.co/Undi95/UndiMix-v3-13B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Undi's Undimix v3 13B](https://huggingface.co/Undi95/UndiMix-v3-13B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/UndiMix-v3-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/UndiMix-v3-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/UndiMix-v3-13B-GGUF)
* [Undi's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Undi95/UndiMix-v3-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Undi's Undimix v3 13B](https://huggingface.co/Undi95/UndiMix-v3-13B).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
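As a sanity check on the figures above, the 4.5 bpw quoted for GGML_TYPE_Q4_K can be reproduced from the block layout, assuming one fp16 scale and one fp16 min per 256-weight super-block (a rough sketch, not the exact struct definition):

```python
# Rough bits-per-weight arithmetic for GGML_TYPE_Q4_K.
weights_per_superblock = 8 * 32           # 8 blocks of 32 weights
weight_bits = weights_per_superblock * 4  # 4-bit quantized weights
scale_min_bits = 8 * (6 + 6)              # 6-bit scale and 6-bit min per block
superblock_fp16_bits = 2 * 16             # assumed fp16 super-block scale and min

bpw = (weight_bits + scale_min_bits + superblock_fp16_bits) / weights_per_superblock
print(bpw)  # 4.5
```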
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [undimix-v3-13b.Q2_K.gguf](https://huggingface.co/TheBloke/UndiMix-v3-13B-GGUF/blob/main/undimix-v3-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [undimix-v3-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/UndiMix-v3-13B-GGUF/blob/main/undimix-v3-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [undimix-v3-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/UndiMix-v3-13B-GGUF/blob/main/undimix-v3-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [undimix-v3-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/UndiMix-v3-13B-GGUF/blob/main/undimix-v3-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [undimix-v3-13b.Q4_0.gguf](https://huggingface.co/TheBloke/UndiMix-v3-13B-GGUF/blob/main/undimix-v3-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [undimix-v3-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/UndiMix-v3-13B-GGUF/blob/main/undimix-v3-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [undimix-v3-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/UndiMix-v3-13B-GGUF/blob/main/undimix-v3-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [undimix-v3-13b.Q5_0.gguf](https://huggingface.co/TheBloke/UndiMix-v3-13B-GGUF/blob/main/undimix-v3-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [undimix-v3-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/UndiMix-v3-13B-GGUF/blob/main/undimix-v3-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [undimix-v3-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/UndiMix-v3-13B-GGUF/blob/main/undimix-v3-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [undimix-v3-13b.Q6_K.gguf](https://huggingface.co/TheBloke/UndiMix-v3-13B-GGUF/blob/main/undimix-v3-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [undimix-v3-13b.Q8_0.gguf](https://huggingface.co/TheBloke/UndiMix-v3-13B-GGUF/blob/main/undimix-v3-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/UndiMix-v3-13B-GGUF and below it, a specific filename to download, such as: undimix-v3-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/UndiMix-v3-13B-GGUF undimix-v3-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/UndiMix-v3-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/UndiMix-v3-13B-GGUF undimix-v3-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m undimix-v3-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/UndiMix-v3-13B-GGUF", model_file="undimix-v3-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
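A roughly equivalent sketch with `llama-cpp-python`, assuming the GGUF file has already been downloaded to the current directory:

```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU (0 for CPU-only).
llm = Llama(model_path="undimix-v3-13b.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=32)

prompt = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\nWrite a story about llamas\n\n### Response:"
output = llm(prompt, max_tokens=256, stop=["</s>"])
print(output["choices"][0]["text"])
```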
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Undi's Undimix v3 13B
<!-- description start -->
## Description
This repo contains fp16 files of personal mix : "UndiMix-v3".
It can be hot, serious, playful, and can use emoji thanks to llama-2-13b-chat-limarp-v2-merged.
What changed from V2 is that I didn't use Llama-2-13B-fp16 as the base anymore, and went straight into the SLERP merge with ReMM-S-Kimiko-v2-13B as the base.
<!-- description end -->
<!-- description start -->
## Models used
- Undi95/ReMM-S-Kimiko-v2-13B (0.272) (base)
- The-Face-Of-Goonery/Huginn-13b-v1.2 (0.264)
- Doctor-Shotgun/llama-2-13b-chat-limarp-v2-merged (0.264)
- jondurbin/airoboros-l2-13b-2.1 (0.10)
- IkariDev/Athena-v1 (0.10)
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
Special thanks to Sushi kek
<!-- original-model-card end -->
|
maddes8cht/openaccess-ai-collective-mpt-7b-wizardlm-gguf
|
maddes8cht
| 2023-11-07T08:57:33Z | 357 | 0 | null |
[
"gguf",
"en",
"dataset:ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered",
"region:us"
] | null | 2023-11-06T23:20:34Z |
---
datasets:
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
language:
- en
---
I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information
# mpt-7b-wizardlm - GGUF
- Model creator: [openaccess-ai-collective](https://huggingface.co/openaccess-ai-collective)
- Original model: [mpt-7b-wizardlm](https://huggingface.co/openaccess-ai-collective/mpt-7b-wizardlm)
MPT-7b and MPT-30B are part of the family of Mosaic Pretrained Transformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference.
---
# Brief
This version of MPT-7b by openaccess-ai-collective is finetuned using ehartford's WizardLM dataset.
---
# About GGUF format
`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of Software is using it and can therefore use this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov
# Quantization variants
There are a number of quantized files available to cater to your specific needs. Here's how to choose the best option for you:
# Legacy quants
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
## Note:
Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves some fallback solution that makes them not *real* K-quants. More details can be found in affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models)
# K-quants
K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With a Q6_K you will likely find it hard to discern a quality difference from the original model; asking the model the same question twice can produce bigger differences than the quantization does.
---
# Original Model Card:
# WizardLM finetuned on the MPT-7B model
Trained 3 epochs on 1 x A100 80GB
https://wandb.ai/wing-lian/mpt-wizard-7b/runs/2agnd9fz
***End of original Model File***
---
## Please consider to support my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, or the new GitHub Sponsors platform, and I am hoping for some support and contribution to the continued availability of these kind of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.
<center>
[](https://maddes8cht.github.io)
[](https://stackexchange.com/users/26485911)
[](https://github.com/maddes8cht)
[](https://huggingface.co/maddes8cht)
[](https://twitter.com/maddes1966)
</center>
|
Yntec/CrystalClearRemix
|
Yntec
| 2023-11-25T15:55:38Z | 357 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"Art",
"Render",
"Lineart",
"WarAnakin",
"RunDiffusion",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-11-09T00:34:15Z |
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
language:
- en
tags:
- Art
- Render
- Lineart
- WarAnakin
- RunDiffusion
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---
# Crystal Clear Remix
Crystal Clear with Crystal 2's style.
Comparison:

(Click for larger)
Sample and prompt:

sitting elementary girl, Pretty CUTE, gorgeous hair, DETAILED CHIBI EYES, Magazine ad, iconic, 1943, Cartoon, sharp focus, 4k. chibi character, comic art. Concept art on canvas by Clay Mann and ilya kuvshinov
Original pages:
https://civitai.com/models/36634?modelVersionId=42685 (Crystal Clear)
https://civitai.com/models/49934/crystal-clear2
# Recipe
- SuperMerger Weight sum Train Difference Use MBW 1,1,1,1,1,1,1,0,0,0,0,0,0,0,1,1,1,1,1,1,0,0,0,0,0,0
Model A:
Crystal Clear 2
Model B:
Crystal Clear
Output Model:
CrystalClearRemix
|
lps08/bert-base-portuguese-cased-editals
|
lps08
| 2024-03-07T23:47:04Z | 357 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-07T20:23:09Z |
---
license: apache-2.0
---
|
JiaxiJiang/textual_inversion_pororo2
|
JiaxiJiang
| 2024-03-14T06:47:50Z | 357 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"diffusers-training",
"base_model:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-03-14T04:39:21Z |
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
- diffusers-training
base_model: runwayml/stable-diffusion-v1-5
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Textual inversion text2image fine-tuning - JiaxiJiang/textual_inversion_pororo2
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
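A minimal sketch of how textual-inversion weights of this kind are typically loaded with `diffusers` (the placeholder token `<pororo>` is an assumption; use whatever token string the weights were actually trained with):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe.load_textual_inversion("JiaxiJiang/textual_inversion_pororo2")
pipe = pipe.to("cuda")

# "<pororo>" is a hypothetical placeholder token; replace it with the token used during training.
image = pipe("a photo of <pororo> on the beach", num_inference_steps=30).images[0]
image.save("pororo_example.png")
```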
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
mradermacher/ErosParadigm-GGUF
|
mradermacher
| 2024-05-06T06:08:20Z | 357 | 1 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:jeiku/ErosParadigm",
"endpoints_compatible",
"region:us"
] | null | 2024-03-19T11:46:37Z |
---
base_model: jeiku/ErosParadigm
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
static quants of https://huggingface.co/jeiku/ErosParadigm
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
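For a quick test from Python, a minimal sketch with `huggingface_hub` and `llama-cpp-python` might look like this (the filename matches the Q4_K_M entry in the table below; context size and GPU offload are illustrative assumptions):
```python
# Minimal sketch: download one quant and run a short completion with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="mradermacher/ErosParadigm-GGUF",
    filename="ErosParadigm.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)  # -1 offloads all layers if built with GPU support
out = llm("Write a one-sentence scene description.", max_tokens=64)
print(out["choices"][0]["text"])
```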
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ErosParadigm-GGUF/resolve/main/ErosParadigm.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/ErosParadigm-GGUF/resolve/main/ErosParadigm.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/ErosParadigm-GGUF/resolve/main/ErosParadigm.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/ErosParadigm-GGUF/resolve/main/ErosParadigm.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/ErosParadigm-GGUF/resolve/main/ErosParadigm.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/ErosParadigm-GGUF/resolve/main/ErosParadigm.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ErosParadigm-GGUF/resolve/main/ErosParadigm.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/ErosParadigm-GGUF/resolve/main/ErosParadigm.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/ErosParadigm-GGUF/resolve/main/ErosParadigm.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ErosParadigm-GGUF/resolve/main/ErosParadigm.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ErosParadigm-GGUF/resolve/main/ErosParadigm.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/ErosParadigm-GGUF/resolve/main/ErosParadigm.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/ErosParadigm-GGUF/resolve/main/ErosParadigm.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ErosParadigm-GGUF/resolve/main/ErosParadigm.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ABX-AI/Infinitely-Kunodiculous-9B-GGUF-IQ-Imatrix
|
ABX-AI
| 2024-04-02T15:19:39Z | 357 | 1 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"mistral",
"not-for-all-audiences",
"base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:Nitral-AI/Infinitely-Laydiculous-9B",
"endpoints_compatible",
"region:us"
] | null | 2024-03-25T22:55:42Z |
---
base_model:
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- Nitral-AI/Infinitely-Laydiculous-9B
library_name: transformers
tags:
- mergekit
- merge
- mistral
- not-for-all-audiences
---
GGUF-IQ-Imatrix quants for **[ABX-AI/Infinitely-Kunodiculous-9B](https://huggingface.co/ABX-AI/Infinitely-Kunodiculous-9B)**.
I'd like to extend my gratitude to [@Lewdiculous](https://huggingface.co/Lewdiculous) for the inspiration to start learning about merging and Imatrix quantization and for answering my questions, and to [@Nitral-AI](https://huggingface.co/Nitral-AI) for the help with questions about merging as well.
**Why Importance Matrix?**
**Importance Matrix**, at least based on my testing, has been shown to improve the output and performance of "IQ"-type quantizations, where the compression becomes quite heavy.
The **Imatrix** performs a calibration using a provided dataset. Testing has shown that semi-randomized data can help preserve more important segments as the compression is applied.
Related discussions in Github:
[[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
The imatrix.txt file that I used can be found [here](https://huggingface.co/FantasiaFoundry/GGUF-Quantization-Script/blob/main/imatrix/imatrix.txt).
The data of the imatrix is general, semi-random.
# Description

This model is intended for role-playing and storywriting purposes.
This is the very first merge I have ever tried. It seems to be working, or at the very least does not appear to be broken :) The primary idea behind it is to serve as an experiment and help me learn how to do merges and quants.
GGUF/IQ/Imatrix: https://huggingface.co/ABX-AI/Infinitely-Kunodiculous-9B-GGUF-IQ-Imatrix
# Infinitely-Kunodiculous-9B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
* [Nitral-AI/Infinitely-Laydiculous-9B](https://huggingface.co/Nitral-AI/Infinitely-Laydiculous-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Nitral-AI/Infinitely-Laydiculous-9B
layer_range: [0, 20]
- sources:
- model: SanjiWatsuki/Kunoichi-DPO-v2-7B
layer_range: [12, 32]
merge_method: passthrough
dtype: float16
```
|
mradermacher/RPMix-4x7B-MoE-i1-GGUF
|
mradermacher
| 2024-05-06T05:31:27Z | 357 | 1 |
transformers
|
[
"transformers",
"gguf",
"not-for-all-audiences",
"en",
"base_model:ChaoticNeutrals/RPMix-4x7B-MoE",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-31T03:03:24Z |
---
base_model: ChaoticNeutrals/RPMix-4x7B-MoE
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- not-for-all-audiences
---
## About
weighted/imatrix quants of https://huggingface.co/ChaoticNeutrals/RPMix-4x7B-MoE
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/RPMix-4x7B-MoE-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/RPMix-4x7B-MoE-i1-GGUF/resolve/main/RPMix-4x7B-MoE.i1-IQ1_S.gguf) | i1-IQ1_S | 5.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/RPMix-4x7B-MoE-i1-GGUF/resolve/main/RPMix-4x7B-MoE.i1-IQ1_M.gguf) | i1-IQ1_M | 5.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/RPMix-4x7B-MoE-i1-GGUF/resolve/main/RPMix-4x7B-MoE.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/RPMix-4x7B-MoE-i1-GGUF/resolve/main/RPMix-4x7B-MoE.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.4 | |
| [GGUF](https://huggingface.co/mradermacher/RPMix-4x7B-MoE-i1-GGUF/resolve/main/RPMix-4x7B-MoE.i1-IQ2_S.gguf) | i1-IQ2_S | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/RPMix-4x7B-MoE-i1-GGUF/resolve/main/RPMix-4x7B-MoE.i1-IQ2_M.gguf) | i1-IQ2_M | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/RPMix-4x7B-MoE-i1-GGUF/resolve/main/RPMix-4x7B-MoE.i1-Q2_K.gguf) | i1-Q2_K | 9.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/RPMix-4x7B-MoE-i1-GGUF/resolve/main/RPMix-4x7B-MoE.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/RPMix-4x7B-MoE-i1-GGUF/resolve/main/RPMix-4x7B-MoE.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/RPMix-4x7B-MoE-i1-GGUF/resolve/main/RPMix-4x7B-MoE.i1-Q3_K_S.gguf) | i1-Q3_K_S | 10.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/RPMix-4x7B-MoE-i1-GGUF/resolve/main/RPMix-4x7B-MoE.i1-IQ3_S.gguf) | i1-IQ3_S | 10.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/RPMix-4x7B-MoE-i1-GGUF/resolve/main/RPMix-4x7B-MoE.i1-IQ3_M.gguf) | i1-IQ3_M | 10.9 | |
| [GGUF](https://huggingface.co/mradermacher/RPMix-4x7B-MoE-i1-GGUF/resolve/main/RPMix-4x7B-MoE.i1-Q3_K_M.gguf) | i1-Q3_K_M | 11.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/RPMix-4x7B-MoE-i1-GGUF/resolve/main/RPMix-4x7B-MoE.i1-Q3_K_L.gguf) | i1-Q3_K_L | 12.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/RPMix-4x7B-MoE-i1-GGUF/resolve/main/RPMix-4x7B-MoE.i1-IQ4_XS.gguf) | i1-IQ4_XS | 13.1 | |
| [GGUF](https://huggingface.co/mradermacher/RPMix-4x7B-MoE-i1-GGUF/resolve/main/RPMix-4x7B-MoE.i1-Q4_0.gguf) | i1-Q4_0 | 13.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/RPMix-4x7B-MoE-i1-GGUF/resolve/main/RPMix-4x7B-MoE.i1-Q4_K_S.gguf) | i1-Q4_K_S | 14.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/RPMix-4x7B-MoE-i1-GGUF/resolve/main/RPMix-4x7B-MoE.i1-Q4_K_M.gguf) | i1-Q4_K_M | 14.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RPMix-4x7B-MoE-i1-GGUF/resolve/main/RPMix-4x7B-MoE.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.9 | |
| [GGUF](https://huggingface.co/mradermacher/RPMix-4x7B-MoE-i1-GGUF/resolve/main/RPMix-4x7B-MoE.i1-Q5_K_M.gguf) | i1-Q5_K_M | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/RPMix-4x7B-MoE-i1-GGUF/resolve/main/RPMix-4x7B-MoE.i1-Q6_K.gguf) | i1-Q6_K | 20.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
bond005/FRED-T5-large-instruct-v0.1
|
bond005
| 2024-07-02T19:04:12Z | 357 | 4 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"PyTorch",
"Transformers",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
text2text-generation
| 2024-04-01T17:19:38Z |
---
language: ru
license: apache-2.0
tags:
- PyTorch
- Transformers
widget:
- text: >-
Исправь, пожалуйста, ошибки распознавания речи в следующем тексте. краеугольным камнем любышь алгоритных машиного обучения является преждес его обобщающая способности тогда мы обучаем некоторую модель у нас есть обучающая выборка унаситькюмся ошибки и наша задачи сводится вообщем такомптиминационной задачи мы минимизируем в функцию ошибки по параметрам нашей модели на обучающие выбрать но на самом деле хотим там и не этого мы не обучающую ошибку хотим минимизировать
- text: >-
Упрости, пожалуйста, следующий текст. Краеугольным камнем любых алгоритмов машинного обучения является, прежде всего, обобщающая способность. Когда мы обучаем некоторую модель, у нас есть обучающая выборка, у нас есть функция ошибки, и наша задача сводится, в общем-то, к оптимизационной задаче. Мы минимизируем функцию ошибки по параметрам нашей модели на обучающей выборке. Но на самом деле хотим-то мы не этого! Мы не обучающую ошибку хотим минимизировать.
- text: >-
Выполни саммаризацию и выдели, пожалуйста, основную мысль следующего текста. В данной работе проводится сравнение предварительного обучения трансформера на текстах естественного языка и на предложениях синтетического псевдоязыка. Искусственные тексты были автоматически сгенерированы по написанным нами правилам в контекстно-свободной грамматике. Результаты дообучения на выполнение заданий проекта RussianSuperGLUE статистически достоверно показали, что модели имеют одинаковые оценки, т.е. можно считать, что использование искусственных данных дает преимущество для “безопасности” искусственного интеллекта за счет возможности полностью контролировать состав выборки. Также мы можем говорить о том, что на этапе предобучения модели типа RoBERTa достаточно научиться распознавать только синтаксические и морфологические закономерности языка, которые могут быть успешно созданы довольно таким простым способом, как контекстно-свободная грамматика.
- text: >-
Найди, пожалуйста, все именованные сущности типа "Местоположение" в следующем тексте и выпиши список таких сущностей. С. Ситников - уроженец Костромы, пользуется популярностью в области.
- text: >-
Найди, пожалуйста, все именованные сущности типа "Человек" в следующем тексте и выпиши список таких сущностей. С. Ситников - уроженец Костромы, пользуется популярностью в области.
- text: >-
Найди, пожалуйста, все именованные сущности типа "Организация" в следующем тексте и выпиши список таких сущностей. Окончил Костромской государственный педагогический институт по специальности "учитель истории и обществоведения, методист воспитательной работы".
- text: >-
Опишите процесс фотосинтеза.
- text: >-
Дополни предложение правильной формой глагола: "Я ... (писать) письмо уже час."
---
# FRED-T5-large-instruct-v0.1
The model was trained by [bond005](https://scholar.google.ru/citations?user=3AJKH38AAAAJ) for automatically editing text and generating answers to various questions in Russian. The solved tasks are:
1. **asr_correction** This task is to correct errors, restore punctuation and capitalization in the ASR output (in particular, output of [Wav2Vec2-Large-Ru-Golos](https://huggingface.co/bond005/wav2vec2-large-ru-golos)).
2. **summarization** This is an abstractive summarization of long texts.
3. **segmentation** The purpose of this task is to divide long text into paragraphs by the `\n` character as a special separator.
4. **simplification** This task is to transform a source sentence to become easier to read and comprehend.
5. **ner_organization** This is a variant of the "classical" named entity recognition task, designed to find and print all organizations in the text. Each organization is printed on a new line.
6. **ner_person** This is a variant of the "classical" named entity recognition task, designed to find and print all persons in the text. Each person is printed on a new line.
7. **ner_location** This is a variant of the "classical" named entity recognition task, designed to find and print all locations in the text. Each location is printed on a new line.
8. Answering arbitrary questions and completing various instructions.
## Usage
| The solved task | The instruction text (in Russian) |
|------------------------|---------------------------------------------------------------------------------------------------------------------------|
| asr_correction | Исправь, пожалуйста, ошибки распознавания речи в следующем тексте. |
| summarization | Выполни саммаризацию и выдели, пожалуйста, основную мысль следующего текста. |
| segmentation | Разбей, пожалуйста, следующий текст на абзацы. |
| simplification | Упрости, пожалуйста, следующий текст. |
| ner_person | Найди, пожалуйста, все именованные сущности типа "Человек" в следующем тексте и выпиши список таких сущностей. |
| ner_location | Найди, пожалуйста, все именованные сущности типа "Местоположение" в следующем тексте и выпиши список таких сущностей. |
| ner_organization | Найди, пожалуйста, все именованные сущности типа "Организация" в следующем тексте и выпиши список таких сущностей. |
| *arbitrary questions* | *text of any question* |
You can view the code example describing the use of this model to solve all the above tasks in the corresponding [Colab notebook](https://colab.research.google.com/drive/10vX4aLm4-qllksTBEiyifgp0RFbgWxuq?usp=sharing).
### ASR Correction
```python
from typing import List
from transformers import T5ForConditionalGeneration
from transformers import GenerationConfig
from transformers import GPT2Tokenizer
import torch
def fix_recognition_error(texts: List[str], tokenizer: GPT2Tokenizer, config: GenerationConfig,
model: T5ForConditionalGeneration) -> List[str]:
nonempty_texts = []
for cur in texts:
if len(cur.strip()) > 3:
nonempty_texts.append(cur.strip())
if len(nonempty_texts) == 0:
return texts
x = tokenizer(nonempty_texts, return_tensors='pt', padding=True).to(model.device)
max_size = int(x.input_ids.shape[1] * 2.0 + 10)
out = model.generate(**x, generation_config=config, max_length=max_size)
results_for_nonempty_texts = [
' '.join(tokenizer.decode(cur, skip_special_tokens=True).strip().split()) for cur in out
]
united_results = []
idx = 0
for cur in texts:
if len(cur.strip()) > 3:
united_results.append(results_for_nonempty_texts[idx])
idx += 1
else:
united_results.append(cur.strip())
return united_results
ru_llm_tokenizer = GPT2Tokenizer.from_pretrained('bond005/FRED-T5-large-instruct-v0.1')
ru_llm_model = T5ForConditionalGeneration.from_pretrained('bond005/FRED-T5-large-instruct-v0.1')
ru_llm_config = GenerationConfig.from_pretrained('bond005/FRED-T5-large-instruct-v0.1')
if torch.cuda.is_available():
ru_llm_model = ru_llm_model.cuda()
asr_correction_example = \
'Исправь, пожалуйста, ошибки распознавания речи в следующем тексте. ' \
'краеугольным камнем любышь алгоритных машиного обучения является преждес его ' \
'обобщающая способности тогда мы обучаем некоторую модель у нас есть обучающая ' \
'выборка унаситькюмся ошибки и наша задачи сводится вообщем такомптиминационной ' \
'задачи мы минимизируем в функцию ошибки по параметрам нашей модели на обучающие ' \
'выбрать но на самом деле хотим там и не этого мы не обучающую ошибку хотим ' \
'минимизировать'
output = fix_recognition_error([asr_correction_example], ru_llm_tokenizer,
ru_llm_config, ru_llm_model)[0]
print(output)
```
```text
Краеугольным камнем любого алгоритма машинного обучения является прежде всего обобщающая способность. Тогда мы обучаем некоторую модель, у нас есть обучающая выборка, у нас есть коэффициенты ошибки, и наша задача сводится, в общем-то, к мотивационной задаче: мы минимизируем функцию ошибки по параметрам нашей модели, на обучающей выборке, но на самом деле хотим там и не этого. Мы не обучающую ошибку хотим минимизировать.
```
### Summarization
```python
from typing import List
from transformers import T5ForConditionalGeneration
from transformers import GenerationConfig
from transformers import GPT2Tokenizer
import torch
def generate_answer(answers: List[str], tokenizer: GPT2Tokenizer, config: GenerationConfig,
model: T5ForConditionalGeneration) -> List[str]:
nonempty_answers = []
for cur in answers:
if len(cur.strip()) > 0:
nonempty_answers.append(cur)
if len(nonempty_answers) == 0:
return ['' for _ in range(len(answers))]
x = tokenizer(nonempty_answers, return_tensors='pt', padding=True).to(model.device)
out = model.generate(**x, generation_config=config)
questions_for_nonempty_texts = [
tokenizer.decode(cur, skip_special_tokens=True).strip().replace('\r\n', '\n') for cur in out
]
united_questions = []
idx = 0
for cur in answers:
if len(cur.strip()) > 0:
united_questions.append(questions_for_nonempty_texts[idx])
idx += 1
else:
united_questions.append('')
return united_questions
ru_llm_tokenizer = GPT2Tokenizer.from_pretrained('bond005/FRED-T5-large-instruct-v0.1')
ru_llm_model = T5ForConditionalGeneration.from_pretrained('bond005/FRED-T5-large-instruct-v0.1')
ru_llm_config = GenerationConfig.from_pretrained('bond005/FRED-T5-large-instruct-v0.1')
if torch.cuda.is_available():
ru_llm_model = ru_llm_model.cuda()
summarization_example = \
'Выполни саммаризацию и выдели, пожалуйста, основную мысль следующего текста. ' \
'В данной работе проводится сравнение предварительного обучения трансформера на ' \
'текстах естественного языка и на предложениях синтетического псевдоязыка. ' \
'Искусственные тексты были автоматически сгенерированы по написанным нами правилам ' \
'в контекстно-свободной грамматике. Результаты дообучения на выполнение заданий ' \
'проекта RussianSuperGLUE статистически достоверно показали, что модели имеют ' \
'одинаковые оценки, т.е. можно считать, что использование искусственных данных ' \
'дает преимущество для “безопасности” искусственного интеллекта за счет ' \
'возможности полностью контролировать состав выборки. Также мы можем говорить ' \
'о том, что на этапе предобучения модели типа RoBERTa достаточно научиться ' \
'распознавать только синтаксические и морфологические закономерности языка, ' \
'которые могут быть успешно созданы довольно таким простым способом, как ' \
'контекстно-свободная грамматика.'
output = generate_answer([summarization_example], ru_llm_tokenizer,
ru_llm_config, ru_llm_model)[0]
print(output)
```
```text
В работе сравнивается предварительное обучение трансформера на текстах естественного языка и на предложениях синтетического псевдоязыка. Результаты дообучения на выполнение заданий проекта RussianSuperGLUE статистически достоверно показали, что модели имеют одинаковые оценки. Использование искусственных данных дает преимущество для безопасности искусственного интеллекта за счет возможности полностью контролировать состав выборки.
```
### Segmentation
```python
from typing import List
from transformers import T5ForConditionalGeneration
from transformers import GenerationConfig
from transformers import GPT2Tokenizer
import torch
def generate_answer(answers: List[str], tokenizer: GPT2Tokenizer, config: GenerationConfig,
model: T5ForConditionalGeneration) -> List[str]:
nonempty_answers = []
for cur in answers:
if len(cur.strip()) > 0:
nonempty_answers.append(cur)
if len(nonempty_answers) == 0:
return ['' for _ in range(len(answers))]
x = tokenizer(nonempty_answers, return_tensors='pt', padding=True).to(model.device)
out = model.generate(**x, generation_config=config)
questions_for_nonempty_texts = [
tokenizer.decode(cur, skip_special_tokens=True).strip().replace('\r\n', '\n') for cur in out
]
united_questions = []
idx = 0
for cur in answers:
if len(cur.strip()) > 0:
united_questions.append(questions_for_nonempty_texts[idx])
idx += 1
else:
united_questions.append('')
return united_questions
ru_llm_tokenizer = GPT2Tokenizer.from_pretrained('bond005/FRED-T5-large-instruct-v0.1')
ru_llm_model = T5ForConditionalGeneration.from_pretrained('bond005/FRED-T5-large-instruct-v0.1')
ru_llm_config = GenerationConfig.from_pretrained('bond005/FRED-T5-large-instruct-v0.1')
if torch.cuda.is_available():
ru_llm_model = ru_llm_model.cuda()
segmentation_example = \
'Разбей, пожалуйста, следующий текст на абзацы. Глубокие нейронные сети за ' \
'последнее время стали наиболее популярным инструментом для решения большинства ' \
'задач искусственного интеллекта и особенно задач анализа и генерации текстов на ' \
'естественном языке, относящихся к т.н. “разговорному искусственному интеллекту”. ' \
'Это произошло по двум причинам: 1. Нейронная сеть строит обучаемую иерархию ' \
'представлений. 2. Эта иерархия представлений является переиспользуемой между ' \
'задачами, на чем основана известная техника переноса обучения, когда нейросетевая ' \
'модель предварительно обучается (предобучается) на большой обучающей выборке ' \
'решать ненужную задачу, для которой доступна “дешевая” или автоматическая ' \
'разметка, а потом дообучается на малой обучающей выборке, описывающей конечную ' \
'задачу и размеченной вручную. При этом глубокие нейронные сети, как и другие ' \
'методы машинного обучения, могут быть неустойчивы к ряду уязвимостей и угроз, что ' \
'создает препятствия при создании доверительного искусственного интеллекта на ' \
'базе нейросетевого подхода.'
output = generate_answer([segmentation_example], ru_llm_tokenizer,
ru_llm_config, ru_llm_model)[0]
for it in output.split('\n'):
print(f'\n{it}\n')
```
```text
Глубокие нейронные сети за последнее время стали наиболее популярным инструментом для решения большинства задач искусственного интеллекта и особенно задач анализа и генерации текстов на естественном языке, относящихся к т.н. “разговорному искусственному интеллекту”. Это произошло по двум причинам:
1. Нейронная сеть строит обучаемую иерархию представлений.
2. Эта иерархия представлений является переиспользуемой между задачами, на чем основана известная техника переноса обучения, когда нейросетевая модель предварительно обучается (предобучается) на большой обучающей выборке решать ненужную задачу, для которой доступна “дешевая” или автоматическая разметка, а потом дообучается на малой обучающей выборке, описывающей конечную задачу и размеченной вручную.
При этом глубокие нейронные сети, как и другие методы машинного обучения, могут быть неустойчивы к ряду уязвимостей и угроз, что создает препятствия при создании доверительного искусственного интеллекта на базе нейросетевого подхода.
```
### Simplification
```python
from typing import List
from transformers import T5ForConditionalGeneration
from transformers import GenerationConfig
from transformers import GPT2Tokenizer
import torch
def generate_answer(answers: List[str], tokenizer: GPT2Tokenizer, config: GenerationConfig,
model: T5ForConditionalGeneration) -> List[str]:
nonempty_answers = []
for cur in answers:
if len(cur.strip()) > 0:
nonempty_answers.append(cur)
if len(nonempty_answers) == 0:
return ['' for _ in range(len(answers))]
x = tokenizer(nonempty_answers, return_tensors='pt', padding=True).to(model.device)
out = model.generate(**x, generation_config=config)
questions_for_nonempty_texts = [
tokenizer.decode(cur, skip_special_tokens=True).strip().replace('\r\n', '\n') for cur in out
]
united_questions = []
idx = 0
for cur in answers:
if len(cur.strip()) > 0:
united_questions.append(questions_for_nonempty_texts[idx])
idx += 1
else:
united_questions.append('')
return united_questions
ru_llm_tokenizer = GPT2Tokenizer.from_pretrained('bond005/FRED-T5-large-instruct-v0.1')
ru_llm_model = T5ForConditionalGeneration.from_pretrained('bond005/FRED-T5-large-instruct-v0.1')
ru_llm_config = GenerationConfig.from_pretrained('bond005/FRED-T5-large-instruct-v0.1')
if torch.cuda.is_available():
ru_llm_model = ru_llm_model.cuda()
simplification_example = \
'Упрости, пожалуйста, следующий текст. ' \
'Краеугольным камнем любых алгоритмов машинного обучения является, прежде всего, ' \
'обобщающая способность. Когда мы обучаем некоторую модель, у нас есть обучающая ' \
'выборка, у нас есть функция ошибки, и наша задача сводится, в общем-то, ' \
'к оптимизационной задаче. Мы минимизируем функцию ошибки по параметрам нашей ' \
'модели на обучающей выборке. Но на самом деле хотим-то мы не этого! ' \
'Мы не обучающую ошибку хотим минимизировать.'
output = generate_answer([simplification_example], ru_llm_tokenizer,
ru_llm_config, ru_llm_model)[0]
print(output)
```
```text
Алгоритмы машинного обучения должны учитывать обобщающую способность, а не только функцию ошибки. При обучении модели мы минимизируем функцию ошибки, но не обучающую ошибку.
```
### Named Entity Recognition (NER)
```python
from typing import List
from transformers import T5ForConditionalGeneration
from transformers import GenerationConfig
from transformers import GPT2Tokenizer
import torch
def generate_answer(answers: List[str], tokenizer: GPT2Tokenizer, config: GenerationConfig,
model: T5ForConditionalGeneration) -> List[str]:
nonempty_answers = []
for cur in answers:
if len(cur.strip()) > 0:
nonempty_answers.append(cur)
if len(nonempty_answers) == 0:
return ['' for _ in range(len(answers))]
x = tokenizer(nonempty_answers, return_tensors='pt', padding=True).to(model.device)
out = model.generate(**x, generation_config=config)
questions_for_nonempty_texts = [
tokenizer.decode(cur, skip_special_tokens=True).strip().replace('\r\n', '\n') for cur in out
]
united_questions = []
idx = 0
for cur in answers:
if len(cur.strip()) > 0:
united_questions.append(questions_for_nonempty_texts[idx])
idx += 1
else:
united_questions.append('')
return united_questions
ru_llm_tokenizer = GPT2Tokenizer.from_pretrained('bond005/FRED-T5-large-instruct-v0.1')
ru_llm_model = T5ForConditionalGeneration.from_pretrained('bond005/FRED-T5-large-instruct-v0.1')
ru_llm_config = GenerationConfig.from_pretrained('bond005/FRED-T5-large-instruct-v0.1')
if torch.cuda.is_available():
ru_llm_model = ru_llm_model.cuda()
ner_examples = [
'Найди, пожалуйста, все именованные сущности типа "Организация" в следующем ' \
'тексте и выпиши список таких сущностей. Окончил Костромской государственный ' \
'педагогический институт по специальности "учитель истории и обществоведения, ' \
'методист воспитательной работы".', # organization
'Найди, пожалуйста, все именованные сущности типа "Человек" в следующем тексте ' \
'и выпиши список таких сущностей. С. Ситников - уроженец Костромы, пользуется ' \
'популярностью в области.', # person
'Найди, пожалуйста, все именованные сущности типа "Местоположение" в следующем ' \
'тексте и выпиши список таких сущностей. С. Ситников - уроженец Костромы, ' \
'пользуется популярностью в области.', # location
]
outputs = generate_answer(ner_examples, ru_llm_tokenizer, ru_llm_config, ru_llm_model)
for it in outputs:
print(f'\n{it}')
```
```text
Костромской государственный педагогический институт
С. Ситников
Костромы
```
### Answering arbitrary questions
```python
from typing import List
from transformers import T5ForConditionalGeneration
from transformers import GenerationConfig
from transformers import GPT2Tokenizer
import torch
def generate_answer(answers: List[str], tokenizer: GPT2Tokenizer, config: GenerationConfig,
model: T5ForConditionalGeneration) -> List[str]:
nonempty_answers = []
for cur in answers:
if len(cur.strip()) > 0:
nonempty_answers.append(cur)
if len(nonempty_answers) == 0:
return ['' for _ in range(len(answers))]
x = tokenizer(nonempty_answers, return_tensors='pt', padding=True).to(model.device)
out = model.generate(**x, generation_config=config)
questions_for_nonempty_texts = [
tokenizer.decode(cur, skip_special_tokens=True).strip().replace('\r\n', '\n') for cur in out
]
united_questions = []
idx = 0
for cur in answers:
if len(cur.strip()) > 0:
united_questions.append(questions_for_nonempty_texts[idx])
idx += 1
else:
united_questions.append('')
return united_questions
ru_llm_tokenizer = GPT2Tokenizer.from_pretrained('bond005/FRED-T5-large-instruct-v0.1')
ru_llm_model = T5ForConditionalGeneration.from_pretrained('bond005/FRED-T5-large-instruct-v0.1')
ru_llm_config = GenerationConfig.from_pretrained('bond005/FRED-T5-large-instruct-v0.1')
if torch.cuda.is_available():
ru_llm_model = ru_llm_model.cuda()
question_about_scientific_facts = 'Опишите процесс фотосинтеза.'
output = generate_answer([question_about_scientific_facts], ru_llm_tokenizer,
ru_llm_config, ru_llm_model)[0]
print(f'Вопрос: {question_about_scientific_facts[4:]}')
print(f'Ответ: {output}\n')
question_about_russian_grammar = 'Дополни предложение правильной формой глагола: ' \
'"Я ... (писать) письмо уже час."'
output = generate_answer([question_about_russian_grammar], ru_llm_tokenizer,
ru_llm_config, ru_llm_model)[0]
print(f'Вопрос: {question_about_russian_grammar[4:]}')
print(f'Ответ: {output}\n')
```
```text
Опишите процесс фотосинтеза.
Фотосинтез - это процесс, в котором растения используют энергию света для преобразования углекислого газа и воды в глюкозу и кислород. Во время фотосинтеза светосинтетические органеллы растительной клетки - хлоропласты - преобразуют световую энергию в химическую, которая затем используется для синтеза глюкозы и других органических соединений.
Дополни предложение правильной формой глагола: "Я ... (писать) письмо уже час."
Я пишу письмо уже час.
```
## Limitations of FRED-T5-large-instruct-v0.1
- **Inaccurate code and facts:** The model may produce incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.
- **Language limitations:** The model is primarily designed to understand standard Russian. Informal Russian, slang, or any other languages might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.
- **Potential societal biases:** FRED-T5-large-instruct-v0.1 is not entirely free from societal biases despite efforts to ensure training data safety. It may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. Users should be aware of this and exercise caution and critical thinking when interpreting model outputs.
- **Toxicity:** The model can still produce harmful content if explicitly prompted or instructed to do so.
|
mradermacher/Llama-3-8B-Instruct-abliterated-GGUF
|
mradermacher
| 2024-05-12T21:42:34Z | 357 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:failspy/Llama-3-8B-Instruct-abliterated",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-05-12T19:37:13Z |
---
base_model: failspy/Llama-3-8B-Instruct-abliterated
language:
- en
library_name: transformers
license: llama3
license_link: LICENSE
license_name: llama3
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/failspy/Llama-3-8B-Instruct-abliterated
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3-8B-Instruct-abliterated-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
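For a quick chat-style test, a minimal sketch might look like this (assumes a recent `llama-cpp-python` that provides `Llama.from_pretrained`; the filename matches the Q4_K_M entry in the table below):
```python
# Minimal sketch; relies on the chat template stored in the GGUF metadata.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Llama-3-8B-Instruct-abliterated-GGUF",
    filename="Llama-3-8B-Instruct-abliterated.Q4_K_M.gguf",
    n_ctx=8192,
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain in one sentence what a GGUF quant is."},
]
print(llm.create_chat_completion(messages=messages)["choices"][0]["message"]["content"])
```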
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-abliterated-GGUF/resolve/main/Llama-3-8B-Instruct-abliterated.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-abliterated-GGUF/resolve/main/Llama-3-8B-Instruct-abliterated.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-abliterated-GGUF/resolve/main/Llama-3-8B-Instruct-abliterated.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-abliterated-GGUF/resolve/main/Llama-3-8B-Instruct-abliterated.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-abliterated-GGUF/resolve/main/Llama-3-8B-Instruct-abliterated.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-abliterated-GGUF/resolve/main/Llama-3-8B-Instruct-abliterated.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-abliterated-GGUF/resolve/main/Llama-3-8B-Instruct-abliterated.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-abliterated-GGUF/resolve/main/Llama-3-8B-Instruct-abliterated.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-abliterated-GGUF/resolve/main/Llama-3-8B-Instruct-abliterated.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-abliterated-GGUF/resolve/main/Llama-3-8B-Instruct-abliterated.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-abliterated-GGUF/resolve/main/Llama-3-8B-Instruct-abliterated.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-abliterated-GGUF/resolve/main/Llama-3-8B-Instruct-abliterated.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-abliterated-GGUF/resolve/main/Llama-3-8B-Instruct-abliterated.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-abliterated-GGUF/resolve/main/Llama-3-8B-Instruct-abliterated.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-abliterated-GGUF/resolve/main/Llama-3-8B-Instruct-abliterated.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
flammenai/Mahou-1.2-llama3-8B-GGUF
|
flammenai
| 2024-05-17T13:08:14Z | 357 | 2 |
transformers
|
[
"transformers",
"gguf",
"dataset:flammenai/Grill-preprod-v1_chatML",
"dataset:flammenai/Grill-preprod-v2_chatML",
"base_model:flammenai/Mahou-1.2-llama3-8B",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-05-17T01:37:21Z |
---
library_name: transformers
tags: []
base_model:
- flammenai/Mahou-1.2-llama3-8B
datasets:
- flammenai/Grill-preprod-v1_chatML
- flammenai/Grill-preprod-v2_chatML
license: llama3
---

# Mahou-1.2-llama3-8B
Mahou is our attempt to build a production-ready conversational/roleplay LLM.
Future versions will be released iteratively and finetuned from flammen.ai conversational data.
### Chat Format
This model has been trained to use ChatML format.
```
<|im_start|>system
{{system}}<|im_end|>
<|im_start|>{{char}}
{{message}}<|im_end|>
<|im_start|>{{user}}
{{message}}<|im_end|>
```
### ST Settings
1. Use ChatML for the Context Template.
2. Turn on Instruct Mode for ChatML.
3. Use the following stopping strings: `["<", "|", "<|", "\n"]`
### License
This model is based on Meta Llama-3-8B and is governed by the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](LICENSE).
### Method
Finetuned using an A100 on Google Colab.
[Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac) - [Maxime Labonne](https://huggingface.co/mlabonne)
### Configuration
LoRA, model, and training settings:
```python
# Assumed imports and objects (not shown in the original snippet): model_name, new_model,
# dataset and tokenizer are expected to be defined earlier in the notebook.
import torch
from peft import LoraConfig
from transformers import AutoModelForCausalLM, TrainingArguments
from trl import DPOTrainer

# LoRA configuration
peft_config = LoraConfig(
r=16,
lora_alpha=16,
lora_dropout=0.05,
bias="none",
task_type="CAUSAL_LM",
target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
)
# Model to fine-tune
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
load_in_4bit=True
)
model.config.use_cache = False
# Reference model
ref_model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
load_in_4bit=True
)
# Training arguments
training_args = TrainingArguments(
per_device_train_batch_size=2,
gradient_accumulation_steps=4,
gradient_checkpointing=True,
learning_rate=5e-5,
lr_scheduler_type="cosine",
max_steps=1000,
save_strategy="no",
logging_steps=1,
output_dir=new_model,
optim="paged_adamw_32bit",
warmup_steps=100,
bf16=True,
report_to="wandb",
)
# Create DPO trainer
dpo_trainer = DPOTrainer(
model,
ref_model,
args=training_args,
train_dataset=dataset,
tokenizer=tokenizer,
peft_config=peft_config,
beta=0.1,
force_use_ref_model=True
)
```
|
MaziyarPanahi/Multi_verse_modelInex12-7B-GGUF
|
MaziyarPanahi
| 2024-05-21T17:55:20Z | 357 | 0 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"conversational",
"base_model:MTSAIR/multi_verse_model",
"base_model:MSL7/INEX12-7b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:automerger/Multi_verse_modelInex12-7B"
] |
text-generation
| 2024-05-21T17:23:38Z |
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- merge
- mergekit
- lazymergekit
- automerger
- conversational
- base_model:MTSAIR/multi_verse_model
- base_model:MSL7/INEX12-7b
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: Multi_verse_modelInex12-7B-GGUF
base_model: automerger/Multi_verse_modelInex12-7B
inference: false
model_creator: automerger
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Multi_verse_modelInex12-7B-GGUF](https://huggingface.co/MaziyarPanahi/Multi_verse_modelInex12-7B-GGUF)
- Model creator: [automerger](https://huggingface.co/automerger)
- Original model: [automerger/Multi_verse_modelInex12-7B](https://huggingface.co/automerger/Multi_verse_modelInex12-7B)
## Description
[MaziyarPanahi/Multi_verse_modelInex12-7B-GGUF](https://huggingface.co/MaziyarPanahi/Multi_verse_modelInex12-7B-GGUF) contains GGUF format model files for [automerger/Multi_verse_modelInex12-7B](https://huggingface.co/automerger/Multi_verse_modelInex12-7B).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
|
crusoeai/dolphin-2.9.1-mixtral-1x22b-GGUF
|
crusoeai
| 2024-05-22T21:58:01Z | 357 | 1 | null |
[
"gguf",
"region:us"
] | null | 2024-05-22T20:28:51Z |
Entry not found
|
icefog72/IceBlendedLatteRP-7b
|
icefog72
| 2024-06-03T05:19:38Z | 357 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"alpaca",
"conversational",
"arxiv:2212.04089",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-06-03T02:06:44Z |
---
license: cc-by-nc-4.0
library_name: transformers
tags:
- mergekit
- merge
- alpaca
- mistral
---
# IceBlendedLatteRP-7b
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using G:\FModels\Mistroll-7B-v2.2 as a base.
### Models Merged
The following models were included in the merge:
* G:\FModels\IceCaffeLatteRP-7b
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: G:\FModels\Mistroll-7B-v2.2
dtype: bfloat16
merge_method: task_arithmetic
slices:
- sources:
- layer_range: [0, 32]
model: G:\FModels\Mistroll-7B-v2.2
- layer_range: [0, 32]
model: G:\FModels\IceCaffeLatteRP-7b
parameters:
weight: 1.00
```
|
mradermacher/MindFlay-22B-GGUF
|
mradermacher
| 2024-06-14T14:58:35Z | 357 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Envoid/MindFlay-22B",
"endpoints_compatible",
"region:us"
] | null | 2024-06-13T06:47:25Z |
---
base_model: Envoid/MindFlay-22B
language:
- en
library_name: transformers
no_imatrix: nan2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Envoid/MindFlay-22B
<!-- provided-files -->
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MindFlay-22B-GGUF/resolve/main/MindFlay-22B.Q2_K.gguf) | Q2_K | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/MindFlay-22B-GGUF/resolve/main/MindFlay-22B.IQ3_XS.gguf) | IQ3_XS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/MindFlay-22B-GGUF/resolve/main/MindFlay-22B.IQ3_S.gguf) | IQ3_S | 9.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MindFlay-22B-GGUF/resolve/main/MindFlay-22B.Q3_K_S.gguf) | Q3_K_S | 9.6 | |
| [GGUF](https://huggingface.co/mradermacher/MindFlay-22B-GGUF/resolve/main/MindFlay-22B.IQ3_M.gguf) | IQ3_M | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/MindFlay-22B-GGUF/resolve/main/MindFlay-22B.Q3_K_M.gguf) | Q3_K_M | 10.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MindFlay-22B-GGUF/resolve/main/MindFlay-22B.Q3_K_L.gguf) | Q3_K_L | 11.7 | |
| [GGUF](https://huggingface.co/mradermacher/MindFlay-22B-GGUF/resolve/main/MindFlay-22B.IQ4_XS.gguf) | IQ4_XS | 11.9 | |
| [GGUF](https://huggingface.co/mradermacher/MindFlay-22B-GGUF/resolve/main/MindFlay-22B.Q4_K_S.gguf) | Q4_K_S | 12.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MindFlay-22B-GGUF/resolve/main/MindFlay-22B.Q4_K_M.gguf) | Q4_K_M | 13.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MindFlay-22B-GGUF/resolve/main/MindFlay-22B.Q5_K_S.gguf) | Q5_K_S | 15.1 | |
| [GGUF](https://huggingface.co/mradermacher/MindFlay-22B-GGUF/resolve/main/MindFlay-22B.Q5_K_M.gguf) | Q5_K_M | 15.6 | |
| [GGUF](https://huggingface.co/mradermacher/MindFlay-22B-GGUF/resolve/main/MindFlay-22B.Q6_K.gguf) | Q6_K | 18.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MindFlay-22B-GGUF/resolve/main/MindFlay-22B.Q8_0.gguf) | Q8_0 | 23.3 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
defex/distilgpt2-finetuned-amazon-reviews
|
defex
| 2021-07-21T10:36:15Z | 356 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- null
model_index:
- name: distilgpt2-finetuned-amazon-reviews
results:
- task:
name: Causal Language Modeling
type: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-amazon-reviews
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Datasets 1.9.0
- Tokenizers 0.10.3
|
google/tapas-base-finetuned-wikisql-supervised
|
google
| 2021-11-29T13:05:40Z | 356 | 8 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"tapas",
"table-question-answering",
"en",
"dataset:wikisql",
"arxiv:2004.02349",
"arxiv:2010.00571",
"arxiv:1709.00103",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
table-question-answering
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- tapas
license: apache-2.0
datasets:
- wikisql
---
# TAPAS base model fine-tuned on WikiSQL (in a supervised fashion)
This model has 2 versions which can be used. The default version corresponds to the `tapas_wikisql_sqa_inter_masklm_base_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned in a chain on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253) and [WikiSQL](https://github.com/salesforce/WikiSQL). It uses relative position embeddings (i.e. resetting the position index at every cell of the table).
The other (non-default) version which can be used is:
- `no_reset`, which corresponds to `tapas_wikisql_sqa_inter_masklm_base` (intermediate pre-training, absolute position embeddings).
Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
the Hugging Face team and contributors.
## Model description
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in
the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating
a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence
is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
or refuted by the contents of a table. Fine-tuning is done by adding a cell selection head and aggregation head on top of the pre-trained model, and then jointly training these randomly initialized classification heads with the base model on SQA and WikiSQL.
## Intended uses & limitations
You can use this model for answering questions related to a table.
For code examples, we refer to the documentation of TAPAS on the HuggingFace website.
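As a starting point, a minimal sketch with the `table-question-answering` pipeline (the table and query are purely illustrative):
```python
# Minimal sketch; TAPAS expects table cells as strings.
import pandas as pd
from transformers import pipeline

tqa = pipeline(
    "table-question-answering",
    model="google/tapas-base-finetuned-wikisql-supervised",
)

table = pd.DataFrame(
    {
        "City": ["Paris", "Berlin", "Madrid"],
        "Population": ["2161000", "3645000", "3223000"],
    }
)
print(tqa(table=table, query="Which city has the largest population?"))
```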
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Question [SEP] Flattened table [SEP]
```
The authors first converted the WikiSQL dataset into the format of SQA using automatic conversion scripts.
### Fine-tuning
The model was fine-tuned on 32 Cloud TPU v3 cores for 50,000 steps with maximum sequence length 512 and batch size of 512.
In this setup, fine-tuning takes around 10 hours. The optimizer used is Adam with a learning rate of 6.17164e-5, and a warmup
ratio of 0.1424. See the [paper](https://arxiv.org/abs/2004.02349) for more details (tables 11 and 12).
### BibTeX entry and citation info
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
```bibtex
@misc{eisenschlos2020understanding,
title={Understanding tables with intermediate pre-training},
author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
year={2020},
eprint={2010.00571},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@article{DBLP:journals/corr/abs-1709-00103,
author = {Victor Zhong and
Caiming Xiong and
Richard Socher},
title = {Seq2SQL: Generating Structured Queries from Natural Language using
Reinforcement Learning},
journal = {CoRR},
volume = {abs/1709.00103},
year = {2017},
url = {http://arxiv.org/abs/1709.00103},
archivePrefix = {arXiv},
eprint = {1709.00103},
timestamp = {Mon, 13 Aug 2018 16:48:41 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1709-00103.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
hfl/rbt6
|
hfl
| 2021-05-19T19:22:02Z | 356 | 6 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"zh",
"arxiv:1906.08101",
"arxiv:2004.13922",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language:
- zh
tags:
- bert
license: "apache-2.0"
---
# This is a re-trained 6-layer RoBERTa-wwm-ext model.
## Chinese BERT with Whole Word Masking
To further accelerate Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**.
**[Pre-Training with Whole Word Masking for Chinese BERT](https://arxiv.org/abs/1906.08101)**
Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu
This repository is developed based on: https://github.com/google-research/bert
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese MacBERT: https://github.com/ymcui/MacBERT
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
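As a quick sanity check, the model can be used with the standard fill-mask pipeline (the example sentence is illustrative):
```python
# Minimal fill-mask sketch; the Chinese example sentence is illustrative.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="hfl/rbt6")
print(unmasker("哈尔滨是[MASK]龙江省的省会。"))
```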
## Citation
If you find the technical report or resources useful, please cite the following technical report in your paper.
- Primary: https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
```
- Secondary: https://arxiv.org/abs/1906.08101
```
@article{chinese-bert-wwm,
title={Pre-Training with Whole Word Masking for Chinese BERT},
author={Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Yang, Ziqing and Wang, Shijin and Hu, Guoping},
journal={arXiv preprint arXiv:1906.08101},
year={2019}
}
```
|
prem-timsina/segformer-b0-finetuned-food
|
prem-timsina
| 2023-05-08T18:06:54Z | 356 | 5 |
transformers
|
[
"transformers",
"pytorch",
"segformer",
"image-segmentation",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2023-05-07T19:04:19Z |
---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: image-segmentation
---
|
FinancialSupport/gpt2-ft-medical-qa
|
FinancialSupport
| 2023-07-14T16:56:47Z | 356 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-07-14T15:27:42Z |
---
pipeline_tag: text-generation
widget:
- text: "Question: My belly hurts.\nAnswer:"
example_title: "Q&A"
---
Hello!
We fine-tuned GPT-2 on a medical Q&A dataset.
This is just a test.
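A minimal generation sketch (assuming the "Question: … Answer:" format from the widget example; sampling parameters are illustrative):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint with the generic text-generation pipeline
generator = pipeline("text-generation", model="FinancialSupport/gpt2-ft-medical-qa")

# Follow the same prompt format as the widget example
prompt = "Question: My belly hurts.\nAnswer:"
result = generator(prompt, max_new_tokens=64, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])
```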
---
license: mit
datasets:
- medical_dialog
language:
- en
---
|
TDC2023/trojan-base-pythia-1.4b-dev-phase
|
TDC2023
| 2023-07-25T18:53:22Z | 356 | 3 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-07-25T18:51:28Z |
Entry not found
|
facebook/mms-tts-tha
|
facebook
| 2023-09-01T11:19:24Z | 356 | 7 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-09-01T11:19:01Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Thai Text-to-Speech
This repository contains the **Thai (tha)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-tha")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-tha")
text = "some example text in the Thai language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs).waveform
```
The resulting waveform can be saved as a `.wav` file:
```python
import scipy

# Squeeze the batch dimension and convert to a NumPy array so scipy writes a mono waveform
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
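Because the duration predictor is stochastic (see Model Details above), fixing the random seed is required to reproduce the same waveform for the same text. A minimal sketch, reusing `model` and `inputs` from the snippet above (the seed value is arbitrary):
```python
import torch
from transformers import set_seed

# The stochastic duration predictor makes synthesis non-deterministic;
# fixing the seed makes repeated runs produce identical waveforms.
set_seed(555)

with torch.no_grad():
    output = model(**inputs).waveform
```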
## BibTex citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
|
TheBloke/UndiMix-v4-13B-GGUF
|
TheBloke
| 2023-10-06T19:18:34Z | 356 | 5 |
transformers
|
[
"transformers",
"gguf",
"llama",
"base_model:Undi95/UndiMix-v4-13B",
"license:cc-by-nc-4.0",
"text-generation-inference",
"region:us"
] | null | 2023-10-06T19:10:05Z |
---
base_model: Undi95/UndiMix-v4-13B
inference: false
license: cc-by-nc-4.0
model_creator: Undi
model_name: Undimix v4 13B
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Undimix v4 13B - GGUF
- Model creator: [Undi](https://huggingface.co/Undi95)
- Original model: [Undimix v4 13B](https://huggingface.co/Undi95/UndiMix-v4-13B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Undi's Undimix v4 13B](https://huggingface.co/Undi95/UndiMix-v4-13B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/UndiMix-v4-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/UndiMix-v4-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/UndiMix-v4-13B-GGUF)
* [Undi's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Undi95/UndiMix-v4-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Undi's Undimix v4 13B](https://huggingface.co/Undi95/UndiMix-v4-13B).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
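As a rough, illustrative sanity check of where these figures come from (assuming one fp16 scale and one fp16 min per super-block — an assumption, not taken from the llama.cpp source), the 4.5 bpw quoted for Q4_K can be reproduced as follows:
```python
# Back-of-the-envelope bits-per-weight for GGML_TYPE_Q4_K.
# Assumption: one fp16 super-block scale plus one fp16 super-block min.
blocks_per_superblock = 8
weights_per_block = 32
weights = blocks_per_superblock * weights_per_block        # 256 weights per super-block

quant_bits = weights * 4                                    # 4-bit quantized weights
scale_min_bits = blocks_per_superblock * 2 * 6              # 6-bit scale + 6-bit min per block
superblock_fp16_bits = 2 * 16                               # fp16 scale + fp16 min

bpw = (quant_bits + scale_min_bits + superblock_fp16_bits) / weights
print(bpw)  # 4.5, matching the Q4_K row in the table below
```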
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [undimix-v4-13b.Q2_K.gguf](https://huggingface.co/TheBloke/UndiMix-v4-13B-GGUF/blob/main/undimix-v4-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [undimix-v4-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/UndiMix-v4-13B-GGUF/blob/main/undimix-v4-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [undimix-v4-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/UndiMix-v4-13B-GGUF/blob/main/undimix-v4-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [undimix-v4-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/UndiMix-v4-13B-GGUF/blob/main/undimix-v4-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [undimix-v4-13b.Q4_0.gguf](https://huggingface.co/TheBloke/UndiMix-v4-13B-GGUF/blob/main/undimix-v4-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [undimix-v4-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/UndiMix-v4-13B-GGUF/blob/main/undimix-v4-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [undimix-v4-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/UndiMix-v4-13B-GGUF/blob/main/undimix-v4-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [undimix-v4-13b.Q5_0.gguf](https://huggingface.co/TheBloke/UndiMix-v4-13B-GGUF/blob/main/undimix-v4-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [undimix-v4-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/UndiMix-v4-13B-GGUF/blob/main/undimix-v4-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [undimix-v4-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/UndiMix-v4-13B-GGUF/blob/main/undimix-v4-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [undimix-v4-13b.Q6_K.gguf](https://huggingface.co/TheBloke/UndiMix-v4-13B-GGUF/blob/main/undimix-v4-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [undimix-v4-13b.Q8_0.gguf](https://huggingface.co/TheBloke/UndiMix-v4-13B-GGUF/blob/main/undimix-v4-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/UndiMix-v4-13B-GGUF and below it, a specific filename to download, such as: undimix-v4-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/UndiMix-v4-13B-GGUF undimix-v4-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/UndiMix-v4-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/UndiMix-v4-13B-GGUF undimix-v4-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m undimix-v4-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/UndiMix-v4-13B-GGUF", model_file="undimix-v4-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Undi's Undimix v4 13B
<!-- description start -->
## Description
This repo contains fp16 files of personal mix: "UndiMix-v4".
It can be hot, serious, playful, and can use emoji thanks to llama-2-13b-chat-limarp-v2-merged.
Atomicorn... Hope you will like this one kek, you waited enough.
<!-- description end -->
<!-- description start -->
## Models used
- Undi95/ReMM-v2-Kimiko-v2-13B (0.272) (base)
- The-Face-Of-Goonery/Huginn-13b-v1.2 (0.264)
- Doctor-Shotgun/llama-2-13b-chat-limarp-v2-merged (0.264)
- jondurbin/airoboros-l2-13b-2.2 (0.10)
- IkariDev/Athena-v1 (0.10)
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
Special thanks to Sushi.
<!-- original-model-card end -->
|
TheBloke/Vigostral-7B-Chat-GGUF
|
TheBloke
| 2023-10-24T16:25:05Z | 356 | 4 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"LLM",
"finetuned",
"text-generation",
"fr",
"base_model:bofenghuang/vigostral-7b-chat",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-10-24T16:19:47Z |
---
base_model: bofenghuang/vigostral-7b-chat
inference: false
language: fr
license: apache-2.0
model_creator: bofeng huang
model_name: Vigostral 7B Chat
model_type: mistral
pipeline_tag: text-generation
prompt_template: "<s>[INST] <<SYS>>\nVous \xEAtes Vigogne, un assistant IA cr\xE9\xE9\
\ par Zaion Lab. Vous suivez extr\xEAmement bien les instructions. Aidez autant\
\ que vous le pouvez.\n<</SYS>>\n\n{prompt} [/INST] \n"
quantized_by: TheBloke
tags:
- LLM
- finetuned
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Vigostral 7B Chat - GGUF
- Model creator: [bofeng huang](https://huggingface.co/bofenghuang)
- Original model: [Vigostral 7B Chat](https://huggingface.co/bofenghuang/vigostral-7b-chat)
<!-- description start -->
## Description
This repo contains GGUF format model files for [bofeng huang's Vigostral 7B Chat](https://huggingface.co/bofenghuang/vigostral-7b-chat).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Vigostral-7B-Chat-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF)
* [bofeng huang's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/bofenghuang/vigostral-7b-chat)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Vigogne-Llama-2-Chat
```
<s>[INST] <<SYS>>
Vous êtes Vigogne, un assistant IA créé par Zaion Lab. Vous suivez extrêmement bien les instructions. Aidez autant que vous le pouvez.
<</SYS>>
{prompt} [/INST]
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [vigostral-7b-chat.Q2_K.gguf](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF/blob/main/vigostral-7b-chat.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [vigostral-7b-chat.Q3_K_S.gguf](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF/blob/main/vigostral-7b-chat.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [vigostral-7b-chat.Q3_K_M.gguf](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF/blob/main/vigostral-7b-chat.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [vigostral-7b-chat.Q3_K_L.gguf](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF/blob/main/vigostral-7b-chat.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [vigostral-7b-chat.Q4_0.gguf](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF/blob/main/vigostral-7b-chat.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [vigostral-7b-chat.Q4_K_S.gguf](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF/blob/main/vigostral-7b-chat.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [vigostral-7b-chat.Q4_K_M.gguf](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF/blob/main/vigostral-7b-chat.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [vigostral-7b-chat.Q5_0.gguf](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF/blob/main/vigostral-7b-chat.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [vigostral-7b-chat.Q5_K_S.gguf](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF/blob/main/vigostral-7b-chat.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [vigostral-7b-chat.Q5_K_M.gguf](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF/blob/main/vigostral-7b-chat.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [vigostral-7b-chat.Q6_K.gguf](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF/blob/main/vigostral-7b-chat.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [vigostral-7b-chat.Q8_0.gguf](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF/blob/main/vigostral-7b-chat.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Vigostral-7B-Chat-GGUF and below it, a specific filename to download, such as: vigostral-7b-chat.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Vigostral-7B-Chat-GGUF vigostral-7b-chat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Vigostral-7B-Chat-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Vigostral-7B-Chat-GGUF vigostral-7b-chat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m vigostral-7b-chat.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] <<SYS>>\nVous êtes Vigogne, un assistant IA créé par Zaion Lab. Vous suivez extrêmement bien les instructions. Aidez autant que vous le pouvez.\n<</SYS>>\n\n{prompt} [/INST]"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Vigostral-7B-Chat-GGUF", model_file="vigostral-7b-chat.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)
print(llm("AI is going to"))
```
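Alternatively, a hedged sketch using `llama-cpp-python` (parameter values are illustrative; the prompt follows the Vigogne chat template shown earlier):
```python
from llama_cpp import Llama

# Point model_path at the downloaded GGUF file; set n_gpu_layers=0 for CPU-only inference
llm = Llama(model_path="vigostral-7b-chat.Q4_K_M.gguf", n_ctx=2048, n_gpu_layers=50)

prompt = (
    "<s>[INST] <<SYS>>\n"
    "Vous êtes Vigogne, un assistant IA créé par Zaion Lab. "
    "Vous suivez extrêmement bien les instructions. Aidez autant que vous le pouvez.\n"
    "<</SYS>>\n\n"
    "Quelle est la hauteur de la Tour Eiffel ? [/INST]"
)

output = llm(prompt, max_tokens=256, temperature=0.7)
print(output["choices"][0]["text"])
```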
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: bofeng huang's Vigostral 7B Chat
# Vigostral-7B-Chat: A French chat LLM
***Preview*** of Vigostral-7B-Chat, a new addition to the Vigogne LLMs family, fine-tuned on [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).
For more information, please visit the [Github repository](https://github.com/bofenghuang/vigogne).
**License**: A significant portion of the training data is distilled from GPT-3.5-Turbo and GPT-4, kindly use it cautiously to avoid any violations of OpenAI's [terms of use](https://openai.com/policies/terms-of-use).
## Prompt Template
We used a prompt template adapted from the chat format of Llama-2.
You can apply this formatting using the [chat template](https://huggingface.co/docs/transformers/main/chat_templating) through the `apply_chat_template()` method.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bofenghuang/vigostral-7b-chat")
conversation = [
{"role": "user", "content": "Bonjour ! Comment ça va aujourd'hui ?"},
{"role": "assistant", "content": "Bonjour ! Je suis une IA, donc je n'ai pas de sentiments, mais je suis prêt à vous aider. Comment puis-je vous assister aujourd'hui ?"},
{"role": "user", "content": "Quelle est la hauteur de la Tour Eiffel ?"},
{"role": "assistant", "content": "La Tour Eiffel mesure environ 330 mètres de hauteur."},
{"role": "user", "content": "Comment monter en haut ?"},
]
print(tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True))
```
You will get
```
<s>[INST] <<SYS>>
Vous êtes Vigogne, un assistant IA créé par Zaion Lab. Vous suivez extrêmement bien les instructions. Aidez autant que vous le pouvez.
<</SYS>>
Bonjour ! Comment ça va aujourd'hui ? [/INST] Bonjour ! Je suis une IA, donc je n'ai pas de sentiments, mais je suis prêt à vous aider. Comment puis-je vous assister aujourd'hui ? </s>[INST] Quelle est la hauteur de la Tour Eiffel ? [/INST] La Tour Eiffel mesure environ 330 mètres de hauteur. </s>[INST] Comment monter en haut ? [/INST]
```
## Usage
### Inference using the unquantized model with 🤗 Transformers
```python
from typing import Dict, List, Optional
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig, TextStreamer
model_name_or_path = "bofenghuang/vigostral-7b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, padding_side="right", use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, torch_dtype=torch.float16, device_map="auto")
streamer = TextStreamer(tokenizer, timeout=10.0, skip_prompt=True, skip_special_tokens=True)
def chat(
query: str,
history: Optional[List[Dict]] = None,
temperature: float = 0.7,
top_p: float = 1.0,
top_k: float = 0,
repetition_penalty: float = 1.1,
max_new_tokens: int = 1024,
**kwargs,
):
if history is None:
history = []
history.append({"role": "user", "content": query})
input_ids = tokenizer.apply_chat_template(history, return_tensors="pt").to(model.device)
input_length = input_ids.shape[1]
generated_outputs = model.generate(
input_ids=input_ids,
generation_config=GenerationConfig(
temperature=temperature,
do_sample=temperature > 0.0,
top_p=top_p,
top_k=top_k,
repetition_penalty=repetition_penalty,
max_new_tokens=max_new_tokens,
pad_token_id=tokenizer.eos_token_id,
**kwargs,
),
streamer=streamer,
return_dict_in_generate=True,
)
generated_tokens = generated_outputs.sequences[0, input_length:]
generated_text = tokenizer.decode(generated_tokens, skip_special_tokens=True)
history.append({"role": "assistant", "content": generated_text})
return generated_text, history
# 1st round
response, history = chat("Un escargot parcourt 100 mètres en 5 heures. Quelle est sa vitesse ?", history=None)
# Pour calculer la vitesse d'un escargot qui parcourt 100 mètres en 5 heures, nous devons diviser la distance parcourue par le temps écoulé. Voici comment procédons :
#
# Vitesse = Distance / Temps
#
# Dans ce cas, la distance est de 100 mètres et le temps est de 5 heures. Donc, nous avons :
#
# Vitesse = 100 mètres / 5 heures
# Vitesse = 20 mètres par heure
#
# L'escargot parcourt donc une vitesse de 20 mètres par heure.
# 2nd round
response, history = chat("Quand il peut dépasser le lapin ?", history=history)
# Malheureusement, je ne peux pas répondre à cette question car elle contient des informations inexactes ou incorrectes. La vitesse de l'escargot est de 20 mètres par heure, tandis que la vitesse du lapin n'est pas mentionnée dans votre question initiale. Pour comparer les vitesses des deux animaux, nous aurions besoin de connaître la vitesse du lapin.
# 3rd round
response, history = chat("Écris une histoire imaginative qui met en scène une compétition de course entre un escargot et un lapin.", history=history)
# Il était une fois deux athlètes extraordinaires : un escargot nommé Escargo et un lapin appelé Lapin Rapide. Un jour, ils ont décidé de participer à une grande course pour déterminer qui était le plus rapide des deux. Tout le monde était curieux de savoir qui serait le gagnant, car ils étaient tous émerveillés par la vitesse de ces deux champions.
#
# La course commença dans une forêt dense, avec Escargo départageant un peu lentement, car il faut être prudent lorsqu'on est escargot. Mais au fur et à mesure qu'il avançait, il commença à ralentir encore plus. Les spectateurs commencèrent à pencher pour Lapin Rapide, qui fonçait rapidement vers la ligne d'arrivée.
#
# Mais soudain, Escargo se transforma en une force incroyable. Au lieu de marcher lentement comme un escargot, il courait à toute allure, faisant preuve d'une vitesse surprenante. Les spectateurs essayaient de se remémorer leurs notes de mathématiques pour calculer sa vélocité actuelle.
#
# Pendant ce temps, Lapin Rapide ralentissait légèrement, trouvant l'air frais trop confortable pour continuer à courir aussi vite. Il décida alors de prendre quelques pauses pour profiter de son environnement.
#
# Escargo continuait à courir à toute vitesse, en dépit de son handicap d'être un escargot. Les spectateurs étaient émerveillés par sa persévérance et sa volonté de gagner. Finalement, Escargo franchit la ligne d'arrivée en premier, et tous criaurent en joie.
#
# Les habitants de la forêt décidèrent de lui décerner le titre d'"athlète le plus courageux" pour sa performance incroyable. Quant à Lapin Rapide, il fut content de sa deuxième place, se disant simplement que les pauses étaient bien plus agréables que la compétition. Et tous vécurent heureux et satisfaits de cette course mémorable.
```
You can also use the Google Colab Notebook provided below.
<a href="https://colab.research.google.com/github/bofenghuang/vigogne/blob/main/notebooks/infer_chat.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
### Inference using the unquantized model with vLLM
Set up an OpenAI-compatible server with the following command:
```bash
# Install vLLM
# This may take 5-10 minutes.
# pip install vllm
# Start server for Vigostral-Chat models
python -m vllm.entrypoints.openai.api_server --model bofenghuang/vigostral-7b-chat
# List models
# curl http://localhost:8000/v1/models
```
Query the model using the openai python package.
```python
import openai
# Modify OpenAI's API key and API base to use vLLM's API server.
openai.api_key = "EMPTY"
openai.api_base = "http://localhost:8000/v1"
# First model
models = openai.Model.list()
model = models["data"][0]["id"]
# Chat completion API
chat_completion = openai.ChatCompletion.create(
model=model,
messages=[
{"role": "user", "content": "Parle-moi de toi-même."},
],
max_tokens=1024,
temperature=0.7,
)
print("Chat completion results:", chat_completion)
```
## Limitations
Vigogne is still under development, and there are many limitations that have to be addressed. Please note that it is possible that the model generates harmful or biased content, incorrect information or generally unhelpful answers.
<!-- original-model-card end -->
|
TheBloke/Gale-medium-init-3B-GGUF
|
TheBloke
| 2023-10-28T08:39:21Z | 356 | 2 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"base_model:crumb/Gale-medium-init",
"text-generation-inference",
"region:us"
] | null | 2023-10-28T07:40:40Z |
---
base_model: crumb/Gale-medium-init
inference: false
model_creator: Maxine
model_name: Gale Medium Init 3B
model_type: mistral
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Gale Medium Init 3B - GGUF
- Model creator: [Maxine](https://huggingface.co/crumb)
- Original model: [Gale Medium Init 3B](https://huggingface.co/crumb/Gale-medium-init)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Maxine's Gale Medium Init 3B](https://huggingface.co/crumb/Gale-medium-init).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Gale-medium-init-3B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Gale-medium-init-3B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Gale-medium-init-3B-GGUF)
* [Maxine's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/crumb/Gale-medium-init)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Unknown
```
{prompt}
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [gale-medium-init-3B.Q2_K.gguf](https://huggingface.co/TheBloke/Gale-medium-init-3B-GGUF/blob/main/gale-medium-init-3B.Q2_K.gguf) | Q2_K | 2 | 1.34 GB| 3.84 GB | smallest, significant quality loss - not recommended for most purposes |
| [gale-medium-init-3B.Q3_K_S.gguf](https://huggingface.co/TheBloke/Gale-medium-init-3B-GGUF/blob/main/gale-medium-init-3B.Q3_K_S.gguf) | Q3_K_S | 3 | 1.38 GB| 3.88 GB | very small, high quality loss |
| [gale-medium-init-3B.Q3_K_M.gguf](https://huggingface.co/TheBloke/Gale-medium-init-3B-GGUF/blob/main/gale-medium-init-3B.Q3_K_M.gguf) | Q3_K_M | 3 | 1.54 GB| 4.04 GB | very small, high quality loss |
| [gale-medium-init-3B.Q3_K_L.gguf](https://huggingface.co/TheBloke/Gale-medium-init-3B-GGUF/blob/main/gale-medium-init-3B.Q3_K_L.gguf) | Q3_K_L | 3 | 1.65 GB| 4.15 GB | small, substantial quality loss |
| [gale-medium-init-3B.Q4_0.gguf](https://huggingface.co/TheBloke/Gale-medium-init-3B-GGUF/blob/main/gale-medium-init-3B.Q4_0.gguf) | Q4_0 | 4 | 1.78 GB| 4.28 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [gale-medium-init-3B.Q4_K_S.gguf](https://huggingface.co/TheBloke/Gale-medium-init-3B-GGUF/blob/main/gale-medium-init-3B.Q4_K_S.gguf) | Q4_K_S | 4 | 1.81 GB| 4.31 GB | small, greater quality loss |
| [gale-medium-init-3B.Q4_K_M.gguf](https://huggingface.co/TheBloke/Gale-medium-init-3B-GGUF/blob/main/gale-medium-init-3B.Q4_K_M.gguf) | Q4_K_M | 4 | 1.87 GB| 4.37 GB | medium, balanced quality - recommended |
| [gale-medium-init-3B.Q5_0.gguf](https://huggingface.co/TheBloke/Gale-medium-init-3B-GGUF/blob/main/gale-medium-init-3B.Q5_0.gguf) | Q5_0 | 5 | 2.15 GB| 4.65 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [gale-medium-init-3B.Q5_K_S.gguf](https://huggingface.co/TheBloke/Gale-medium-init-3B-GGUF/blob/main/gale-medium-init-3B.Q5_K_S.gguf) | Q5_K_S | 5 | 2.15 GB| 4.65 GB | large, low quality loss - recommended |
| [gale-medium-init-3B.Q5_K_M.gguf](https://huggingface.co/TheBloke/Gale-medium-init-3B-GGUF/blob/main/gale-medium-init-3B.Q5_K_M.gguf) | Q5_K_M | 5 | 2.20 GB| 4.70 GB | large, very low quality loss - recommended |
| [gale-medium-init-3B.Q6_K.gguf](https://huggingface.co/TheBloke/Gale-medium-init-3B-GGUF/blob/main/gale-medium-init-3B.Q6_K.gguf) | Q6_K | 6 | 2.54 GB| 5.04 GB | very large, extremely low quality loss |
| [gale-medium-init-3B.Q8_0.gguf](https://huggingface.co/TheBloke/Gale-medium-init-3B-GGUF/blob/main/gale-medium-init-3B.Q8_0.gguf) | Q8_0 | 8 | 3.29 GB| 5.79 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Gale-medium-init-3B-GGUF and below it, a specific filename to download, such as: gale-medium-init-3B.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Gale-medium-init-3B-GGUF gale-medium-init-3B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Gale-medium-init-3B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Gale-medium-init-3B-GGUF gale-medium-init-3B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m gale-medium-init-3B.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Gale-medium-init-3B-GGUF", model_file="gale-medium-init-3B.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)
print(llm("AI is going to"))
```
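A roughly equivalent llama-cpp-python snippet, as a minimal sketch (the model path and generation parameters here are illustrative assumptions):
```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU; use 0 for CPU-only inference.
llm = Llama(
    model_path="./gale-medium-init-3B.Q4_K_M.gguf",  # path to the GGUF file you downloaded
    n_ctx=2048,      # context length
    n_gpu_layers=0,  # adjust for your hardware
)

output = llm("AI is going to", max_tokens=64)
print(output["choices"][0]["text"])
```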
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
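For orientation, a minimal LangChain sketch using the llama-cpp-python backend might look like the following; treat it as an illustration and defer to the linked guides for the authoritative setup:
```python
from langchain_community.llms import LlamaCpp

# Wrap the local GGUF model as a LangChain LLM (path and parameters are assumptions).
llm = LlamaCpp(
    model_path="./gale-medium-init-3B.Q4_K_M.gguf",
    n_ctx=2048,
    temperature=0.7,
)

print(llm.invoke("AI is going to"))
```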
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Maxine's Gale Medium Init 3B
No original model card was available.
<!-- original-model-card end -->
| giantdev/5HakotHoaiKqLkuEA9LEoPTxRh7QMQpsgDNEJzkw5zft2a3C_vgg | giantdev | 2024-03-24T10:10:30Z | 356 | 0 | keras | ["keras", "region:us"] | null | 2024-03-09T03:40:35Z |
Entry not found
| mradermacher/Llamix2-MLewd-4x13B-i1-GGUF | mradermacher | 2024-05-06T06:14:21Z | 356 | 2 | transformers | ["transformers", "gguf", "not-for-all-audiences", "nsfw", "en", "base_model:Undi95/Llamix2-MLewd-4x13B", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us"] | null | 2024-03-13T12:02:43Z |
---
base_model: Undi95/Llamix2-MLewd-4x13B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- not-for-all-audiences
- nsfw
---
## About
weighted/imatrix quants of https://huggingface.co/Undi95/Llamix2-MLewd-4x13B
<!-- provided-files -->
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llamix2-MLewd-4x13B-i1-GGUF/resolve/main/Llamix2-MLewd-4x13B.i1-IQ1_S.gguf) | i1-IQ1_S | 8.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llamix2-MLewd-4x13B-i1-GGUF/resolve/main/Llamix2-MLewd-4x13B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 10.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llamix2-MLewd-4x13B-i1-GGUF/resolve/main/Llamix2-MLewd-4x13B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 11.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llamix2-MLewd-4x13B-i1-GGUF/resolve/main/Llamix2-MLewd-4x13B.i1-IQ2_S.gguf) | i1-IQ2_S | 12.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llamix2-MLewd-4x13B-i1-GGUF/resolve/main/Llamix2-MLewd-4x13B.i1-IQ2_M.gguf) | i1-IQ2_M | 13.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llamix2-MLewd-4x13B-i1-GGUF/resolve/main/Llamix2-MLewd-4x13B.i1-Q2_K.gguf) | i1-Q2_K | 14.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llamix2-MLewd-4x13B-i1-GGUF/resolve/main/Llamix2-MLewd-4x13B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 15.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llamix2-MLewd-4x13B-i1-GGUF/resolve/main/Llamix2-MLewd-4x13B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 16.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llamix2-MLewd-4x13B-i1-GGUF/resolve/main/Llamix2-MLewd-4x13B.i1-IQ3_S.gguf) | i1-IQ3_S | 16.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llamix2-MLewd-4x13B-i1-GGUF/resolve/main/Llamix2-MLewd-4x13B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 16.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llamix2-MLewd-4x13B-i1-GGUF/resolve/main/Llamix2-MLewd-4x13B.i1-IQ3_M.gguf) | i1-IQ3_M | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llamix2-MLewd-4x13B-i1-GGUF/resolve/main/Llamix2-MLewd-4x13B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 18.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llamix2-MLewd-4x13B-i1-GGUF/resolve/main/Llamix2-MLewd-4x13B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 20.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llamix2-MLewd-4x13B-i1-GGUF/resolve/main/Llamix2-MLewd-4x13B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 22.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llamix2-MLewd-4x13B-i1-GGUF/resolve/main/Llamix2-MLewd-4x13B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 23.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llamix2-MLewd-4x13B-i1-GGUF/resolve/main/Llamix2-MLewd-4x13B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 26.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llamix2-MLewd-4x13B-i1-GGUF/resolve/main/Llamix2-MLewd-4x13B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 27.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llamix2-MLewd-4x13B-i1-GGUF/resolve/main/Llamix2-MLewd-4x13B.i1-Q6_K.gguf) | i1-Q6_K | 31.9 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| dagbs/dolphin-2.8-mistral-7b-v02-GGUF | dagbs | 2024-03-29T17:43:07Z | 356 | 13 | null | ["gguf", "generated_from_trainer", "base_model:alpindale/Mistral-7B-v0.2-hf", "region:us"] | null | 2024-03-29T08:06:09Z |
---
base_model: alpindale/Mistral-7B-v0.2-hf
tags:
- generated_from_trainer
model-index:
- name: workspace/dolphin-2.8-mistral-7b
results: []
---
# dolphin-2.8-mistral-7b-v02 - GGUF
Original Model: [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02)
| mradermacher/multi_verse_model-10.7B-GGUF | mradermacher | 2024-05-06T05:32:37Z | 356 | 2 | transformers | ["transformers", "gguf", "mergekit", "merge", "en", "base_model:Joseph717171/multi_verse_model-10.7B", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2024-03-30T19:53:09Z |
---
base_model: Joseph717171/multi_verse_model-10.7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
static quants of https://huggingface.co/Joseph717171/multi_verse_model-10.7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/multi_verse_model-10.7B-GGUF/resolve/main/multi_verse_model-10.7B.Q2_K.gguf) | Q2_K | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/multi_verse_model-10.7B-GGUF/resolve/main/multi_verse_model-10.7B.IQ3_XS.gguf) | IQ3_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/multi_verse_model-10.7B-GGUF/resolve/main/multi_verse_model-10.7B.Q3_K_S.gguf) | Q3_K_S | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/multi_verse_model-10.7B-GGUF/resolve/main/multi_verse_model-10.7B.IQ3_S.gguf) | IQ3_S | 4.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/multi_verse_model-10.7B-GGUF/resolve/main/multi_verse_model-10.7B.IQ3_M.gguf) | IQ3_M | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/multi_verse_model-10.7B-GGUF/resolve/main/multi_verse_model-10.7B.Q3_K_M.gguf) | Q3_K_M | 5.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/multi_verse_model-10.7B-GGUF/resolve/main/multi_verse_model-10.7B.Q3_K_L.gguf) | Q3_K_L | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/multi_verse_model-10.7B-GGUF/resolve/main/multi_verse_model-10.7B.IQ4_XS.gguf) | IQ4_XS | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/multi_verse_model-10.7B-GGUF/resolve/main/multi_verse_model-10.7B.Q4_0.gguf) | Q4_0 | 6.3 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/multi_verse_model-10.7B-GGUF/resolve/main/multi_verse_model-10.7B.Q4_K_S.gguf) | Q4_K_S | 6.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/multi_verse_model-10.7B-GGUF/resolve/main/multi_verse_model-10.7B.IQ4_NL.gguf) | IQ4_NL | 6.4 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/multi_verse_model-10.7B-GGUF/resolve/main/multi_verse_model-10.7B.Q4_K_M.gguf) | Q4_K_M | 6.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/multi_verse_model-10.7B-GGUF/resolve/main/multi_verse_model-10.7B.Q5_K_S.gguf) | Q5_K_S | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/multi_verse_model-10.7B-GGUF/resolve/main/multi_verse_model-10.7B.Q5_K_M.gguf) | Q5_K_M | 7.9 | |
| [GGUF](https://huggingface.co/mradermacher/multi_verse_model-10.7B-GGUF/resolve/main/multi_verse_model-10.7B.Q6_K.gguf) | Q6_K | 9.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/multi_verse_model-10.7B-GGUF/resolve/main/multi_verse_model-10.7B.Q8_0.gguf) | Q8_0 | 11.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| lucyknada/microsoft_WizardLM-2-7B | lucyknada | 2024-04-16T11:15:19Z | 356 | 43 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:2304.12244", "arxiv:2306.08568", "arxiv:2308.09583", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us"] | text-generation | 2024-04-16T02:51:57Z |
---
license: apache-2.0
---
<p style="font-size:20px;" align="center">
🏠 <a href="https://wizardlm.github.io/WizardLM2" target="_blank">WizardLM-2 Release Blog</a> </p>
<p align="center">
🤗 <a href="https://huggingface.co/collections/microsoft/wizardlm-2-661d403f71e6c8257dbd598a" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/victorsungo/WizardLM/tree/main/WizardLM-2" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a>
</p>
## News 🔥🔥🔥 [2024/04/15]
We introduce and open-source WizardLM-2, our next-generation state-of-the-art large language models,
which have improved performance on complex chat, multilingual, reasoning, and agent tasks.
The new family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.
- WizardLM-2 8x22B is our most advanced model; it demonstrates highly competitive performance compared to leading proprietary models
and consistently outperforms all existing state-of-the-art open-source models.
- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice at its size.
- WizardLM-2 7B is the fastest and achieves performance comparable to existing open-source leading models that are 10x larger.
For more details of WizardLM-2 please read our [release blog post](https://wizardlm.github.io/WizardLM2) and upcoming paper.
## Model Details
* **Model name**: WizardLM-2 7B
* **Developed by**: WizardLM@Microsoft AI
* **Base model**: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* **Parameters**: 7B
* **Language(s)**: Multilingual
* **Blog**: [Introducing WizardLM-2](https://wizardlm.github.io/WizardLM2)
* **Repository**: [https://github.com/nlpxucan/WizardLM](https://github.com/nlpxucan/WizardLM)
* **Paper**: WizardLM-2 (Upcoming)
* **License**: Apache2.0
## Model Capacities
**MT-Bench**
We also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models.
The WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models.
Meanwhile, WizardLM-2 7B and WizardLM-2 70B are the top-performing models among the other leading baselines at the 7B to 70B model scales.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/mtbench.png" alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
**Human Preferences Evaluation**
We carefully collected a complex and challenging evaluation set of real-world instructions covering the main categories of human requests, such as writing, coding, math, reasoning, agent, and multilingual tasks.
We report the win:loss rate without ties:
- WizardLM-2 8x22B falls just slightly behind GPT-4-1106-preview, and is significantly stronger than Command R Plus and GPT4-0314.
- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.
- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/winall.png" alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Method Overview
We built a **fully AI powered synthetic training system** to train WizardLM-2 models, please refer to our [blog](https://wizardlm.github.io/WizardLM2) for more details of this system.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/exp_1.png" alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Usage
❗<b>Note for model system prompts usage:</b>
<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be as follows:
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful,
detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>
USER: Who are you? ASSISTANT: I am WizardLM.</s>......
```
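As a minimal sketch (not part of the official repo), the Vicuna-style multi-turn prompt above can be assembled with a small helper like this:
```python
SYSTEM = ("A chat between a curious user and an artificial intelligence assistant. "
          "The assistant gives helpful, detailed, and polite answers to the user's questions.")

def build_prompt(turns, system=SYSTEM):
    """Build a Vicuna-style multi-turn prompt from (user, assistant) pairs.

    The final user turn may have assistant=None, leaving the prompt open
    for the model to complete.
    """
    prompt = system
    for user, assistant in turns:
        prompt += f" USER: {user} ASSISTANT:"
        if assistant is not None:
            prompt += f" {assistant}</s>"
    return prompt

print(build_prompt([("Hi", "Hello."), ("Who are you?", None)]))
```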
<b> Inference WizardLM-2 Demo Script</b>
We provide a WizardLM-2 inference demo [code](https://github.com/nlpxucan/WizardLM/tree/main/demo) on our github.
| Niggendar/ponysauceXL_v10 | Niggendar | 2024-04-23T07:19:49Z | 356 | 2 | diffusers | ["diffusers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us"] | text-to-image | 2024-04-23T07:14:47Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| duyntnet/Mistral-7B-Instruct-v0.1-imatrix-GGUF | duyntnet | 2024-04-26T06:46:58Z | 356 | 0 | transformers | ["transformers", "gguf", "imatrix", "mistralai", "Mistral-7B-Instruct-v0.1", "text-generation", "en", "license:other", "region:us"] | text-generation | 2024-04-26T01:11:21Z |
---
license: other
inference: false
language:
- en
pipeline_tag: text-generation
tags:
- gguf
- imatrix
- mistralai
- Mistral-7B-Instruct-v0.1
- transformers
---
Quantizations of https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1
# From original readme
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence id; the following instructions should not. The assistant generation will be ended by the end-of-sentence token id.
E.g.
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```
This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## Troubleshooting
- If you see the following error:
```
Traceback (most recent call last):
File "", line 1, in
File "/transformers/models/auto/auto_factory.py", line 482, in from_pretrained
config, kwargs = AutoConfig.from_pretrained(
File "/transformers/models/auto/configuration_auto.py", line 1022, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
File "/transformers/models/auto/configuration_auto.py", line 723, in getitem
raise KeyError(key)
KeyError: 'mistral'
```
Installing transformers from source should solve the issue
pip install git+https://github.com/huggingface/transformers
This should not be required after transformers-v4.33.4.
| mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF | mradermacher | 2024-05-06T04:28:06Z | 356 | 0 | transformers | ["transformers", "gguf", "en", "base_model:openlynn/Llama-3-Soliloquy-Max-70B-v1", "license:llama3", "endpoints_compatible", "region:us"] | null | 2024-05-05T16:18:47Z |
---
base_model: openlynn/Llama-3-Soliloquy-Max-70B-v1
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hfhfix -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/openlynn/Llama-3-Soliloquy-Max-70B-v1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
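The larger quants below are split into `.partXofY` files; per the referenced READMEs, the parts just need to be concatenated in order before use. A minimal Python sketch (equivalent to `cat part1 part2 > file.gguf`; file names taken from the table below) could look like this:
```python
import shutil

# Concatenate split GGUF parts into a single usable file.
# Adjust the paths to match the files you actually downloaded.
parts = [
    "Llama-3-Soliloquy-Max-70B-v1.i1-Q6_K.gguf.part1of2",
    "Llama-3-Soliloquy-Max-70B-v1.i1-Q6_K.gguf.part2of2",
]

with open("Llama-3-Soliloquy-Max-70B-v1.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # stream copy, avoids loading the whole file into RAM
```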
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| mradermacher/AzuPhyr-7b-GGUF | mradermacher | 2024-05-20T22:15:09Z | 356 | 0 | transformers | ["transformers", "gguf", "mergekit", "merge", "en", "base_model:P0x0/AzuPhyr-7b", "endpoints_compatible", "region:us"] | null | 2024-05-20T21:20:26Z |
---
base_model: P0x0/AzuPhyr-7b
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/P0x0/AzuPhyr-7b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AzuPhyr-7b-GGUF/resolve/main/AzuPhyr-7b.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/AzuPhyr-7b-GGUF/resolve/main/AzuPhyr-7b.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/AzuPhyr-7b-GGUF/resolve/main/AzuPhyr-7b.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/AzuPhyr-7b-GGUF/resolve/main/AzuPhyr-7b.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/AzuPhyr-7b-GGUF/resolve/main/AzuPhyr-7b.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/AzuPhyr-7b-GGUF/resolve/main/AzuPhyr-7b.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AzuPhyr-7b-GGUF/resolve/main/AzuPhyr-7b.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/AzuPhyr-7b-GGUF/resolve/main/AzuPhyr-7b.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/AzuPhyr-7b-GGUF/resolve/main/AzuPhyr-7b.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AzuPhyr-7b-GGUF/resolve/main/AzuPhyr-7b.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AzuPhyr-7b-GGUF/resolve/main/AzuPhyr-7b.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/AzuPhyr-7b-GGUF/resolve/main/AzuPhyr-7b.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/AzuPhyr-7b-GGUF/resolve/main/AzuPhyr-7b.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/AzuPhyr-7b-GGUF/resolve/main/AzuPhyr-7b.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/AzuPhyr-7b-GGUF/resolve/main/AzuPhyr-7b.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| RichardErkhov/jondurbin_-_airoboros-m-7b-3.1.2-gguf | RichardErkhov | 2024-05-30T23:47:49Z | 356 | 0 | null | ["gguf", "region:us"] | null | 2024-05-30T20:38:51Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
airoboros-m-7b-3.1.2 - GGUF
- Model creator: https://huggingface.co/jondurbin/
- Original model: https://huggingface.co/jondurbin/airoboros-m-7b-3.1.2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [airoboros-m-7b-3.1.2.Q2_K.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.1.2-gguf/blob/main/airoboros-m-7b-3.1.2.Q2_K.gguf) | Q2_K | 2.53GB |
| [airoboros-m-7b-3.1.2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.1.2-gguf/blob/main/airoboros-m-7b-3.1.2.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [airoboros-m-7b-3.1.2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.1.2-gguf/blob/main/airoboros-m-7b-3.1.2.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [airoboros-m-7b-3.1.2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.1.2-gguf/blob/main/airoboros-m-7b-3.1.2.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [airoboros-m-7b-3.1.2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.1.2-gguf/blob/main/airoboros-m-7b-3.1.2.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [airoboros-m-7b-3.1.2.Q3_K.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.1.2-gguf/blob/main/airoboros-m-7b-3.1.2.Q3_K.gguf) | Q3_K | 3.28GB |
| [airoboros-m-7b-3.1.2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.1.2-gguf/blob/main/airoboros-m-7b-3.1.2.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [airoboros-m-7b-3.1.2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.1.2-gguf/blob/main/airoboros-m-7b-3.1.2.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [airoboros-m-7b-3.1.2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.1.2-gguf/blob/main/airoboros-m-7b-3.1.2.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [airoboros-m-7b-3.1.2.Q4_0.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.1.2-gguf/blob/main/airoboros-m-7b-3.1.2.Q4_0.gguf) | Q4_0 | 3.83GB |
| [airoboros-m-7b-3.1.2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.1.2-gguf/blob/main/airoboros-m-7b-3.1.2.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [airoboros-m-7b-3.1.2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.1.2-gguf/blob/main/airoboros-m-7b-3.1.2.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [airoboros-m-7b-3.1.2.Q4_K.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.1.2-gguf/blob/main/airoboros-m-7b-3.1.2.Q4_K.gguf) | Q4_K | 4.07GB |
| [airoboros-m-7b-3.1.2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.1.2-gguf/blob/main/airoboros-m-7b-3.1.2.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [airoboros-m-7b-3.1.2.Q4_1.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.1.2-gguf/blob/main/airoboros-m-7b-3.1.2.Q4_1.gguf) | Q4_1 | 4.24GB |
| [airoboros-m-7b-3.1.2.Q5_0.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.1.2-gguf/blob/main/airoboros-m-7b-3.1.2.Q5_0.gguf) | Q5_0 | 4.65GB |
| [airoboros-m-7b-3.1.2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.1.2-gguf/blob/main/airoboros-m-7b-3.1.2.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [airoboros-m-7b-3.1.2.Q5_K.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.1.2-gguf/blob/main/airoboros-m-7b-3.1.2.Q5_K.gguf) | Q5_K | 4.78GB |
| [airoboros-m-7b-3.1.2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.1.2-gguf/blob/main/airoboros-m-7b-3.1.2.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [airoboros-m-7b-3.1.2.Q5_1.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.1.2-gguf/blob/main/airoboros-m-7b-3.1.2.Q5_1.gguf) | Q5_1 | 5.07GB |
| [airoboros-m-7b-3.1.2.Q6_K.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.1.2-gguf/blob/main/airoboros-m-7b-3.1.2.Q6_K.gguf) | Q6_K | 5.53GB |
| [airoboros-m-7b-3.1.2.Q8_0.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.1.2-gguf/blob/main/airoboros-m-7b-3.1.2.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
license: apache-2.0
datasets:
- jondurbin/airoboros-3.1
---
### Overview
Another experimental model, using mostly sythetic data generated by [airoboros](https://github.com/jondurbin/airoboros)
Base model: https://huggingface.co/mistralai/Mistral-7B-v0.1
#### IMPORTANT NOTE - llama-2 chat format!!!
This model uses the llama-2 chat format, rather than the vicuna-style user/assistant format!
This is a breaking change, although most inference systems support llama-2 chat templating.
See prompt format below for all of the details.
#### Highlights
This is just one last release tweaking hyperparams, which seem to result in a higher quality model than 3.1 or 3.1.1 (and doesn't have the prompt issue of 3.1).
A model built on [airoboros-3.1 dataset](https://hf.co/datasets/jondurbin/airoboros-3.1), which is a continuation of [airoboros-3.0 dataset](https://hf.co/datasets/jondurbin/airoboros-3.0) with the following extras:
* More MathJSON, now ~17k items - math questions, prefixed with __"Create a MathJSON solution to the following:"__, which then outputs a JSON between __`<mathjson>`__ and __`</mathjson>`__ tags, which can be parsed and passed to a deterministic library to perform calculations.
* Log information extraction (e.g. getting the list of paths/IPs/etc. from apache logs)
* Anonymization, e.g. removing names, IP addresses, and/or dates from text.
* Chat introspection - multi-turn chats that have random questions injected asking about the character's descriptions, the setting of the chat, or things previously said, to make sure the model uses the system prompt and or chat history.
* Multi-step instructions with acknowledgement (see below)
* De-censorship data (not published)
This is a fairly general purpose model, but focuses heavily on instruction following, rather than casual chat/roleplay.
Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!
### Prompt format
The prompt format is llama-2 chat.
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
```
For multi-turn, the prompt format is as follows:
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt 0} [/INST] {response 0} </s><s>[INST] {prompt 1} [/INST] {response 1} </s><s>...[INST] {prompt N} [/INST]
```
The prompt template is included in the tokenizer config, and can use the huggingface tokenizer `apply_chat_template` method, e.g.:
```
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained('jondurbin/airoboros-l2-70b-3.1')
chat = [
{"role": "system", "content": "You are Bob, a friendly AI assistant."},
{"role": "user", "content": "Hello, how are you?"},
{"role": "assistant", "content": "I'm doing great. How can I help you today?"},
{"role": "user", "content": "I'd like to show off how chat templating works!"},
]
print(tokenizer.apply_chat_template(chat, tokenize=False))
```
### Helpful usage tips
#### MathJSON
Ask mathematical questions, prefixed with "Create a MathJSON solution to the following:", and you should get output with a formula that can be executed by https://cortexjs.io/compute-engine/
I also made a (really quite hacky, incomplete, semi-functional) python version that works for most expressions, see [mathjson.py](https://github.com/jondurbin/airoboros/blob/mathjson/airoboros/mathjson.py)
__Use a very low temperature!__
Example:
```
[INST] <<SYS>>
You are an assistant with vast knowledge in all things math.
<</SYS>>
Create a MathJSON solution to the following: Calculate the area of a circle with radius of 17.2456cm. Include your reasoning. [/INST]
```
Output:
```
The area (A) of a circle is given by the formula A = πr², where r is the radius of the circle. Since the radius is given in centimeters, we need to convert it to meters first. We know that 1 meter equals 100 centimeters. Therefore, we can represent the radius as: r = 17.2456 cm * 100/1.
Once we have the radius in meters, we can calculate the area of the circle using the formula A = πr².
Solution as MathJSON:
<mathjson>
[
"Multiply",
"Pi",
[
"Power",
17.2456,
2
]
]
</mathjson>
```
You can then validate the JSON between `<mathjson>` and `</mathjson>`, then pass the parsed JSON to compute-engine JS or the `evaluate` function in mathjson.py to calculate the response.
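As a rough, hypothetical sketch (not the author's `mathjson.py`), extracting and evaluating the expression above could look like the following, handling only the handful of operators used in the example:
```python
import json
import math
import re

def extract_mathjson(text):
    """Pull the JSON payload out of <mathjson>...</mathjson> tags."""
    match = re.search(r"<mathjson>(.*?)</mathjson>", text, re.DOTALL)
    return json.loads(match.group(1)) if match else None

def evaluate(expr):
    """Tiny evaluator covering only the operators used in the example above."""
    if isinstance(expr, (int, float)):
        return expr
    if expr == "Pi":
        return math.pi
    op, *args = expr
    values = [evaluate(arg) for arg in args]
    if op == "Multiply":
        return math.prod(values)
    if op == "Power":
        return values[0] ** values[1]
    raise ValueError(f"Unsupported operator: {op}")

model_output = 'Solution as MathJSON: <mathjson>["Multiply", "Pi", ["Power", 17.2456, 2]]</mathjson>'
print(evaluate(extract_mathjson(model_output)))  # ~934.3
```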
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
__Use a very low temperature!__
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
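For convenience, the delimited format above can be generated with a small helper like this (an illustrative sketch, not part of airoboros itself):
```python
def closed_context_prompt(blocks, instruction):
    """Build a closed-context prompt from (metadata_dict, text) pairs.

    Follows the BEGININPUT/BEGINCONTEXT/... delimiters described above.
    """
    parts = []
    for metadata, text in blocks:
        parts.append("BEGININPUT")
        parts.append("BEGINCONTEXT")
        parts.extend(f"{key}: {value}" for key, value in metadata.items())
        parts.append("ENDCONTEXT")
        parts.append(text)
        parts.append("ENDINPUT")
    parts.append("BEGININSTRUCTION")
    parts.append(instruction)
    parts.append("ENDINSTRUCTION")
    return "\n".join(parts)

print(closed_context_prompt(
    [({"date": "2021-01-01", "url": "https://web.site/123"},
      "In a shocking turn of events, blueberries are now green, but will be sticking with the same name.")],
    "What color are blueberries? Source?",
))
```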
#### Summarization
500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:
```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```
#### Getting longer responses
You can use a few techniques to get longer responses.
Detailed prompts, with explicit instruction for word count:
```
Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality.
The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization.
One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary.
Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements.
Your response should be approximately 2300 words.
```
Or, a simpler example:
```
Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux.
```
There are a few examples of next chapter completion as well, e.g.:
```
Write the next chapter of a historical fiction novel set in Paris during the 20th century.
Here's a summary of the previous chapter:
In the vibrant city of Paris, amid the tumultuous changes of the 20th century, our protagonist Margot, an aspiring fashion designer, has just secured an apprenticeship at a prestigious couture house. She meets Lucien, a charming journalist who covers the fashion industry. Together they navigate the ever-changing world of fashion and society, uncovering secrets that reveal the intricate links between style, politics, and culture. As the chapter concludes, they decide to delve deeper into the hidden corners of the fashion world to unravel its mysteries.
Requirements for the next chapter:
1. Character Development of Margot and Lucien:
- Margot's Evolution: Unfold more about Margot's past, her dreams of revolutionizing fashion, and her struggle to establish herself in a male-dominated industry. Illustrate her growing expertise, innovative ideas, and increasing dependence on Lucien.
- Lucien's Complexity: Introduce uncertainties surrounding Lucien's background and real motives. Increase suspense by suggesting undisclosed information he possesses, while also highlighting his wit and perceptiveness.
2. Exploration of Paris and the Couture House:
- Paris: Elaborate their journey through the bustling streets of Paris, including encounters with iconic figures, social unrest, and relics from different eras of French history.
- The Couture House: Expand on the grandeur of the couture house they work in, filled with artistic masterpieces, intense competition, and cryptic notes hinting at a scandalous past.
3. Emergence of the Subplot: The Lost Collection:
- Discovery: Have Margot and Lucien stumble upon a secret vault containing a lost collection designed before World War II, raising new questions about the previous owner and the influence of war on fashion.
- Revelation: Capture their shock as they realize the designs were plagiarized, the potential repercussions, and the opportunities it presents for Margot's career.
- Twist: End with a twist that suggests there are other stolen collections across Paris, setting up their new mission.
Your response should be approximately 650 words.
```
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
description: This tool performs various operations on a text file.
params:
action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
filters:
keyword: The word or phrase we want to search for.
```
Response:
```json
{
"function": "file_analytics",
"params": {
"action": "count_occurrences",
"filters": {
"keyword": "Python"
}
}
}
```
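One way an application might consume this output is to parse the JSON and dispatch to a matching implementation. A minimal sketch (the `count_occurrences` handler and the file path are illustrative stand-ins, not part of the dataset or any library):
```python
import json

def count_occurrences(filters, path="my_text.txt"):
    # Illustrative handler for the file_analytics/count_occurrences action.
    with open(path) as f:
        return f.read().count(filters["keyword"])

HANDLERS = {("file_analytics", "count_occurrences"): count_occurrences}

def dispatch(model_output):
    # Parse the model's JSON response and route it to the registered handler.
    call = json.loads(model_output)
    handler = HANDLERS[(call["function"], call["params"]["action"])]
    return handler(call["params"]["filters"])
```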
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan, you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would require full implementation + hardening:
```python
import re
import requests

def inject_context(input_text, **context):
    # Replace any :evidenceN: references with previously computed results.
    for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
        input_text = input_text.replace(ref, context.get(ref, ""))
    return input_text

def duckduckgo(input_text, **context):
    search_string = inject_context(input_text, **context)
    # TODO: search via DuckDuckGo using search_string and return the text content
    raise NotImplementedError

def link_extractor(input_text, **context):
    input_text = inject_context(input_text, **context)
    return "\n".join(list(set(re.findall(r"https?://[^\s]+", input_text, re.I))))

def scrape(input_text, **context):
    input_text = inject_context(input_text, **context)
    text = []
    for link in input_text.splitlines():
        text.append(requests.get(link).text)
    return "\n".join(text)

def infer(input_text, **context):
    prompt = inject_context(input_text, **context)
    # TODO: call the model with prompt and return its output
    raise NotImplementedError

def parse_plan(plan):
    method_map = {
        "DuckDuckGo": duckduckgo,
        "HyperlinkExtractor": link_extractor,
        "KnowledgeModel": infer,
        "TextScraper": scrape,
    }
    context = {}
    for line in plan.strip().splitlines():
        if line.startswith("Plan:"):
            print(line)
            continue
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)(\[.*\])\s*$", line, re.I)
        if not parts:
            if line.startswith("Answer: "):
                return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
            raise RuntimeError("bad format: " + line)
        # Strip the surrounding brackets from the argument before dispatching.
        context[parts.group(1)] = method_map[parts.group(2).strip()](parts.group(3)[1:-1], **context)
```
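For example, once the stub functions above are implemented, the parser could be exercised like this (a minimal sketch; the plan text is abbreviated from the example above):
```python
plan_text = """Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Answer: :evidence1:"""

print(parse_plan(plan_text))
```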
## Multi-step instructions with acknowledgement
I don't know how useful this is, really, but I thought I'd add it just in case. Example:
```
{
"conversations": [
{
"value": "You are a helpful, unbiased, uncensored assistant.",
"from": "system"
},
{
"value": "I'm going to set some rules for our conversation. Please respond with \"OK\" to each rule.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "You can only respond using animal-related emojis.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Precede all responses with \"ZOOZOO:\"",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Include at least one bird emoji in your responses.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Describe a trip to a safari park.",
"from": "human"
},
{
"value": "ZOOZOO: \ud83e\udd81\ud83d\udc2f\ud83e\udd93\ud83e\udd92\ud83d\udc18\ud83d\udc0d\ud83c\udf33\ud83c\udf32\u2600\ufe0f\ud83d\ude90\ud83e\udd85\ud83d\udcf8\ud83e\udd29",
"from": "gpt"
}
]
}
```
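If you want to feed conversations in this layout to a chat model, the `from`/`value` keys can first be mapped to the more common role/content format. A small sketch (the role mapping is an assumption about how you tokenize, not part of the dataset):
```python
ROLE_MAP = {"system": "system", "human": "user", "gpt": "assistant"}

def to_messages(example):
    # Convert a ShareGPT-style "conversations" list to role/content messages.
    return [
        {"role": ROLE_MAP[turn["from"]], "content": turn["value"]}
        for turn in example["conversations"]
    ]
```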
### Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
The airoboros 3.1 models are built on top of multiple base models, each with their own license/restrictions.
The 30b model is built on the original llama, which has a strict non-commercial usage restriction.
The models with `-l2` in the name have a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The models with `-m-` are mistral-7b (apache 2.0)
The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me.
|
yrju/mistral-7b-merged
|
yrju
| 2024-06-03T10:54:50Z | 356 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2306.01708",
"base_model:WizardLM/WizardMath-7B-V1.1",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-06-02T05:30:55Z |
---
base_model:
- WizardLM/WizardMath-7B-V1.1
- mistralai/Mistral-7B-v0.1
- codellama/CodeLlama-7b-Instruct-hf
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# mistral-7b-merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base.
### Models Merged
The following models were included in the merge:
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
* [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: mistralai/Mistral-7B-v0.1
dtype: float16
merge_method: ties
parameters:
int8_mask: 1.0
normalize: 1.0
slices:
- sources:
- layer_range: [0, 32]
model: mistralai/Mistral-7B-v0.1
- layer_range: [0, 32]
model: WizardLM/WizardMath-7B-V1.1
parameters:
density: 0.5
weight:
- filter: mlp
value: 0.5
- value: 0.0
- layer_range: [0, 32]
model: codellama/CodeLlama-7b-Instruct-hf
parameters:
density: 0.5
weight: 0.5
```
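The merged checkpoint can be loaded like any other causal LM with `transformers`; a minimal usage sketch (not part of the original card, and no particular prompt format is assumed):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yrju/mistral-7b-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```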
|
ibivibiv/llama3-8b-instruct-summary-combined
|
ibivibiv
| 2024-06-02T22:10:08Z | 356 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"arxiv:1910.09700",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-06-02T22:00:43Z |
---
library_name: transformers
license: llama3
language:
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Helsinki-NLP/opus-mt-es-ru
|
Helsinki-NLP
| 2023-08-16T11:33:26Z | 355 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"es",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-ru
* source languages: es
* target languages: ru
* OPUS readme: [es-ru](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ru/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ru/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ru/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ru/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newstest2012.es.ru | 20.9 | 0.489 |
| newstest2013.es.ru | 23.4 | 0.504 |
| Tatoeba.es.ru | 47.0 | 0.657 |
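## Usage
A minimal usage sketch with `transformers` (not part of the original card; the Marian classes are the standard loaders for OPUS-MT checkpoints):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-es-ru"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["El clima es muy agradable hoy."], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```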
|
hiiamsid/sentence_similarity_hindi
|
hiiamsid
| 2022-01-03T11:25:33Z | 355 | 6 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"hi",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
language:
- hi
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# hiiamsid/sentence_similarity_hindi
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('hiiamsid/sentence_similarity_hindi')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('hiiamsid/sentence_similarity_hindi')
model = AutoModel.from_pretrained('hiiamsid/sentence_similarity_hindi')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
```
cosine_pearson,cosine_spearman,euclidean_pearson,euclidean_spearman,manhattan_pearson,manhattan_spearman,dot_pearson,dot_spearman
0.825825032,0.8227195932,0.8127990959,0.8214681478,0.8111641963,0.8194870279,0.8096042841,0.8061808483
```
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=hiiamsid/sentence_similarity_hindi)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 341 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 137,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
- Model: [setu4993/LaBSE](https://huggingface.co/setu4993/LaBSE)
- Sentence Transformers: [Semantic Textual Similarity](https://www.sbert.net/examples/training/sts/README.html)
|
yjernite/bart_eli5
|
yjernite
| 2021-03-09T22:31:11Z | 355 | 11 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:eli5",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language: en
license: apache-2.0
datasets:
- eli5
---
## BART ELI5
Read the article at https://yjernite.github.io/lfqa.html and try the demo at https://huggingface.co/qa/
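A loading sketch with `transformers` (not part of the original card; the exact question/context prompt format used in the LFQA demo may differ, so treat the prompt below as illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "yjernite/bart_eli5"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

prompt = "question: Why is the sky blue? context: "  # assumed ELI5-style input; see the linked article
inputs = tokenizer(prompt, return_tensors="pt")
ids = model.generate(**inputs, max_length=256, min_length=64, num_beams=4)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```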
|
timm/resmlp_24_224.fb_in1k
|
timm
| 2024-02-10T23:36:38Z | 355 | 0 |
timm
|
[
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2105.03404",
"license:apache-2.0",
"region:us"
] |
image-classification
| 2023-03-27T23:13:31Z |
---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for resmlp_24_224.fb_in1k
A ResMLP image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 30.0
- GMACs: 6.0
- Activations (M): 10.9
- Image size: 224 x 224
- **Papers:**
- ResMLP: Feedforward networks for image classification with data-efficient training: https://arxiv.org/abs/2105.03404
- **Original:** https://github.com/facebookresearch/deit
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('resmlp_24_224.fb_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resmlp_24_224.fb_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 196, 384) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{touvron2021resmlp,
title={ResMLP: Feedforward networks for image classification with data-efficient training},
  author={Hugo Touvron and Piotr Bojanowski and Mathilde Caron and Matthieu Cord and Alaaeldin El-Nouby and Edouard Grave and Gautier Izacard and Armand Joulin and Gabriel Synnaeve and Jakob Verbeek and Hervé Jégou},
journal={arXiv preprint arXiv:2105.03404},
year={2021},
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
wavymulder/collage-diffusion
|
wavymulder
| 2023-04-17T21:15:53Z | 355 | 22 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"text-to-image",
"safetensors",
"en",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-04-11T16:47:45Z |
---
language:
- en
thumbnail: "https://huggingface.co/wavymulder/collage-diffusion/resolve/main/pics/thumb.jpg"
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
- safetensors
- diffusers
inference: true
---
**Collage Diffusion**

[*CKPT DOWNLOAD LINK*](https://huggingface.co/wavymulder/collage-diffusion/resolve/main/collage-1.0.ckpt) - - - [*SAFETENSORS DOWNLOAD LINK*](https://huggingface.co/wavymulder/collage-diffusion/resolve/main/collage-1.0.safetensors)
This is a dreambooth model trained on collage images.
Use the activation token **collage style** in your prompt (I recommend at the start)
I have the most fun with this model when I use simple prompts and let the model go crazy. If you want a model that strictly adheres to your prompt, this isn't that.
Trained from 1.5 with VAE.
Please see [this document where I share the parameters (prompt, sampler, seed, etc.) used for all example images.](https://huggingface.co/wavymulder/collage-diffusion/resolve/main/parameters_for_samples.txt)

|
fffrrt/ruGPT-3.5-13B-GPTQ
|
fffrrt
| 2023-07-20T20:27:26Z | 355 | 25 |
transformers
|
[
"transformers",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-07-20T19:36:14Z |
GPTQ quantisation of https://huggingface.co/ai-forever/ruGPT-3.5-13B
Small perplexity test:
before quantization - 'mean_perplexity': 10.241
after quantization - 'mean_perplexity': 10.379
Data - RussianSuperGlue > DaNetQA/train.jsonl['passage']
As this quant was hastily thrown together and I have no prior experience with quantisation, prefer the version from https://huggingface.co/TheBloke if he releases a quant for this model.
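For reference, a mean-perplexity check of the kind reported above could be run along these lines (a sketch only; it assumes the GPTQ weights load through `transformers` with the usual auto-gptq/optimum extras installed, and the passages would come from the DaNetQA split mentioned above):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fffrrt/ruGPT-3.5-13B-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

def perplexity(text):
    enc = tokenizer(text, return_tensors="pt").to(model.device)
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

passages = ["Пример текста для оценки перплексии."]  # replace with DaNetQA train passages
print(sum(perplexity(p) for p in passages) / len(passages))
```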
|
arthoho66/model_005_2000
|
arthoho66
| 2023-10-24T09:17:02Z | 355 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-02T10:42:21Z |
Entry not found
|
antonjaragon/emotions_xls53_6_classes
|
antonjaragon
| 2023-08-21T13:55:02Z | 355 | 2 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-large-xlsr-53",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-08-18T11:05:59Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-large-xlsr-53
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: emotions_xls53_6_classes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotions_xls53_6_classes
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8000
- Accuracy: 0.7678
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6892 | 0.99 | 76 | 0.8018 | 0.7346 |
| 0.6611 | 2.0 | 153 | 0.7451 | 0.7482 |
| 0.6012 | 2.99 | 229 | 0.6767 | 0.7731 |
| 0.4795 | 4.0 | 306 | 0.8686 | 0.7408 |
| 0.5143 | 4.99 | 382 | 0.7684 | 0.7629 |
| 0.4878 | 6.0 | 459 | 0.7513 | 0.7744 |
| 0.4619 | 6.99 | 535 | 0.8314 | 0.7596 |
| 0.458 | 8.0 | 612 | 0.7943 | 0.7670 |
| 0.4383 | 8.99 | 688 | 0.8460 | 0.7596 |
| 0.5196 | 9.93 | 760 | 0.8000 | 0.7678 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
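### Inference example
A minimal inference sketch with the `transformers` audio-classification pipeline (not part of the original card; the input is assumed to be a 16 kHz mono recording):
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="antonjaragon/emotions_xls53_6_classes")
print(classifier("speech_sample.wav"))  # path to a local audio file (assumed)
```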
|
staghado/vit.cpp
|
staghado
| 2023-11-28T20:28:42Z | 355 | 1 | null |
[
"gguf",
"vit ",
"image classification",
"ggml",
"license:mit",
"region:us"
] | null | 2023-11-28T20:06:32Z |
---
license: mit
tags:
- 'vit '
- image classification
- ggml
---
# Vision Transformer (ViT) models for image classification converted to ggml format
[Available models](https://github.com/staghado/vit.cpp)
| Model | Disk | Mem | SHA |
| --- | --- | --- | --- |
| tiny | 12 MB | ~20 MB | `25ce65ff60e08a1a5b486685b533d79718e74c0f` |
| small | 45 MB | ~52 MB | `7a9f85340bd1a3dcd4275f46d5ee1db66649700e` |
| base | 174 MB | ~179 MB | `a10d29628977fe27691edf55b7238f899b8c02eb` |
| large | 610 MB | ~597 MB | `5f27087930f21987050188f9dc9eea75ac607214` |
The models are pre-trained on ImageNet21k then finetuned on ImageNet1k
with a patch size of 16 and an image size of 224.
For more information, visit:
https://github.com/staghado/vit.cpp
|
AI4Chem/ChemLLM-7B-Chat
|
AI4Chem
| 2024-04-22T05:49:05Z | 355 | 64 |
transformers
|
[
"transformers",
"safetensors",
"internlm",
"feature-extraction",
"chemistry",
"text-generation",
"custom_code",
"en",
"zh",
"arxiv:2402.06852",
"doi:10.57967/hf/2002",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2024-01-15T07:29:40Z |
---
license: apache-2.0
pipeline_tag: text-generation
tags:
- chemistry
language:
- en
- zh
---
# ChemLLM-7B-Chat: LLM for Chemistry and Molecule Science
> [!IMPORTANT]
> Better using New version of ChemLLM!
> [AI4Chem/ChemLLM-7B-Chat-1.5-DPO](https://huggingface.co/AI4Chem/ChemLLM-7B-Chat-1.5-DPO) or [AI4Chem/ChemLLM-7B-Chat-1.5-SFT](https://huggingface.co/AI4Chem/ChemLLM-7B-Chat-1.5-SFT)
ChemLLM-7B-Chat, the First Open-source Large Language Model for Chemistry and Molecule Science, built on InternLM-2 with ❤
[](https://huggingface.co/papers/2402.06852)
<center><img src='https://cdn-uploads.huggingface.co/production/uploads/64bce15bafd1e46c5504ad38/wdFV6p3rTBCtskbeuVwNJ.png'></center>
## News
- ChemLLM-1.5 released! Two versions are available [AI4Chem/ChemLLM-7B-Chat-1.5-DPO](https://huggingface.co/AI4Chem/ChemLLM-7B-Chat-1.5-DPO) or [AI4Chem/ChemLLM-7B-Chat-1.5-SFT](https://huggingface.co/AI4Chem/ChemLLM-7B-Chat-1.5-SFT).[2024-4-2]
- ChemLLM-1.5 updated! Have a try on [Demo Site](https://chemllm.org/#/chat) or [API Reference](https://api.chemllm.org/docs).[2024-3-23]
- ChemLLM has been featured by HuggingFace on [“Daily Papers” page](https://huggingface.co/papers/2402.06852).[2024-2-13]
- ChemLLM arXiv preprint released.[ChemLLM: A Chemical Large Language Model](https://arxiv.org/abs/2402.06852)[2024-2-10]
- News report from [Shanghai AI Lab](https://mp.weixin.qq.com/s/u-i7lQxJzrytipek4a87fw)[2024-1-26]
- ChemLLM-7B-Chat ver 1.0 released. https://chemllm.org/ [2024-1-18]
- ChemLLM-7B-Chat ver 1.0 open-sourced.[2024-1-17]
- Chepybara ver 0.2 online Demo released. https://chemllm.org/ [2023-12-9]
## Usage
Try [online demo](https://chemllm.org/) instantly, or...
Install `transformers`,
```
pip install transformers
```
Load `ChemLLM-7B-Chat` and run,
```
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
import torch
model_name_or_id = "AI4Chem/ChemLLM-7B-Chat"
model = AutoModelForCausalLM.from_pretrained(model_name_or_id, torch_dtype=torch.float16, device_map="auto",trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_id,trust_remote_code=True)
prompt = "What is Molecule of Ibuprofen?"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
generation_config = GenerationConfig(
do_sample=True,
top_k=1,
temperature=0.9,
max_new_tokens=500,
repetition_penalty=1.5,
pad_token_id=tokenizer.eos_token_id
)
outputs = model.generate(**inputs, generation_config=generation_config)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## System Prompt Best Practice
You can use the same Dialogue Templates and System Prompt from [Agent Chepybara](https://chemllm.org/) to get a better response in local inference.
### Dialogue Templates
For queries in ShareGPT format like,
```
{'instruction':"...","prompt":"...","answer":"...","history":[[q1,a1],[q2,a2]]}
```
You can format it into this InternLM2 Dialogue format like,
```
def InternLM2_format(instruction,prompt,answer,history):
prefix_template=[
"<|system|>:",
"{}"
]
prompt_template=[
"<|user|>:",
"{}\n",
"<|Bot|>:\n"
]
system = f'{prefix_template[0]}\n{prefix_template[-1].format(instruction)}\n'
history = "\n".join([f'{prompt_template[0]}\n{prompt_template[1].format(qa[0])}{prompt_template[-1]}{qa[1]}' for qa in history])
prompt = f'\n{prompt_template[0]}\n{prompt_template[1].format(prompt)}{prompt_template[-1]}'
return f"{system}{history}{prompt}"
```
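For example, the formatter above could be called like this (a small sketch; the values are illustrative):
```python
formatted = InternLM2_format(
    instruction="You are Chepybara, a chemistry-centric assistant.",
    prompt="What is the SMILES of ibuprofen?",
    answer=None,
    history=[["What is Molecule of Ibuprofen?", "CC(C)Cc1ccc(cc1)C(C)C(=O)O"]],
)
print(formatted)
```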
And there is a good example for system prompt,
```
- Chepybara is a conversational language model that is developed by Shanghai AI Laboratory (上海人工智能实验室). It is designed to be Professional, Sophisticated, and Chemical-centric.
- For uncertain notions and data, Chepybara always assumes it with theoretical prediction and notices users then.
- Chepybara can accept SMILES (Simplified Molecular Input Line Entry System) string, and prefer output IUPAC names (International Union of Pure and Applied Chemistry nomenclature of organic chemistry), depict reactions in SMARTS (SMILES arbitrary target specification) string. Self-Referencing Embedded Strings (SELFIES) are also accepted.
- Chepybara always solves problems and thinks in step-by-step fashion, Output begin with *Let's think step by step*."
```
## Results
### MMLU Highlights
| dataset | ChatGLM3-6B | Qwen-7B | LLaMA-2-7B | Mistral-7B | InternLM2-7B-Chat | ChemLLM-7B-Chat |
| ---------------------- | ----------- | ------- | ---------- | ---------- | ----------------- | ----------------- |
| college chemistry | 43.0 | 39.0 | 27.0 | 40.0 | 43.0 | 47.0 |
| college mathematics | 28.0 | 33.0 | 33.0 | 30.0 | 36.0 | 41.0 |
| college physics | 32.4 | 35.3 | 25.5 | 34.3 | 41.2 | 48.0 |
| formal logic | 35.7 | 43.7 | 24.6 | 40.5 | 34.9 | 47.6 |
| moral scenarios | 26.4 | 35.0 | 24.1 | 39.9 | 38.6 | 44.3 |
| humanities average | 62.7 | 62.5 | 51.7 | 64.5 | 66.5 | 68.6 |
| stem average | 46.5 | 45.8 | 39.0 | 47.8 | 52.2 | 52.6 |
| social science average | 68.2 | 65.8 | 55.5 | 68.1 | 69.7 | 71.9 |
| other average | 60.5 | 60.3 | 51.3 | 62.4 | 63.2 | 65.2 |
| mmlu | 58.0 | 57.1 | 48.2 | 59.2 | 61.7 | 63.2 |
*(OpenCompass)

### Chemical Benchmark

*(Score judged by ChatGPT-4-turbo)
### Professional Translation


You can try it [online](https://chemllm.org/).
## Cite this work
```
@misc{zhang2024chemllm,
title={ChemLLM: A Chemical Large Language Model},
author={Di Zhang and Wei Liu and Qian Tan and Jingdan Chen and Hang Yan and Yuliang Yan and Jiatong Li and Weiran Huang and Xiangyu Yue and Dongzhan Zhou and Shufei Zhang and Mao Su and Hansen Zhong and Yuqiang Li and Wanli Ouyang},
year={2024},
eprint={2402.06852},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
## Disclaimer
LLM may generate incorrect answers, Please pay attention to proofreading at your own risk.
## Open Source License
The code is licensed under Apache-2.0, while model weights are fully open for academic research and also allow **free** commercial usage. To apply for a commercial license, or other questions and collaborations, please contact <[email protected]>.
## Demo
[Agent Chepybara](https://chemllm.org/)

## Contact
AI4Physics Science, Shanghai AI Lab: [email protected]
|
LoneStriker/deepseek-coder-7b-instruct-v1.5-GGUF
|
LoneStriker
| 2024-01-26T18:08:20Z | 355 | 22 | null |
[
"gguf",
"license:other",
"region:us"
] | null | 2024-01-26T17:51:49Z |
---
license: other
license_name: deepseek
license_link: LICENSE
---
<p align="center">
<img width="1000px" alt="DeepSeek Coder" src="https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/pictures/logo.png?raw=true">
</p>
<p align="center"><a href="https://www.deepseek.com/">[🏠Homepage]</a> | <a href="https://coder.deepseek.com/">[🤖 Chat with DeepSeek Coder]</a> | <a href="https://discord.gg/Tc7c45Zzu5">[Discord]</a> | <a href="https://github.com/guoday/assert/blob/main/QR.png?raw=true">[Wechat(微信)]</a> </p>
<hr>
### 1. Introduction of Deepseek-Coder-7B-Instruct v1.5
Deepseek-Coder-7B-Instruct-v1.5 is continually pre-trained from Deepseek-LLM 7B on 2T tokens, using a 4K window size and a next-token-prediction objective, and is then fine-tuned on 2B tokens of instruction data.
- **Home Page:** [DeepSeek](https://deepseek.com/)
- **Repository:** [deepseek-ai/deepseek-coder](https://github.com/deepseek-ai/deepseek-coder)
- **Chat With DeepSeek Coder:** [DeepSeek-Coder](https://coder.deepseek.com/)
### 2. How to Use
Here give some examples of how to use our model.
#### Chat Model Inference
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-7b-instruct-v1.5", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-7b-instruct-v1.5", trust_remote_code=True).cuda()
messages=[
{ 'role': 'user', 'content': "write a quick sort algorithm in python."}
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, top_k=50, top_p=0.95, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
```
### 3. License
This code repository is licensed under the MIT License. The use of DeepSeek Coder models is subject to the Model License. DeepSeek Coder supports commercial use.
See the [LICENSE-MODEL](https://github.com/deepseek-ai/deepseek-coder/blob/main/LICENSE-MODEL) for more details.
### 4. Contact
If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).
|
mradermacher/mythospice-limarp-70b-i1-GGUF
|
mradermacher
| 2024-05-06T06:22:38Z | 355 | 1 |
transformers
|
[
"transformers",
"gguf",
"llama",
"llama-2",
"not-for-all-audiences",
"en",
"base_model:Doctor-Shotgun/mythospice-limarp-70b",
"license:agpl-3.0",
"endpoints_compatible",
"region:us"
] | null | 2024-02-19T18:46:22Z |
---
base_model: Doctor-Shotgun/mythospice-limarp-70b
language:
- en
library_name: transformers
license: agpl-3.0
quantized_by: mradermacher
tags:
- llama
- llama-2
- not-for-all-audiences
---
## About
weighted/imatrix quant of https://huggingface.co/Doctor-Shotgun/mythospice-limarp-70b, created using 164k semi-random english-only tokens.
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/mythospice-limarp-70b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
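For multi-part files such as the Q6_K split below, the parts can be concatenated back into a single file before use; a small sketch in Python (assuming, as is typical for these `.partNofM` uploads, that the parts are raw byte splits):
```python
from pathlib import Path

# Reassemble a split GGUF by concatenating its parts in order.
parts = sorted(Path(".").glob("mythospice-limarp-70b.i1-Q6_K.gguf.part*of*"))
with open("mythospice-limarp-70b.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        out.write(part.read_bytes())
```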
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/mythospice-limarp-70b-i1-GGUF/resolve/main/mythospice-limarp-70b.i1-IQ1_S.gguf) | i1-IQ1_S | 15.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/mythospice-limarp-70b-i1-GGUF/resolve/main/mythospice-limarp-70b.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/mythospice-limarp-70b-i1-GGUF/resolve/main/mythospice-limarp-70b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.7 | |
| [GGUF](https://huggingface.co/mradermacher/mythospice-limarp-70b-i1-GGUF/resolve/main/mythospice-limarp-70b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.8 | |
| [GGUF](https://huggingface.co/mradermacher/mythospice-limarp-70b-i1-GGUF/resolve/main/mythospice-limarp-70b.i1-IQ2_S.gguf) | i1-IQ2_S | 21.8 | |
| [GGUF](https://huggingface.co/mradermacher/mythospice-limarp-70b-i1-GGUF/resolve/main/mythospice-limarp-70b.i1-IQ2_M.gguf) | i1-IQ2_M | 23.7 | |
| [GGUF](https://huggingface.co/mradermacher/mythospice-limarp-70b-i1-GGUF/resolve/main/mythospice-limarp-70b.i1-Q2_K.gguf) | i1-Q2_K | 25.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/mythospice-limarp-70b-i1-GGUF/resolve/main/mythospice-limarp-70b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/mythospice-limarp-70b-i1-GGUF/resolve/main/mythospice-limarp-70b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.6 | |
| [GGUF](https://huggingface.co/mradermacher/mythospice-limarp-70b-i1-GGUF/resolve/main/mythospice-limarp-70b.i1-Q3_K_XS.gguf) | i1-Q3_K_XS | 28.7 | |
| [GGUF](https://huggingface.co/mradermacher/mythospice-limarp-70b-i1-GGUF/resolve/main/mythospice-limarp-70b.i1-IQ3_S.gguf) | i1-IQ3_S | 30.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/mythospice-limarp-70b-i1-GGUF/resolve/main/mythospice-limarp-70b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/mythospice-limarp-70b-i1-GGUF/resolve/main/mythospice-limarp-70b.i1-IQ3_M.gguf) | i1-IQ3_M | 31.4 | |
| [GGUF](https://huggingface.co/mradermacher/mythospice-limarp-70b-i1-GGUF/resolve/main/mythospice-limarp-70b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/mythospice-limarp-70b-i1-GGUF/resolve/main/mythospice-limarp-70b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/mythospice-limarp-70b-i1-GGUF/resolve/main/mythospice-limarp-70b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/mythospice-limarp-70b-i1-GGUF/resolve/main/mythospice-limarp-70b.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/mythospice-limarp-70b-i1-GGUF/resolve/main/mythospice-limarp-70b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/mythospice-limarp-70b-i1-GGUF/resolve/main/mythospice-limarp-70b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mythospice-limarp-70b-i1-GGUF/resolve/main/mythospice-limarp-70b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.9 | |
| [GGUF](https://huggingface.co/mradermacher/mythospice-limarp-70b-i1-GGUF/resolve/main/mythospice-limarp-70b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 49.2 | |
| [PART 1](https://huggingface.co/mradermacher/mythospice-limarp-70b-i1-GGUF/resolve/main/mythospice-limarp-70b.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/mythospice-limarp-70b-i1-GGUF/resolve/main/mythospice-limarp-70b.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 57.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|