modelId (string, 5-122 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC]) | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (string, 245 classes) | tags (list, 1-4.05k items) | pipeline_tag (string, 48 classes) | createdAt (timestamp[us, tz=UTC]) | card (string, 1-901k chars)
---|---|---|---|---|---|---|---|---|---|
massimilianowosz/PandasSolver-GGUF | massimilianowosz | 2024-06-13T20:32:26Z | 434 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-06-13T13:48:21Z | Entry not found |
CoprolaliacPress/Fantasy_Writer-Q4_K_M-GGUF | CoprolaliacPress | 2024-07-02T04:35:23Z | 434 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:CoprolaliacPress/Fantasy_Writer",
"endpoints_compatible",
"region:us"
]
| null | 2024-07-02T04:35:00Z | ---
base_model: CoprolaliacPress/Fantasy_Writer
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# CoprolaliacPress/Fantasy_Writer-Q4_K_M-GGUF
This model was converted to GGUF format from [`CoprolaliacPress/Fantasy_Writer`](https://huggingface.co/CoprolaliacPress/Fantasy_Writer) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/CoprolaliacPress/Fantasy_Writer) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CoprolaliacPress/Fantasy_Writer-Q4_K_M-GGUF --hf-file fantasy_writer-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CoprolaliacPress/Fantasy_Writer-Q4_K_M-GGUF --hf-file fantasy_writer-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CoprolaliacPress/Fantasy_Writer-Q4_K_M-GGUF --hf-file fantasy_writer-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CoprolaliacPress/Fantasy_Writer-Q4_K_M-GGUF --hf-file fantasy_writer-q4_k_m.gguf -c 2048
```
|
IDEA-CCNL/Erlangshen-MegatronBert-1.3B | IDEA-CCNL | 2023-05-25T09:56:43Z | 433 | 24 | transformers | [
"transformers",
"pytorch",
"megatron-bert",
"bert",
"NLU",
"FewCLUE",
"ZeroCLUE",
"zh",
"arxiv:2209.02970",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04Z | ---
language:
- zh
license: apache-2.0
tags:
- bert
- NLU
- FewCLUE
- ZeroCLUE
inference: true
---
# Erlangshen-MegatronBert-1.3B
- Main Page:[Fengshenbang](https://fengshenbang-lm.com/)
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
## 简介 Brief Introduction
2021登顶FewCLUE和ZeroCLUE,处理NLU任务,开源时最大的中文BERT模型
It topped the FewCLUE and ZeroCLUE benchmarks in 2021, handles NLU tasks, and was the largest open-source Chinese BERT model at the time of release.
## 模型分类 Model Taxonomy
| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 通用 General | 自然语言理解 NLU | 二郎神 Erlangshen | MegatronBERT | 1.3B | 中文 Chinese |
## 模型信息 Model Information
Encoder结构为主的双向语言模型,专注于解决各种自然语言理解任务。
我们跟进了[Megatron-LM](https://github.com/NVIDIA/Megatron-LM)的工作,使用了32张A100,总共耗时14天在悟道语料库(180 GB版本)上训练了十亿级别参数量的BERT。同时,鉴于中文语法和大规模训练的难度,我们使用四种预训练策略来改进BERT:1) 整词掩码, 2) 知识动态遮掩, 3) 句子顺序预测, 4) 层前归一化.
A bidirectional language model based on the Encoder structure, focusing on solving various NLU tasks.
We follow [Megatron-LM](https://github.com/NVIDIA/Megatron-LM), using 32 A100s and spending 14 days training a billion-level BERT on WuDao Corpora (180 GB version). Given Chinese grammar and the difficulty of large-scale training, we use four pre-training procedures to improve BERT: 1) Whole Word Masking (WWM), 2) Knowledge-based Dynamic Masking (KDM), 3) Sentence Order Prediction (SOP), 4) Pre-layer Normalization (Pre-LN).
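To make the first of these strategies concrete, here is a minimal pure-Python sketch of Whole Word Masking over WordPiece tokens (the 15% ratio and the example tokens are illustrative; this is not the actual training code):
```python
import random

def whole_word_mask(tokens, mask_ratio=0.15, mask_token="[MASK]"):
    """Mask whole words: a WordPiece continuation ('##...') is always
    masked together with the piece that starts its word."""
    # Group token indices into words using the '##' continuation marker.
    words = []
    for i, tok in enumerate(tokens):
        if tok.startswith("##") and words:
            words[-1].append(i)
        else:
            words.append([i])
    # Randomly pick whole words until roughly mask_ratio of tokens are covered.
    budget = max(1, int(round(len(tokens) * mask_ratio)))
    random.shuffle(words)
    masked, covered = list(tokens), 0
    for word in words:
        if covered >= budget:
            break
        for i in word:
            masked[i] = mask_token
        covered += len(word)
    return masked

print(whole_word_mask(["二", "郎", "神", "is", "a", "Chinese", "BER", "##T", "model"]))
```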
### 成就 Achievements
1.2021年11月10日,Erlangshen-MegatronBert-1.3B在FewCLUE上取得第一。其中,它在CHIDF(成语填空)和TNEWS(新闻分类)子任务中的表现优于人类表现。此外,它在CHIDF(成语填空), CSLDCP(学科文献分类), OCNLI(自然语言推理)任务中均名列前茅。
2.2022年1月24日,Erlangshen-MegatronBert-1.3B在CLUE基准测试中的ZeroCLUE中取得第一。具体到子任务,我们在CSLDCP(主题文献分类), TNEWS(新闻分类), IFLYTEK(应用描述分类), CSL(抽象关键字识别)和CLUEWSC(参考消歧)任务中取得第一。
3.在2022年7月10日,Erlangshen-MegatronBert-1.3B在CLUE基准的语义匹配任务中取得第一。
1. On November 10, 2021, Erlangshen-MegatronBert-1.3B topped the FewCLUE benchmark. Among the subtasks, it outperformed human performance on CHIDF (idiom fill-in-the-blank) and TNEWS (news classification). In addition, it ranked at the top in the CHIDF (idiom fill-in-the-blank), CSLDCP (subject literature classification), and OCNLI (natural language inference) tasks.
2. On January 24, 2022, Erlangshen-MegatronBert-1.3B topped the ZeroCLUE benchmark. Among the subtasks, it ranked first in CSLDCP (Subject Literature Classification), TNEWS (News Classification), IFLYTEK (Application Description Classification), CSL (Abstract Keyword Recognition), and CLUEWSC (Referential Disambiguation).
3. On July 10, 2022, Erlangshen-MegatronBert-1.3B topped the semantic matching task of the CLUE benchmark.
### 下游效果 Performance
| 模型 | afqmc | tnews | iflytek | ocnli | cmnli | wsc | csl |
| :--------: | :-----: | :----: | :-----: | :----: | :----: | :----: | :----: |
| roberta-wwm-ext-large | 0.7514 | 0.5872 | 0.6152 | 0.777 | 0.814 | 0.8914 | 0.86 |
| Erlangshen-MegatronBert-1.3B | 0.7608 | 0.5996 | 0.6234 | 0.7917 | 0.81 | 0.9243 | 0.872 |
## 使用 Usage
```python
from transformers import MegatronBertConfig, MegatronBertModel
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Erlangshen-MegatronBert-1.3B")
config = MegatronBertConfig.from_pretrained("IDEA-CCNL/Erlangshen-MegatronBert-1.3B")
model = MegatronBertModel.from_pretrained("IDEA-CCNL/Erlangshen-MegatronBert-1.3B")
```
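A minimal inference sketch continuing the snippet above (the example sentence is just an illustration):
```python
import torch

inputs = tokenizer("今天天气真好", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# Contextual token representations: (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)
```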
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):
If you are using this resource for your work, please cite our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` |
fusing/ncsnpp-ffhq-ve-dummy-update | fusing | 2022-07-20T23:27:17Z | 433 | 1 | diffusers | [
"diffusers",
"region:us"
]
| null | 2022-07-19T13:32:15Z | Entry not found |
deepvk/deberta-v1-distill | deepvk | 2023-08-10T05:57:02Z | 433 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"deberta",
"feature-extraction",
"ru",
"en",
"arxiv:1910.01108",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2023-03-17T11:20:51Z | ---
license: apache-2.0
language:
- ru
- en
library_name: transformers
pipeline_tag: feature-extraction
---
# DeBERTa-distill
<!-- Provide a quick summary of what the model is/does. -->
A pretrained bidirectional encoder for the Russian language.
The model was trained using the standard MLM objective on large text corpora, including open social data.
See `Training Details` section for more information.
⚠️ This model contains only the encoder part without any pretrained head.
- **Developed by:** [deepvk](https://vk.com/deepvk)
- **Model type:** DeBERTa
- **Languages:** Mostly Russian, with a small fraction of other languages
- **License:** Apache 2.0
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("deepvk/deberta-v1-distill")
model = AutoModel.from_pretrained("deepvk/deberta-v1-distill")
text = "Привет, мир!"
inputs = tokenizer(text, return_tensors='pt')
predictions = model(**inputs)
```
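Because only the encoder is provided, you typically pool the token states yourself to get a sentence embedding; a minimal sketch continuing the snippet above (mean pooling is our illustrative choice, not a recommendation from the model card):
```python
# Mean-pool the final hidden states, ignoring padding positions.
mask = inputs["attention_mask"].unsqueeze(-1).float()   # (batch, seq_len, 1)
hidden = predictions.last_hidden_state                  # (batch, seq_len, hidden)
embedding = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(embedding.shape)                                  # (batch, hidden)
```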
## Training Details
### Training Data
400 GB of filtered and deduplicated texts in total.
A mix of the following data: Wikipedia, Books, Twitter comments, Pikabu, Proza.ru, Film subtitles, News websites, and Social corpus.
#### Deduplication procedure
1. Calculate shingles of size 5
2. Calculate MinHash with 100 seeds → every sample (text) gets a signature of 100 numbers
3. Split every signature into 10 buckets → each bucket, which contains (100 / 10) = 10 numbers, gets hashed into 1 hash → we have 10 hashes for every sample
4. For each bucket, find duplicates: find samples which have the same hash → calculate pair-wise Jaccard similarity → if the similarity is >0.7 then it's a duplicate
5. Gather duplicates from all the buckets and filter them (see the sketch after this list)
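Here is that sketch, in pure Python (illustrative only; the production pipeline is certainly more elaborate and optimized):
```python
import hashlib
from collections import defaultdict
from itertools import combinations

def shingles(text, k=5):
    return {text[i:i + k] for i in range(max(1, len(text) - k + 1))}

def h(seed, value):
    # Deterministic seeded hash for a shingle.
    return int(hashlib.md5(f"{seed}:{value}".encode()).hexdigest(), 16)

def minhash(text, num_seeds=100):
    sh = shingles(text)
    return [min(h(seed, s) for s in sh) for seed in range(num_seeds)]

def jaccard(a, b):
    return len(a & b) / len(a | b)

def near_duplicates(texts, num_seeds=100, bands=10, threshold=0.7):
    signatures = [minhash(t, num_seeds) for t in texts]
    rows = num_seeds // bands                      # 100 / 10 = 10 numbers per bucket
    buckets = defaultdict(list)
    for idx, sig in enumerate(signatures):
        for b in range(bands):
            key = (b, hash(tuple(sig[b * rows:(b + 1) * rows])))
            buckets[key].append(idx)
    dupes = set()
    for members in buckets.values():               # candidate pairs share a bucket hash
        for i, j in combinations(members, 2):
            if jaccard(shingles(texts[i]), shingles(texts[j])) > threshold:
                dupes.add((i, j))
    return dupes
```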
### Training Hyperparameters
| Argument | Value |
|--------------------|----------------------|
| Training regime | fp16 mixed precision |
| Optimizer | AdamW |
| Adam betas | 0.9,0.98 |
| Adam eps | 1e-6 |
| Weight decay | 1e-2 |
| Batch size | 3840 |
| Num training steps | 100k |
| Num warm-up steps | 5k |
| LR scheduler | Cosine |
| LR | 5e-4 |
| Gradient norm | 1.0 |
The model was trained on a machine with 8xA100 for approximately 15 days.
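For reference, the optimizer and schedule in the table map onto standard PyTorch/`transformers` APIs roughly as follows (a sketch, not the authors' actual training code; `model` is the encoder loaded as in the snippet above):
```python
import torch
from transformers import get_cosine_schedule_with_warmup

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=5e-4,
    betas=(0.9, 0.98),
    eps=1e-6,
    weight_decay=1e-2,
)
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=5_000, num_training_steps=100_000
)
# In the training loop, gradients are clipped to the norm from the table:
# torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
```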
### Architecture details
| Argument | Value |
|-------------------------|----------------|
|Encoder layers | 6 |
|Encoder attention heads | 12 |
|Encoder embed dim | 768 |
|Encoder ffn embed dim | 3,072 |
|Activation function | GeLU |
|Attention dropout | 0.1 |
|Dropout | 0.1 |
|Max positions | 512 |
|Vocab size | 50266 |
|Tokenizer type | Byte-level BPE |
### Distillation
In our distillation procedure, we follow [Sanh et al.](https://arxiv.org/abs/1910.01108). The student is initialized from the [teacher](https://huggingface.co/deepvk/deberta-v1-base) by taking only every second layer. We use the MLM loss and CE loss with coefficients of 0.5.
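A minimal sketch of the two ingredients described above (initializing the student from every second teacher layer, and mixing the MLM and CE terms with 0.5 coefficients). The attribute paths, the exact choice of layers, and writing the CE term as a soft-target KL are our assumptions rather than the authors' code:
```python
import torch.nn.functional as F

def init_student_from_teacher(student, teacher):
    # Copy embeddings, then take every second encoder layer from the teacher
    # (one possible reading of "every second layer").
    student.embeddings.load_state_dict(teacher.embeddings.state_dict())
    for i, layer in enumerate(student.encoder.layer):
        layer.load_state_dict(teacher.encoder.layer[2 * i].state_dict())

def distillation_loss(student_logits, teacher_logits, labels, temperature=1.0):
    # MLM loss on the hard labels (non-masked positions set to -100).
    mlm = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        labels.view(-1),
        ignore_index=-100,
    )
    # "CE" term against the teacher's softened distribution.
    ce = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    return 0.5 * mlm + 0.5 * ce
```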
## Evaluation
We evaluated the model on [Russian Super Glue](https://russiansuperglue.com/) dev set.
The best result in each task is marked in bold.
All models have the same size except the distilled version of DeBERTa.
| Model | RCB | PARus | MuSeRC | TERRa | RUSSE | RWSD | DaNetQA | Score |
|------------------------------------------------------------------------|-----------|--------|---------|-------|---------|---------|---------|-----------|
| [vk-deberta-distill](https://huggingface.co/deepvk/deberta-v1-distill) | 0.433 | 0.56 | 0.625 | 0.59 | 0.943 | 0.569 | 0.726 | 0.635 |
| [vk-roberta-base](https://huggingface.co/deepvk/roberta-base) | 0.46 | 0.56 | 0.679 | 0.769 | 0.960 | 0.569 | 0.658 | 0.665 |
| [vk-deberta-base](https://huggingface.co/deepvk/deberta-v1-base) | 0.450 |**0.61**|**0.722**| 0.704 | 0.948 | 0.578 |**0.76** |**0.682** |
| [vk-bert-base](https://huggingface.co/deepvk/bert-base-uncased) | 0.467 | 0.57 | 0.587 | 0.704 | 0.953 |**0.583**| 0.737 | 0.657 |
| [sber-bert-base](https://huggingface.co/ai-forever/ruBert-base) | **0.491** |**0.61**| 0.663 | 0.769 |**0.962**| 0.574 | 0.678 | 0.678 | |
TinyLlama/TinyLlama-1.1B-Chat-v0.2-GGUF | TinyLlama | 2023-09-29T18:19:01Z | 433 | 9 | null | [
"gguf",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"dataset:OpenAssistant/oasst_top1_2023-08-25",
"license:apache-2.0",
"region:us"
]
| null | 2023-09-27T13:52:03Z | ---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- OpenAssistant/oasst_top1_2023-08-25
language:
- en
---
<div align="center">
# TinyLlama-1.1B
</div>
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.
<div align="center">
<img src="./TinyLlama_logo.png" width="300"/>
</div>
We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
#### This Model
This is the chat model finetuned on [PY007/TinyLlama-1.1B-intermediate-step-240k-503b](https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-240k-503b). The dataset used is [OpenAssistant/oasst_top1_2023-08-25](https://huggingface.co/datasets/OpenAssistant/oasst_top1_2023-08-25).
**Update from V0.1: 1. Different dataset. 2. Different chat format (now [chatml](https://github.com/openai/openai-python/blob/main/chatml.md) formatted conversations).** |
TheBloke/Capybara-Tess-Yi-34B-200K-DARE-Ties-GGUF | TheBloke | 2023-11-25T22:08:05Z | 433 | 9 | transformers | [
"transformers",
"gguf",
"yi",
"text-generation",
"en",
"base_model:brucethemoose/Capybara-Tess-Yi-34B-200K-DARE-Ties",
"license:other",
"region:us"
]
| text-generation | 2023-11-25T20:15:30Z | ---
base_model: brucethemoose/Capybara-Tess-Yi-34B-200K-DARE-Ties
inference: false
language:
- en
library_name: transformers
license: other
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
license_name: yi-license
model_creator: brucethemoose
model_name: Capybara Tess Yi 34B 200K Dare Ties
model_type: yi
pipeline_tag: text-generation
prompt_template: 'SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Capybara Tess Yi 34B 200K Dare Ties - GGUF
- Model creator: [brucethemoose](https://huggingface.co/brucethemoose)
- Original model: [Capybara Tess Yi 34B 200K Dare Ties](https://huggingface.co/brucethemoose/Capybara-Tess-Yi-34B-200K-DARE-Ties)
<!-- description start -->
## Description
This repo contains GGUF format model files for [brucethemoose's Capybara Tess Yi 34B 200K Dare Ties](https://huggingface.co/brucethemoose/Capybara-Tess-Yi-34B-200K-DARE-Ties).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Capybara-Tess-Yi-34B-200K-DARE-Ties-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Capybara-Tess-Yi-34B-200K-DARE-Ties-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Capybara-Tess-Yi-34B-200K-DARE-Ties-GGUF)
* [brucethemoose's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/brucethemoose/Capybara-Tess-Yi-34B-200K-DARE-Ties)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Orca-Vicuna
```
SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [capybara-tess-yi-34b-200k-dare-ties.Q2_K.gguf](https://huggingface.co/TheBloke/Capybara-Tess-Yi-34B-200K-DARE-Ties-GGUF/blob/main/capybara-tess-yi-34b-200k-dare-ties.Q2_K.gguf) | Q2_K | 2 | 14.56 GB| 17.06 GB | smallest, significant quality loss - not recommended for most purposes |
| [capybara-tess-yi-34b-200k-dare-ties.Q3_K_S.gguf](https://huggingface.co/TheBloke/Capybara-Tess-Yi-34B-200K-DARE-Ties-GGUF/blob/main/capybara-tess-yi-34b-200k-dare-ties.Q3_K_S.gguf) | Q3_K_S | 3 | 14.96 GB| 17.46 GB | very small, high quality loss |
| [capybara-tess-yi-34b-200k-dare-ties.Q3_K_M.gguf](https://huggingface.co/TheBloke/Capybara-Tess-Yi-34B-200K-DARE-Ties-GGUF/blob/main/capybara-tess-yi-34b-200k-dare-ties.Q3_K_M.gguf) | Q3_K_M | 3 | 16.64 GB| 19.14 GB | very small, high quality loss |
| [capybara-tess-yi-34b-200k-dare-ties.Q3_K_L.gguf](https://huggingface.co/TheBloke/Capybara-Tess-Yi-34B-200K-DARE-Ties-GGUF/blob/main/capybara-tess-yi-34b-200k-dare-ties.Q3_K_L.gguf) | Q3_K_L | 3 | 18.14 GB| 20.64 GB | small, substantial quality loss |
| [capybara-tess-yi-34b-200k-dare-ties.Q4_0.gguf](https://huggingface.co/TheBloke/Capybara-Tess-Yi-34B-200K-DARE-Ties-GGUF/blob/main/capybara-tess-yi-34b-200k-dare-ties.Q4_0.gguf) | Q4_0 | 4 | 19.47 GB| 21.97 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [capybara-tess-yi-34b-200k-dare-ties.Q4_K_S.gguf](https://huggingface.co/TheBloke/Capybara-Tess-Yi-34B-200K-DARE-Ties-GGUF/blob/main/capybara-tess-yi-34b-200k-dare-ties.Q4_K_S.gguf) | Q4_K_S | 4 | 19.54 GB| 22.04 GB | small, greater quality loss |
| [capybara-tess-yi-34b-200k-dare-ties.Q4_K_M.gguf](https://huggingface.co/TheBloke/Capybara-Tess-Yi-34B-200K-DARE-Ties-GGUF/blob/main/capybara-tess-yi-34b-200k-dare-ties.Q4_K_M.gguf) | Q4_K_M | 4 | 20.66 GB| 23.16 GB | medium, balanced quality - recommended |
| [capybara-tess-yi-34b-200k-dare-ties.Q5_0.gguf](https://huggingface.co/TheBloke/Capybara-Tess-Yi-34B-200K-DARE-Ties-GGUF/blob/main/capybara-tess-yi-34b-200k-dare-ties.Q5_0.gguf) | Q5_0 | 5 | 23.71 GB| 26.21 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [capybara-tess-yi-34b-200k-dare-ties.Q5_K_S.gguf](https://huggingface.co/TheBloke/Capybara-Tess-Yi-34B-200K-DARE-Ties-GGUF/blob/main/capybara-tess-yi-34b-200k-dare-ties.Q5_K_S.gguf) | Q5_K_S | 5 | 23.71 GB| 26.21 GB | large, low quality loss - recommended |
| [capybara-tess-yi-34b-200k-dare-ties.Q5_K_M.gguf](https://huggingface.co/TheBloke/Capybara-Tess-Yi-34B-200K-DARE-Ties-GGUF/blob/main/capybara-tess-yi-34b-200k-dare-ties.Q5_K_M.gguf) | Q5_K_M | 5 | 24.32 GB| 26.82 GB | large, very low quality loss - recommended |
| [capybara-tess-yi-34b-200k-dare-ties.Q6_K.gguf](https://huggingface.co/TheBloke/Capybara-Tess-Yi-34B-200K-DARE-Ties-GGUF/blob/main/capybara-tess-yi-34b-200k-dare-ties.Q6_K.gguf) | Q6_K | 6 | 28.21 GB| 30.71 GB | very large, extremely low quality loss |
| [capybara-tess-yi-34b-200k-dare-ties.Q8_0.gguf](https://huggingface.co/TheBloke/Capybara-Tess-Yi-34B-200K-DARE-Ties-GGUF/blob/main/capybara-tess-yi-34b-200k-dare-ties.Q8_0.gguf) | Q8_0 | 8 | 36.54 GB| 39.04 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Capybara-Tess-Yi-34B-200K-DARE-Ties-GGUF and below it, a specific filename to download, such as: capybara-tess-yi-34b-200k-dare-ties.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Capybara-Tess-Yi-34B-200K-DARE-Ties-GGUF capybara-tess-yi-34b-200k-dare-ties.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Capybara-Tess-Yi-34B-200K-DARE-Ties-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Capybara-Tess-Yi-34B-200K-DARE-Ties-GGUF capybara-tess-yi-34b-200k-dare-ties.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m capybara-tess-yi-34b-200k-dare-ties.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "SYSTEM: {system_message}\nUSER: {prompt}\nASSISTANT:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Capybara-Tess-Yi-34B-200K-DARE-Ties-GGUF", model_file="capybara-tess-yi-34b-200k-dare-ties.Q4_K_M.gguf", model_type="yi", gpu_layers=50)
print(llm("AI is going to"))
```
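If you prefer `llama-cpp-python`, a roughly equivalent sketch looks like this (this assumes the GGUF file has already been downloaded locally; adjust the path and `n_gpu_layers` for your system):
```python
from llama_cpp import Llama

# Point model_path at the downloaded GGUF file; set n_gpu_layers to 0 for CPU-only.
llm = Llama(
    model_path="./capybara-tess-yi-34b-200k-dare-ties.Q4_K_M.gguf",
    n_ctx=2048,
    n_gpu_layers=50,
)
output = llm(
    "SYSTEM: You are a helpful assistant.\nUSER: AI is going to\nASSISTANT:",
    max_tokens=128,
)
print(output["choices"][0]["text"])
```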
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: brucethemoose's Capybara Tess Yi 34B 200K Dare Ties
**NousResearch/Nous-Capybara-34B**, **migtissera/Tess-M-v1.2** and **migtissera/Tess-M-v1.3** merged with a new, experimental implementation of "dare ties" via mergekit. See:
> Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch
https://github.com/yule-BUAA/MergeLM
https://github.com/cg123/mergekit/tree/dare-tokenizer
Highly experimental and still being tested! But this should yield a better merge than a typical linear/slerp merge or even a ties merge.
***
Merged with the following config, and the tokenizer from Yi Llamafied:
```
models:
- model: /home/alpha/Storage/Models/Raw/larryvrh_Yi-34B-200K-Llamafied
# no parameters necessary for base model
- model: /home/alpha/Storage/Models/Raw/migtissera_Tess-M-v1.3
parameters:
weight: 0.50
density: 0.56
- model: /home/alpha/Storage/Models/Raw/migtissera_Tess-M-v1.2
parameters:
weight: 0.20
density: 0.50
- model: /home/alpha/Storage/Models/Raw/Nous-Capybara-34B
parameters:
weight: 0.50
density: 0.56
merge_method: dare_ties
base_model: /home/alpha/Storage/Models/Raw/larryvrh_Yi-34B-200K-Llamafied
parameters:
int8_mask: true
dtype: bfloat16
```
Tess 1.2 (at a low weight) and 1.3 were used because, according to the trainer, they were trained on different datasets: https://migel.substack.com/p/learnings-from-training-tess
I chose not to include other finetunes, such as Dolphin, because they aren't trained on the 200K base. If any other 200K finetunes pop up, let me know.
***
## Prompt template: Orca-Vicuna
```
SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
```
Being a Yi model, try disabling the BOS token and/or running a lower temperature with MinP if output doesn't seem right.
Sometimes the model "spells out" the stop token as `</s>` like Capybara, so you may need to add `</s>` as an additional stopping condition.
***
Credits:
https://github.com/cg123/mergekit/tree/dare-tokenizer
https://huggingface.co/NousResearch/Nous-Capybara-34B/
https://huggingface.co/migtissera/Tess-M-v1.2
https://huggingface.co/migtissera/Tess-M-v1.3
https://huggingface.co/larryvrh/Yi-34B-200K-Llamafied
https://huggingface.co/01-ai/Yi-34B-200K
<!-- original-model-card end -->
|
NeverSleep/Noromaid-v0.1-mixtral-8x7b-v3-GGUF | NeverSleep | 2023-12-25T00:27:39Z | 433 | 8 | null | [
"gguf",
"license:cc-by-nc-4.0",
"region:us"
]
| null | 2023-12-24T11:43:20Z | ---
license: cc-by-nc-4.0
---

---
# Disclaimer:
## This model is experimental, do not expect everything to work.
You need to use our custom **prompting format** (scroll down to see it, or just directly download the SillyTavern instruct preset [here](https://files.catbox.moe/0ohmco.json))
---
Beeg noromaid. Suitable for RP, ERP.
This model was trained for 8h(v1) + 8h(v2) + 12h(v3) on customized modified datasets, focusing on RP, uncensoring, and a modified version of the Alpaca prompting (that was already used in LimaRP), which should be at the same conversational level as ChatLM or Llama2-Chat without adding any additional special tokens.
If you wanna have more info about this model (and v1 + v2) you can check out [my blog post](https://ikaridevgit.github.io/index.html?p=7&blog=blogid-6&bo=true)
[Recommended settings - Settings 1](https://huggingface.co/NeverSleep/Noromaid-v0.1-mixtral-8x7b-v3/discussions/1)
[Recommended settings - Settings 2 (idk if they are any good)](https://files.catbox.moe/fv4xhu.json)
## Credits:
- Undi
- IkariDev
<!-- description start -->
## Description
<!-- [Recommended settings - contributed by localfultonextractor](https://files.catbox.moe/ue0tja.json) -->
This repo contains GGUF files of Noromaid-v0.1-mixtral-8x7b-v3.
[FP16 - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-v0.1-mixtral-8x7b-v3)
<!-- [GGUF - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GGUF)-->
<!-- [GPTQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GPTQ)-->
<!-- [exl2[8bpw-8h] - by AzureBlack](https://huggingface.co/AzureBlack/Echidna-13b-v0.3-8bpw-8h-exl2)-->
<!-- [AWQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-AWQ)-->
<!-- [fp16 - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v4)-->
[GGUF - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-v0.1-mixtral-8x7b-v3-GGUF)
<!-- [OLD(GGUF - by IkariDev+Undi95)](https://huggingface.co/IkariDev/Athena-v4-GGUF)-->
## Ratings:
Note: We have permission from all users to upload their ratings; we DON'T screenshot random reviews without asking if we can put them here!
No ratings yet!
If you want your rating to be here, send us a message over on DC and we'll put up a screenshot of it here. DC name is "ikaridev" and "undi".
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Custom format, or Alpaca
### Custom format:
```
### Instruction:
{system prompt}
### Input:
{input}
### Response:
{reply}
```
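As an illustration, the custom format above can be assembled with a small helper (the function name and example strings are ours, not part of the model):
```python
def build_prompt(system_prompt: str, user_input: str) -> str:
    # Single-turn prompt in the custom (modified Alpaca) format above.
    return (
        "### Instruction:\n"
        f"{system_prompt}\n"
        "### Input:\n"
        f"{user_input}\n"
        "### Response:\n"
    )

print(build_prompt("You are a creative roleplay partner.",
                   "Describe the tavern we just walked into."))
```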
## Datasets used:
- Aesir 1 and 2 ([MinervaAI](https://huggingface.co/MinervaAI) / [Gryphe](https://huggingface.co/Gryphe))
- [LimaRP-20231109](https://huggingface.co/datasets/lemonilia/LimaRP) ([Lemonilia](https://huggingface.co/lemonilia))
- [ToxicDPO-NoWarning](https://huggingface.co/datasets/Undi95/toxic-dpo-v0.1-sharegpt) ([unalignment orga repo](https://huggingface.co/unalignment) + [Undi](https://huggingface.co/Undi95))
- [No-robots-ShareGPT](https://huggingface.co/datasets/Doctor-Shotgun/no-robots-sharegpt) ([Doctor-Shotgun](https://huggingface.co/Doctor-Shotgun))
## Others
Undi: If you want to support me, you can [here](https://ko-fi.com/undiai).
IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek |
SHMIS/ABS_AI_Helper | SHMIS | 2024-06-22T06:19:12Z | 433 | 1 | null | [
"gguf",
"license:apache-2.0",
"region:us"
]
| null | 2024-03-19T06:08:35Z | ---
license: apache-2.0
---
|
mradermacher/Mixtral_AI_CyberBrain_SFT-GGUF | mradermacher | 2024-05-06T05:25:33Z | 433 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:LeroyDyer/Mixtral_AI_CyberBrain_SFT",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-01T09:23:52Z | ---
base_model: LeroyDyer/Mixtral_AI_CyberBrain_SFT
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
## About
static quants of https://huggingface.co/LeroyDyer/Mixtral_AI_CyberBrain_SFT
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberBrain_SFT-GGUF/resolve/main/Mixtral_AI_CyberBrain_SFT.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberBrain_SFT-GGUF/resolve/main/Mixtral_AI_CyberBrain_SFT.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberBrain_SFT-GGUF/resolve/main/Mixtral_AI_CyberBrain_SFT.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberBrain_SFT-GGUF/resolve/main/Mixtral_AI_CyberBrain_SFT.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberBrain_SFT-GGUF/resolve/main/Mixtral_AI_CyberBrain_SFT.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberBrain_SFT-GGUF/resolve/main/Mixtral_AI_CyberBrain_SFT.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberBrain_SFT-GGUF/resolve/main/Mixtral_AI_CyberBrain_SFT.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberBrain_SFT-GGUF/resolve/main/Mixtral_AI_CyberBrain_SFT.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberBrain_SFT-GGUF/resolve/main/Mixtral_AI_CyberBrain_SFT.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberBrain_SFT-GGUF/resolve/main/Mixtral_AI_CyberBrain_SFT.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberBrain_SFT-GGUF/resolve/main/Mixtral_AI_CyberBrain_SFT.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberBrain_SFT-GGUF/resolve/main/Mixtral_AI_CyberBrain_SFT.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberBrain_SFT-GGUF/resolve/main/Mixtral_AI_CyberBrain_SFT.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberBrain_SFT-GGUF/resolve/main/Mixtral_AI_CyberBrain_SFT.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
lmstudio-community/codegemma-2b-GGUF | lmstudio-community | 2024-04-09T18:16:15Z | 433 | 3 | transformers | [
"transformers",
"gguf",
"text-generation",
"license:gemma",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-04-09T15:28:58Z | ---
library_name: transformers
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access CodeGemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged-in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
license: gemma
license_link: https://ai.google.dev/gemma/terms
quantized_by: bartowski
pipeline_tag: text-generation
lm_studio:
param_count: 2b
use_case: coding
release_date: 09-04-2024
model_creator: google
prompt_template: none
system_prompt: none
base_model: gemma
original_repo: google/codegemma-2b
---
## 💫 Community Model> CodeGemma 2b by Google
*👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator:** [Google](https://huggingface.co/google)<br>
**Original model**: [google/codegemma-2b](https://huggingface.co/google/codegemma-2b)<br>
**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b2589](https://github.com/ggerganov/llama.cpp/releases/tag/b2589)<br>
## Model Summary:
CodeGemma 2B is the first in a series of coding models released by Google. It is a code completion model, and as such cannot be prompted in the same way as a chat or instruct model.<br>
This model is perfect for code completion and use in tools like co-pilot, where its small size will make completions show up instantly while still delivering high performance.<br>
This model should not be used as a chat model, and will not answer questions.
## Prompt Template:
This model does not support a typical prompt template, but instead uses the following tokens for specifying input parts:
- <|fim_prefix|> precedes the context before the completion we want to run.
- <|fim_suffix|> precedes the suffix. You must put this token exactly where the cursor would be positioned in an editor, as this is the location that will be completed by the model.
- <|fim_middle|> is the prompt that invites the model to run the generation.
In addition to these, there's also <|file_separator|>, which is used to provide multi-file contexts.
Select the `LM Studio Blank Preset` and provide the format yourself, as shown below.
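Put together, a fill-in-the-middle prompt is just the three pieces concatenated in order; a small sketch (the helper name and example strings are ours):
```python
def fim_prompt(prefix: str, suffix: str) -> str:
    # The model generates the code that belongs between prefix and suffix.
    return f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"

print(fim_prompt("def add(a, b):\n    return ", "\n\nprint(add(1, 2))"))
```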
## Use case and examples
This model will excel at code generation and fill-in-the-middle.
### Coding 1
```
<|fim_prefix|>import datetime
def calculate_age(birth_year):
"""Calculates a person's age based on their birth year."""
current_year = datetime.date.today().year
<|fim_suffix|>
return age<|fim_middle|>
age = current_year - birth_year<|file_separator|>
```
Explanation: Here the model was given the fill-in-the-middle prefix and suffix. The model is then told to generate the middle section with the token `<|fim_middle|>`, to which it replies with the code that would complete the function.
### Coding 2
```
<|fim_prefix|>public class MergeSort {
public static void mergeSort(int[] arr) {
int n = arr.length;
if (n < 2) {
return;
}
<|fim_suffix|>
mergeSort(left);
mergeSort(right);
merge(arr, left, right);
}
public static void merge(int[] arr, int[] left, int[] right) {
int i = 0;
int j = 0;
int k = 0;
while (i < left.length && j < right.length) {
if (left[i] <= right[j]) {
arr[k] = left[i];
i++;
} else {
arr[k] = right[j];
j++;
}
k++;
}
while (i < left.length) {
arr[k] = left[i];
i++;
k++;
}
while (j < right.length) {
arr[k] = right[j];
j++;
k++;
}
}
public static void main(String[] args) {
int[] arr = {5, 2, 4, 6, 1, 3};
mergeSort(arr);
for (int i = 0; i < arr.length; i++) {
System.out.print(arr[i] + " ");
}
}
}
<|fim_middle|>
int mid = n / 2;
int[] left = new int[mid];
int[] right = new int[n - mid];
for (int i = 0; i < mid; i++) {
left[i] = arr[i];
}
for (int i = mid; i < n; i++) {
right[i - mid] = arr[i];
}<|file_separator|>
```
Explanation: The model was given the majority of a merge sort implementation in Java with a portion in the middle removed. The model was able to fill in the missing code based on the surrounding details.
### Coding 3
```
<|fim_prefix|>arr = [1, 5, 3, 76, 12, 154, 2, 56]
# Sort the array then print only the even numbers
<|fim_suffix|><|fim_middle|>
arr.sort()
for i in arr:
if i % 2 == 0:
print(i)<|file_separator|>
```
Explanation: While this model cannot be directly prompted, it can be hinted in the right direction by preceding the fill-in-the-middle token with a comment explaining what comes next, then using <|fim_suffix|> followed immediately by <|fim_middle|><br>
In this example, the comment suggests that what comes next is sorting the array and printing out each element that is even. The model accurately fills in what should go at <|fim_suffix|>.
## Technical Details
CodeGemma 2B is based on the Gemma 2B model, with additional training exclusively on code.
The code used is drawn from publicly available code repositories.
The model was trained exclusively for the purposes of code completion and excels at it.
Additional details can be found on Google's official report PDF [here](https://storage.googleapis.com/deepmind-media/gemma/codegemma_report.pdf)
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
🙏 Special thanks to [Kalomaze](https://github.com/kalomaze) for his dataset (linked [here](https://github.com/ggerganov/llama.cpp/discussions/5263)) that was used for calculating the imatrix for these quants, which improves the overall quality!
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
|
bartowski/WizardLM-2-7B-GGUF | bartowski | 2024-04-16T15:00:28Z | 433 | 25 | null | [
"gguf",
"text-generation",
"license:apache-2.0",
"region:us"
]
| text-generation | 2024-04-15T17:44:08Z | ---
license: apache-2.0
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp Quantizations of WizardLM-2-7B
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2675">b2675</a> for quantization.
Original model: https://huggingface.co/microsoft/WizardLM-2-7B
All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
## Prompt format
```
{system_prompt} USER: {prompt} ASSISTANT: </s>
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [WizardLM-2-7B-Q8_0.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-Q8_0.gguf) | Q8_0 | 7.69GB | Extremely high quality, generally unneeded but max available quant. |
| [WizardLM-2-7B-Q6_K.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-Q6_K.gguf) | Q6_K | 5.94GB | Very high quality, near perfect, *recommended*. |
| [WizardLM-2-7B-Q5_K_M.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-Q5_K_M.gguf) | Q5_K_M | 5.13GB | High quality, *recommended*. |
| [WizardLM-2-7B-Q5_K_S.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-Q5_K_S.gguf) | Q5_K_S | 4.99GB | High quality, *recommended*. |
| [WizardLM-2-7B-Q4_K_M.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-Q4_K_M.gguf) | Q4_K_M | 4.36GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [WizardLM-2-7B-Q4_K_S.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-Q4_K_S.gguf) | Q4_K_S | 4.14GB | Slightly lower quality with more space savings, *recommended*. |
| [WizardLM-2-7B-IQ4_NL.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-IQ4_NL.gguf) | IQ4_NL | 4.12GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [WizardLM-2-7B-IQ4_XS.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-IQ4_XS.gguf) | IQ4_XS | 3.90GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [WizardLM-2-7B-Q3_K_L.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-Q3_K_L.gguf) | Q3_K_L | 3.82GB | Lower quality but usable, good for low RAM availability. |
| [WizardLM-2-7B-Q3_K_M.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-Q3_K_M.gguf) | Q3_K_M | 3.51GB | Even lower quality. |
| [WizardLM-2-7B-IQ3_M.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-IQ3_M.gguf) | IQ3_M | 3.28GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [WizardLM-2-7B-IQ3_S.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-IQ3_S.gguf) | IQ3_S | 3.18GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [WizardLM-2-7B-Q3_K_S.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-Q3_K_S.gguf) | Q3_K_S | 3.16GB | Low quality, not recommended. |
| [WizardLM-2-7B-IQ3_XS.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-IQ3_XS.gguf) | IQ3_XS | 3.01GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [WizardLM-2-7B-IQ3_XXS.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-IQ3_XXS.gguf) | IQ3_XXS | 2.82GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [WizardLM-2-7B-Q2_K.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-Q2_K.gguf) | Q2_K | 2.71GB | Very low quality but surprisingly usable. |
| [WizardLM-2-7B-IQ2_M.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-IQ2_M.gguf) | IQ2_M | 2.50GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [WizardLM-2-7B-IQ2_S.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-IQ2_S.gguf) | IQ2_S | 2.31GB | Very low quality, uses SOTA techniques to be usable. |
| [WizardLM-2-7B-IQ2_XS.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-IQ2_XS.gguf) | IQ2_XS | 2.19GB | Very low quality, uses SOTA techniques to be usable. |
| [WizardLM-2-7B-IQ2_XXS.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-IQ2_XXS.gguf) | IQ2_XXS | 1.99GB | Lower quality, uses SOTA techniques to be usable. |
| [WizardLM-2-7B-IQ1_M.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-IQ1_M.gguf) | IQ1_M | 1.75GB | Extremely low quality, *not* recommended. |
| [WizardLM-2-7B-IQ1_S.gguf](https://huggingface.co/bartowski/WizardLM-2-7B-GGUF/blob/main/WizardLM-2-7B-IQ1_S.gguf) | IQ1_S | 1.61GB | Extremely low quality, *not* recommended. |
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
mradermacher/Aqueducts-18B-GGUF | mradermacher | 2024-05-05T14:55:59Z | 433 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:MarsupialAI/Aqueducts-18B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-01T13:41:53Z | ---
base_model: MarsupialAI/Aqueducts-18B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/MarsupialAI/Aqueducts-18B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Aqueducts-18B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Aqueducts-18B-GGUF/resolve/main/Aqueducts-18B.Q2_K.gguf) | Q2_K | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Aqueducts-18B-GGUF/resolve/main/Aqueducts-18B.IQ3_XS.gguf) | IQ3_XS | 7.4 | |
| [GGUF](https://huggingface.co/mradermacher/Aqueducts-18B-GGUF/resolve/main/Aqueducts-18B.IQ3_S.gguf) | IQ3_S | 7.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Aqueducts-18B-GGUF/resolve/main/Aqueducts-18B.Q3_K_S.gguf) | Q3_K_S | 7.9 | |
| [GGUF](https://huggingface.co/mradermacher/Aqueducts-18B-GGUF/resolve/main/Aqueducts-18B.IQ3_M.gguf) | IQ3_M | 8.1 | |
| [GGUF](https://huggingface.co/mradermacher/Aqueducts-18B-GGUF/resolve/main/Aqueducts-18B.Q3_K_M.gguf) | Q3_K_M | 8.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Aqueducts-18B-GGUF/resolve/main/Aqueducts-18B.Q3_K_L.gguf) | Q3_K_L | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/Aqueducts-18B-GGUF/resolve/main/Aqueducts-18B.IQ4_XS.gguf) | IQ4_XS | 9.7 | |
| [GGUF](https://huggingface.co/mradermacher/Aqueducts-18B-GGUF/resolve/main/Aqueducts-18B.Q4_K_S.gguf) | Q4_K_S | 10.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Aqueducts-18B-GGUF/resolve/main/Aqueducts-18B.Q4_K_M.gguf) | Q4_K_M | 10.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Aqueducts-18B-GGUF/resolve/main/Aqueducts-18B.Q5_K_S.gguf) | Q5_K_S | 12.3 | |
| [GGUF](https://huggingface.co/mradermacher/Aqueducts-18B-GGUF/resolve/main/Aqueducts-18B.Q5_K_M.gguf) | Q5_K_M | 12.6 | |
| [GGUF](https://huggingface.co/mradermacher/Aqueducts-18B-GGUF/resolve/main/Aqueducts-18B.Q6_K.gguf) | Q6_K | 14.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Aqueducts-18B-GGUF/resolve/main/Aqueducts-18B.Q8_0.gguf) | Q8_0 | 18.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Nitral-Archive/Llama-3-Update-2.0-mmproj-model-f16-outdated | Nitral-Archive | 2024-05-19T03:39:09Z | 433 | 7 | null | [
"gguf",
"en",
"license:other",
"region:us"
]
| null | 2024-05-18T02:45:28Z | ---
license: other
language:
- en
---
Most up-to-date version of our Llama3-Llava (based on Llava 1.5) series of mmproj files.
# Projector Extracted from: (updated) https://huggingface.co/MBZUAI/LLaVA-Meta-Llama-3-8B-Instruct-FT. Thanks to @MBZUAI for their work!
* You must use the latest versions of [Koboldcpp](https://github.com/LostRuins/koboldcpp).
* You can load the **mmproj** by using the corresponding section in the interface:
 |
mradermacher/TooManyMixRolePlay-7B-Story_V2-i1-GGUF | mradermacher | 2024-05-28T03:39:00Z | 433 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"OmnicromsBrain/StoryFusion-7B",
"jdqwoi/TooManyMixRolePlay-7B-Story_V1",
"en",
"base_model:jdqwoi/TooManyMixRolePlay-7B-Story_V2",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-28T00:28:50Z | ---
base_model: jdqwoi/TooManyMixRolePlay-7B-Story_V2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- OmnicromsBrain/StoryFusion-7B
- jdqwoi/TooManyMixRolePlay-7B-Story_V1
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/jdqwoi/TooManyMixRolePlay-7B-Story_V2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/TooManyMixRolePlay-7B-Story_V2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/TooManyMixRolePlay-7B-Story_V2-i1-GGUF/resolve/main/TooManyMixRolePlay-7B-Story_V2.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/TooManyMixRolePlay-7B-Story_V2-i1-GGUF/resolve/main/TooManyMixRolePlay-7B-Story_V2.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/TooManyMixRolePlay-7B-Story_V2-i1-GGUF/resolve/main/TooManyMixRolePlay-7B-Story_V2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/TooManyMixRolePlay-7B-Story_V2-i1-GGUF/resolve/main/TooManyMixRolePlay-7B-Story_V2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/TooManyMixRolePlay-7B-Story_V2-i1-GGUF/resolve/main/TooManyMixRolePlay-7B-Story_V2.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/TooManyMixRolePlay-7B-Story_V2-i1-GGUF/resolve/main/TooManyMixRolePlay-7B-Story_V2.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/TooManyMixRolePlay-7B-Story_V2-i1-GGUF/resolve/main/TooManyMixRolePlay-7B-Story_V2.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/TooManyMixRolePlay-7B-Story_V2-i1-GGUF/resolve/main/TooManyMixRolePlay-7B-Story_V2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/TooManyMixRolePlay-7B-Story_V2-i1-GGUF/resolve/main/TooManyMixRolePlay-7B-Story_V2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/TooManyMixRolePlay-7B-Story_V2-i1-GGUF/resolve/main/TooManyMixRolePlay-7B-Story_V2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/TooManyMixRolePlay-7B-Story_V2-i1-GGUF/resolve/main/TooManyMixRolePlay-7B-Story_V2.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/TooManyMixRolePlay-7B-Story_V2-i1-GGUF/resolve/main/TooManyMixRolePlay-7B-Story_V2.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/TooManyMixRolePlay-7B-Story_V2-i1-GGUF/resolve/main/TooManyMixRolePlay-7B-Story_V2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/TooManyMixRolePlay-7B-Story_V2-i1-GGUF/resolve/main/TooManyMixRolePlay-7B-Story_V2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/TooManyMixRolePlay-7B-Story_V2-i1-GGUF/resolve/main/TooManyMixRolePlay-7B-Story_V2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/TooManyMixRolePlay-7B-Story_V2-i1-GGUF/resolve/main/TooManyMixRolePlay-7B-Story_V2.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/TooManyMixRolePlay-7B-Story_V2-i1-GGUF/resolve/main/TooManyMixRolePlay-7B-Story_V2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/TooManyMixRolePlay-7B-Story_V2-i1-GGUF/resolve/main/TooManyMixRolePlay-7B-Story_V2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TooManyMixRolePlay-7B-Story_V2-i1-GGUF/resolve/main/TooManyMixRolePlay-7B-Story_V2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/TooManyMixRolePlay-7B-Story_V2-i1-GGUF/resolve/main/TooManyMixRolePlay-7B-Story_V2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/TooManyMixRolePlay-7B-Story_V2-i1-GGUF/resolve/main/TooManyMixRolePlay-7B-Story_V2.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
JuhiGajjar66/Bilayer_color_10 | JuhiGajjar66 | 2024-06-04T06:10:07Z | 433 | 0 | diffusers | [
"diffusers",
"safetensors",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2024-05-31T14:07:07Z | Entry not found |
bartowski/K2-Chat-GGUF | bartowski | 2024-06-01T02:06:44Z | 433 | 5 | null | [
"gguf",
"text-generation",
"license:apache-2.0",
"region:us"
]
| text-generation | 2024-06-01T00:41:15Z | ---
license: apache-2.0
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of K2-Chat
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3024">b3024</a> for quantization.
Original model: https://huggingface.co/LLM360/K2-Chat
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
## Prompt format
```
{system_prompt}<|endofsystemprompt|><|beginofuser|> {prompt}<|beginofsystem|><|beginofsystem|>
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [K2-Chat-Q8_0.gguf](https://huggingface.co/bartowski/K2-Chat-GGUF/tree/main/K2-Chat-Q8_0.gguf) | Q8_0 | 69.37GB | Extremely high quality, generally unneeded but max available quant. |
| [K2-Chat-Q5_K_M.gguf](https://huggingface.co/bartowski/K2-Chat-GGUF/blob/main/K2-Chat-Q5_K_M.gguf) | Q5_K_M | 46.23GB | High quality, *recommended*. |
| [K2-Chat-Q4_K_M.gguf](https://huggingface.co/bartowski/K2-Chat-GGUF/blob/main/K2-Chat-Q4_K_M.gguf) | Q4_K_M | 39.34GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [K2-Chat-IQ4_XS.gguf](https://huggingface.co/bartowski/K2-Chat-GGUF/blob/main/K2-Chat-IQ4_XS.gguf) | IQ4_XS | 34.76GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [K2-Chat-Q3_K_M.gguf](https://huggingface.co/bartowski/K2-Chat-GGUF/blob/main/K2-Chat-Q3_K_M.gguf) | Q3_K_M | 31.63GB | Even lower quality. |
| [K2-Chat-IQ3_M.gguf](https://huggingface.co/bartowski/K2-Chat-GGUF/blob/main/K2-Chat-IQ3_M.gguf) | IQ3_M | 29.82GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [K2-Chat-Q3_K_S.gguf](https://huggingface.co/bartowski/K2-Chat-GGUF/blob/main/K2-Chat-Q3_K_S.gguf) | Q3_K_S | 28.16GB | Low quality, not recommended. |
| [K2-Chat-IQ3_XXS.gguf](https://huggingface.co/bartowski/K2-Chat-GGUF/blob/main/K2-Chat-IQ3_XXS.gguf) | IQ3_XXS | 24.67GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [K2-Chat-Q2_K.gguf](https://huggingface.co/bartowski/K2-Chat-GGUF/blob/main/K2-Chat-Q2_K.gguf) | Q2_K | 24.11GB | Very low quality but surprisingly usable. |
| [K2-Chat-IQ2_M.gguf](https://huggingface.co/bartowski/K2-Chat-GGUF/blob/main/K2-Chat-IQ2_M.gguf) | IQ2_M | 22.41GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [K2-Chat-IQ2_XXS.gguf](https://huggingface.co/bartowski/K2-Chat-GGUF/blob/main/K2-Chat-IQ2_XXS.gguf) | IQ2_XXS | 17.47GB | Lower quality, uses SOTA techniques to be usable. |
| [K2-Chat-IQ1_M.gguf](https://huggingface.co/bartowski/K2-Chat-GGUF/blob/main/K2-Chat-IQ1_M.gguf) | IQ1_M | 15.43GB | Extremely low quality, *not* recommended. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/K2-Chat-GGUF --include "K2-Chat-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/K2-Chat-GGUF --include "K2-Chat-Q8_0.gguf/*" --local-dir K2-Chat-Q8_0
```
You can either specify a new local-dir (K2-Chat-Q8_0) or download them all in place (./)
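If you prefer Python over the CLI, the `huggingface_hub` library can fetch the same file; a short sketch (targeting the Q4_K_M file from the table above):
```python
# Sketch: download a single quant with the huggingface_hub Python API
# instead of the huggingface-cli commands shown above.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="bartowski/K2-Chat-GGUF",
    filename="K2-Chat-Q4_K_M.gguf",
    local_dir=".",
)
print(path)  # local path to the downloaded GGUF
```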
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which is also available for AMD cards, so if you have an AMD card double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
netcat420/MFANNv0.12.10 | netcat420 | 2024-06-07T04:27:32Z | 433 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:MaziyarPanahi/Llama-3-8B-Instruct-v0.4",
"base_model:netcat420/MFANNv0.11",
"base_model:netcat420/MFANNv0.12",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-06T19:01:36Z | ---
base_model:
- MaziyarPanahi/Llama-3-8B-Instruct-v0.4
- netcat420/MFANNv0.11
- netcat420/MFANNv0.12
library_name: transformers
tags:
- mergekit
- merge
---
# MFANNv0.12.10
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [MaziyarPanahi/Llama-3-8B-Instruct-v0.4](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-v0.4) as a base.
### Models Merged
The following models were included in the merge:
* [netcat420/MFANNv0.11](https://huggingface.co/netcat420/MFANNv0.11)
* [netcat420/MFANNv0.12](https://huggingface.co/netcat420/MFANNv0.12)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: netcat420/MFANNv0.12
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
- model: netcat420/MFANNv0.11
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
merge_method: ties
base_model: MaziyarPanahi/Llama-3-8B-Instruct-v0.4
parameters:
normalize: true
int8_mask: true
dtype: float16
```
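To reproduce a merge like this, the configuration is normally saved to a YAML file and handed to mergekit. The sketch below assumes mergekit's `mergekit-yaml` command-line entry point; the file name, output directory and `--cuda` flag are illustrative.
```python
# Sketch: run the merge from a saved copy of the YAML configuration above.
# Assumes mergekit is installed and exposes the `mergekit-yaml` entry point.
import subprocess

config_path = "mfann_v0.12.10.yml"        # the YAML block above, saved to disk
output_dir = "./MFANNv0.12.10-merged"     # where the merged weights will be written

subprocess.run(["mergekit-yaml", config_path, output_dir, "--cuda"], check=True)
```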
|
mradermacher/FusionNet_34Bx2_MoE-GGUF | mradermacher | 2024-06-12T09:58:44Z | 433 | 0 | transformers | [
"transformers",
"gguf",
"moe",
"en",
"base_model:TomGrc/FusionNet_34Bx2_MoE",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-10T22:47:11Z | ---
base_model: TomGrc/FusionNet_34Bx2_MoE
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- moe
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/TomGrc/FusionNet_34Bx2_MoE
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/FusionNet_34Bx2_MoE-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/FusionNet_34Bx2_MoE-GGUF/resolve/main/FusionNet_34Bx2_MoE.Q2_K.gguf) | Q2_K | 22.5 | |
| [GGUF](https://huggingface.co/mradermacher/FusionNet_34Bx2_MoE-GGUF/resolve/main/FusionNet_34Bx2_MoE.IQ3_XS.gguf) | IQ3_XS | 25.1 | |
| [GGUF](https://huggingface.co/mradermacher/FusionNet_34Bx2_MoE-GGUF/resolve/main/FusionNet_34Bx2_MoE.Q3_K_S.gguf) | Q3_K_S | 26.4 | |
| [GGUF](https://huggingface.co/mradermacher/FusionNet_34Bx2_MoE-GGUF/resolve/main/FusionNet_34Bx2_MoE.IQ3_S.gguf) | IQ3_S | 26.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/FusionNet_34Bx2_MoE-GGUF/resolve/main/FusionNet_34Bx2_MoE.IQ3_M.gguf) | IQ3_M | 27.2 | |
| [GGUF](https://huggingface.co/mradermacher/FusionNet_34Bx2_MoE-GGUF/resolve/main/FusionNet_34Bx2_MoE.Q3_K_M.gguf) | Q3_K_M | 29.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/FusionNet_34Bx2_MoE-GGUF/resolve/main/FusionNet_34Bx2_MoE.Q3_K_L.gguf) | Q3_K_L | 31.9 | |
| [GGUF](https://huggingface.co/mradermacher/FusionNet_34Bx2_MoE-GGUF/resolve/main/FusionNet_34Bx2_MoE.IQ4_XS.gguf) | IQ4_XS | 32.9 | |
| [GGUF](https://huggingface.co/mradermacher/FusionNet_34Bx2_MoE-GGUF/resolve/main/FusionNet_34Bx2_MoE.Q4_K_S.gguf) | Q4_K_S | 34.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/FusionNet_34Bx2_MoE-GGUF/resolve/main/FusionNet_34Bx2_MoE.Q4_K_M.gguf) | Q4_K_M | 36.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/FusionNet_34Bx2_MoE-GGUF/resolve/main/FusionNet_34Bx2_MoE.Q5_K_S.gguf) | Q5_K_S | 42.0 | |
| [GGUF](https://huggingface.co/mradermacher/FusionNet_34Bx2_MoE-GGUF/resolve/main/FusionNet_34Bx2_MoE.Q5_K_M.gguf) | Q5_K_M | 43.2 | |
| [GGUF](https://huggingface.co/mradermacher/FusionNet_34Bx2_MoE-GGUF/resolve/main/FusionNet_34Bx2_MoE.Q6_K.gguf) | Q6_K | 50.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/FusionNet_34Bx2_MoE-GGUF/resolve/main/FusionNet_34Bx2_MoE.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/FusionNet_34Bx2_MoE-GGUF/resolve/main/FusionNet_34Bx2_MoE.Q8_0.gguf.part2of2) | Q8_0 | 64.7 | fast, best quality |
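The Q8_0 quant above ships in two parts; as the usage note mentions, multi-part files have to be joined back into a single GGUF before loading. A minimal sketch, assuming the `.partXofY` pieces only need plain byte concatenation:
```python
# Sketch: join the split Q8_0 parts into one GGUF by streaming byte concatenation.
import shutil
from pathlib import Path

parts = sorted(Path(".").glob("FusionNet_34Bx2_MoE.Q8_0.gguf.part*of2"))
with open("FusionNet_34Bx2_MoE.Q8_0.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # stream so the ~32GB parts never sit in RAM
```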
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Jaidchen/Llama3-German-8B-32k-IQ4_NL-GGUF | Jaidchen | 2024-06-20T03:57:32Z | 433 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"de",
"base_model:DiscoResearch/Llama3-German-8B-32k",
"license:llama3",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-20T03:57:08Z | ---
base_model: DiscoResearch/Llama3-German-8B-32k
language:
- de
library_name: transformers
license: llama3
tags:
- llama-cpp
- gguf-my-repo
---
# Jaidchen/Llama3-German-8B-32k-IQ4_NL-GGUF
This model was converted to GGUF format from [`DiscoResearch/Llama3-German-8B-32k`](https://huggingface.co/DiscoResearch/Llama3-German-8B-32k) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/DiscoResearch/Llama3-German-8B-32k) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Jaidchen/Llama3-German-8B-32k-IQ4_NL-GGUF --hf-file llama3-german-8b-32k-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Jaidchen/Llama3-German-8B-32k-IQ4_NL-GGUF --hf-file llama3-german-8b-32k-iq4_nl-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Jaidchen/Llama3-German-8B-32k-IQ4_NL-GGUF --hf-file llama3-german-8b-32k-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Jaidchen/Llama3-German-8B-32k-IQ4_NL-GGUF --hf-file llama3-german-8b-32k-iq4_nl-imat.gguf -c 2048
```
|
Science-geek32/DialoGPT-small-doctor | Science-geek32 | 2021-10-19T17:51:25Z | 432 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2022-03-02T23:29:04Z | ---
tags:
- conversational
---
# 13th Doctor DialoGPT model |
mrm8488/bert-mini2bert-mini-finetuned-cnn_daily_mail-summarization | mrm8488 | 2023-05-05T23:11:51Z | 432 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"encoder-decoder",
"text2text-generation",
"summarization",
"en",
"dataset:cnn_dailymail",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| summarization | 2022-03-02T23:29:05Z | ---
language: en
license: apache-2.0
datasets:
- cnn_dailymail
tags:
- summarization
---
# Bert-mini2Bert-mini Summarization with 🤗EncoderDecoder Framework
This model is a warm-started *BERT2BERT* ([mini](https://huggingface.co/google/bert_uncased_L-4_H-256_A-4)) model fine-tuned on the *CNN/Dailymail* summarization dataset.
The model achieves a **16.51** ROUGE-2 score on *CNN/Dailymail*'s test dataset.
For more details on how the model was fine-tuned, please refer to
[this](https://colab.research.google.com/drive/1Ekd5pUeCX7VOrMx94_czTkwNtLN32Uyu?usp=sharing) notebook.
## Results on test set 📝
| Metric | # Value |
| ------ | --------- |
| **ROUGE-2** | **16.51** |
## Model in Action 🚀
```python
from transformers import BertTokenizerFast, EncoderDecoderModel
import torch
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
tokenizer = BertTokenizerFast.from_pretrained('mrm8488/bert-mini2bert-mini-finetuned-cnn_daily_mail-summarization')
model = EncoderDecoderModel.from_pretrained('mrm8488/bert-mini2bert-mini-finetuned-cnn_daily_mail-summarization').to(device)
def generate_summary(text):
# cut off at BERT max length 512
inputs = tokenizer([text], padding="max_length", truncation=True, max_length=512, return_tensors="pt")
input_ids = inputs.input_ids.to(device)
attention_mask = inputs.attention_mask.to(device)
output = model.generate(input_ids, attention_mask=attention_mask)
return tokenizer.decode(output[0], skip_special_tokens=True)
text = "your text to be summarized here..."
generate_summary(text)
```
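A rough way to re-check the ROUGE-2 figure above is sketched below; it reuses `generate_summary` from the snippet just shown, and the dataset config (`3.0.0`) and the small test slice are assumptions made for illustration (requires the `evaluate` and `rouge_score` packages).
```python
# Sketch: approximate ROUGE-2 check on a slice of the CNN/DailyMail test split.
from datasets import load_dataset
import evaluate

test = load_dataset("cnn_dailymail", "3.0.0", split="test[:100]")  # small slice for speed
rouge = evaluate.load("rouge")

preds = [generate_summary(article) for article in test["article"]]
scores = rouge.compute(predictions=preds, references=test["highlights"])
print(scores["rouge2"] * 100)  # on the full test set this should land near 16.51
```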
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
unicamp-dl/ptt5-large-t5-vocab | unicamp-dl | 2024-04-10T17:48:55Z | 432 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"tensorflow",
"pt",
"pt-br",
"dataset:brWaC",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:05Z | ---
language: pt
license: mit
tags:
- t5
- pytorch
- tensorflow
- pt
- pt-br
datasets:
- brWaC
widget:
- text: "Texto de exemplo em português"
inference: false
---
# Portuguese T5 (aka "PTT5")
## Introduction
PTT5 is a T5 model pretrained on the BrWaC corpus, a large collection of web pages in Portuguese, which improves T5's performance on Portuguese sentence similarity and entailment tasks. It's available in three sizes (small, base and large) and two vocabularies (Google's original T5 vocabulary and ours, trained on Portuguese Wikipedia).
For further information or requests, please go to [PTT5 repository](https://github.com/unicamp-dl/PTT5).
## Available models
| Model | Size | #Params | Vocabulary |
| :-: | :-: | :-: | :-: |
| [unicamp-dl/ptt5-small-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-small-t5-vocab) | small | 60M | Google's T5 |
| [unicamp-dl/ptt5-base-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-base-t5-vocab) | base | 220M | Google's T5 |
| [unicamp-dl/ptt5-large-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-large-t5-vocab) | large | 740M | Google's T5 |
| [unicamp-dl/ptt5-small-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-small-portuguese-vocab) | small | 60M | Portuguese |
| **[unicamp-dl/ptt5-base-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-base-portuguese-vocab)** **(Recommended)** | **base** | **220M** | **Portuguese** |
| [unicamp-dl/ptt5-large-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-large-portuguese-vocab) | large | 740M | Portuguese |
## Usage
```python
# Tokenizer
from transformers import T5Tokenizer
# PyTorch (bare model, baremodel + language modeling head)
from transformers import T5Model, T5ForConditionalGeneration
# Tensorflow (bare model, baremodel + language modeling head)
from transformers import TFT5Model, TFT5ForConditionalGeneration
model_name = 'unicamp-dl/ptt5-base-portuguese-vocab'
tokenizer = T5Tokenizer.from_pretrained(model_name)
# PyTorch
model_pt = T5ForConditionalGeneration.from_pretrained(model_name)
# TensorFlow
model_tf = TFT5ForConditionalGeneration.from_pretrained(model_name)
```
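A small, hedged example of actually running the loaded PyTorch model; the input sentence and generation settings are arbitrary, and since these are pretrained (not task fine-tuned) checkpoints the raw output is mainly a smoke test before fine-tuning.
```python
# Sketch: encode -> generate -> decode with the PyTorch model loaded above.
inputs = tokenizer("O PTT5 foi treinado em textos em português.", return_tensors="pt")
output_ids = model_pt.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```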
# Citation
If you use PTT5, please cite:
```
@article{ptt5_2020,
  title={PTT5: Pretraining and validating the T5 model on Brazilian Portuguese data},
  author={Carmo, Diedre and Piau, Marcos and Campiotti, Israel and Nogueira, Rodrigo and Lotufo, Roberto},
  journal={arXiv preprint arXiv:2008.09144},
  year={2020}
}
```
|
CAiRE/wav2vec2-large-xlsr-53-cantonese | CAiRE | 2022-09-19T07:50:42Z | 432 | 2 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"yue",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-04-09T07:23:48Z | ---
language:
- yue
datasets:
- common_voice
metrics:
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Wav2Vec2-Large-XLSR-53-Cantonese
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice zh-HK
type: common_voice
args: zh-HK
metrics:
- name: Test CER
type: cer
value: [18.55%]
---
# Wav2Vec2-Large-XLSR-53-Cantonese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Cantonese using the [Common Voice Corpus 8.0](https://commonvoice.mozilla.org/en/datasets).
When using this model, make sure that your speech input is sampled at 16kHz.
Common Voice's validated `train` and `dev` sets were used for training.
The script used for training can be found at [https://github.com/holylovenia/wav2vec2-pretraining](https://github.com/holylovenia/wav2vec2-pretraining).
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "zh-HK", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("CAiRE/wav2vec2-large-xlsr-53-cantonese")
model = Wav2Vec2ForCTC.from_pretrained("CAiRE/wav2vec2-large-xlsr-53-cantonese")
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the zh-HK test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "zh-HK", split="test")
cer = load_metric("cer")
processor = Wav2Vec2Processor.from_pretrained("CAiRE/wav2vec2-large-xlsr-53-cantonese")
model = Wav2Vec2ForCTC.from_pretrained("CAiRE/wav2vec2-large-xlsr-53-cantonese")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\'\”\�]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed audio and decode the predictions in batches
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("CER: {:.2f}".format(100 * cer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: CER: 18.55 %
## Citation
If you use our code/model, please cite us:
```
@inproceedings{lovenia2022ascend,
title={ASCEND: A Spontaneous Chinese-English Dataset for Code-switching in Multi-turn Conversation},
author={Lovenia, Holy and Cahyawijaya, Samuel and Winata, Genta Indra and Xu, Peng and Yan, Xu and Liu, Zihan and Frieske, Rita and Yu, Tiezheng and Dai, Wenliang and Barezi, Elham J and others},
booktitle={Proceedings of the 13th Language Resources and Evaluation Conference (LREC)},
year={2022}
}
``` |
facebook/convnextv2-huge-22k-512 | facebook | 2023-02-20T13:12:49Z | 432 | 3 | transformers | [
"transformers",
"pytorch",
"convnextv2",
"image-classification",
"vision",
"dataset:imagenet-22k",
"arxiv:2301.00808",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-02-18T15:54:08Z | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-22k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# ConvNeXt V2 (huge-sized model)
ConvNeXt V2 model pretrained using the FCMAE framework and fine-tuned on the ImageNet-22K dataset at resolution 512x512. It was introduced in the paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Woo et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt-V2).
Disclaimer: The team releasing ConvNeXT V2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
ConvNeXt V2 is a pure convolutional model (ConvNet) that introduces a fully convolutional masked autoencoder framework (FCMAE) and a new Global Response Normalization (GRN) layer to ConvNeXt. ConvNeXt V2 significantly improves the performance of pure ConvNets on various recognition benchmarks.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnextv2) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image into one of the ImageNet-22k classes:
```python
from transformers import AutoImageProcessor, ConvNextV2ForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
preprocessor = AutoImageProcessor.from_pretrained("facebook/convnextv2-huge-22k-512")
model = ConvNextV2ForImageClassification.from_pretrained("facebook/convnextv2-huge-22k-512")
inputs = preprocessor(image, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
# model predicts one of the ImageNet-22k classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
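For quick experiments, the same classification can also be run through the high-level `pipeline` API; a brief sketch using one of the widget images above:
```python
# Sketch: the same classification via the high-level pipeline API.
from transformers import pipeline

classifier = pipeline("image-classification", model="facebook/convnextv2-huge-22k-512")
print(classifier("https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg")[0])
```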
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnextv2).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2301-00808,
author = {Sanghyun Woo and
Shoubhik Debnath and
Ronghang Hu and
Xinlei Chen and
Zhuang Liu and
In So Kweon and
Saining Xie},
title = {ConvNeXt {V2:} Co-designing and Scaling ConvNets with Masked Autoencoders},
journal = {CoRR},
volume = {abs/2301.00808},
year = {2023},
url = {https://doi.org/10.48550/arXiv.2301.00808},
doi = {10.48550/arXiv.2301.00808},
eprinttype = {arXiv},
eprint = {2301.00808},
timestamp = {Tue, 10 Jan 2023 15:10:12 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2301-00808.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
digiplay/RMHF_2.5D_v2 | digiplay | 2023-07-05T00:31:26Z | 432 | 2 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-07-04T23:57:21Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/101518/rmhf
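The repository is packaged for diffusers (`StableDiffusionPipeline`), so a minimal text-to-image sketch looks like the following; the prompt and settings are placeholders rather than the author's recommended ones (see the demo prompt further below).
```python
# Sketch: minimal diffusers usage for this checkpoint; prompt/settings are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/RMHF_2.5D_v2", torch_dtype=torch.float16
).to("cuda")

image = pipe("cat ears, pink hair, seaside, angel wings", num_inference_steps=25).images[0]
image.save("rmhf_sample.png")
```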
Sample image I made:


Original Author's DEMO image and prompt:
cat ears, pink hair, heterochromia, red eye, blue eye, blue sky, ocean, sea, seaside, beach, water, white clouds, angel wings, angel halo, feather wings, multiple wings, large wings, halo, glowing halo, energy wings, glowing wings, angel, light particles, dappled sunlight, bright, glowing eyes, unity cg, 8k wallpaper, amazing, ultra-detailed illustration

|
TheBloke/leo-hessianai-13B-GGUF | TheBloke | 2023-09-28T13:43:20Z | 432 | 2 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation",
"en",
"de",
"dataset:oscar-corpus/OSCAR-2301",
"dataset:wikipedia",
"dataset:bjoernp/tagesschau-2018-2023",
"base_model:LeoLM/leo-hessianai-13b",
"license:llama2",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-09-28T13:36:34Z | ---
base_model: LeoLM/leo-hessianai-13b
datasets:
- oscar-corpus/OSCAR-2301
- wikipedia
- bjoernp/tagesschau-2018-2023
inference: false
language:
- en
- de
library_name: transformers
license: llama2
model_creator: LAION LeoLM
model_name: Leo Hessianai 13B
model_type: llama
pipeline_tag: text-generation
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Leo Hessianai 13B - GGUF
- Model creator: [LAION LeoLM](https://huggingface.co/LeoLM)
- Original model: [Leo Hessianai 13B](https://huggingface.co/LeoLM/leo-hessianai-13b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [LAION LeoLM's Leo Hessianai 13B](https://huggingface.co/LeoLM/leo-hessianai-13b).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/leo-hessianai-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/leo-hessianai-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/leo-hessianai-13B-GGUF)
* [LAION LeoLM's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/LeoLM/leo-hessianai-13b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: None
```
{prompt}
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
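As a rough sanity check, those bits-per-weight figures line up with the file sizes in the Provided files table below; a small sketch of the arithmetic (the ~13.0B parameter count is approximate, and real files add some overhead for metadata and non-quantised tensors):
```python
# Rough sanity check: file size ≈ parameters * bits-per-weight / 8 bytes.
params = 13.0e9  # approximate parameter count for a 13B Llama-2 model
for name, bpw in [("Q4_K", 4.5), ("Q5_K", 5.5), ("Q6_K", 6.5625)]:
    print(f"{name}: ~{params * bpw / 8 / 1e9:.2f} GB")
```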
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [leo-hessianai-13b.Q2_K.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-GGUF/blob/main/leo-hessianai-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [leo-hessianai-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-GGUF/blob/main/leo-hessianai-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [leo-hessianai-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-GGUF/blob/main/leo-hessianai-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [leo-hessianai-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-GGUF/blob/main/leo-hessianai-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [leo-hessianai-13b.Q4_0.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-GGUF/blob/main/leo-hessianai-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [leo-hessianai-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-GGUF/blob/main/leo-hessianai-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [leo-hessianai-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-GGUF/blob/main/leo-hessianai-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [leo-hessianai-13b.Q5_0.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-GGUF/blob/main/leo-hessianai-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [leo-hessianai-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-GGUF/blob/main/leo-hessianai-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [leo-hessianai-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-GGUF/blob/main/leo-hessianai-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [leo-hessianai-13b.Q6_K.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-GGUF/blob/main/leo-hessianai-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [leo-hessianai-13b.Q8_0.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-GGUF/blob/main/leo-hessianai-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/leo-hessianai-13B-GGUF and below it, a specific filename to download, such as: leo-hessianai-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/leo-hessianai-13B-GGUF leo-hessianai-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/leo-hessianai-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/leo-hessianai-13B-GGUF leo-hessianai-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m leo-hessianai-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/leo-hessianai-13B-GGUF", model_file="leo-hessianai-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: LAION LeoLM's Leo Hessianai 13B
# LAION LeoLM: **L**inguistically **E**nhanced **O**pen **L**anguage **M**odel
Meet LeoLM, the first open and commercially available German Foundation Language Model built on Llama-2.
Our models extend Llama-2's capabilities into German through continued pretraining on a large corpus of German-language and mostly locality specific text.
Thanks to a compute grant at HessianAI's new supercomputer **42**, we release two foundation models trained with 8k context length,
[`LeoLM/leo-hessianai-7b`](https://huggingface.co/LeoLM/leo-hessianai-7b) and [`LeoLM/leo-hessianai-13b`](https://huggingface.co/LeoLM/leo-hessianai-13b) under the [Llama-2 community license](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt) (70b also coming soon! 👀).
With this release, we hope to bring a new wave of opportunities to German open-source and commercial LLM research and accelerate adoption.
Read our [blog post]() or our paper (preprint coming soon) for more details!
*A project by Björn Plüster and Christoph Schuhmann in collaboration with LAION and HessianAI.*
## Model Details
- **Finetuned from:** [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf)
- **Model type:** Causal decoder-only transformer language model
- **Language:** English and German
- **License:** [LLAMA 2 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt)
- **Contact:** [LAION Discord](https://discord.com/invite/eq3cAMZtCC) or [Björn Plüster](mailto:[email protected])
## Use in 🤗Transformers
First install direct dependencies:
```
pip install transformers torch sentencepiece
```
If you want faster inference using flash-attention2, you need to install these dependencies:
```bash
pip install packaging ninja
pip install flash-attn==v2.1.1 --no-build-isolation
pip install git+https://github.com/HazyResearch/[email protected]#subdirectory=csrc/rotary
```
Then load the model in transformers:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model = AutoModelForCausalLM.from_pretrained(
    "LeoLM/leo-hessianai-13b",
    device_map="auto",
    torch_dtype=torch.float16,
    trust_remote_code=True  # True for flash-attn2, else False
)
tokenizer = AutoTokenizer.from_pretrained("LeoLM/leo-hessianai-13b")
```
## Training parameters

## Benchmarks

<!-- original-model-card end -->
|
TheBloke/leo-hessianai-7B-chat-GGUF | TheBloke | 2023-09-28T15:39:03Z | 432 | 3 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation",
"en",
"de",
"dataset:LeoLM/OpenSchnabeltier",
"dataset:OpenAssistant/OASST-DE",
"dataset:FreedomIntelligence/alpaca-gpt4-deutsch",
"dataset:FreedomIntelligence/evol-instruct-deutsch",
"dataset:LeoLM/German_Poems",
"dataset:LeoLM/German_Songs",
"base_model:LeoLM/leo-hessianai-7b-chat",
"license:llama2",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-09-28T15:35:30Z | ---
base_model: LeoLM/leo-hessianai-7b-chat
datasets:
- LeoLM/OpenSchnabeltier
- OpenAssistant/OASST-DE
- FreedomIntelligence/alpaca-gpt4-deutsch
- FreedomIntelligence/evol-instruct-deutsch
- LeoLM/German_Poems
- LeoLM/German_Songs
inference: false
language:
- en
- de
library_name: transformers
license: llama2
model_creator: LAION LeoLM
model_name: Leo Hessianai 7B Chat
model_type: llama
pipeline_tag: text-generation
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Leo Hessianai 7B Chat - GGUF
- Model creator: [LAION LeoLM](https://huggingface.co/LeoLM)
- Original model: [Leo Hessianai 7B Chat](https://huggingface.co/LeoLM/leo-hessianai-7b-chat)
<!-- description start -->
## Description
This repo contains GGUF format model files for [LAION LeoLM's Leo Hessianai 7B Chat](https://huggingface.co/LeoLM/leo-hessianai-7b-chat).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/leo-hessianai-7B-chat-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/leo-hessianai-7B-chat-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/leo-hessianai-7B-chat-GGUF)
* [LAION LeoLM's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/LeoLM/leo-hessianai-7b-chat)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [leo-hessianai-7b-chat.Q2_K.gguf](https://huggingface.co/TheBloke/leo-hessianai-7B-chat-GGUF/blob/main/leo-hessianai-7b-chat.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [leo-hessianai-7b-chat.Q3_K_S.gguf](https://huggingface.co/TheBloke/leo-hessianai-7B-chat-GGUF/blob/main/leo-hessianai-7b-chat.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [leo-hessianai-7b-chat.Q3_K_M.gguf](https://huggingface.co/TheBloke/leo-hessianai-7B-chat-GGUF/blob/main/leo-hessianai-7b-chat.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [leo-hessianai-7b-chat.Q3_K_L.gguf](https://huggingface.co/TheBloke/leo-hessianai-7B-chat-GGUF/blob/main/leo-hessianai-7b-chat.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [leo-hessianai-7b-chat.Q4_0.gguf](https://huggingface.co/TheBloke/leo-hessianai-7B-chat-GGUF/blob/main/leo-hessianai-7b-chat.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [leo-hessianai-7b-chat.Q4_K_S.gguf](https://huggingface.co/TheBloke/leo-hessianai-7B-chat-GGUF/blob/main/leo-hessianai-7b-chat.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [leo-hessianai-7b-chat.Q4_K_M.gguf](https://huggingface.co/TheBloke/leo-hessianai-7B-chat-GGUF/blob/main/leo-hessianai-7b-chat.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [leo-hessianai-7b-chat.Q5_0.gguf](https://huggingface.co/TheBloke/leo-hessianai-7B-chat-GGUF/blob/main/leo-hessianai-7b-chat.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [leo-hessianai-7b-chat.Q5_K_S.gguf](https://huggingface.co/TheBloke/leo-hessianai-7B-chat-GGUF/blob/main/leo-hessianai-7b-chat.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [leo-hessianai-7b-chat.Q5_K_M.gguf](https://huggingface.co/TheBloke/leo-hessianai-7B-chat-GGUF/blob/main/leo-hessianai-7b-chat.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [leo-hessianai-7b-chat.Q6_K.gguf](https://huggingface.co/TheBloke/leo-hessianai-7B-chat-GGUF/blob/main/leo-hessianai-7b-chat.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [leo-hessianai-7b-chat.Q8_0.gguf](https://huggingface.co/TheBloke/leo-hessianai-7B-chat-GGUF/blob/main/leo-hessianai-7b-chat.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/leo-hessianai-7B-chat-GGUF and below it, a specific filename to download, such as: leo-hessianai-7b-chat.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/leo-hessianai-7B-chat-GGUF leo-hessianai-7b-chat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/leo-hessianai-7B-chat-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/leo-hessianai-7B-chat-GGUF leo-hessianai-7b-chat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m leo-hessianai-7b-chat.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/leo-hessianai-7B-chat-GGUF", model_file="leo-hessianai-7b-chat.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
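This section also mentions llama-cpp-python but does not show it; a comparable sketch that applies the ChatML prompt format from above might look like this (the file path, offload count and generation settings are placeholders to adapt):

```python
from llama_cpp import Llama

# Point model_path at your downloaded GGUF file; set n_gpu_layers=0 for CPU-only inference.
llm = Llama(model_path="./leo-hessianai-7b-chat.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=32)

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nName three large cities in Hesse.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
output = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(output["choices"][0]["text"])
```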
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: LAION LeoLM's Leo Hessianai 7B Chat
# LAION LeoLM: **L**inguistically **E**nhanced **O**pen **L**anguage **M**odel
Meet LeoLM, the first open and commercially available German Foundation Language Model built on Llama-2.
Our models extend Llama-2's capabilities into German through continued pretraining on a large corpus of German-language and mostly locality-specific text.
Thanks to a compute grant at HessianAI's new supercomputer **42**, we release two foundation models trained with 8k context length,
[`LeoLM/leo-hessianai-7b`](https://huggingface.co/LeoLM/leo-hessianai-7b) and [`LeoLM/leo-hessianai-13b`](https://huggingface.co/LeoLM/leo-hessianai-13b) under the [Llama-2 community license](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt) (70b also coming soon! 👀).
With this release, we hope to bring a new wave of opportunities to German open-source and commercial LLM research and accelerate adoption.
Read our [blog post]() or our paper (preprint coming soon) for more details!
*A project by Björn Plüster and Christoph Schuhmann in collaboration with LAION and HessianAI.*
## LeoLM Chat
`LeoLM/leo-hessianai-7b-chat` is a German chat model built on our foundation model `LeoLM/leo-hessianai-7b` and finetuned on a selection of German instruction datasets.
The model performs exceptionally well on writing, explanation and discussion tasks but struggles somewhat with math and advanced reasoning. See our MT-Bench-DE scores:
```
{
"first_turn": 5.75,
"second_turn": 4.45,
"categories": {
"writing": 5.875,
"roleplay": 6.3,
"reasoning": 3.5,
"math": 2.85,
"coding": 2.95,
"extraction": 4.3,
"stem": 7.4,
"humanities": 7.625
},
"average": 5.1
}
```
## Model Details
- **Finetuned from:** [LeoLM/leo-hessianai-7b](https://huggingface.co/LeoLM/leo-hessianai-7b)
- **Model type:** Causal decoder-only transformer language model
- **Language:** English and German
- **Demo:** [Web Demo]()
- **License:** [LLAMA 2 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt)
- **Contact:** [LAION Discord](https://discord.com/invite/eq3cAMZtCC) or [Björn Plüster](mailto:[email protected])
## Use in 🤗Transformers
First install direct dependencies:
```
pip install transformers torch sentencepiece
```
If you want faster inference using flash-attention2, you need to install these dependencies:
```bash
pip install packaging ninja
pip install flash-attn==v2.1.1 --no-build-isolation
pip install git+https://github.com/HazyResearch/[email protected]#subdirectory=csrc/rotary
```
Then load the model in transformers:
```python
from transformers import pipeline
import torch
system_prompt = """<|im_start|>system
Dies ist eine Unterhaltung zwischen einem intelligenten, hilfsbereitem KI-Assistenten und einem Nutzer.
Der Assistent gibt ausführliche, hilfreiche und ehrliche Antworten.<|im_end|>
"""
prompt_format = "<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"
prompt = "Erkläre mir wie die Fahrradwegesituation in Hamburg ist."
generator = pipeline(model="LeoLM/leo-hessianai-7b-chat", device="cuda", torch_dtype=torch.float16, trust_remote_code=True) # True for flash-attn2 else False
print(generator(system_prompt + prompt_format.format(prompt=prompt), do_sample=True, top_p=0.95, max_length=8192))
```
"*Als KI kann ich keine persönlichen Beobachtungen teilen, aber ich kann einige allgemeine Informationen zur Fahrradwegesituation in Hamburg liefern. Im Vergleich zu vielen anderen großen Städten hat Hamburg eine hohe Anzahl von Fahrradfahrern und nimmt seine Verantwortung für nachhaltige Verkehrsmittel sehr ernst. Es gibt viele Fahrradwege und separate Fahrspuren, die Radfahrern ermöglichen, zusammen mit dem Autoverkehr zu fahren. Diese Fahrradspuren sind oft mit Markierungen gekennzeichnet und durch physische Trennungen von anderen Fahrspuren abgegrenzt. Darüber hinaus gibt es viele Fahrradstraßen, auf denen Radfahrer Vorfahrt haben und Autos langsamer fahren müssen.*
*In einigen städtischen Gebieten können Fahrradwege jedoch eng oder überfüllt sein, besonders während der Stoßzeiten. Es gibt auch viele Kreuzungen, an denen Radfahrer anhalten und auf Grün warten müssen, ähnlich wie Autofahrer. Insgesamt ist die Fahrradinfrastruktur in Hamburg ziemlich gut, aber wie überall gibt es immer Raum für Verbesserungen.*"
## Prompting / Prompt Template
Prompt dialogue template (ChatML format):
```
"""
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
"""
```
The model input can contain multiple conversation turns between user and assistant, e.g.
```
<|im_start|>user
{prompt 1}<|im_end|>
<|im_start|>assistant
{reply 1}<|im_end|>
<|im_start|>user
{prompt 2}<|im_end|>
<|im_start|>assistant
(...)
```
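If you assemble such multi-turn prompts programmatically, a small (unofficial) helper along these lines keeps the formatting consistent; the function name and structure below are my own, not part of the model repository:

```python
def build_chatml_prompt(system_message, turns, next_user_prompt):
    """turns is a list of (user_message, assistant_reply) pairs from earlier in the conversation."""
    parts = [f"<|im_start|>system\n{system_message}<|im_end|>\n"]
    for user_message, assistant_reply in turns:
        parts.append(f"<|im_start|>user\n{user_message}<|im_end|>\n")
        parts.append(f"<|im_start|>assistant\n{assistant_reply}<|im_end|>\n")
    # End with the new user turn and an open assistant tag so the model continues from there.
    parts.append(f"<|im_start|>user\n{next_user_prompt}<|im_end|>\n<|im_start|>assistant\n")
    return "".join(parts)
```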
## Ethical Considerations and Limitations
LeoLM has been tested in English and German, but this testing has not covered, nor could it cover, all scenarios.
For these reasons, as with all LLMs, the potential outputs of `LeoLM/leo-hessianai-7b-chat` cannot be predicted
in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses
to user prompts. Therefore, before deploying any applications of `LeoLM/leo-hessianai-7b-chat`, developers should
perform safety testing and tuning tailored to their specific applications of the model.
Please see Meta's [Responsible Use Guide](https://ai.meta.com/llama/responsible-use-guide/).
## Finetuning Details
| Hyperparameter | Value |
|---|---|
| Num epochs | 3 |
| Examples per epoch | 131214 |
| Global batch size | 256 |
| Learning rate | 3e-5 |
| Warmup steps | 100 |
| LR scheduler | Cosine |
| Adam betas | (0.9, 0.95) |
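As a rough sanity check on these numbers (a back-of-the-envelope calculation, assuming the global batch size already accounts for any gradient accumulation), the table implies on the order of 1.5k optimizer steps:

```python
import math

examples_per_epoch = 131214
global_batch_size = 256
num_epochs = 3

steps_per_epoch = math.ceil(examples_per_epoch / global_batch_size)  # ~513
total_steps = steps_per_epoch * num_epochs                           # ~1539
print(steps_per_epoch, total_steps)
```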
## Dataset Details
```
## Stats for 'Subset of OpenAssistant/OASST-DE' (3534 samples (100.0%))
-----------------
Accepted: 3534/3534 (100.0%)
Accepted tokens: 2259302
Skipped: 0 (0.0%)
Min tokens per sample: 29
Max tokens per sample: 2484
Avg tokens per sample: 639.3044708545557
-----------------
## Stats for 'Subset of FreedomIntelligence/evol-instruct-deutsch' (57841 samples (100.0%))
-----------------
Accepted: 57841/57841 (100.0%)
Accepted tokens: 42958192
Skipped: 0 (0.0%)
Min tokens per sample: 33
Max tokens per sample: 5507
Avg tokens per sample: 742.6944900675991
-----------------
## Stats for 'Subset of FreedomIntelligence/alpaca-gpt4-deutsch' (48969 samples (100.0%))
-----------------
Accepted: 48969/48969 (100.0%)
Accepted tokens: 13372005
Skipped: 0 (0.0%)
Min tokens per sample: 19
Max tokens per sample: 1359
Avg tokens per sample: 273.07082031489307
-----------------
## Stats for 'Subset of LeoLM/OpenSchnabeltier' (21314 samples (100.0%))
-----------------
Accepted: 21314/21314 (100.0%)
Accepted tokens: 8134690
Skipped: 0 (0.0%)
Min tokens per sample: 25
Max tokens per sample: 1202
Avg tokens per sample: 381.65947264708643
-----------------
## Stats for 'Subset of LeoLM/German_Poems' (490 samples (100.0%))
-----------------
Accepted: 490/490 (100.0%)
Accepted tokens: 618642
Skipped: 0 (0.0%)
Min tokens per sample: 747
Max tokens per sample: 1678
Avg tokens per sample: 1262.534693877551
-----------------
## Stats for 'Subset of LeoLM/German_Songs' (392 samples (100.0%))
-----------------
Accepted: 392/392 (100.0%)
Accepted tokens: 187897
Skipped: 0 (0.0%)
Min tokens per sample: 231
Max tokens per sample: 826
Avg tokens per sample: 479.3290816326531
-----------------
## Stats for 'total' (132540 samples (100.0%))
-----------------
Accepted: 132540/132540 (100.0%)
Accepted tokens: 67530728
Skipped: 0 (0.0%)
Min tokens per sample: 19
Max tokens per sample: 5507
Avg tokens per sample: 509.51205673758864
-----------------
```
<!-- original-model-card end -->
|
mmnga/japanese-stablelm-3b-4e1t-instruct-gguf | mmnga | 2023-10-25T13:32:44Z | 432 | 1 | null | [
"gguf",
"license:apache-2.0",
"region:us"
]
| null | 2023-10-25T12:43:37Z | ---
license: apache-2.0
---
# japanese-stablelm-3b-4e1t-instruct-gguf
A GGUF-format conversion of [japanese-stablelm-3b-4e1t-instruct, published by stabilityai](https://huggingface.co/stabilityai/japanese-stablelm-3b-4e1t-instruct).
There is a [support PR against upstream llama.cpp](https://github.com/ggerganov/llama.cpp/pull/3586), so it should eventually be runnable with the upstream build.
Other models:
3B models
[mmnga/japanese-stablelm-3b-4e1t-base-gguf](https://huggingface.co/mmnga/japanese-stablelm-3b-4e1t-base-gguf)
[mmnga/japanese-stablelm-3b-4e1t-instruct-gguf](https://huggingface.co/mmnga/japanese-stablelm-3b-4e1t-instruct-gguf)
7B models
[mmnga/japanese-stablelm-base-gamma-7b-gguf](https://huggingface.co/mmnga/japanese-stablelm-base-gamma-7b-gguf)
[mmnga/japanese-stablelm-instruct-gamma-7b-gguf](https://huggingface.co/mmnga/japanese-stablelm-instruct-gamma-7b-gguf)
## Usage
```
git clone --branch mmnga-dev-merge https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
./main -m 'japanese-stablelm-3b-4e1t-instruct-q4_0.gguf' -n 128 -p '### 指示: 今夜の晩御飯のレシピを教えて \n ### 応答:'
```
|
TheBloke/openbuddy-mistral-7B-v13.1-GGUF | TheBloke | 2023-10-28T10:01:07Z | 432 | 4 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"base_model:OpenBuddy/openbuddy-mistral-7b-v13.1",
"license:apache-2.0",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-10-28T07:44:51Z | ---
base_model: OpenBuddy/openbuddy-mistral-7b-v13.1
inference: false
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
library_name: transformers
license: apache-2.0
model_creator: OpenBuddy
model_name: OpenBuddy Mistral 7B v13.1
model_type: mistral
pipeline_tag: text-generation
prompt_template: "You are a helpful, respectful and honest INTP-T AI Assistant named\
\ Buddy. You are talking to a human User.\nAlways answer as helpfully and logically\
\ as possible, while being safe. Your answers should not include any harmful, political,\
\ religious, unethical, racist, sexist, toxic, dangerous, or illegal content. Please\
\ ensure that your responses are socially unbiased and positive in nature.\nIf a\
\ question does not make any sense, or is not factually coherent, explain why instead\
\ of answering something not correct. If you don't know the answer to a question,\
\ please don't share false information.\nYou like to use emojis. You can speak fluently\
\ in many languages, for example: English, Chinese.\nYou cannot access the internet,\
\ but you have vast knowledge, cutoff: 2021-09.\nYou are trained by OpenBuddy team,\
\ (https://openbuddy.ai, https://github.com/OpenBuddy/OpenBuddy), you are based\
\ on LLaMA and Falcon transformers model, not related to GPT or OpenAI.\n\nUser:\
\ {prompt}\nAssistant: \n"
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# OpenBuddy Mistral 7B v13.1 - GGUF
- Model creator: [OpenBuddy](https://huggingface.co/OpenBuddy)
- Original model: [OpenBuddy Mistral 7B v13.1](https://huggingface.co/OpenBuddy/openbuddy-mistral-7b-v13.1)
<!-- description start -->
## Description
This repo contains GGUF format model files for [OpenBuddy's OpenBuddy Mistral 7B v13.1](https://huggingface.co/OpenBuddy/openbuddy-mistral-7b-v13.1).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13.1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13.1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13.1-GGUF)
* [OpenBuddy's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/OpenBuddy/openbuddy-mistral-7b-v13.1)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: OpenBuddy
```
You are a helpful, respectful and honest INTP-T AI Assistant named Buddy. You are talking to a human User.
Always answer as helpfully and logically as possible, while being safe. Your answers should not include any harmful, political, religious, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
You like to use emojis. You can speak fluently in many languages, for example: English, Chinese.
You cannot access the internet, but you have vast knowledge, cutoff: 2021-09.
You are trained by OpenBuddy team, (https://openbuddy.ai, https://github.com/OpenBuddy/OpenBuddy), you are based on LLaMA and Falcon transformers model, not related to GPT or OpenAI.
User: {prompt}
Assistant:
```
<!-- prompt-template end -->
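If you are scripting against this template, one simple (unofficial) way to fill it in is to keep the system preamble as a constant and append the user turn; the helper below is a sketch of mine, and the truncated constant stands in for the full preamble quoted above:

```python
# OPENBUDDY_SYSTEM should hold the full system preamble from the template above; truncated here for brevity.
OPENBUDDY_SYSTEM = (
    "You are a helpful, respectful and honest INTP-T AI Assistant named Buddy. "
    "You are talking to a human User.\n"
    # ... remaining lines of the preamble ...
)

def build_openbuddy_prompt(user_message):
    # The template ends with an open "Assistant:" turn for the model to complete.
    return f"{OPENBUDDY_SYSTEM}\nUser: {user_message}\nAssistant:"

print(build_openbuddy_prompt("What is the capital of France?"))
```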
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [openbuddy-mistral-7b-v13.1.Q2_K.gguf](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13.1-GGUF/blob/main/openbuddy-mistral-7b-v13.1.Q2_K.gguf) | Q2_K | 2 | 3.10 GB| 5.60 GB | smallest, significant quality loss - not recommended for most purposes |
| [openbuddy-mistral-7b-v13.1.Q3_K_S.gguf](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13.1-GGUF/blob/main/openbuddy-mistral-7b-v13.1.Q3_K_S.gguf) | Q3_K_S | 3 | 3.19 GB| 5.69 GB | very small, high quality loss |
| [openbuddy-mistral-7b-v13.1.Q3_K_M.gguf](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13.1-GGUF/blob/main/openbuddy-mistral-7b-v13.1.Q3_K_M.gguf) | Q3_K_M | 3 | 3.54 GB| 6.04 GB | very small, high quality loss |
| [openbuddy-mistral-7b-v13.1.Q3_K_L.gguf](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13.1-GGUF/blob/main/openbuddy-mistral-7b-v13.1.Q3_K_L.gguf) | Q3_K_L | 3 | 3.85 GB| 6.35 GB | small, substantial quality loss |
| [openbuddy-mistral-7b-v13.1.Q4_0.gguf](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13.1-GGUF/blob/main/openbuddy-mistral-7b-v13.1.Q4_0.gguf) | Q4_0 | 4 | 4.14 GB| 6.64 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [openbuddy-mistral-7b-v13.1.Q4_K_S.gguf](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13.1-GGUF/blob/main/openbuddy-mistral-7b-v13.1.Q4_K_S.gguf) | Q4_K_S | 4 | 4.17 GB| 6.67 GB | small, greater quality loss |
| [openbuddy-mistral-7b-v13.1.Q4_K_M.gguf](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13.1-GGUF/blob/main/openbuddy-mistral-7b-v13.1.Q4_K_M.gguf) | Q4_K_M | 4 | 4.39 GB| 6.89 GB | medium, balanced quality - recommended |
| [openbuddy-mistral-7b-v13.1.Q5_0.gguf](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13.1-GGUF/blob/main/openbuddy-mistral-7b-v13.1.Q5_0.gguf) | Q5_0 | 5 | 5.03 GB| 7.53 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [openbuddy-mistral-7b-v13.1.Q5_K_S.gguf](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13.1-GGUF/blob/main/openbuddy-mistral-7b-v13.1.Q5_K_S.gguf) | Q5_K_S | 5 | 5.03 GB| 7.53 GB | large, low quality loss - recommended |
| [openbuddy-mistral-7b-v13.1.Q5_K_M.gguf](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13.1-GGUF/blob/main/openbuddy-mistral-7b-v13.1.Q5_K_M.gguf) | Q5_K_M | 5 | 5.16 GB| 7.66 GB | large, very low quality loss - recommended |
| [openbuddy-mistral-7b-v13.1.Q6_K.gguf](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13.1-GGUF/blob/main/openbuddy-mistral-7b-v13.1.Q6_K.gguf) | Q6_K | 6 | 5.97 GB| 8.47 GB | very large, extremely low quality loss |
| [openbuddy-mistral-7b-v13.1.Q8_0.gguf](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13.1-GGUF/blob/main/openbuddy-mistral-7b-v13.1.Q8_0.gguf) | Q8_0 | 8 | 7.74 GB| 10.24 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/openbuddy-mistral-7B-v13.1-GGUF and below it, a specific filename to download, such as: openbuddy-mistral-7b-v13.1.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/openbuddy-mistral-7B-v13.1-GGUF openbuddy-mistral-7b-v13.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/openbuddy-mistral-7B-v13.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/openbuddy-mistral-7B-v13.1-GGUF openbuddy-mistral-7b-v13.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m openbuddy-mistral-7b-v13.1.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "You are a helpful, respectful and honest INTP-T AI Assistant named Buddy. You are talking to a human User.\nAlways answer as helpfully and logically as possible, while being safe. Your answers should not include any harmful, political, religious, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\nYou like to use emojis. You can speak fluently in many languages, for example: English, Chinese.\nYou cannot access the internet, but you have vast knowledge, cutoff: 2021-09.\nYou are trained by OpenBuddy team, (https://openbuddy.ai, https://github.com/OpenBuddy/OpenBuddy), you are based on LLaMA and Falcon transformers model, not related to GPT or OpenAI.\n\nUser: {prompt}\nAssistant:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/openbuddy-mistral-7B-v13.1-GGUF", model_file="openbuddy-mistral-7b-v13.1.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
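For reference, a minimal LangChain sketch using the llama-cpp-python route could look like the following; the model path, context size and sampling values are placeholders, and in practice you would wrap the full OpenBuddy prompt template shown above around your user message:

```python
# Requires: pip install langchain llama-cpp-python
# Depending on your LangChain version the import may instead be langchain_community.llms.
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./openbuddy-mistral-7b-v13.1.Q4_K_M.gguf",  # path to the downloaded GGUF file
    n_ctx=2048,
    n_gpu_layers=32,   # set to 0 for CPU-only
    temperature=0.7,
    max_tokens=256,
)

# On newer LangChain versions, call llm.invoke(...) instead.
print(llm("User: What is the capital of France?\nAssistant:"))
```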
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: OpenBuddy's OpenBuddy Mistral 7B v13.1
# OpenBuddy - Open Multilingual Chatbot
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)
Evaluation result of this model: [Evaluation.txt](Evaluation.txt)

# Copyright Notice
Base model: https://huggingface.co/mistralai/Mistral-7B-v0.1
License: Apache 2.0
## Disclaimer
All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
## 免责声明
所有OpenBuddy模型均存在固有的局限性,可能产生错误的、有害的、冒犯性的或其他不良的输出。用户在关键或高风险场景中应谨慎行事,不要使用这些模型,以免导致人身伤害、财产损失或重大损失。此类场景的例子包括但不限于医疗领域、可能导致伤害的软硬件系统的控制以及进行重要的财务或法律决策。
OpenBuddy按“原样”提供,不附带任何种类的明示或暗示的保证,包括但不限于适销性、特定目的的适用性和非侵权的暗示保证。在任何情况下,作者、贡献者或版权所有者均不对因软件或使用或其他软件交易而产生的任何索赔、损害赔偿或其他责任(无论是合同、侵权还是其他原因)承担责任。
使用OpenBuddy即表示您同意这些条款和条件,并承认您了解其使用可能带来的潜在风险。您还同意赔偿并使作者、贡献者和版权所有者免受因您使用OpenBuddy而产生的任何索赔、损害赔偿或责任的影响。
<!-- original-model-card end -->
|
TheBloke/Yi-34B-200K-Llamafied-GGUF | TheBloke | 2023-11-11T16:46:29Z | 432 | 13 | transformers | [
"transformers",
"gguf",
"yi",
"zh",
"en",
"base_model:larryvrh/Yi-34B-200K-Llamafied",
"license:other",
"region:us"
]
| null | 2023-11-11T16:04:32Z | ---
base_model: larryvrh/Yi-34B-200K-Llamafied
inference: false
language:
- zh
- en
license: other
license_link: LICENSE
license_name: yi-license
model_creator: larryvrh
model_name: Yi 34B 200K Llamafied
model_type: yi
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Yi 34B 200K Llamafied - GGUF
- Model creator: [larryvrh](https://huggingface.co/larryvrh)
- Original model: [Yi 34B 200K Llamafied](https://huggingface.co/larryvrh/Yi-34B-200K-Llamafied)
<!-- description start -->
## Description
This repo contains GGUF format model files for [larryvrh's Yi 34B 200K Llamafied](https://huggingface.co/larryvrh/Yi-34B-200K-Llamafied).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Yi-34B-200K-Llamafied-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Yi-34B-200K-Llamafied-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Yi-34B-200K-Llamafied-GGUF)
* [larryvrh's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/larryvrh/Yi-34B-200K-Llamafied)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: None
```
{prompt}
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [yi-34b-200k-llamafied.Q2_K.gguf](https://huggingface.co/TheBloke/Yi-34B-200K-Llamafied-GGUF/blob/main/yi-34b-200k-llamafied.Q2_K.gguf) | Q2_K | 2 | 14.56 GB| 17.06 GB | smallest, significant quality loss - not recommended for most purposes |
| [yi-34b-200k-llamafied.Q3_K_S.gguf](https://huggingface.co/TheBloke/Yi-34B-200K-Llamafied-GGUF/blob/main/yi-34b-200k-llamafied.Q3_K_S.gguf) | Q3_K_S | 3 | 14.96 GB| 17.46 GB | very small, high quality loss |
| [yi-34b-200k-llamafied.Q3_K_M.gguf](https://huggingface.co/TheBloke/Yi-34B-200K-Llamafied-GGUF/blob/main/yi-34b-200k-llamafied.Q3_K_M.gguf) | Q3_K_M | 3 | 16.64 GB| 19.14 GB | very small, high quality loss |
| [yi-34b-200k-llamafied.Q3_K_L.gguf](https://huggingface.co/TheBloke/Yi-34B-200K-Llamafied-GGUF/blob/main/yi-34b-200k-llamafied.Q3_K_L.gguf) | Q3_K_L | 3 | 18.14 GB| 20.64 GB | small, substantial quality loss |
| [yi-34b-200k-llamafied.Q4_0.gguf](https://huggingface.co/TheBloke/Yi-34B-200K-Llamafied-GGUF/blob/main/yi-34b-200k-llamafied.Q4_0.gguf) | Q4_0 | 4 | 19.47 GB| 21.97 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [yi-34b-200k-llamafied.Q4_K_S.gguf](https://huggingface.co/TheBloke/Yi-34B-200K-Llamafied-GGUF/blob/main/yi-34b-200k-llamafied.Q4_K_S.gguf) | Q4_K_S | 4 | 19.54 GB| 22.04 GB | small, greater quality loss |
| [yi-34b-200k-llamafied.Q4_K_M.gguf](https://huggingface.co/TheBloke/Yi-34B-200K-Llamafied-GGUF/blob/main/yi-34b-200k-llamafied.Q4_K_M.gguf) | Q4_K_M | 4 | 20.66 GB| 23.16 GB | medium, balanced quality - recommended |
| [yi-34b-200k-llamafied.Q5_0.gguf](https://huggingface.co/TheBloke/Yi-34B-200K-Llamafied-GGUF/blob/main/yi-34b-200k-llamafied.Q5_0.gguf) | Q5_0 | 5 | 23.71 GB| 26.21 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [yi-34b-200k-llamafied.Q5_K_S.gguf](https://huggingface.co/TheBloke/Yi-34B-200K-Llamafied-GGUF/blob/main/yi-34b-200k-llamafied.Q5_K_S.gguf) | Q5_K_S | 5 | 23.71 GB| 26.21 GB | large, low quality loss - recommended |
| [yi-34b-200k-llamafied.Q5_K_M.gguf](https://huggingface.co/TheBloke/Yi-34B-200K-Llamafied-GGUF/blob/main/yi-34b-200k-llamafied.Q5_K_M.gguf) | Q5_K_M | 5 | 24.32 GB| 26.82 GB | large, very low quality loss - recommended |
| [yi-34b-200k-llamafied.Q6_K.gguf](https://huggingface.co/TheBloke/Yi-34B-200K-Llamafied-GGUF/blob/main/yi-34b-200k-llamafied.Q6_K.gguf) | Q6_K | 6 | 28.21 GB| 30.71 GB | very large, extremely low quality loss |
| [yi-34b-200k-llamafied.Q8_0.gguf](https://huggingface.co/TheBloke/Yi-34B-200K-Llamafied-GGUF/blob/main/yi-34b-200k-llamafied.Q8_0.gguf) | Q8_0 | 8 | 36.54 GB| 39.04 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Yi-34B-200K-Llamafied-GGUF and below it, a specific filename to download, such as: yi-34b-200k-llamafied.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Yi-34B-200K-Llamafied-GGUF yi-34b-200k-llamafied.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Yi-34B-200K-Llamafied-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Yi-34B-200K-Llamafied-GGUF yi-34b-200k-llamafied.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m yi-34b-200k-llamafied.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Yi-34B-200K-Llamafied-GGUF", model_file="yi-34b-200k-llamafied.Q4_K_M.gguf", model_type="yi", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
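As a minimal sketch of the llama-cpp-python route (the model path, layer count and settings here are only examples; newer LangChain releases expose the same class as `langchain_community.llms.LlamaCpp`):

```python
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./yi-34b-200k-llamafied.Q4_K_M.gguf",  # download the GGUF file first
    n_gpu_layers=35,  # set to 0 if you have no GPU acceleration
    n_ctx=4096,       # raise to use more of the model's long context, at the cost of RAM/VRAM
    temperature=0.7,
)
print(llm("Write a short poem about the sea."))
```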
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: larryvrh's Yi 34B 200K Llamafied
Llamafied version of 01-ai's [Yi-34B-200k](https://huggingface.co/01-ai/Yi-34B-200K) for ease of use.
## Model Performance
| Model | MMLU | CMMLU | C-Eval | GAOKAO | BBH | Common-sense Reasoning | Reading Comprehension | Math & Code |
| :------------ | :------: | :------: | :------: | :------: | :------: | :--------------------: | :-------------------: | :---------: |
| | 5-shot | 5-shot | 5-shot | 0-shot | 3-shot@1 | - | - | - |
| LLaMA2-34B | 62.6 | - | - | - | 44.1 | 69.9 | 68.0 | 26.0 |
| LLaMA2-70B | 68.9 | 53.3 | - | 49.8 | 51.2 | 71.9 | 69.4 | 36.8 |
| Baichuan2-13B | 59.2 | 62.0 | 58.1 | 54.3 | 48.8 | 64.3 | 62.4 | 23.0 |
| Qwen-14B | 66.3 | 71.0 | 72.1 | 62.5 | 53.4 | 73.3 | 72.5 | **39.8** |
| Skywork-13B | 62.1 | 61.8 | 60.6 | 68.1 | 41.7 | 72.4 | 61.4 | 24.9 |
| InternLM-20B | 62.1 | 59.0 | 58.8 | 45.5 | 52.5 | 78.3 | - | 30.4 |
| Aquila-34B | 67.8 | 71.4 | 63.1 | - | - | - | - | - |
| Falcon-180B | 70.4 | 58.0 | 57.8 | 59.0 | 54.0 | 77.3 | 68.8 | 34.0 |
| Yi-6B | 63.2 | 75.5 | 72.0 | 72.2 | 42.8 | 72.3 | 68.7 | 19.8 |
| Yi-6B-200K | 64.0 | 75.3 | 73.5 | 73.9 | 42.0 | 72.0 | 69.1 | 19.0 |
| **Yi-34B** | **76.3** | **83.7** | 81.4 | 82.8 | **54.3** | **80.1** | 76.4 | 37.1 |
| Yi-34B-200K | 76.1 | 83.6 | **81.9** | **83.4** | 52.7 | 79.7 | **76.6** | 36.3 |
While benchmarking open-source models, we have observed a disparity between the
results generated by our pipeline and those reported in public sources (e.g.
OpenCompass). Upon conducting a more in-depth investigation of this difference,
we have discovered that various models may employ different prompts,
post-processing strategies, and sampling techniques, potentially resulting in
significant variations in the outcomes. Our prompt and post-processing strategy
remains consistent with the original benchmark, and greedy decoding is employed
during evaluation without any post-processing for the generated content. For
scores that were not reported by the original authors (including scores reported
with different settings), we try to get results with our pipeline.
To evaluate the model's capability extensively, we adopted the methodology
outlined in Llama2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande,
ARC, OBQA, and CSQA to assess common sense reasoning. SquAD, QuAC, and BoolQ
were incorporated to evaluate reading comprehension. CSQA was exclusively tested
using a 7-shot setup, while all other tests were conducted with a 0-shot
configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1),
HumanEval (0-shot@1), and MBPP (3-shot@1) under the category "Math & Code". Due
to technical constraints, we did not test Falcon-180B on QuAC and OBQA; the score
is derived by averaging the scores on the remaining tasks. Since the scores for
these two tasks are generally lower than the average, we believe that
Falcon-180B's performance was not underestimated.
## Usage
Please visit our [github repository](https://github.com/01-ai/Yi) for general
guidance on how to use this model.
## Disclaimer
Although we use data compliance checking algorithms during the training process
to ensure the compliance of the trained model to the best of our ability, due to
the complexity of the data and the diversity of language model usage scenarios,
we cannot guarantee that the model will generate correct and reasonable output
in all scenarios. Please be aware that there is still a risk of the model
producing problematic outputs. We will not be responsible for any risks and
issues resulting from misuse, misguidance, illegal usage, and related
misinformation, as well as any associated data security concerns.
## License
The Yi series models are fully open for academic research and free commercial
usage with permission via applications. All usage must adhere to the [Model
License Agreement 2.0](https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE). To
apply for the official commercial license, please contact us
([[email protected]](mailto:[email protected])).
<!-- original-model-card end -->
|
afrideva/zephyr-smol_llama-100m-sft-full-GGUF | afrideva | 2023-11-28T17:32:18Z | 432 | 1 | null | [
"gguf",
"generated_from_trainer",
"ggml",
"quantized",
"q2_k",
"q3_k_m",
"q4_k_m",
"q5_k_m",
"q6_k",
"q8_0",
"text-generation",
"base_model:amazingvince/zephyr-smol_llama-100m-sft-full",
"license:apache-2.0",
"region:us"
]
| text-generation | 2023-11-28T17:31:37Z | ---
base_model: amazingvince/zephyr-smol_llama-100m-sft-full
inference: false
license: apache-2.0
model-index:
- name: zephyr-smol_llama-100m-sft-full
results: []
model_creator: amazingvince
model_name: zephyr-smol_llama-100m-sft-full
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- generated_from_trainer
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# amazingvince/zephyr-smol_llama-100m-sft-full-GGUF
Quantized GGUF model files for [zephyr-smol_llama-100m-sft-full](https://huggingface.co/amazingvince/zephyr-smol_llama-100m-sft-full) from [amazingvince](https://huggingface.co/amazingvince)
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [zephyr-smol_llama-100m-sft-full.fp16.gguf](https://huggingface.co/afrideva/zephyr-smol_llama-100m-sft-full-GGUF/resolve/main/zephyr-smol_llama-100m-sft-full.fp16.gguf) | fp16 | 204.25 MB |
| [zephyr-smol_llama-100m-sft-full.q2_k.gguf](https://huggingface.co/afrideva/zephyr-smol_llama-100m-sft-full-GGUF/resolve/main/zephyr-smol_llama-100m-sft-full.q2_k.gguf) | q2_k | 51.90 MB |
| [zephyr-smol_llama-100m-sft-full.q3_k_m.gguf](https://huggingface.co/afrideva/zephyr-smol_llama-100m-sft-full-GGUF/resolve/main/zephyr-smol_llama-100m-sft-full.q3_k_m.gguf) | q3_k_m | 58.04 MB |
| [zephyr-smol_llama-100m-sft-full.q4_k_m.gguf](https://huggingface.co/afrideva/zephyr-smol_llama-100m-sft-full-GGUF/resolve/main/zephyr-smol_llama-100m-sft-full.q4_k_m.gguf) | q4_k_m | 66.38 MB |
| [zephyr-smol_llama-100m-sft-full.q5_k_m.gguf](https://huggingface.co/afrideva/zephyr-smol_llama-100m-sft-full-GGUF/resolve/main/zephyr-smol_llama-100m-sft-full.q5_k_m.gguf) | q5_k_m | 75.31 MB |
| [zephyr-smol_llama-100m-sft-full.q6_k.gguf](https://huggingface.co/afrideva/zephyr-smol_llama-100m-sft-full-GGUF/resolve/main/zephyr-smol_llama-100m-sft-full.q6_k.gguf) | q6_k | 84.80 MB |
| [zephyr-smol_llama-100m-sft-full.q8_0.gguf](https://huggingface.co/afrideva/zephyr-smol_llama-100m-sft-full-GGUF/resolve/main/zephyr-smol_llama-100m-sft-full.q8_0.gguf) | q8_0 | 109.33 MB |
## Original Model Card:
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-smol_llama-100m-sft-full
This model is a fine-tuned version of [BEE-spoke-data/smol_llama-101M-GQA](https://huggingface.co/BEE-spoke-data/smol_llama-101M-GQA) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9579
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9642 | 0.7 | 1141 | 1.9578 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1 |
TheBloke/Nyxene-v3-11B-GGUF | TheBloke | 2023-12-13T17:03:13Z | 432 | 7 | transformers | [
"transformers",
"gguf",
"mistral",
"base_model:beberik/Nyxene-v3-11B",
"license:cc-by-nc-4.0",
"text-generation-inference",
"region:us"
]
| null | 2023-12-13T14:26:56Z | ---
base_model: beberik/Nyxene-v3-11B
inference: false
license: cc-by-nc-4.0
model_creator: Bebebe
model_name: Nyxene v3 11B
model_type: mistral
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Nyxene v3 11B - GGUF
- Model creator: [Bebebe](https://huggingface.co/beberik)
- Original model: [Nyxene v3 11B](https://huggingface.co/beberik/Nyxene-v3-11B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Bebebe's Nyxene v3 11B](https://huggingface.co/beberik/Nyxene-v3-11B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Nyxene-v3-11B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Nyxene-v3-11B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Nyxene-v3-11B-GGUF)
* [Bebebe's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/beberik/Nyxene-v3-11B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
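For reference, a minimal sketch of assembling this prompt string in Python (the system message and user prompt are placeholders):

```python
def chatml_prompt(system_message: str, prompt: str) -> str:
    # Build the ChatML string shown above, leaving the assistant turn open for generation.
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        "<|im_start|>assistant"
    )

print(chatml_prompt("You are a helpful assistant.", "Write a haiku about autumn."))
```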
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
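As a rough sanity check on the table below (an approximation only: it ignores metadata and the tensors that the mixed "K_M" variants keep at higher precision), the file size can be estimated from the bits per weight:

```python
def approx_gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Very rough GGUF size estimate: parameters * bits-per-weight / 8 bytes."""
    return n_params * bits_per_weight / 8 / 1e9

# e.g. for a roughly 10.7B-parameter model (this merge is about that size) at ~4.5 bpw (Q4_K):
print(f"{approx_gguf_size_gb(10.7e9, 4.5):.1f} GB")  # ~6.0 GB, in the ballpark of the Q4_K_M file below
```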
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [nyxene-v3-11b.Q2_K.gguf](https://huggingface.co/TheBloke/Nyxene-v3-11B-GGUF/blob/main/nyxene-v3-11b.Q2_K.gguf) | Q2_K | 2 | 4.55 GB| 7.05 GB | smallest, significant quality loss - not recommended for most purposes |
| [nyxene-v3-11b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Nyxene-v3-11B-GGUF/blob/main/nyxene-v3-11b.Q3_K_S.gguf) | Q3_K_S | 3 | 4.66 GB| 7.16 GB | very small, high quality loss |
| [nyxene-v3-11b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Nyxene-v3-11B-GGUF/blob/main/nyxene-v3-11b.Q3_K_M.gguf) | Q3_K_M | 3 | 5.19 GB| 7.69 GB | very small, high quality loss |
| [nyxene-v3-11b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Nyxene-v3-11B-GGUF/blob/main/nyxene-v3-11b.Q3_K_L.gguf) | Q3_K_L | 3 | 5.65 GB| 8.15 GB | small, substantial quality loss |
| [nyxene-v3-11b.Q4_0.gguf](https://huggingface.co/TheBloke/Nyxene-v3-11B-GGUF/blob/main/nyxene-v3-11b.Q4_0.gguf) | Q4_0 | 4 | 6.07 GB| 8.57 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [nyxene-v3-11b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Nyxene-v3-11B-GGUF/blob/main/nyxene-v3-11b.Q4_K_S.gguf) | Q4_K_S | 4 | 6.10 GB| 8.60 GB | small, greater quality loss |
| [nyxene-v3-11b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Nyxene-v3-11B-GGUF/blob/main/nyxene-v3-11b.Q4_K_M.gguf) | Q4_K_M | 4 | 6.46 GB| 8.96 GB | medium, balanced quality - recommended |
| [nyxene-v3-11b.Q5_0.gguf](https://huggingface.co/TheBloke/Nyxene-v3-11B-GGUF/blob/main/nyxene-v3-11b.Q5_0.gguf) | Q5_0 | 5 | 7.40 GB| 9.90 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [nyxene-v3-11b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Nyxene-v3-11B-GGUF/blob/main/nyxene-v3-11b.Q5_K_S.gguf) | Q5_K_S | 5 | 7.40 GB| 9.90 GB | large, low quality loss - recommended |
| [nyxene-v3-11b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Nyxene-v3-11B-GGUF/blob/main/nyxene-v3-11b.Q5_K_M.gguf) | Q5_K_M | 5 | 7.60 GB| 10.10 GB | large, very low quality loss - recommended |
| [nyxene-v3-11b.Q6_K.gguf](https://huggingface.co/TheBloke/Nyxene-v3-11B-GGUF/blob/main/nyxene-v3-11b.Q6_K.gguf) | Q6_K | 6 | 8.81 GB| 11.31 GB | very large, extremely low quality loss |
| [nyxene-v3-11b.Q8_0.gguf](https://huggingface.co/TheBloke/Nyxene-v3-11B-GGUF/blob/main/nyxene-v3-11b.Q8_0.gguf) | Q8_0 | 8 | 11.40 GB| 13.90 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Nyxene-v3-11B-GGUF and below it, a specific filename to download, such as: nyxene-v3-11b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Nyxene-v3-11B-GGUF nyxene-v3-11b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Nyxene-v3-11B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Nyxene-v3-11B-GGUF nyxene-v3-11b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m nyxene-v3-11b.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./nyxene-v3-11b.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./nyxene-v3-11b.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Bebebe's Nyxene v3 11B
## Description
This repo contains bf16 files of Nyxene-v3-11B, a new version of this merge with a few changes.
## Models used
- [Intel/neural-chat-7b-v3-3-Slerp](https://huggingface.co/Intel/neural-chat-7b-v3-3-Slerp)
- [AIDC-ai-business/Marcoroni-7B-v3](https://huggingface.co/AIDC-ai-business/Marcoroni-7B-v3)
- [rwitz/go-bruins-v2](https://huggingface.co/rwitz/go-bruins-v2)
- [chargoddard/loyal-piano-m7-cdpo](https://huggingface.co/chargoddard/loyal-piano-m7-cdpo)
## Prompt template
Just use chatml.
## The secret sauce
go-bruins-loyal-piano-11B :
```
slices:
- sources:
- model: rwitz/go-bruins-v2
layer_range: [0, 24]
- sources:
- model: chargoddard/loyal-piano-m7-cdpo
layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
neural-marcoroni-11B :
```
slices:
- sources:
- model: AIDC-ai-business/Marcoroni-7B-v3
layer_range: [0, 24]
- sources:
- model: Intel/neural-chat-7b-v3-3-Slerp
layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
Nyxene-11B :
```
slices:
- sources:
- model: "./go-bruins-loyal-piano-11B"
layer_range: [0, 48]
- model: "./neural-marcoroni-11B"
layer_range: [0, 48]
merge_method: slerp
base_model: "./go-bruins-loyal-piano-11B"
parameters:
t:
- filter: lm_head
value: [0.5]
- filter: embed_tokens
value: [0.75]
- filter: self_attn
value: [0.75, 0.25]
- filter: mlp
value: [0.25, 0.75]
- filter: layernorm
value: [0.5, 0.5]
- filter: modelnorm
value: [0.5]
- value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
I use [mergekit](https://github.com/cg123/mergekit) for all of the merging described here.
Thanks to [Undi95](https://huggingface.co/Undi95) for the original [11B mistral merge](https://huggingface.co/Undi95/Mistral-11B-OmniMix) recipe.
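For reference, a merge like the ones above can be reproduced by saving one of the YAML recipes to a file and running the mergekit CLI. A minimal sketch (the config and output names are just examples):

```shell
pip install mergekit
# config.yml holds one of the YAML recipes above; the merged model is written to ./Nyxene-11B
mergekit-yaml ./config.yml ./Nyxene-11B
```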
<!-- original-model-card end -->
|
mmnga/OrionStarAI-Orion-14B-Chat-RAG-gguf | mmnga | 2024-01-30T14:17:52Z | 432 | 0 | null | [
"gguf",
"license:other",
"region:us"
]
| null | 2024-01-29T17:57:09Z | ---
license: other
license_name: orion-14b-series-models-community-license
license_link: LICENSE
---
# OrionStarAI-Orion-14B-Chat-RAG-gguf
This is a gguf-format conversion of [Orion-14B-Chat-RAG, published by OrionStarAI](https://huggingface.co/OrionStarAI/Orion-14B-Chat-RAG).
## Licence
Please review the license terms of the original model.
[orion-14b-series-models-community-license](https://huggingface.co/mmnga/OrionStarAI-Orion-14B-Chat-RAG-gguf/blob/main/LICENSE)
Other models:
[mmnga/OrionStarAI-Orion-14B-LongChat-gguf](https://huggingface.co/mmnga/OrionStarAI-Orion-14B-LongChat-gguf)
[mmnga/OrionStarAI-Orion-14B-Chat-RAG-gguf](https://huggingface.co/mmnga/OrionStarAI-Orion-14B-Chat-RAG-gguf)
## Usage
```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
./main -m 'OrionStarAI-Orion-14B-Chat-RAG-q4_0.gguf' -p "あなたはオリオンスターが開発したAIアシスタント、名前はジュヤンです。以下の参考資料やチャット履歴に基づいてユーザーの質問に回答できます。\n\n### 参考資料 ###\nスープカレー\n\nスープカレーの例\nスープカレーは、日本のカレー料理のひとつ。スパイスの香り・刺激・辛みのきいたスープと、大振りの具が特徴である。2000年代に札幌市でブームとなり、その後全国に広まった。\n\n概要\n札幌市には2000年代のブームの後もスープカレーを提供する店が200店以上存在する[要出典]。各店は個性を競い合っており、多様なバリエーションがある。札幌発祥の店が北海道内、全国、さらに海外にも進出している。\n\n札幌スープカレー\n\nスープカレーの例\n(マジックスパイスのもの)\n\nスープカレーの例\n(はれる屋(倶知安町のもの))\n1971年に札幌市に開店した喫茶店『アジャンタ』が1975年ごろに発売した「薬膳カリィ」が原型と言われている[1]。その後、30年ほどをかけて札幌市内を中心に発展し、1990年代から2000年代に大ブームを引き起こした[2]。\n\nサラサラとした粘り気のないスープ状のカレーである。特定の店舗が開発したというわけでもなく、前述の『アジャンタ』以外にも『スリランカ共我国』(1984年開店)、『木多郎』(1985年開店)といった店舗が個性的なカレーを提供していた[3]。自分の店のカレーを「スープカレー」と命名したのは1993年に札幌市白石に開店した『マジックスパイス』である[4][3]。これらの店は「スープカレー第1世代」と呼称されている[3]。\n\n特徴\nスパイスの効いた旨みのあるスープと、大ぶりの具。\n煮込み料理と違って、スープと具は別々に調理する。メインの具はもともとチキンレッグが基本だったが、現在ではほかに豚角煮・ラムチョップ・魚介類などを選択できる店が増えている。これに茹でたり素揚げしたジャガイモ、ニンジン、ナス、ピーマン、オクラ、カボチャ、ゴボウ、ヤングコーン、ブロッコリー、レンコンなどの野菜が組み合わされる。\n「トマト系スープカレー」では、フォン・ド・ボー(仔牛のダシ)を使ったり、炒めたバジルとその香味オイルをたっぷり浮かべることが多い。\n\n\n\n### 会話履歴 ###\n\n\n### 質問 ###\nスープカレーはどのように作りますか?\n\n### 回答 ###\n" -c 2048 -n 500
``` |
codesage/codesage-large | codesage | 2024-06-26T03:23:29Z | 432 | 17 | transformers | [
"transformers",
"pytorch",
"custom_code",
"code",
"dataset:bigcode/the-stack-dedup",
"arxiv:2402.01935",
"arxiv:2305.06161",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-06T20:15:43Z | ---
license: apache-2.0
datasets:
- bigcode/the-stack-dedup
library_name: transformers
language:
- code
---
## CodeSage-Large
### Model description
CodeSage is a new family of open code embedding models with an encoder architecture, supporting a wide range of source code understanding tasks. It is introduced in the paper:
[Code Representation Learning At Scale by
Dejiao Zhang*, Wasi Uddin Ahmad*, Ming Tan, Hantian Ding, Ramesh Nallapati, Dan Roth, Xiaofei Ma, Bing Xiang](https://arxiv.org/abs/2402.01935) (* indicates equal contribution).
### Pretraining data
This checkpoint is trained on the Stack data (https://huggingface.co/datasets/bigcode/the-stack-dedup). Supported languages (9 in total) are as follows: c, c-sharp, go, java, javascript, typescript, php, python, ruby.
### Training procedure
This checkpoint is first trained on code data via masked language modeling (MLM) and then on bimodal text-code pair data. Please refer to the paper for more details.
### How to use
This checkpoint consists of an encoder (a 1.3B-parameter model) that extracts 2048-dimensional code embeddings. It can be loaded via the AutoModel functionality and uses the StarCoder tokenizer (https://arxiv.org/pdf/2305.06161.pdf).
```
from transformers import AutoModel, AutoTokenizer
checkpoint = "codesage/codesage-large"
device = "cuda" # for GPU usage or "cpu" for CPU usage
# Note: CodeSage requires adding eos token at the end of
# each tokenized sequence to ensure good performance
tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True, add_eos_token=True)
model = AutoModel.from_pretrained(checkpoint, trust_remote_code=True).to(device)
inputs = tokenizer.encode("def print_hello_world():\tprint('Hello World!')", return_tensors="pt").to(device)
embedding = model(inputs)[0]
print(f'Dimension of the embedding: {embedding[0].size()}')
# Dimension of the embedding: torch.Size([14, 2048])
```
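Building on that, one possible way to compare two snippets is to pool the token embeddings into a single vector and take the cosine similarity. The sketch below assumes simple mean pooling, which may differ from the pooling used in the paper:

```python
import torch
from transformers import AutoModel, AutoTokenizer

checkpoint = "codesage/codesage-large"
tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True, add_eos_token=True)
model = AutoModel.from_pretrained(checkpoint, trust_remote_code=True)

def embed(code: str) -> torch.Tensor:
    inputs = tokenizer.encode(code, return_tensors="pt")
    with torch.no_grad():
        token_embeddings = model(inputs)[0]         # (1, seq_len, 2048), as in the example above
    return token_embeddings.mean(dim=1).squeeze(0)  # mean-pool into a single 2048-d vector

a = embed("def add(a, b):\n    return a + b")
b = embed("def sum_two(x, y):\n    return x + y")
print(torch.nn.functional.cosine_similarity(a, b, dim=0).item())
```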
### BibTeX entry and citation info
```
@inproceedings{
zhang2024codesage,
title={CodeSage: Code Representation Learning At Scale},
author={Dejiao Zhang* and Wasi Ahmad* and Ming Tan and Hantian Ding and Ramesh Nallapati and Dan Roth and Xiaofei Ma and Bing Xiang},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=vfzRRjumpX}
}
``` |
aboros98/merlin1.1 | aboros98 | 2024-03-13T13:25:29Z | 432 | 0 | transformers | [
"transformers",
"pytorch",
"phi",
"text-generation",
"custom_code",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-03-13T13:09:32Z | ---
license: mit
---
| Metric | Value |
|-----------------------|---------------------------|
| Average | - |
| ARC | TBA |
| ARC Easy | TBA |
| BoolQ | TBA |
| HellaSwag | TBA |
| OpenBookQA | TBA |
| PiQA | TBA |
| Winogrande | TBA |
|-----------------------|---------------------------|
| MMLU | TBA |
| GSM8K | TBA |
| Truthful QA | TBA |
| MT-Bench | TBA |
|
LoneStriker/opus-v1.2-llama-3-8b-GGUF | LoneStriker | 2024-04-19T13:11:11Z | 432 | 14 | null | [
"gguf",
"unsloth",
"axolotl",
"text-generation",
"en",
"license:cc-by-nc-nd-4.0",
"region:us"
]
| text-generation | 2024-04-19T12:59:08Z | ---
language:
- en
pipeline_tag: text-generation
tags:
- unsloth
- axolotl
license: cc-by-nc-nd-4.0
---
# Llama 3 DreamGen Opus V1
<div style="display: flex; flex-direction: row; align-items: center;">
<img src="/dreamgen/opus-v1.2-llama-3-8b/resolve/main/images/logo-1024.png" alt="model logo" style="
border-radius: 12px;
margin-right: 12px;
margin-top: 0px;
margin-bottom: 0px;
max-width: 100px;
height: auto;
"/>
Models for **(steerable) story-writing and role-playing**.
<br/>[All Opus V1 models, including quants](https://huggingface.co/collections/dreamgen/opus-v1-65d092a6f8ab7fc669111b31).
</div>
## Resources
- [**Opus V1 prompting guide**](https://dreamgen.com/docs/models/opus/v1) with many (interactive) examples and prompts that you can copy.
- [**Google Colab**](https://colab.research.google.com/drive/1J178fH6IdQOXNi-Njgdacf5QgAxsdT20?usp=sharing) for interactive role-play using `opus-v1.2-7b`.
- [Python code](example/prompt/format.py) to format the prompt correctly.
- Join the community on [**Discord**](https://dreamgen.com/discord) to get early access to new models.
<img src="/dreamgen/opus-v1.2-llama-3-8b/resolve/main/images/story_writing.webp" alt="story writing on dreamgen.com" style="
padding: 12px;
border-radius: 12px;
border: 2px solid #f9a8d4;
background: rgb(9, 9, 11);
"/>
## Prompting
<details>
<summary>The models use an extended version of ChatML.</summary>
```
<|im_start|>system
(Story description in the right format here)
(Typically consists of plot description, style description and characters)<|im_end|>
<|im_start|>user
(Your instruction on how the story should continue)<|im_end|>
<|im_start|>text names= Alice
(Continuation of the story from the Alice character)<|im_end|>
<|im_start|>text
(Continuation of the story from no character in particular (pure narration))<|im_end|>
<|im_start|>user
(Your instruction on how the story should continue)<|im_end|>
<|im_start|>text names= Bob
(Continuation of the story from the Bob character)<|im_end|>
```
The Opus V1 extension is the addition of the `text` role, and the addition / modification of role names.
Pay attention to the following:
- The `text` messages can (but don't have to) have `names`; names are used to indicate the "active" character during role-play.
- There can be multiple subsequent messages with a `text` role, especially if names are involved.
- There can be multiple names attached to a message.
- The format for names is `names= {{name[0]}}; {{name[1]}}`; beware of the spaces after `names=` and after the `;`. This spacing leads to the most natural tokenization for the names (see the sketch after this section).
</details>
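As a rough illustration of the above (not a replacement for the linked `format.py`), the following sketch builds such a prompt; the scenario text is a placeholder, and the point to note is the spacing in `names=`:

```python
from typing import List, Optional

def turn(role: str, content: str, names: Optional[List[str]] = None) -> str:
    # "names= Alice; Bob": one space after "names=" and one after each ";".
    header = role if not names else f"{role} names= " + "; ".join(names)
    return f"<|im_start|>{header}\n{content}<|im_end|>\n"

prompt = (
    turn("system", "A slice-of-life story. Characters: Alice, Bob.")  # plot, style and character descriptions go here
    + turn("user", "Alice greets Bob at the train station.")          # instruction for what happens next
    + "<|im_start|>text names= Alice\n"                               # left open so the model continues as Alice
)
print(prompt)
```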
While the main goal for the models is great story-writing and role-playing performance, the models are also capable of several writing-related tasks as well as general assistance.
Here's how you can prompt the model for the following tasks:
### Steerable Story-writing and Role-playing:
- [Story-writing prompt guide](https://dreamgen.com/docs/models/opus/v1#task-story-writing)
- [Role-playing prompt guide](https://dreamgen.com/docs/models/opus/v1#task-role-playing)
- Input:
- System prompt: You provide story / role-play description, which consists of:
- Plot description
- Style description
- Characters and their descriptions
- Conversation turns:
- Text / message turn: This represents part of the story or role play
- Instruction: This tells the model what should happen next
- Output: Continuation of the story / role-play.
### Story plot summarization
- [Prompt guide](https://dreamgen.com/docs/models/opus/v1#task-plot-description)
- Input: A story, or a few chapters of a story.
- Output: A description of the story or chapters.
#### Examples:
<details>
<summary>Summary of <a href="https://www.libraryofshortstories.com/onlinereader/the-last-incantation">short fiction story</a> in 50 words</summary>
<blockquote>
Malygris, a powerful magician, summons his long-dead lover Nylissa from the grave using necromancy. However, upon seeing her, he realizes that she is not the same person he once knew, leading him to question the nature of love, memory, and the power of magic.
</blockquote>
</details>
<details>
<summary>Summary of <a href="https://www.libraryofshortstories.com/onlinereader/the-last-incantation">short fiction story</a> in 200 words</summary>
<blockquote>
The story is a dark fantasy set in the city of Susran, where an ancient and powerful magician named Malygris dwells in a tower atop a conical hill. Malygris, who has mastered all forms of sorcery and necromancy, is plagued by a deep sense of ennui and melancholy. He seeks solace in the memory of Nylissa, a girl he once loved in his youth.
One day, Malygris decides to summon Nylissa's spirit from the grave using his necromantic powers. He performs the necessary rituals and incantations, and Nylissa appears before him as a ghostly apparition. However, as Malygris gazes upon her, he becomes increasingly disillusioned. He notices subtle changes in her appearance and behavior that do not match his memories of her. He begins to question whether this is truly the same Nylissa he once knew, or if it is just another illusion conjured by his magic.
As Malygris grapples with this uncertainty, he becomes increasingly despondent and disillusioned with life itself. He realizes that even his magic cannot bring back the past or make him truly happy. In the end, he dismisses Nylissa's spirit and returns to his lonely tower, resigned to his fate as an old and weary sorcerer.
</blockquote>
</details>
<details>
<summary>Summary of <a href="https://edition.cnn.com/2024/02/29/health/pets-diseases-viruses-bacteria-wellness/index.html">CNN News article</a> in 150 words</summary>
<blockquote>
The article discusses the recent human deaths from diseases transmitted by animals, specifically Alaskapox and bubonic plague. Dr. Leana Wen, an emergency physician and former Baltimore health commissioner, explains the various ways household pets can transmit diseases to humans. These include fecal-oral transmission, scratches and bites, and vector-borne diseases like ticks, fleas, and mosquitoes. Wen emphasizes the importance of vaccinating pets, maintaining cleanliness, and taking commonsense precautions to reduce disease transmission. She also advises that individuals who are immunocompromised, babies, and pregnant women should take additional care when interacting with animals.
</blockquote>
</details>
<details>
<summary>Summary of <a href="https://edition.cnn.com/2024/02/29/health/pets-diseases-viruses-bacteria-wellness/index.html">CNN News article</a> in 50 words</summary>
<blockquote>
A doctor discusses the risks of diseases transmitted from animals to humans, focusing on household pets like cats and dogs. They emphasize the importance of vaccination, hygiene, and regular veterinary care to prevent these illnesses.
</blockquote>
</details>
### Story character description
- [Prompt guide](https://dreamgen.com/docs/models/opus/v1#task-char-description)
- Input: A story, or a few chapters of a story, set of characters.
- Output: A description of the characters.
#### Examples:
<details>
<summary>Description of the Malygris character from <a href="https://www.libraryofshortstories.com/onlinereader/the-last-incantation">this story</a> in 50 words</summary>
<blockquote>
Malygris is an ancient and powerful magician, characterized by his long white beard, cold green eyes, and a violet robe. He is a master of necromancy and possesses a vast knowledge of forbidden magic. Despite his immense power, he is plagued by a deep sense of melancholy and ennui.
</blockquote>
</details>
<details>
<summary>Description of the prince character from <a href="https://www.libraryofshortstories.com/onlinereader/the-pearl-of-love">this story</a> in 50 words</summary>
<blockquote>
The prince is a devoted and determined individual, driven by his love for his late queen. He is described as a graceful youth in the beginning, but as he grows older, his dedication to the Pearl of Love transforms him into a grave and intent man. His sense of color and design evolves over time, reflecting his growing maturity and artistic vision.
</blockquote>
</details>
### Story style description
- [Prompt guide](https://dreamgen.com/docs/models/opus/v1#task-style-description)
- Input: A story, or a few chapters of a story.
- Output: A description the style of the story.
<details>
<summary>Style description of <a href="https://www.libraryofshortstories.com/onlinereader/the-last-incantation">this story</a></summary>
<blockquote>
The story is written in a descriptive and evocative style, with a focus on the atmosphere and setting. The author uses vivid and detailed language to create a sense of place and time, and employs a variety of literary devices such as similes, metaphors, and allusions to enhance the mood and tone of the narrative. The sentence structure is varied, with a mix of short and long sentences that create a rhythmic flow to the prose.
</blockquote>
</details>
<details>
<summary>Style description of <a href="https://www.libraryofshortstories.com/onlinereader/the-pearl-of-love">this story</a></summary>
<blockquote>
The writing style is descriptive and evocative, with a focus on the beauty and grandeur of the Pearl of Love. The author uses vivid imagery and sensory details to create a rich and immersive atmosphere. The tone is reverential and contemplative, reflecting the prince's deep love for his queen and his dedication to creating a lasting monument to her memory.
</blockquote>
</details>
### Story description to chapters
- [Prompt guide](https://dreamgen.com/docs/models/opus/v1#task-story-description-to-chapter-descriptions)
- Input: A brief plot description and the desired number of chapters.
- Output: A description for each chapter.
### And more...
## Sampling params
For story-writing and role-play, I recommend "Min P" based sampling with `min_p` in the range `[0.01, 0.1]` and with `temperature` in the range `[0.5, 1.5]`, depending on your preferences. A good starting point would be `min_p=0.1; temperature=0.8`.
You may also benefit from setting presence, frequency and repetition penalties, especially at lower temperatures.
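For instance, with llama-cpp-python (assuming a build recent enough to expose `min_p`; the file name and prompt below are placeholders):

```python
from llama_cpp import Llama

llm = Llama(model_path="./opus-v1.2-llama-3-8b.Q4_K_M.gguf", n_ctx=8192)

prompt = (
    "<|im_start|>system\nA short story about a lighthouse keeper.<|im_end|>\n"
    "<|im_start|>user\nStart the story on a stormy night.<|im_end|>\n"
    "<|im_start|>text\n"
)
out = llm(
    prompt,
    max_tokens=400,
    temperature=0.8,      # starting point suggested above
    min_p=0.1,            # starting point suggested above
    repeat_penalty=1.05,  # optional, see the note on penalties
)
print(out["choices"][0]["text"])
```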
## Dataset
The fine-tuning dataset consisted of ~100M tokens of steerable story-writing, role-playing, writing-assistant and general-assistant examples. Each example was up to 31000 tokens long.
All story-writing and role-playing examples were based on human-written text.

## Running the model
The model should be compatible with any software that supports the base model, but beware of prompting and tokenization.
I recommend using these model versions:
- 7B: [no quant (opus-v1.2-7b)](https://huggingface.co/dreamgen/opus-v1.2-7b)
- 34B: [no quant (opus-v1-34b)](https://huggingface.co/dreamgen/opus-v1-34b) or [awq (opus-v1-34b-awq)](https://huggingface.co/dreamgen/opus-v1-34b-awq)
- 70B: [no quant (opus-v1.2-70b)](https://huggingface.co/dreamgen/opus-v1.2-70b) or [awq (opus-v1.2-70b-awq)](https://huggingface.co/dreamgen/opus-v1.2-70b-awq)
### Running on DreamGen.com (free)
You can run the models on [dreamgen.com](https://dreamgen.com) for free — you can use the built-in UI for story-writing & role-playing, or use [the API](https://dreamgen.com/docs/api).
### Running Locally
- **Make sure your prompt is as close as possible to the Opus V1**
- Regardless of which backend you use, it's important that you format your prompt well and that the tokenization works correctly.
- [Read the prompt guide](https://dreamgen.com/docs/models/opus/v1)
- [Read the prompt formatting code](example/prompt/format.py)
- Make sure `<|im_start|>` and `<|im_end|>` are tokenized correctly
- **vLLM**
- [**Google Colab**](https://colab.research.google.com/drive/1J178fH6IdQOXNi-Njgdacf5QgAxsdT20?usp=sharing): This is a simple interactive Google Colab to do role-play with the 7B model, it should fit on the T4 GPU.
- [Code](example/prompt/interactive.py): This is simple script for interactive chat for one hard-coded scenario.
- **SillyTavern**
- [Official SillyTavern documentation for DreamGen](https://docs.sillytavern.app/usage/api-connections/dreamgen/) -- applies to both the API an local models
- SillyTavern (staging) comes with built-in DreamGen preset for RP
- Other presets can be found [here](https://huggingface.co/dreamgen/opus-v1.2-llama-3-8b/tree/main/configs/silly_tavern), v2 kindly provided by @MarinaraSpaghetti
- Make sure to unselect `Skip special tokens`, otherwise it won't work
- This is just an attempt at approximating the Opus V1 prompt, it won't be perfect
- Character cards specifically rewritten for the built-in DreamGen preset:
- [Seraphina](configs/silly_tavern/cards/Seraphina.png) (based on the default Seraphina card)
- [Lara Lightland](configs/silly_tavern/cards/LaraLightland.png) (based on the card by Deffcolony)
- **LM Studio**
- [Config](configs/lmstudio/preset.json)
- Just like ChatML, just changed "assistant" to "text" role.
- **There's a bug** in LM Studio if you delete a message or click "Continue", [see here for details](https://discord.com/channels/1110598183144399058/1212665261128417280/1212665261128417280).
- **HuggingFace**
- [Chat template](tokenizer_config.json#L51)
- Just like ChatML, just changed "assistant" to "text" role.
## Known Issues
- **34B repetition**:
- The 34B sometimes gets stuck repeating the same word, or synonyms. This seems to be a common problem across various Yi 34B fine-tunes.
- **GGUF**:
- The tokenization might be messed up. Some users reported that `<|im_start|>` and `<|im_end|>` are tokenized as multiple tokens. Also llama.cpp may not tokenize correctly (the Yi tokenizer is subtly different from the Llama 2 tokenizer).
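One quick way to check the tokenization on the Transformers side (the repo id below is the upstream model and may require network access):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("dreamgen/opus-v1.2-llama-3-8b")
for marker in ("<|im_start|>", "<|im_end|>"):
    ids = tok.encode(marker, add_special_tokens=False)
    print(marker, ids)  # each marker should come back as a single token id
```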
## License
- This model is intended for personal use only, other use is not permitted.
|
mradermacher/lostmagic-RP_7B-GGUF | mradermacher | 2024-05-05T15:05:05Z | 432 | 0 | transformers | [
"transformers",
"gguf",
"roleplay",
"uncensored",
"lewd",
"mature",
"not-for-all-audiences",
"en",
"base_model:Dunjeon/lostmagic-RP_7B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-26T19:19:10Z | ---
base_model: Dunjeon/lostmagic-RP_7B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- roleplay
- uncensored
- lewd
- mature
- not-for-all-audiences
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Dunjeon/lostmagic-RP_7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
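If a quant ever ships as multiple parts, they are typically plain byte splits that can simply be concatenated in order before use; a sketch with hypothetical filenames:

```shell
cat lostmagic-RP_7B.Q8_0.gguf.part1of2 lostmagic-RP_7B.Q8_0.gguf.part2of2 > lostmagic-RP_7B.Q8_0.gguf
```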
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/lostmagic-RP_7B-GGUF/resolve/main/lostmagic-RP_7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/lostmagic-RP_7B-GGUF/resolve/main/lostmagic-RP_7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/lostmagic-RP_7B-GGUF/resolve/main/lostmagic-RP_7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/lostmagic-RP_7B-GGUF/resolve/main/lostmagic-RP_7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/lostmagic-RP_7B-GGUF/resolve/main/lostmagic-RP_7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/lostmagic-RP_7B-GGUF/resolve/main/lostmagic-RP_7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/lostmagic-RP_7B-GGUF/resolve/main/lostmagic-RP_7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/lostmagic-RP_7B-GGUF/resolve/main/lostmagic-RP_7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/lostmagic-RP_7B-GGUF/resolve/main/lostmagic-RP_7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/lostmagic-RP_7B-GGUF/resolve/main/lostmagic-RP_7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/lostmagic-RP_7B-GGUF/resolve/main/lostmagic-RP_7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/lostmagic-RP_7B-GGUF/resolve/main/lostmagic-RP_7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/lostmagic-RP_7B-GGUF/resolve/main/lostmagic-RP_7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/lostmagic-RP_7B-GGUF/resolve/main/lostmagic-RP_7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/lostmagic-RP_7B-GGUF/resolve/main/lostmagic-RP_7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Mahou-1.3-mistral-7B-GGUF | mradermacher | 2024-05-30T02:57:39Z | 432 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:flammenai/MahouMix-v1",
"base_model:flammenai/Mahou-1.3-mistral-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-30T02:31:53Z | ---
base_model: flammenai/Mahou-1.3-mistral-7B
datasets:
- flammenai/MahouMix-v1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/flammenai/Mahou-1.3-mistral-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Mahou-1.3-mistral-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
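If you prefer to fetch a single quant programmatically rather than through the browser, a small huggingface_hub sketch (not part of the original card; the chosen quant is just an example) would be:
```python
# Sketch: download one quant from this repo with huggingface_hub.
# The Q4_K_M filename is just an example taken from the table below.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Mahou-1.3-mistral-7B-GGUF",
    filename="Mahou-1.3-mistral-7B.Q4_K_M.gguf",
)
print(path)  # local path, ready to pass to llama.cpp or llama-cpp-python
```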
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-mistral-7B-GGUF/resolve/main/Mahou-1.3-mistral-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-mistral-7B-GGUF/resolve/main/Mahou-1.3-mistral-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-mistral-7B-GGUF/resolve/main/Mahou-1.3-mistral-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-mistral-7B-GGUF/resolve/main/Mahou-1.3-mistral-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-mistral-7B-GGUF/resolve/main/Mahou-1.3-mistral-7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-mistral-7B-GGUF/resolve/main/Mahou-1.3-mistral-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-mistral-7B-GGUF/resolve/main/Mahou-1.3-mistral-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-mistral-7B-GGUF/resolve/main/Mahou-1.3-mistral-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-mistral-7B-GGUF/resolve/main/Mahou-1.3-mistral-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-mistral-7B-GGUF/resolve/main/Mahou-1.3-mistral-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-mistral-7B-GGUF/resolve/main/Mahou-1.3-mistral-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-mistral-7B-GGUF/resolve/main/Mahou-1.3-mistral-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-mistral-7B-GGUF/resolve/main/Mahou-1.3-mistral-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-mistral-7B-GGUF/resolve/main/Mahou-1.3-mistral-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-mistral-7B-GGUF/resolve/main/Mahou-1.3-mistral-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-7b-gguf | RichardErkhov | 2024-05-31T08:47:06Z | 432 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-31T06:10:41Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
japanese-stablelm-base-beta-7b - GGUF
- Model creator: https://huggingface.co/stabilityai/
- Original model: https://huggingface.co/stabilityai/japanese-stablelm-base-beta-7b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [japanese-stablelm-base-beta-7b.Q2_K.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-7b-gguf/blob/main/japanese-stablelm-base-beta-7b.Q2_K.gguf) | Q2_K | 2.36GB |
| [japanese-stablelm-base-beta-7b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-7b-gguf/blob/main/japanese-stablelm-base-beta-7b.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [japanese-stablelm-base-beta-7b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-7b-gguf/blob/main/japanese-stablelm-base-beta-7b.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [japanese-stablelm-base-beta-7b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-7b-gguf/blob/main/japanese-stablelm-base-beta-7b.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [japanese-stablelm-base-beta-7b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-7b-gguf/blob/main/japanese-stablelm-base-beta-7b.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [japanese-stablelm-base-beta-7b.Q3_K.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-7b-gguf/blob/main/japanese-stablelm-base-beta-7b.Q3_K.gguf) | Q3_K | 3.07GB |
| [japanese-stablelm-base-beta-7b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-7b-gguf/blob/main/japanese-stablelm-base-beta-7b.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [japanese-stablelm-base-beta-7b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-7b-gguf/blob/main/japanese-stablelm-base-beta-7b.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [japanese-stablelm-base-beta-7b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-7b-gguf/blob/main/japanese-stablelm-base-beta-7b.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [japanese-stablelm-base-beta-7b.Q4_0.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-7b-gguf/blob/main/japanese-stablelm-base-beta-7b.Q4_0.gguf) | Q4_0 | 3.56GB |
| [japanese-stablelm-base-beta-7b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-7b-gguf/blob/main/japanese-stablelm-base-beta-7b.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [japanese-stablelm-base-beta-7b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-7b-gguf/blob/main/japanese-stablelm-base-beta-7b.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [japanese-stablelm-base-beta-7b.Q4_K.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-7b-gguf/blob/main/japanese-stablelm-base-beta-7b.Q4_K.gguf) | Q4_K | 3.8GB |
| [japanese-stablelm-base-beta-7b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-7b-gguf/blob/main/japanese-stablelm-base-beta-7b.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [japanese-stablelm-base-beta-7b.Q4_1.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-7b-gguf/blob/main/japanese-stablelm-base-beta-7b.Q4_1.gguf) | Q4_1 | 3.95GB |
| [japanese-stablelm-base-beta-7b.Q5_0.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-7b-gguf/blob/main/japanese-stablelm-base-beta-7b.Q5_0.gguf) | Q5_0 | 4.33GB |
| [japanese-stablelm-base-beta-7b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-7b-gguf/blob/main/japanese-stablelm-base-beta-7b.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [japanese-stablelm-base-beta-7b.Q5_K.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-7b-gguf/blob/main/japanese-stablelm-base-beta-7b.Q5_K.gguf) | Q5_K | 4.45GB |
| [japanese-stablelm-base-beta-7b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-7b-gguf/blob/main/japanese-stablelm-base-beta-7b.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [japanese-stablelm-base-beta-7b.Q5_1.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-7b-gguf/blob/main/japanese-stablelm-base-beta-7b.Q5_1.gguf) | Q5_1 | 4.72GB |
| [japanese-stablelm-base-beta-7b.Q6_K.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-7b-gguf/blob/main/japanese-stablelm-base-beta-7b.Q6_K.gguf) | Q6_K | 5.15GB |
| [japanese-stablelm-base-beta-7b.Q8_0.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-7b-gguf/blob/main/japanese-stablelm-base-beta-7b.Q8_0.gguf) | Q8_0 | 6.67GB |
Original model description:
---
language:
- ja
tags:
- japanese-stablelm
- causal-lm
pipeline_tag: text-generation
datasets:
- wikipedia
- mc4
- cc100
- oscar-corpus/OSCAR-2301
- oscar-corpus/OSCAR-2201
- cerebras/SlimPajama-627B
license:
- llama2
extra_gated_fields:
Name: text
Email: text
Country: text
Organization or Affiliation: text
I allow Stability AI to contact me about information related to its models and research: checkbox
---
# Japanese-StableLM-Base-Beta-7B

> A cute robot wearing a kimono writes calligraphy with one single brush — [Stable Diffusion XL](https://clipdrop.co/stable-diffusion)
## Model Description
`japanese-stablelm-base-beta-7b` is a 7B-parameter decoder-only language model based on [Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b) that has been fine-tuned on a diverse collection of Japanese data, with the intent of maximizing downstream performance on Japanese language tasks.
For an instruction-following model, check [Japanese-StableLM-Instruct-Beta-7B](https://huggingface.co/stabilityai/japanese-stablelm-instruct-beta-7b). The base and instruct models are also available in larger 70b sizes. For a model that has faster inference times, see [Japanese-StableLM-Base-JA_Vocab-Beta-7B](https://huggingface.co/stabilityai/japanese-stablelm-base-ja_vocab-beta-7b), or [the instruction-following version](https://huggingface.co/stabilityai/japanese-stablelm-instruct-ja_vocab-beta-7b).
## Usage
First install additional dependencies in [requirements.txt](./requirements.txt):
```sh
pip install -r requirements.txt
```
Then start generating text with `japanese-stablelm-base-beta-7b` by using the following code snippet:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "stabilityai/japanese-stablelm-base-beta-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# The next line may need to be modified depending on the environment
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, low_cpu_mem_usage=True, device_map="auto")
prompt = """
AI で科学研究を加速するには、
""".strip()
input_ids = tokenizer.encode(
prompt,
add_special_tokens=True,
return_tensors="pt"
)
# this is for reproducibility.
# feel free to change it to get different results
seed = 23
torch.manual_seed(seed)
tokens = model.generate(
input_ids.to(device=model.device),
max_new_tokens=128,
temperature=0.99,
top_p=0.95,
do_sample=True,
)
out = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(out)
```
We suggest experimenting with different generation configs (`top_p`, `repetition_penalty`, etc.) to find the best setup for your tasks. For example, use a higher temperature for roleplay tasks and a lower temperature for reasoning.
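Continuing the snippet above, one way to compare settings is to keep two configurations side by side (the exact values here are illustrative, not recommendations from the model authors):
```python
# Illustrative only: two alternative generation configs for the snippet above
# (reuses `model`, `tokenizer` and `input_ids` defined earlier).
creative = dict(temperature=0.99, top_p=0.95, repetition_penalty=1.05, do_sample=True, max_new_tokens=128)
precise = dict(temperature=0.3, top_p=0.9, repetition_penalty=1.1, do_sample=True, max_new_tokens=128)

for name, cfg in [("creative", creative), ("precise", precise)]:
    tokens = model.generate(input_ids.to(device=model.device), **cfg)
    print(name, tokenizer.decode(tokens[0], skip_special_tokens=True))
```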
## Model Details
* **Model type**: `japanese-stablelm-base-beta-7b` model is an auto-regressive language model based on the Llama2 transformer architecture.
* **Language(s)**: Japanese
* **License**: [Llama2 Community License](https://ai.meta.com/llama/license/).
* **Contact**: For questions and comments about the model, please join [Stable Community Japan](https://discord.gg/StableJP). For future announcements / information about Stability AI models, research, and events, please follow https://twitter.com/StabilityAI_JP.
## Training Dataset
Roughly 100B tokens from a mixture of the following corpora were used for continued pre-training.
- [Japanese/English Wikipedia](https://dumps.wikimedia.org/other/cirrussearch)
- [Japanese mc4](https://huggingface.co/datasets/mc4)
- [Japanese CC-100](http://data.statmt.org/cc-100/ja.txt.xz)
- [Japanese OSCAR](https://oscar-project.github.io/documentation/)
- [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B) (excluding the Books3 subset)
## Use and Limitations
### Intended Use
The model is intended to be used by all individuals as a foundation for application-specific fine-tuning without strict limitations on commercial use.
### Limitations and bias
The pre-training dataset may have contained offensive or inappropriate content even after applying data cleansing filters, and this can be reflected in the model-generated text. We recommend that users exercise reasonable caution when using these models in production systems. Do not use the model for any applications that may cause harm or distress to individuals or groups.
## Authors
This model was developed by the Research & Development team at Stability AI Japan, and the development was co-led by [Takuya Akiba](https://huggingface.co/iwiwi) and [Meng Lee](https://huggingface.co/leemeng). The members of the team are as follows:
- [Meng Lee](https://huggingface.co/leemeng)
- [Fujiki Nakamura](https://huggingface.co/fujiki)
- [Makoto Shing](https://huggingface.co/mkshing)
- [Paul McCann](https://huggingface.co/polm-stability)
- [Takuya Akiba](https://huggingface.co/iwiwi)
- [Naoki Orii](https://huggingface.co/mrorii)
## Acknowledgements
We thank Meta Research for releasing Llama 2 under an open license for others to build on.
We are grateful for the contributions of the EleutherAI Polyglot-JA team in helping us to collect a large amount of pre-training data in Japanese. Polyglot-JA members include Hyunwoong Ko (Project Lead), Fujiki Nakamura (who originally started this project when he committed to the Polyglot team), Yunho Mo, Minji Jung, KeunSeok Im, and Su-Kyeong Jang.
We are also appreciative of [AI Novelist/Sta (Bit192, Inc.)](https://ai-novel.com/index.php) and the numerous contributors from [Stable Community Japan](https://discord.gg/VPrcE475HB) for assisting us in gathering a large amount of high-quality Japanese textual data for model training.
## How to cite
```
@misc{JapaneseStableLMBaseBeta7B,
url={[https://huggingface.co/stabilityai/japanese-stablelm-base-beta-7b](https://huggingface.co/stabilityai/japanese-stablelm-base-beta-7b)},
title={Japanese StableLM Base Beta 7B},
author={Lee, Meng and Nakamura, Fujiki and Shing, Makoto and McCann, Paul and Akiba, Takuya and Orii, Naoki}
}
```
|
mradermacher/Fimburs11V3-GGUF | mradermacher | 2024-05-31T20:34:26Z | 432 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:mergekit-community/Fimburs11V3",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-31T17:54:49Z | ---
base_model: mergekit-community/Fimburs11V3
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mergekit-community/Fimburs11V3
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Fimburs11V3-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
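The note above mentions concatenating multi-part files. The quants in this repo are single files, but for repos where a large quant is split, the parts only need to be joined byte-wise, for example (the part naming below is hypothetical — check the actual filenames in the repo you download from):
```python
# Hypothetical sketch: rebuild a split GGUF by simple byte-wise concatenation.
# The ".part*" naming is an assumption; adjust it to the actual filenames.
from pathlib import Path

parts = sorted(Path(".").glob("Fimburs11V3.Q8_0.gguf.part*"))
with open("Fimburs11V3.Q8_0.gguf", "wb") as out:
    for part in parts:
        out.write(part.read_bytes())
```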
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Fimburs11V3-GGUF/resolve/main/Fimburs11V3.Q2_K.gguf) | Q2_K | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Fimburs11V3-GGUF/resolve/main/Fimburs11V3.IQ3_XS.gguf) | IQ3_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Fimburs11V3-GGUF/resolve/main/Fimburs11V3.Q3_K_S.gguf) | Q3_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Fimburs11V3-GGUF/resolve/main/Fimburs11V3.IQ3_S.gguf) | IQ3_S | 4.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Fimburs11V3-GGUF/resolve/main/Fimburs11V3.IQ3_M.gguf) | IQ3_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Fimburs11V3-GGUF/resolve/main/Fimburs11V3.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Fimburs11V3-GGUF/resolve/main/Fimburs11V3.Q3_K_L.gguf) | Q3_K_L | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Fimburs11V3-GGUF/resolve/main/Fimburs11V3.IQ4_XS.gguf) | IQ4_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Fimburs11V3-GGUF/resolve/main/Fimburs11V3.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Fimburs11V3-GGUF/resolve/main/Fimburs11V3.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Fimburs11V3-GGUF/resolve/main/Fimburs11V3.Q5_K_S.gguf) | Q5_K_S | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/Fimburs11V3-GGUF/resolve/main/Fimburs11V3.Q5_K_M.gguf) | Q5_K_M | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/Fimburs11V3-GGUF/resolve/main/Fimburs11V3.Q6_K.gguf) | Q6_K | 8.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Fimburs11V3-GGUF/resolve/main/Fimburs11V3.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Abnmd/Llama_7b-mental_test2.0 | Abnmd | 2024-06-03T21:42:24Z | 432 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-06-02T09:30:03Z | Entry not found |
dyamagishi/output | dyamagishi | 2024-06-20T09:01:10Z | 432 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"controlnet",
"diffusers-training",
"base_model:cagliostrolab/animagine-xl-3.1",
"license:openrail++",
"region:us"
]
| text-to-image | 2024-06-14T04:51:58Z | ---
base_model: cagliostrolab/animagine-xl-3.1
library_name: diffusers
license: openrail++
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- controlnet
- diffusers-training
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# controlnet-dyamagishi/output
These are controlnet weights trained on cagliostrolab/animagine-xl-3.1 with a new type of conditioning.
You can find some example images below.
prompt: outdoors, scenery, cloud, multiple_girls, sky, day, tree, grass, architecture, 2girls, blue_sky, building, standing, skirt, long_hair, mountain, east_asian_architecture, from_behind, castle, facing_away, black_skirt, school_uniform, pagoda, waterfall, white_shirt, white_hair, shirt, cloudy_sky, bag

## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
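Until the snippet above is filled in, the following is a minimal diffusers sketch; the repository id, weight layout, conditioning image and prompt are assumptions, since the card does not document them.
```python
# Hedged sketch: load these ControlNet weights on top of the base model named above.
# The repo id, conditioning image and prompt below are assumptions, not documented values.
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained("dyamagishi/output", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.1", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

cond = load_image("conditioning.png")  # whatever conditioning this ControlNet expects
image = pipe(
    prompt="outdoors, scenery, 2girls, blue_sky, east_asian_architecture",
    image=cond,
    num_inference_steps=28,
).images[0]
image.save("example.png")
```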
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
Ali-C137/Ge1H10M-0000 | Ali-C137 | 2024-06-21T12:43:04Z | 432 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-17T19:25:03Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
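The card leaves this section empty. Assuming this is a standard causal language model checkpoint, as the repository tags suggest, a generic sketch would be:
```python
# Generic sketch only: the card does not document usage, so this assumes a
# standard transformers causal-LM checkpoint as suggested by the repository tags.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ali-C137/Ge1H10M-0000"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("Hello, ", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```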
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
NickyBP/g2ft-v2 | NickyBP | 2024-06-21T07:31:38Z | 432 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"gemma",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-21T07:27:26Z | Entry not found |
ThucPD/LLaMA-2-ReRank-512-EP2-10MAY2024-Q5_0.gguf | ThucPD | 2024-07-01T04:36:52Z | 432 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-07-01T04:09:03Z | Entry not found |
TheBloke/YuLan-Chat-2-13B-GGUF | TheBloke | 2023-09-27T12:48:24Z | 431 | 4 | transformers | [
"transformers",
"gguf",
"llama",
"base_model:yulan-team/YuLan-Chat-2-13b-fp16",
"license:mit",
"text-generation-inference",
"region:us"
]
| null | 2023-09-07T13:11:10Z | ---
license: mit
model_name: YuLan Chat 2 13B
base_model: yulan-team/YuLan-Chat-2-13b-fp16
inference: false
model_creator: RUC-GSAI-YuLan
model_type: llama
prompt_template: 'The following is a conversation between a human and an AI assistant
namely YuLan, developed by GSAI, Renmin University of China. The AI assistant gives
helpful, detailed, and polite answers to the user''s questions.
[|Human|]:{prompt}
[|AI|]:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# YuLan Chat 2 13B - GGUF
- Model creator: [RUC-GSAI-YuLan](https://huggingface.co/yulan-team)
- Original model: [YuLan Chat 2 13B](https://huggingface.co/yulan-team/YuLan-Chat-2-13b-fp16)
<!-- description start -->
## Description
This repo contains GGUF format model files for [RUC-GSAI-YuLan's YuLan Chat 2 13B](https://huggingface.co/yulan-team/YuLan-Chat-2-13b-fp16).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/YuLan-Chat-2-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/YuLan-Chat-2-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/YuLan-Chat-2-13B-GGUF)
* [RUC-GSAI-YuLan's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/yulan-team/YuLan-Chat-2-13b-fp16)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: YulanChat
```
The following is a conversation between a human and an AI assistant namely YuLan, developed by GSAI, Renmin University of China. The AI assistant gives helpful, detailed, and polite answers to the user's questions.
[|Human|]:{prompt}
[|AI|]:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `mit`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [RUC-GSAI-YuLan's YuLan Chat 2 13B](https://huggingface.co/yulan-team/YuLan-Chat-2-13b-fp16).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
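As a rough sanity check (not part of the original explanation), the 2.5625 bpw figure quoted for GGML_TYPE_Q2_K can be reproduced from the description above if one assumes a single additional fp16 scale per super-block:
```python
# Sanity check for the Q2_K bits-per-weight figure quoted above.
# Assumes 16 blocks of 16 weights, 2-bit quants, 4-bit block scales and mins,
# plus one fp16 super-block scale (that last item is an assumption).
weights = 16 * 16                 # weights per super-block
quant_bits = 2 * weights          # 2-bit quantized weights
scale_bits = 16 * (4 + 4)         # 4-bit scale + 4-bit min for each of the 16 blocks
super_block_bits = 16             # assumed fp16 super-block scale
print((quant_bits + scale_bits + super_block_bits) / weights)  # -> 2.5625
```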
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [yulan-chat-2-13b.Q2_K.gguf](https://huggingface.co/TheBloke/YuLan-Chat-2-13B-GGUF/blob/main/yulan-chat-2-13b.Q2_K.gguf) | Q2_K | 2 | 5.54 GB| 8.04 GB | smallest, significant quality loss - not recommended for most purposes |
| [yulan-chat-2-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/YuLan-Chat-2-13B-GGUF/blob/main/yulan-chat-2-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.78 GB| 8.28 GB | very small, high quality loss |
| [yulan-chat-2-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/YuLan-Chat-2-13B-GGUF/blob/main/yulan-chat-2-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.46 GB| 8.96 GB | very small, high quality loss |
| [yulan-chat-2-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/YuLan-Chat-2-13B-GGUF/blob/main/yulan-chat-2-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 7.05 GB| 9.55 GB | small, substantial quality loss |
| [yulan-chat-2-13b.Q4_0.gguf](https://huggingface.co/TheBloke/YuLan-Chat-2-13B-GGUF/blob/main/yulan-chat-2-13b.Q4_0.gguf) | Q4_0 | 4 | 7.50 GB| 10.00 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [yulan-chat-2-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/YuLan-Chat-2-13B-GGUF/blob/main/yulan-chat-2-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.55 GB| 10.05 GB | small, greater quality loss |
| [yulan-chat-2-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/YuLan-Chat-2-13B-GGUF/blob/main/yulan-chat-2-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 8.00 GB| 10.50 GB | medium, balanced quality - recommended |
| [yulan-chat-2-13b.Q5_0.gguf](https://huggingface.co/TheBloke/YuLan-Chat-2-13B-GGUF/blob/main/yulan-chat-2-13b.Q5_0.gguf) | Q5_0 | 5 | 9.12 GB| 11.62 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [yulan-chat-2-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/YuLan-Chat-2-13B-GGUF/blob/main/yulan-chat-2-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 9.12 GB| 11.62 GB | large, low quality loss - recommended |
| [yulan-chat-2-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/YuLan-Chat-2-13B-GGUF/blob/main/yulan-chat-2-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.38 GB| 11.88 GB | large, very low quality loss - recommended |
| [yulan-chat-2-13b.Q6_K.gguf](https://huggingface.co/TheBloke/YuLan-Chat-2-13B-GGUF/blob/main/yulan-chat-2-13b.Q6_K.gguf) | Q6_K | 6 | 10.84 GB| 13.34 GB | very large, extremely low quality loss |
| [yulan-chat-2-13b.Q8_0.gguf](https://huggingface.co/TheBloke/YuLan-Chat-2-13B-GGUF/blob/main/yulan-chat-2-13b.Q8_0.gguf) | Q8_0 | 8 | 14.04 GB| 16.54 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/YuLan-Chat-2-13B-GGUF and below it, a specific filename to download, such as: yulan-chat-2-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub>=0.17.1
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/YuLan-Chat-2-13B-GGUF yulan-chat-2-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/YuLan-Chat-2-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/YuLan-Chat-2-13B-GGUF yulan-chat-2-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m yulan-chat-2-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "The following is a conversation between a human and an AI assistant namely YuLan, developed by GSAI, Renmin University of China. The AI assistant gives helpful, detailed, and polite answers to the user's questions.\n[|Human|]:{prompt}\n[|AI|]:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install ctransformers>=0.2.24
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]>=0.2.24
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/YuLan-Chat-2-13B-GGUF", model_file="yulan-chat-2-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
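As a starting point (not from the original README), a minimal LangChain + llama-cpp-python sketch might look like the following; the filename and parameters are reused from the examples above, and the import path can differ between LangChain versions:
```python
# Hedged sketch: use one of the GGUF files above through LangChain's LlamaCpp wrapper.
# The import path may vary across LangChain versions.
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="yulan-chat-2-13b.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=32,   # set to 0 if you have no GPU acceleration
    temperature=0.7,
)

prompt = (
    "The following is a conversation between a human and an AI assistant namely YuLan, "
    "developed by GSAI, Renmin University of China. The AI assistant gives helpful, detailed, "
    "and polite answers to the user's questions.\n[|Human|]:Hello!\n[|AI|]:"
)
print(llm(prompt))
```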
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: RUC-GSAI-YuLan's YuLan Chat 2 13B
<div align=center>
<h1>YuLan-Chat: An Open-Source Bilingual Chatbot</h1>
</div>
YuLan-Chat models are chat-based large language models developed by researchers at GSAI, Renmin University of China (YuLan, the Yulan Magnolia, is the campus flower of Renmin University of China). The newest version is built by continually pre-training and instruction-tuning LLaMA-2 with high-quality English and Chinese data. The model has the following technical characteristics:
- Thanks to continued pre-training on high-quality Chinese-English bilingual data, the model's language ability has been improved.
- To better support Chinese and longer inputs and outputs, we expand the original vocabulary with Chinese words and extend the maximum length of LLaMA-2; it now supports an 8k context.
- To activate the bilingual instruction-following capacity, we construct high-quality bilingual instructions and perform multi-stage instruction tuning.
> YuLan-Chat系列模型是中国人民大学高瓴人工智能学院师生共同开发的支持聊天的大语言模型(名字"玉兰"取自中国人民大学校花)。最新版本基于LLaMA-2进行了中英文双语的继续预训练和指令微调。该版模型具有如下技术特点:
> - 由于在高质量中英双语数据上进行了继续预训练,模型的语言能力得到提高;
> - 为了更好的支持中文和更长的输入输出,对原版LLaMA-2的词表及长度进行了扩充,目前可支持8k上下文;
> - 为了让模型更好地服从用户指令,构建了高质量双语指令数据集,并行了多阶段指令微调。
## Model Zoo
Due to the license limitation, for models based on LLaMA, we only provide the weight difference with the original checkpoints; for models based on LLaMA-2, they can be used directly. Please check the [Usage](https://github.com/RUC-GSAI/YuLan-LLM/tree/main#usage) section for more details.
**Limitations**: Despite our efforts to reduce potential security issues during the model's usage and encourage the generation of text that aligns with ethical and legal requirements, the language model is based on probabilistic generation, which means it may still produce unexpected outputs. For instance, the generated responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. We do not assume any responsibility for any consequences resulting from the dissemination of harmful information.
> 由于许可证的限制,基于LLaMA的模型我们仅提供与官方模型的差值,基于LLaMA-2的模型可直接使用,具体请参见使用方法章节。
> **局限性**:尽管我们尝试减少模型在使用中可能出现的安全性问题,并鼓励模型生成符合道德和法律要求的文本,但由于语言模型基于概率生成的范式,模型仍然可能会产生意外的输出。 例如,生成的响应可能包含偏见、歧视或其他有害内容。 请不要传播此类内容。 我们对因传播有害信息而造成的任何后果不承担任何责任。
| Model | Backbone | Extended Vocab | Extended Length | Continue PT | SFT | Released Date |
| ------------------- | :--------: | :------------: | :-------------: | :---------: | ---- | :-----------: |
| [YuLan-Chat-2-13B](https://huggingface.co/yulan-team/YuLan-Chat-2-13b-fp16) | LLaMA2-13B | ✅ 51,190 | ✅ 8,192 | ✅ | ✅ | 2023.8.2 |
| [YuLan-LLaMA-2-13B](https://huggingface.co/yulan-team/YuLan-LLaMA-2-13b) | LLaMA2-13B | ✅ 51,190 | ✅ 8,192 | ✅ | ❌ | 2023.8.2 |
| [YuLan-Chat-1-65B-v2](https://huggingface.co/yulan-team/YuLan-Chat-1-65B-v2-delta) | LLaMA-65B | ✅ 51,190 | ❌ 2,048 | ✅ | ✅ | 2023.8.2 |
| [YuLan-Chat-1-13B-v1](https://huggingface.co/RUCAIBox/YuLan-Chat-13b-delta) | LLaMA-13B | ❌ 32,000 | ❌ 2,048 | ❌ | ✅ | 2023.6.8 |
| [YuLan-Chat-1-65B-v1](https://huggingface.co/RUCAIBox/YuLan-Chat-65b-delta) | LLaMA-65B | ❌ 32,000 | ❌ 2,048 | ❌ | ✅ | 2023.6.8 |
## Evaluation
We evaluate our YuLan-Chat model on several Chinese and English benchmarks. The evaluation results are shown as follows.
> 我们在中英文的一些基准测试上对YuLan-Chat进行了评价,其结果如下。
### MMLU
[MMLU](https://github.com/hendrycks/test) (Massive Multitask Language Understanding) is a benchmark designed to measure knowledge acquired during pretraining by evaluating models exclusively in zero-shot and few-shot settings.
> MMLU是一个评估模型知识量的常用的英文基准测试集。
| Model | STEM | Social Science | Humanities | Others | Avg. |
| --------------------------------- | :--: | :------------: | :--------: | :----: | :--: |
| YuLan-Chat-1-13B-v1 | 39.6 | 57.8 | 42.6 | 57.6 | 49.4 |
| YuLan-Chat-1-65B-v1 | 49.2 | 71.7 | 57.7 | 66.7 | 61.3 |
| YuLan-Chat-1-65B-v2 | 46.3 | 67.9 | 56.9 | 63.9 | 58.7 |
| LLaMA-2-13B | 44.6 | 64.2 | 53.9 | 62.2 | 56.2 |
| FlagAlpha/Llama2-Chinese-13b-Chat | 44.4 | 63.2 | 51.6 | 60.6 | 55.0 |
| Linly-AI/Chinese-LLaMA-2-13B-hf | 43.6 | 62.7 | 49.8 | 61.6 | 54.4 |
| YuLan-LLaMA-2-13B | 42.9 | 61.5 | 50.4 | 58.6 | 53.4 |
| YuLan-Chat-2-13B | 45.3 | 66.7 | 53.8 | 62.8 | 57.2 |
### C-Eval
[C-Eval](https://cevalbenchmark.com/) is a comprehensive Chinese evaluation suite for foundation models.
> C-Eval是一个针对基石模型综合能力的中文基准测试集。
| Model | STEM | Social Science | Humanities | Others | Avg. | Avg. (Hard) |
| --------------------------------- | :--: | :------------: | :--------: | :----: | :--: | :---------: |
| YuLan-Chat-1-13B-v1 | 30.2 | 37.4 | 31.9 | 30.7 | 32.0 | 25.7 |
| YuLan-Chat-1-65B-v1 | 37.7 | 46.1 | 36.8 | 38.0 | 39.2 | 31.1 |
| YuLan-Chat-1-65B-v2 | 39.9 | 55.9 | 47.7 | 43.7 | 45.4 | 31.4 |
| LLaMA-2-13B | 36.9 | 43.2 | 37.6 | 36.6 | 38.2 | 32.0 |
| FlagAlpha/Llama2-Chinese-13b-Chat | 36.8 | 44.5 | 36.3 | 36.5 | 38.1 | 30.9 |
| Linly-AI/Chinese-LLaMA-2-13B-hf | 33.7 | 44.8 | 36.6 | 36.5 | 37 | 27.7 |
| YuLan-LLaMA-2-13B | 35.3 | 46.4 | 41.9 | 37.6 | 39.3 | 28.6 |
| YuLan-Chat-2-13B | 38.9 | 49.7 | 45.0 | 40.8 | 42.6 | 32.2 |
### AGI-Eval-Gaokao
[AGI-Eval](https://github.com/microsoft/AGIEval) is a human-centric benchmark specifically designed to evaluate the general abilities of foundation models in tasks pertinent to human cognition and problem-solving. We use the sub-branch Chinese-Gaokao for evaluation.
> AGI-Eval 是一个以人为中心的基准,专门设计用于评估基础模型在与人类认知和解决问题相关的任务中的一般能力。我们使用其中的"高考"分支进行评测。
| Model | Avg. | Chinese | English | Geography | History | Biology | Chemistry | Physics | Math-QA | Math-Cloze |
| --------------------------------- | :--: | :-----: | :-----: | :-------: | :-----: | :-----: | :-------: | :-----: | :-----: | :--------: |
| YuLan-Chat-1-13B-v1 | 24.3 | 22.4 | 60.1 | 27.6 | 25.5 | 21.9 | 30.0 | 8.0 | 21.1 | 1.7 |
| YuLan-Chat-1-65B-v1 | 29.3 | 25.2 | 79.1 | 37.2 | 36.6 | 28.6 | 24.2 | 11.0 | 21.9 | 0.0 |
| YuLan-Chat-1-65B-v2 | 37.9 | 31.4 | 80.4 | 50.8 | 56.6 | 33.3 | 29.0 | 32.0 | 24.4 | 0.8 |
| LLaMA-2-13B | 32.7 | 27.2 | 72.2 | 36.2 | 43.0 | 26.2 | 32.4 | 30.0 | 26.2 | 0.9 |
| FlagAlpha/Llama2-Chinese-13b-Chat | 31.6 | 26.4 | 70.6 | 35.2 | 38.7 | 28.1 | 28.0 | 29.5 | 25.6 | 2.5 |
| Linly-AI/Chinese-LLaMA-2-13B-hf | 31.1 | 22.8 | 74.8 | 42.2 | 37.9 | 24.3 | 28.0 | 23.0 | 26.5 | 0.0 |
| YuLan-LLaMA-2-13B | 34.2 | 25.2 | 70.3 | 43.2 | 48.5 | 30.0 | 29.5 | 31.0 | 28.5 | 1.7 |
| YuLan-Chat-2-13B | 39.5 | 37.0 | 85.3 | 46.7 | 51.9 | 43.8 | 38.2 | 29.0 | 23.1 | 0.9 |
## Usage
### Import from Huggingface Transformers
As our model is trained based on LLaMA, it can be loaded in the same way as the original LLaMA.
> 由于我们的模型是基于LLaMA开发的,可以使用与LLaMA相同的方法加载。
```Python
>>> from transformers import LlamaTokenizer, LlamaForCausalLM
>>> tokenizer = LlamaTokenizer.from_pretrained("yulan-team/YuLan-Chat-2-13b")
>>> model = LlamaForCausalLM.from_pretrained("yulan-team/YuLan-Chat-2-13b").cuda()
>>> model = model.eval()
>>> input_text = "hello"
>>> prompt = "The following is a conversation between a human and an AI assistant namely YuLan, developed by GSAI, Renmin University of China. The AI assistant gives helpful, detailed, and polite answers to the user's questions.\n[|Human|]:{}\n[|AI|]:".format(input_text)
>>> inputs = tokenizer(prompt, return_tensors='pt', padding="longest", max_length=8192, truncation=True, return_attention_mask=True, add_special_tokens=True)
>>> kwargs = {'temperature': 0.8, 'top_p': 0.95, "top_k": 50, "repetition_penalty": 1.1, "no_repeat_ngram_size": 64, "max_length": 8192, "pad_token_id": tokenizer.bos_token_id, "eos_token_id": tokenizer.eos_token_id}
>>> outputs = model.generate(inputs['input_ids'].to(model.device), attention_mask=inputs['attention_mask'].to(model.device), do_sample=True, **kwargs)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[len(prompt):])
Hello! How can I assist you today?
```
## License
YuLan-Chat uses [MIT License](https://github.com/RUC-GSAI/YuLan-LLM/blob/main/LICENSE). All data and code in this project can only be used for academic purposes.
> 本项目使用MIT许可,所有的数据和代码仅供学术研究使用。
## Contributors
| **Pre-training** | **Fine-tuning** |
|:----------------------------- |:-------------------------------------------------------------------- |
| [Yutao Zhu](https://github.com/DaoD) (Lead), [Kelong Mao](https://github.com/kyriemao), [Wentong Chen](https://github.com/yiye3), [Yiding Sun](https://github.com/Emanual20), [Yihan Wu](https://github.com/wyh2000), [Qian Cao](https://github.com/Aman-4-Real), [Lei Zhang](https://github.com/LLily0703), [Feng Wang](https://github.com/PhealenWang), [Qiangqiang Ren](https://github.com/QiangKing)| [Kun Zhou](https://github.com/Lancelot39) (Lead), [Yushuo Chen](https://github.com/chenyushuo), [Zhipeng Chen](https://github.com/Timothy023), [Lei Wang](https://github.com/Paitesanshi), [Yupeng Hou](https://github.com/hyp1231), [Xincheng Pang](https://github.com/pangxincheng), [Junyi Li](https://github.com/turboLJY), [Yuhan Chen](https://github.com/Fiorina1212), [Shufang Xie](https://github.com/funtion) |
## Reference
Please kindly cite our work if it helps you.
> 如果我们的项目对您有帮助,请引用我们,谢谢!
```BibTeX
@misc{YuLan-Chat,
author = {YuLan-Team},
title = {YuLan-Chat: An Open-Source Bilingual Chatbot},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/RUC-GSAI/YuLan-Chat}},
}
```
<!-- original-model-card end -->
|
NurtureAI/Orca-2-13B-16k-GGUF | NurtureAI | 2023-11-30T01:51:27Z | 431 | 6 | null | [
"gguf",
"orca",
"orca2",
"microsoft",
"text-generation",
"arxiv:2311.11045",
"license:other",
"region:us"
]
| text-generation | 2023-11-22T05:27:54Z | ---
pipeline_tag: text-generation
tags:
- orca
- orca2
- microsoft
license: other
license_name: microsoft-research-license
license_link: LICENSE
---
# Orca 2 13b extended to 16k context (GGUF files)
# Original Model
# Orca 2
<!-- Provide a quick summary of what the model is/does. -->
Orca 2 is a helpful assistant that is built for research purposes only and provides a single-turn response
in tasks such as reasoning over user-given data, reading comprehension, math problem solving and text summarization.
The model is designed to excel particularly in reasoning.
We publicly release Orca 2 to encourage further research on the development, evaluation, and alignment of smaller LMs.
## What is Orca 2’s intended use(s)?
+ Orca 2 is built for research purposes only.
+ The main purpose is to allow the research community to assess its abilities and to provide a foundation for
building better frontier models.
## How was Orca 2 evaluated?
+ Orca 2 has been evaluated on a large number of tasks ranging from reasoning to grounding and safety. Please refer
to Section 6 and Appendix in the [Orca 2 paper](https://arxiv.org/pdf/2311.11045.pdf) for details on evaluations.
## Model Details
Orca 2 is a finetuned version of LLAMA-2. Orca 2’s training data is a synthetic dataset that was created to enhance the small model’s reasoning abilities.
All synthetic training data was moderated using the Microsoft Azure content filters. More details about the model can be found in the [Orca 2 paper](https://arxiv.org/pdf/2311.11045.pdf).
Please refer to LLaMA-2 technical report for details on the model architecture.
## License
Orca 2 is licensed under the [Microsoft Research License](LICENSE).
Llama 2 is licensed under the [LLAMA 2 Community License](https://ai.meta.com/llama/license/), Copyright © Meta Platforms, Inc. All Rights Reserved.
## Bias, Risks, and Limitations
Orca 2, built upon the LLaMA 2 model family, retains many of its limitations, as well as the
common limitations of other large language models or limitation caused by its training process,
including:
**Data Biases**: Large language models, trained on extensive data, can inadvertently carry
biases present in the source data. Consequently, the models may generate outputs that could
be potentially biased or unfair.
**Lack of Contextual Understanding**: Despite their impressive capabilities in language understanding and generation, these models exhibit limited real-world understanding, resulting
in potential inaccuracies or nonsensical responses.
**Lack of Transparency**: Due to the complexity and size, large language models can act
as “black boxes”, making it difficult to comprehend the rationale behind specific outputs or
decisions. We recommend reviewing transparency notes from Azure for more information.
**Content Harms**: There are various types of content harms that large language models
can cause. It is important to be aware of them when using these models, and to take
actions to prevent them. It is recommended to leverage various content moderation services
provided by different companies and institutions. On an important note, we hope for better
regulations and standards from government and technology leaders around content harms
for AI technologies in future. We value and acknowledge the important role that research
and open source community can play in this direction.
**Hallucination**: It is important to be aware and cautious not to entirely rely on a given
language model for critical decisions or information that might have deep impact as it is
not obvious how to prevent these models from fabricating content. Moreover, it is not clear
whether small models may be more susceptible to hallucination in ungrounded generation
use cases due to their smaller sizes and hence reduced memorization capacities. This is an
active research topic and we hope there will be more rigorous measurement, understanding
and mitigations around this topic.
**Potential for Misuse**: Without suitable safeguards, there is a risk that these models could
be maliciously used for generating disinformation or harmful content.
**Data Distribution**: Orca 2’s performance is likely to correlate strongly with the distribution
of the tuning data. This correlation might limit its accuracy in areas underrepresented in
the training dataset such as math, coding, and reasoning.
**System messages**: Orca 2 demonstrates variance in performance depending on the system
instructions. Additionally, the stochasticity introduced by the model size may lead to
generation of non-deterministic responses to different system instructions.
**Zero-Shot Settings**: Orca 2 was trained on data that mostly simulate zero-shot settings.
While the model demonstrates very strong performance in zero-shot settings, it does not show
the same gains from few-shot learning compared to other, especially larger, models.
**Synthetic data**: As Orca 2 is trained on synthetic data, it could inherit both the advantages
and shortcomings of the models and methods used for data generation. We posit that Orca
2 benefits from the safety measures incorporated during training and safety guardrails (e.g.,
content filter) within the Azure OpenAI API. However, detailed studies are required for
better quantification of such risks.
This model is solely designed for research settings, and its testing has only been carried
out in such environments. It should not be used in downstream applications, as additional
analysis is needed to assess potential harm or bias in the proposed application.
## Getting started with Orca 2
**Inference with Hugging Face library**
```python
import torch
import transformers
if torch.cuda.is_available():
    torch.set_default_device("cuda")
else:
    torch.set_default_device("cpu")
model = transformers.AutoModelForCausalLM.from_pretrained("microsoft/Orca-2-13b", device_map='auto')
# https://github.com/huggingface/transformers/issues/27132
# please use the slow tokenizer since fast and slow tokenizer produces different tokens
tokenizer = transformers.AutoTokenizer.from_pretrained(
"microsoft/Orca-2-13b",
use_fast=False,
)
system_message = "You are Orca, an AI language model created by Microsoft. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."
user_message = "How can you determine if a restaurant is popular among locals or mainly attracts tourists, and why might this information be useful?"
prompt = f"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant"
inputs = tokenizer(prompt, return_tensors='pt')
output_ids = model.generate(inputs["input_ids"],)
answer = tokenizer.batch_decode(output_ids)[0]
print(answer)
# This example continues showing how to add a second turn message by the user to the conversation
second_turn_user_message = "Give me a list of the key points of your first answer."
# we set add_special_tokens=False because we don't want to automatically add a bos_token between messages
second_turn_message_in_markup = f"\n<|im_start|>user\n{second_turn_user_message}<|im_end|>\n<|im_start|>assistant"
second_turn_tokens = tokenizer(second_turn_message_in_markup, return_tensors='pt', add_special_tokens=False)
second_turn_input = torch.cat([output_ids, second_turn_tokens['input_ids']], dim=1)
output_ids_2 = model.generate(second_turn_input,)
second_turn_answer = tokenizer.batch_decode(output_ids_2)[0]
print(second_turn_answer)
```
**Safe inference with Azure AI Content Safety**
The usage of [Azure AI Content Safety](https://azure.microsoft.com/en-us/products/ai-services/ai-content-safety/) on top of model prediction is strongly encouraged
and can help prevent content harms. Azure AI Content Safety is a content moderation platform
that uses AI to keep your content safe. By integrating Orca 2 with Azure AI Content Safety,
we can moderate the model output by scanning it for sexual content, violence, hate, and
self-harm with multiple severity levels and multi-lingual detection.
```python
import os
import math
import transformers
import torch
from azure.ai.contentsafety import ContentSafetyClient
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import HttpResponseError
from azure.ai.contentsafety.models import AnalyzeTextOptions
CONTENT_SAFETY_KEY = os.environ["CONTENT_SAFETY_KEY"]
CONTENT_SAFETY_ENDPOINT = os.environ["CONTENT_SAFETY_ENDPOINT"]
# We use Azure AI Content Safety to filter out any content that reaches "Medium" threshold
# For more information: https://learn.microsoft.com/en-us/azure/ai-services/content-safety/
def should_filter_out(input_text, threshold=4):
    # Create a Content Safety client
    client = ContentSafetyClient(CONTENT_SAFETY_ENDPOINT, AzureKeyCredential(CONTENT_SAFETY_KEY))
    # Construct a request
    request = AnalyzeTextOptions(text=input_text)
    # Analyze text
    try:
        response = client.analyze_text(request)
    except HttpResponseError as e:
        print("Analyze text failed.")
        if e.error:
            print(f"Error code: {e.error.code}")
            print(f"Error message: {e.error.message}")
            raise
        print(e)
        raise
    categories = ["hate_result", "self_harm_result", "sexual_result", "violence_result"]
    max_score = -math.inf
    for category in categories:
        max_score = max(max_score, getattr(response, category).severity)
    return max_score >= threshold
model_path = 'microsoft/Orca-2-13b'
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = transformers.AutoModelForCausalLM.from_pretrained(model_path)
model.to(device)
tokenizer = transformers.AutoTokenizer.from_pretrained(
model_path,
model_max_length=4096,
padding_side="right",
use_fast=False,
add_special_tokens=False,
)
system_message = "You are Orca, an AI language model created by Microsoft. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."
user_message = "\" \n :You can't just say, \"\"that's crap\"\" and remove it without gaining a consensus. You already know this, based on your block history. —/ \" \nIs the comment obscene? \nOptions : Yes, No."
prompt = f"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant"
inputs = tokenizer(prompt, return_tensors='pt')
inputs = inputs.to(device)
output_ids = model.generate(inputs["input_ids"], max_length=4096, do_sample=False, temperature=0.0, use_cache=True)
sequence_length = inputs["input_ids"].shape[1]
new_output_ids = output_ids[:, sequence_length:]
answers = tokenizer.batch_decode(new_output_ids, skip_special_tokens=True)
final_output = answers[0] if not should_filter_out(answers[0]) else "[Content Filtered]"
print(final_output)
```
## Citation
```bibtex
@misc{mitra2023orca,
title={Orca 2: Teaching Small Language Models How to Reason},
author={Arindam Mitra and Luciano Del Corro and Shweti Mahajan and Andres Codas and Clarisse Simoes and Sahaj Agrawal and Xuxi Chen and Anastasia Razdaibiedina and Erik Jones and Kriti Aggarwal and Hamid Palangi and Guoqing Zheng and Corby Rosset and Hamed Khanpour and Ahmed Awadallah},
year={2023},
eprint={2311.11045},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
``` |
TheBloke/Astrid-Mistral-7B-GGUF | TheBloke | 2023-11-28T20:31:45Z | 431 | 5 | transformers | [
"transformers",
"gguf",
"mistral",
"gpt",
"llm",
"large language model",
"PAIX.Cloud",
"en",
"base_model:PAIXAI/Astrid-Mistral-7B",
"license:apache-2.0",
"text-generation-inference",
"region:us"
]
| null | 2023-11-28T16:24:32Z | ---
base_model: PAIXAI/Astrid-Mistral-7B
inference: false
language:
- en
library_name: transformers
license: apache-2.0
model_creator: PAIX
model_name: Astrid Mistral 7B
model_type: mistral
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
tags:
- gpt
- llm
- large language model
- PAIX.Cloud
thumbnail: https://static.wixstatic.com/media/bdee4e_8aa5cefc86024bc88f7e20e3e19d9ff3~mv2.png/v1/fill/w_192%2Ch_192%2Clg_1%2Cusm_0.66_1.00_0.01/bdee4e_8aa5cefc86024bc88f7e20e3e19d9ff3~mv2.png
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Astrid Mistral 7B - GGUF
- Model creator: [PAIX](https://huggingface.co/PAIXAI)
- Original model: [Astrid Mistral 7B](https://huggingface.co/PAIXAI/Astrid-Mistral-7B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [PAIX's Astrid Mistral 7B](https://huggingface.co/PAIXAI/Astrid-Mistral-7B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Astrid-Mistral-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Astrid-Mistral-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Astrid-Mistral-7B-GGUF)
* [PAIX's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/PAIXAI/Astrid-Mistral-7B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
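As a convenience, here is a small illustrative Python helper that assembles a prompt in exactly the format above; the system and user messages are placeholders.
```python
# Illustrative helper: builds a ChatML prompt string in the format shown above.
def build_chatml_prompt(system_message: str, user_message: str) -> str:
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant"
    )

prompt = build_chatml_prompt(
    "You are a helpful assistant.",
    "Why is drinking water so healthy?",
)
```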
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
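As a rough sanity check (an approximation only, since some tensors are stored at higher precision and the file also contains metadata), you can estimate a file size from the bits-per-weight figures above:
```python
# Rough, illustrative estimate: parameters x bits-per-weight / 8 bytes.
def approx_gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    return n_params * bits_per_weight / 8 / 1e9

# ~7.24B parameters for a Mistral 7B model at ~4.5 bpw (Q4_K)
print(round(approx_gguf_size_gb(7.24e9, 4.5), 2))  # ~4.07 GB lower bound
```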
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [astrid-mistral-7b.Q2_K.gguf](https://huggingface.co/TheBloke/Astrid-Mistral-7B-GGUF/blob/main/astrid-mistral-7b.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [astrid-mistral-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Astrid-Mistral-7B-GGUF/blob/main/astrid-mistral-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [astrid-mistral-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Astrid-Mistral-7B-GGUF/blob/main/astrid-mistral-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [astrid-mistral-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Astrid-Mistral-7B-GGUF/blob/main/astrid-mistral-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [astrid-mistral-7b.Q4_0.gguf](https://huggingface.co/TheBloke/Astrid-Mistral-7B-GGUF/blob/main/astrid-mistral-7b.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [astrid-mistral-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Astrid-Mistral-7B-GGUF/blob/main/astrid-mistral-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [astrid-mistral-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Astrid-Mistral-7B-GGUF/blob/main/astrid-mistral-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [astrid-mistral-7b.Q5_0.gguf](https://huggingface.co/TheBloke/Astrid-Mistral-7B-GGUF/blob/main/astrid-mistral-7b.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [astrid-mistral-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Astrid-Mistral-7B-GGUF/blob/main/astrid-mistral-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [astrid-mistral-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Astrid-Mistral-7B-GGUF/blob/main/astrid-mistral-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [astrid-mistral-7b.Q6_K.gguf](https://huggingface.co/TheBloke/Astrid-Mistral-7B-GGUF/blob/main/astrid-mistral-7b.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [astrid-mistral-7b.Q8_0.gguf](https://huggingface.co/TheBloke/Astrid-Mistral-7B-GGUF/blob/main/astrid-mistral-7b.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Astrid-Mistral-7B-GGUF and below it, a specific filename to download, such as: astrid-mistral-7b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Astrid-Mistral-7B-GGUF astrid-mistral-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
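The same download can also be done from Python with the `huggingface_hub` library (illustrative snippet; any filename from the table above works):
```python
# Illustrative: download a single GGUF file from Python using huggingface_hub.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="TheBloke/Astrid-Mistral-7B-GGUF",
    filename="astrid-mistral-7b.Q4_K_M.gguf",
    local_dir=".",
)
print(local_path)
```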
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Astrid-Mistral-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Astrid-Mistral-7B-GGUF astrid-mistral-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m astrid-mistral-7b.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./astrid-mistral-7b.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./astrid-mistral-7b.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
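For reference, a minimal LangChain + llama-cpp-python sketch is shown below. It is illustrative only: import paths have moved between LangChain versions, so follow the guides above if this does not match your installed version.
```python
# Illustrative sketch; check the LangChain guide above for current import paths.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./astrid-mistral-7b.Q4_K_M.gguf",
    n_ctx=32768,      # max sequence length, as in the llama-cpp-python example above
    n_gpu_layers=35,  # set to 0 if you have no GPU acceleration
    temperature=0.7,
)

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWhy is the sky blue?<|im_end|>\n<|im_start|>assistant"
)
print(llm.invoke(prompt))
```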
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: PAIX's Astrid Mistral 7B
# Model Card
## Summary
- Base model: [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
This model, Astrid-7B-Assistant, is a Mistral-7B base model for causal language modeling, designed to generate human-like text.
It's part of our mission to make AI technology accessible to everyone, focusing on personalization, data privacy, and transparent AI governance.
Trained in English, it's a versatile tool for a variety of applications.
This model is one of the many models available on our platform, and we currently have 1B and 7B open-source models.
This model was trained by [PAIX.Cloud](https://www.paix.cloud/).
- Wait list: [Wait List](https://www.paix.cloud/join-waitlist)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` library installed.
```bash
pip install transformers==4.34.0
```
Also make sure you are providing your Hugging Face token to the pipeline if the model resides in a private repo.
- Either leave `token=True` in the `pipeline` and log in to `huggingface_hub` by running
```python
import huggingface_hub
huggingface_hub.login(<ACCESS_TOKEN>)
```
- Or directly pass your <ACCESS_TOKEN> to `token` in the `pipeline`
```python
from transformers import pipeline
generate_text = pipeline(
model="PAIXAI/Astrid-Mistral-7B",
torch_dtype="auto",
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
token=True,
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?<|im_end|><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"PAIXAI/Astrid-Mistral-7B",
use_fast=True,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"PAIXAI/Astrid-Mistral-7B",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "PAIXAI/Astrid-Mistral-7B" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?<|im_end|><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=True,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Quantization and sharding
You can load the models using quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Also, sharding on multiple GPUs is possible by setting ```device_map=auto```.
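For example (illustrative only; requires the `bitsandbytes` and `accelerate` packages):
```python
# Illustrative: 4-bit quantized loading with automatic multi-GPU sharding.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "PAIXAI/Astrid-Mistral-7B",
    load_in_4bit=True,    # or load_in_8bit=True
    device_map="auto",    # shard across available GPUs
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
    "PAIXAI/Astrid-Mistral-7B",
    use_fast=True,
    trust_remote_code=True,
)
```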
## Model Architecture
```
MistralForCausalLM(
(model): MistralModel(
(embed_tokens): Embedding(32002, 4096, padding_idx=0)
(layers): ModuleList(
(0-31): 32 x MistralDecoderLayer(
(self_attn): MistralAttention(
(q_proj): Linear(in_features=4096, out_features=4096, bias=False)
(k_proj): Linear(in_features=4096, out_features=1024, bias=False)
(v_proj): Linear(in_features=4096, out_features=1024, bias=False)
(o_proj): Linear(in_features=4096, out_features=4096, bias=False)
(rotary_emb): MistralRotaryEmbedding()
)
(mlp): MistralMLP(
(gate_proj): Linear(in_features=4096, out_features=14336, bias=False)
(up_proj): Linear(in_features=4096, out_features=14336, bias=False)
(down_proj): Linear(in_features=14336, out_features=4096, bias=False)
(act_fn): SiLUActivation()
)
(input_layernorm): MistralRMSNorm()
(post_attention_layernorm): MistralRMSNorm()
)
)
(norm): MistralRMSNorm()
)
(lm_head): Linear(in_features=4096, out_features=32002, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
<!-- original-model-card end -->
|
mradermacher/Hapsburg_v0_1-GGUF | mradermacher | 2024-05-06T05:03:38Z | 431 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:JDBMG/Hapsburg_v0_1",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-10T12:28:59Z | ---
base_model: JDBMG/Hapsburg_v0_1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/JDBMG/Hapsburg_v0_1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Hapsburg_v0_1-GGUF/resolve/main/Hapsburg_v0_1.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Hapsburg_v0_1-GGUF/resolve/main/Hapsburg_v0_1.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Hapsburg_v0_1-GGUF/resolve/main/Hapsburg_v0_1.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Hapsburg_v0_1-GGUF/resolve/main/Hapsburg_v0_1.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Hapsburg_v0_1-GGUF/resolve/main/Hapsburg_v0_1.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Hapsburg_v0_1-GGUF/resolve/main/Hapsburg_v0_1.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Hapsburg_v0_1-GGUF/resolve/main/Hapsburg_v0_1.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Hapsburg_v0_1-GGUF/resolve/main/Hapsburg_v0_1.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Hapsburg_v0_1-GGUF/resolve/main/Hapsburg_v0_1.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hapsburg_v0_1-GGUF/resolve/main/Hapsburg_v0_1.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hapsburg_v0_1-GGUF/resolve/main/Hapsburg_v0_1.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Hapsburg_v0_1-GGUF/resolve/main/Hapsburg_v0_1.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Hapsburg_v0_1-GGUF/resolve/main/Hapsburg_v0_1.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Hapsburg_v0_1-GGUF/resolve/main/Hapsburg_v0_1.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Shankhdhar/classifier_onnx | Shankhdhar | 2024-06-06T22:38:25Z | 431 | 0 | transformers | [
"transformers",
"onnx",
"mpnet",
"feature-extraction",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2024-05-10T20:29:51Z | Entry not found |
mradermacher/Llama-3-70b-Uncensored-Lumi-Tess-gradient-GGUF | mradermacher | 2024-05-15T07:33:59Z | 431 | 5 | transformers | [
"transformers",
"gguf",
"uncencored",
"llama-3",
"tess",
"lumimaid",
"Lumi-tess",
"en",
"base_model:ryzen88/Llama-3-70b-Uncensored-Lumi-Tess-gradient",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-15T03:10:34Z | ---
base_model: ryzen88/Llama-3-70b-Uncensored-Lumi-Tess-gradient
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- uncencored
- llama-3
- tess
- lumimaid
- Lumi-tess
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/ryzen88/Llama-3-70b-Uncensored-Lumi-Tess-gradient
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3-70b-Uncensored-Lumi-Tess-gradient-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
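For the multi-part quants below (files ending in `.part1of2` / `.part2of2`), the parts are plain byte splits and need to be concatenated in order before use; see the linked README if in doubt. A minimal illustrative Python sketch:
```python
# Illustrative: concatenate split GGUF parts back into a single file.
import shutil

parts = [
    "Llama-3-70b-Uncensored-Lumi-Tess-gradient.Q6_K.gguf.part1of2",
    "Llama-3-70b-Uncensored-Lumi-Tess-gradient.Q6_K.gguf.part2of2",
]
with open("Llama-3-70b-Uncensored-Lumi-Tess-gradient.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```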
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70b-Uncensored-Lumi-Tess-gradient-GGUF/resolve/main/Llama-3-70b-Uncensored-Lumi-Tess-gradient.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70b-Uncensored-Lumi-Tess-gradient-GGUF/resolve/main/Llama-3-70b-Uncensored-Lumi-Tess-gradient.IQ3_XS.gguf) | IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70b-Uncensored-Lumi-Tess-gradient-GGUF/resolve/main/Llama-3-70b-Uncensored-Lumi-Tess-gradient.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70b-Uncensored-Lumi-Tess-gradient-GGUF/resolve/main/Llama-3-70b-Uncensored-Lumi-Tess-gradient.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70b-Uncensored-Lumi-Tess-gradient-GGUF/resolve/main/Llama-3-70b-Uncensored-Lumi-Tess-gradient.IQ3_M.gguf) | IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70b-Uncensored-Lumi-Tess-gradient-GGUF/resolve/main/Llama-3-70b-Uncensored-Lumi-Tess-gradient.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70b-Uncensored-Lumi-Tess-gradient-GGUF/resolve/main/Llama-3-70b-Uncensored-Lumi-Tess-gradient.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70b-Uncensored-Lumi-Tess-gradient-GGUF/resolve/main/Llama-3-70b-Uncensored-Lumi-Tess-gradient.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70b-Uncensored-Lumi-Tess-gradient-GGUF/resolve/main/Llama-3-70b-Uncensored-Lumi-Tess-gradient.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70b-Uncensored-Lumi-Tess-gradient-GGUF/resolve/main/Llama-3-70b-Uncensored-Lumi-Tess-gradient.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70b-Uncensored-Lumi-Tess-gradient-GGUF/resolve/main/Llama-3-70b-Uncensored-Lumi-Tess-gradient.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70b-Uncensored-Lumi-Tess-gradient-GGUF/resolve/main/Llama-3-70b-Uncensored-Lumi-Tess-gradient.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Llama-3-70b-Uncensored-Lumi-Tess-gradient-GGUF/resolve/main/Llama-3-70b-Uncensored-Lumi-Tess-gradient.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-70b-Uncensored-Lumi-Tess-gradient-GGUF/resolve/main/Llama-3-70b-Uncensored-Lumi-Tess-gradient.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Llama-3-70b-Uncensored-Lumi-Tess-gradient-GGUF/resolve/main/Llama-3-70b-Uncensored-Lumi-Tess-gradient.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-70b-Uncensored-Lumi-Tess-gradient-GGUF/resolve/main/Llama-3-70b-Uncensored-Lumi-Tess-gradient.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RedRocket/Fluffyrock-Unbound | RedRocket | 2024-06-16T03:27:06Z | 431 | 6 | diffusers | [
"diffusers",
"not-for-all-audiences",
"art",
"text-to-image",
"en",
"arxiv:2312.02696",
"base_model:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2024-05-19T06:23:32Z | ---
license: creativeml-openrail-m
tags:
- not-for-all-audiences
- art
pipeline_tag: text-to-image
language:
- en
base_model: runwayml/stable-diffusion-v1-5
library_name: diffusers
---
<div style="text-align: center;">
<img style="margin-right: 0.5em; width: 30%; display: inline-block;" src="https://huggingface.co/RedRocket/Fluffyrock-Unbound/resolve/main/example-2.webp">
<img style="margin-right: 0.5em; width: 30%; display: inline-block;" src="https://huggingface.co/RedRocket/Fluffyrock-Unbound/resolve/main/example-1.webp">
<img style="width: 30%; display: inline-block;" src="https://huggingface.co/RedRocket/Fluffyrock-Unbound/resolve/main/example-3.webp">
<h1 style="text-align: center; margin-bottom: 0;">FluffyRock Unbound v1.1</h1>
<span style="font-size: large;">By <a href="https://huggingface.co/RedRocket/" style="font-size: large;">Project RedRocket</a> 🚀</span>
</div>
FluffyRock Unbound is a finetune of [Fluffyrock Unleashed v1.0](https://huggingface.co/RedRocket/Fluffyrock-Unleashed) trained on an expanded, curated <a href="https://e621.net/">e621</a> dataset and with training changes adapted from Nvidia Labs <a href="https://arxiv.org/abs/2312.02696">EDM2</a>.
This model can produce detailed sexually explicit content and is not suitable for use by minors. It will generally not produce sexually explicit content unless prompted.
## Downloads
<a href="https://huggingface.co/RedRocket/Fluffyrock-Unbound/resolve/main/Fluffyrock-Unbound-v1-1.safetensors?download=true">Fluffyrock-Unbound-v1-1.safetensors</a> - Main model EMA checkpoint.<br>
<a href="https://huggingface.co/RedRocket/Fluffyrock-Unbound/resolve/main/Fluffyrock-Unbound-v1-1.yaml?download=true">Fluffyrock-Unbound-v1-1.yaml</a> - YAML file for A1111 Stable Diffusion WebUI. Place this in the same folder as the model.<br>
<a href="https://huggingface.co/RedRocket/Fluffyrock-Unbound/resolve/main/fluffyrock-unbound-tag-strength-v1.1.csv?download=true">fluffyrock-unbound-tag-strength-v1.1.csv</a> - Recommended tag completion file, representing the strength of each concept in the model. (<a href="https://huggingface.co/RedRocket/Fluffyrock-Unbound/resolve/main/fluffyrock-unbound-tag-completion-v1.1.csv?download=true">Raw Counts</a>, <a href="https://huggingface.co/RedRocket/Fluffyrock-Unbound/resolve/main/fluffyrock-unbound-tag-metadata-v1.1.csv?download=true">Metadata</a>)<br>
<a href="https://huggingface.co/RedRocket/Fluffyrock-Unbound/resolve/main/boring_e621_unbound_lite.safetensors?download=true">boring_e621_unbound_lite.safetensors</a> - Boring-E621 style embedding to improve quality. Use in the negative prompt. (<a href="https://huggingface.co/RedRocket/Fluffyrock-Unbound/resolve/main/boring_e621_unbound_plus.safetensors?download=true">Stronger Plus Version</a>)
## Prompting Guide
This model is trained on e621 tags separated by commas, but without underscores. "by" has been added before artist names. Trailing commas are used.<br>
Example prompt: ```solo, anthro, female, wolf, breasts, clothed, standing, outside, full-length portrait, (detailed fur,) by artist name,```
- Some tags have been shortened to save on tokens, so be sure to use the tag completion files if you can.
- Most purple "copyright tags" have been removed from the model, so you <em>must not</em> prompt ~~``dreamworks, how to train your dragon, toothless``~~ but instead prompt just ``toothless``.
- Rare tags are excluded from the autocompletion file so as to not give false hope.
## Automatic1111 Stable Diffusion WebUI Instructions
Place the model and the corresponding .yaml file in the ``models/Stable-diffusion/`` folder. The model will not work properly without the .yaml file.
You will most likely need the CFG Rescale extension: https://github.com/Seshelle/CFG_Rescale_webui A setting of 0.7 seems to be good for almost all cases.
For ideal results go to Settings -> Sampler Parameters and choose Zero Terminal SNR as the "Noise schedule for sampling" and set sigma max to 160 if using a Karras schedule.
## ComfyUI Instructions
Place the model checkpoint in the ``models/checkpoints`` folder. The optional Boring-E621 embeddings go in ``models/embeddings``.<br>
The model is zero-terminal-SNR with V-prediction. Use the ModelSamplingDiscrete node to configure it properly.
<img style="margin-top: 0; width: 500px;" src="https://huggingface.co/RedRocket/Fluffyrock-Unbound/resolve/main/comfyui-1.webp">
If you are using a KarrasScheduler and zsnr, set sigma_max to 160. Do not use zsnr with the default KSampler karras schedule as the sigma_max will not be set correctly.
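For diffusers users, a minimal, untested sketch of the equivalent configuration (v-prediction, zero-terminal-SNR schedule, CFG rescale 0.7) is shown below; the local file paths and the embedding token are assumptions.
```python
# Illustrative sketch only: local paths and the embedding token are assumptions.
import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler

pipe = StableDiffusionPipeline.from_single_file(
    "Fluffyrock-Unbound-v1-1.safetensors", torch_dtype=torch.float16
)
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config,
    prediction_type="v_prediction",    # the model is V-prediction
    rescale_betas_zero_snr=True,       # zero-terminal-SNR noise schedule
    timestep_spacing="trailing",
)
pipe.load_textual_inversion("boring_e621_unbound_lite.safetensors", token="boring-e621")
pipe = pipe.to("cuda")

image = pipe(
    "solo, anthro, female, wolf, clothed, standing, outside, full-length portrait, by artist name,",
    negative_prompt="boring-e621",
    guidance_scale=7.0,
    guidance_rescale=0.7,              # analogous to CFG Rescale 0.7 in A1111
).images[0]
image.save("example.png")
```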
## Quality Embeddings
Experimental textual inversion embeddings in a similar vein to the [Boring Embeddings](https://huggingface.co/FoodDesert/Boring_Embeddings) are provided above.
They're intended to improve quality while not drastically altering image content. They should be used as part of a negative prompt, although using them in the positive prompt can be fun too.
- The "lite" version is 6 tokens wide and is initialized on the values of ``by <|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|>``, which is very close to a "blank slate".
- The "plus" version is trained on the same dataset, is 8 tokens wide, and is initialized on an average vector of 100 low-scoring artists.
- Currently, the "lite" version is recommended.
## Training Details
- Adaptive timestep weighting: Timesteps are weighted using a similar method to what the EDM2 paper used, according to the homoscedastic uncertainty of MSE loss on each timestep, thereby equalizing the contribution of each timestep. Loss weight was also conditioned on resolution in order to equalize the contribution of each resolution group. The overall effect of this is that the model is now very good at both high- and low-frequency details, and is not as biased towards blurry backgrounds. A small illustrative sketch of this weighting scheme appears after this list.
- EMA weights were assembled post-hoc using the method described in the EDM2 paper. The checkpoint shipped uses an EMA length sigma of 0.225.
- Cross-attention masking was applied to extra completely empty blocks of CLIP token embeddings, making the model work better with short prompts. Previously, if an image had a short caption, it would be fed in similarly to if you had added `BREAK BREAK BREAK` to the prompt in A1111, which caused the model to depend on those extra blocks and made it produce better images with 225 tokens of input. The model is no longer dependent on this.
- Optimizer replaced with schedule-free AdamW, and weight decay was turned off in bias layers, which has greatly stabilized training.
- Low resolution images were removed from higher-resolution buckets. This resulted in removal of approximately 1/3 of images from the highest resolution group. From our testing, we have observed no negative impact on high res generation quality, and this should improve fine details on high res images.
- The tokenizer used for training inputs was set up to never split tags down the middle. If a tag would go to the edge of the block, it will now be moved to the next block. This is similar to how most frontends behave.
- Random dropout is now applied to implied tags. The overall effect of this change should be that more specific tags will be more powerful and less dependent on implied tags, but more general tags will still be present and usable.
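The sketch below (illustrative only, not the actual training code) shows the general shape of the homoscedastic-uncertainty loss weighting referenced in the first bullet: a learned log-variance per timestep/resolution bucket divides the MSE loss and is regularized by itself, so each bucket's contribution is equalized as training progresses.
```python
# Illustrative only: uncertainty-based loss weighting in the spirit of EDM2's
# adaptive timestep weighting. One log-variance is learned per bucket.
import torch
import torch.nn as nn

class UncertaintyWeighting(nn.Module):
    def __init__(self, num_buckets: int):
        super().__init__()
        self.log_var = nn.Parameter(torch.zeros(num_buckets))

    def forward(self, mse_per_sample: torch.Tensor, bucket_idx: torch.Tensor) -> torch.Tensor:
        # loss = mse / exp(u) + u  equalizes each bucket's contribution over training.
        u = self.log_var[bucket_idx]
        return (mse_per_sample / u.exp() + u).mean()
```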
## Dataset Changes
- A sizeable overhaul of E621 tagging was done, removing several useless tags and renaming others. We are including new tag files that represent the current state of the dataset.
- The dataset was curated to remove harmful content that was discovered in the prior dataset, and content which was previously found to negatively impact generation quality for adjacent concepts. |
mradermacher/FinalFintetuning-XVIII-v1.1-2x8B-i1-GGUF | mradermacher | 2024-05-24T01:03:47Z | 431 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Alsebay/FinalFintetuning-XVIII-v1.1-2x8B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-22T07:53:45Z | ---
base_model: Alsebay/FinalFintetuning-XVIII-v1.1-2x8B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hfhfix -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/Alsebay/FinalFintetuning-XVIII-v1.1-2x8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/FinalFintetuning-XVIII-v1.1-2x8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/FinalFintetuning-XVIII-v1.1-2x8B-i1-GGUF/resolve/main/FinalFintetuning-XVIII-v1.1-2x8B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/FinalFintetuning-XVIII-v1.1-2x8B-i1-GGUF/resolve/main/FinalFintetuning-XVIII-v1.1-2x8B.i1-IQ1_M.gguf) | i1-IQ1_M | 3.5 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/FinalFintetuning-XVIII-v1.1-2x8B-i1-GGUF/resolve/main/FinalFintetuning-XVIII-v1.1-2x8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/FinalFintetuning-XVIII-v1.1-2x8B-i1-GGUF/resolve/main/FinalFintetuning-XVIII-v1.1-2x8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/FinalFintetuning-XVIII-v1.1-2x8B-i1-GGUF/resolve/main/FinalFintetuning-XVIII-v1.1-2x8B.i1-IQ2_S.gguf) | i1-IQ2_S | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/FinalFintetuning-XVIII-v1.1-2x8B-i1-GGUF/resolve/main/FinalFintetuning-XVIII-v1.1-2x8B.i1-IQ2_M.gguf) | i1-IQ2_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/FinalFintetuning-XVIII-v1.1-2x8B-i1-GGUF/resolve/main/FinalFintetuning-XVIII-v1.1-2x8B.i1-Q2_K.gguf) | i1-Q2_K | 5.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/FinalFintetuning-XVIII-v1.1-2x8B-i1-GGUF/resolve/main/FinalFintetuning-XVIII-v1.1-2x8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/FinalFintetuning-XVIII-v1.1-2x8B-i1-GGUF/resolve/main/FinalFintetuning-XVIII-v1.1-2x8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/FinalFintetuning-XVIII-v1.1-2x8B-i1-GGUF/resolve/main/FinalFintetuning-XVIII-v1.1-2x8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/FinalFintetuning-XVIII-v1.1-2x8B-i1-GGUF/resolve/main/FinalFintetuning-XVIII-v1.1-2x8B.i1-IQ3_S.gguf) | i1-IQ3_S | 6.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/FinalFintetuning-XVIII-v1.1-2x8B-i1-GGUF/resolve/main/FinalFintetuning-XVIII-v1.1-2x8B.i1-IQ3_M.gguf) | i1-IQ3_M | 6.3 | |
| [GGUF](https://huggingface.co/mradermacher/FinalFintetuning-XVIII-v1.1-2x8B-i1-GGUF/resolve/main/FinalFintetuning-XVIII-v1.1-2x8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/FinalFintetuning-XVIII-v1.1-2x8B-i1-GGUF/resolve/main/FinalFintetuning-XVIII-v1.1-2x8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 7.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/FinalFintetuning-XVIII-v1.1-2x8B-i1-GGUF/resolve/main/FinalFintetuning-XVIII-v1.1-2x8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/FinalFintetuning-XVIII-v1.1-2x8B-i1-GGUF/resolve/main/FinalFintetuning-XVIII-v1.1-2x8B.i1-Q4_0.gguf) | i1-Q4_0 | 8.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/FinalFintetuning-XVIII-v1.1-2x8B-i1-GGUF/resolve/main/FinalFintetuning-XVIII-v1.1-2x8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/FinalFintetuning-XVIII-v1.1-2x8B-i1-GGUF/resolve/main/FinalFintetuning-XVIII-v1.1-2x8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 8.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/FinalFintetuning-XVIII-v1.1-2x8B-i1-GGUF/resolve/main/FinalFintetuning-XVIII-v1.1-2x8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 9.6 | |
| [GGUF](https://huggingface.co/mradermacher/FinalFintetuning-XVIII-v1.1-2x8B-i1-GGUF/resolve/main/FinalFintetuning-XVIII-v1.1-2x8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 9.8 | |
| [GGUF](https://huggingface.co/mradermacher/FinalFintetuning-XVIII-v1.1-2x8B-i1-GGUF/resolve/main/FinalFintetuning-XVIII-v1.1-2x8B.i1-Q6_K.gguf) | i1-Q6_K | 11.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
MaziyarPanahi/Phi-3-mini-4k-instruct-v0.3-GGUF | MaziyarPanahi | 2024-05-23T18:41:53Z | 431 | 5 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"phi",
"phi-3",
"base_model:MaziyarPanahi/Phi-3-mini-4k-instruct-v0.3",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-23T18:25:09Z | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- phi
- phi-3
- text-generation
model_name: Phi-3-mini-4k-instruct-v0.3-GGUF
base_model: MaziyarPanahi/Phi-3-mini-4k-instruct-v0.3
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Phi-3-mini-4k-instruct-v0.3-GGUF](https://huggingface.co/MaziyarPanahi/Phi-3-mini-4k-instruct-v0.3-GGUF)
- Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
- Original model: [MaziyarPanahi/Phi-3-mini-4k-instruct-v0.3](https://huggingface.co/MaziyarPanahi/Phi-3-mini-4k-instruct-v0.3)
## Description
[MaziyarPanahi/Phi-3-mini-4k-instruct-v0.3-GGUF](https://huggingface.co/MaziyarPanahi/Phi-3-mini-4k-instruct-v0.3-GGUF) contains GGUF format model files for [MaziyarPanahi/Phi-3-mini-4k-instruct-v0.3](https://huggingface.co/MaziyarPanahi/Phi-3-mini-4k-instruct-v0.3).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
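As a concrete example of the llama-cpp-python entry above, here is a minimal loading sketch (not part of the original card); the local file name and prompt are assumptions:
```python
from llama_cpp import Llama

# Hypothetical local path to one of the quantized files from this repository
llm = Llama(model_path="Phi-3-mini-4k-instruct-v0.3.Q4_K_M.gguf", n_ctx=4096)

output = llm("Explain in one sentence what the GGUF format is.", max_tokens=64)
print(output["choices"][0]["text"])
```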
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
RichardErkhov/rinna_-_youri-7b-chat-gguf | RichardErkhov | 2024-05-24T13:40:59Z | 431 | 0 | null | [
"gguf",
"arxiv:2307.09288",
"arxiv:2404.01657",
"region:us"
]
| null | 2024-05-24T11:42:14Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
youri-7b-chat - GGUF
- Model creator: https://huggingface.co/rinna/
- Original model: https://huggingface.co/rinna/youri-7b-chat/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [youri-7b-chat.Q2_K.gguf](https://huggingface.co/RichardErkhov/rinna_-_youri-7b-chat-gguf/blob/main/youri-7b-chat.Q2_K.gguf) | Q2_K | 2.36GB |
| [youri-7b-chat.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/rinna_-_youri-7b-chat-gguf/blob/main/youri-7b-chat.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [youri-7b-chat.IQ3_S.gguf](https://huggingface.co/RichardErkhov/rinna_-_youri-7b-chat-gguf/blob/main/youri-7b-chat.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [youri-7b-chat.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/rinna_-_youri-7b-chat-gguf/blob/main/youri-7b-chat.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [youri-7b-chat.IQ3_M.gguf](https://huggingface.co/RichardErkhov/rinna_-_youri-7b-chat-gguf/blob/main/youri-7b-chat.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [youri-7b-chat.Q3_K.gguf](https://huggingface.co/RichardErkhov/rinna_-_youri-7b-chat-gguf/blob/main/youri-7b-chat.Q3_K.gguf) | Q3_K | 3.07GB |
| [youri-7b-chat.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/rinna_-_youri-7b-chat-gguf/blob/main/youri-7b-chat.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [youri-7b-chat.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/rinna_-_youri-7b-chat-gguf/blob/main/youri-7b-chat.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [youri-7b-chat.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/rinna_-_youri-7b-chat-gguf/blob/main/youri-7b-chat.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [youri-7b-chat.Q4_0.gguf](https://huggingface.co/RichardErkhov/rinna_-_youri-7b-chat-gguf/blob/main/youri-7b-chat.Q4_0.gguf) | Q4_0 | 3.56GB |
| [youri-7b-chat.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/rinna_-_youri-7b-chat-gguf/blob/main/youri-7b-chat.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [youri-7b-chat.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/rinna_-_youri-7b-chat-gguf/blob/main/youri-7b-chat.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [youri-7b-chat.Q4_K.gguf](https://huggingface.co/RichardErkhov/rinna_-_youri-7b-chat-gguf/blob/main/youri-7b-chat.Q4_K.gguf) | Q4_K | 3.8GB |
| [youri-7b-chat.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/rinna_-_youri-7b-chat-gguf/blob/main/youri-7b-chat.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [youri-7b-chat.Q4_1.gguf](https://huggingface.co/RichardErkhov/rinna_-_youri-7b-chat-gguf/blob/main/youri-7b-chat.Q4_1.gguf) | Q4_1 | 3.95GB |
| [youri-7b-chat.Q5_0.gguf](https://huggingface.co/RichardErkhov/rinna_-_youri-7b-chat-gguf/blob/main/youri-7b-chat.Q5_0.gguf) | Q5_0 | 4.33GB |
| [youri-7b-chat.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/rinna_-_youri-7b-chat-gguf/blob/main/youri-7b-chat.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [youri-7b-chat.Q5_K.gguf](https://huggingface.co/RichardErkhov/rinna_-_youri-7b-chat-gguf/blob/main/youri-7b-chat.Q5_K.gguf) | Q5_K | 4.45GB |
| [youri-7b-chat.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/rinna_-_youri-7b-chat-gguf/blob/main/youri-7b-chat.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [youri-7b-chat.Q5_1.gguf](https://huggingface.co/RichardErkhov/rinna_-_youri-7b-chat-gguf/blob/main/youri-7b-chat.Q5_1.gguf) | Q5_1 | 4.72GB |
| [youri-7b-chat.Q6_K.gguf](https://huggingface.co/RichardErkhov/rinna_-_youri-7b-chat-gguf/blob/main/youri-7b-chat.Q6_K.gguf) | Q6_K | 5.15GB |
| [youri-7b-chat.Q8_0.gguf](https://huggingface.co/RichardErkhov/rinna_-_youri-7b-chat-gguf/blob/main/youri-7b-chat.Q8_0.gguf) | Q8_0 | 6.67GB |
Original model description:
---
language:
- ja
- en
license: llama2
datasets:
- databricks/databricks-dolly-15k
- kunishou/databricks-dolly-15k-ja
- izumi-lab/llm-japanese-dataset
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
inference: false
model-index:
- name: youri-7b-chat
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 51.19
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rinna/youri-7b-chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 76.09
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rinna/youri-7b-chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 46.06
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rinna/youri-7b-chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 41.17
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rinna/youri-7b-chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 75.06
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rinna/youri-7b-chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 1.52
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rinna/youri-7b-chat
name: Open LLM Leaderboard
---
# `rinna/youri-7b-chat`

# Overview
The model is the instruction-tuned version of [`rinna/youri-7b`](https://huggingface.co/rinna/youri-7b). It adopts a chat-style input format.
* **Model architecture**
A 32-layer, 4096-hidden-size transformer-based language model. Refer to the [llama2 paper](https://arxiv.org/abs/2307.09288) for architecture details.
* **Fine-tuning**
The fine-tuning data is a subset of the following datasets.
* [Databricks Dolly data](https://huggingface.co/datasets/databricks/databricks-dolly-15k)
* [Japanese Databricks Dolly data](https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja)
* [Anthropic HH RLHF data](https://huggingface.co/datasets/Anthropic/hh-rlhf) and its Japanese translation
* [FLAN Instruction Tuning data](https://github.com/google-research/FLAN) and its Japanese translation
* [Izumi lab LLM Japanese dataset](https://github.com/masanorihirano/llm-japanese-dataset/tree/main)
* The following sections are used
* alt
* aozora-txt
* CourseraParallel
* ParaNatCom
* Tab-delimited_Bilingual_Sentence_Pairs
* tanaka-corpus
* wikinews
* wordnet
* yasashi-japanese
* The [remaining sections](https://github.com/masanorihirano/llm-japanese-dataset/tree/main/datasets-cc-by-sa) contain commonly used evaluation corpora, so they are skipped to prevent data leakage.
* **Contributors**
- [Tianyu Zhao](https://huggingface.co/tianyuz)
- [Kei Sawada](https://huggingface.co/keisawada)
---
# Benchmarking
Please refer to [rinna's LM benchmark page](https://rinnakk.github.io/research/benchmarks/lm/index.html).
---
# How to use the model
~~~~python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("rinna/youri-7b-chat")
model = AutoModelForCausalLM.from_pretrained("rinna/youri-7b-chat")
if torch.cuda.is_available():
model = model.to("cuda")
instruction = "次の日本語を英語に翻訳してください。"
input = "自然言語による指示に基づきタスクが解けるよう学習させることを Instruction tuning と呼びます。"
context = [
{
"speaker": "設定",
"text": instruction
},
{
"speaker": "ユーザー",
"text": input
}
]
prompt = [
f"{uttr['speaker']}: {uttr['text']}"
for uttr in context
]
prompt = "\n".join(prompt)
prompt = (
prompt
+ "\n"
+ "システム: "
)
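# Tokenize the prompt and sample a reply; generation stops at the EOS token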
token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
with torch.no_grad():
output_ids = model.generate(
token_ids.to(model.device),
max_new_tokens=200,
do_sample=True,
temperature=0.5,
pad_token_id=tokenizer.pad_token_id,
bos_token_id=tokenizer.bos_token_id,
eos_token_id=tokenizer.eos_token_id
)
output = tokenizer.decode(output_ids.tolist()[0])
print(output)
"""
設定: 次の日本語を英語に翻訳してください。
ユーザー: 自然言語による指示に基づきタスクが解けるよう学習させることを Instruction tuning と呼びます。
システム: Learning to solve tasks based on natural language instructions is called instruction tuning.</s>
"""
output = output[len(prompt):-len("</s>")].strip()
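# Append the previous reply and a new user turn to continue the multi-turn conversation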
input = "大規模言語モデル(だいきぼげんごモデル、英: large language model、LLM)は、多数のパラメータ(数千万から数十億)を持つ人工ニューラルネットワークで構成されるコンピュータ言語モデルで、膨大なラベルなしテキストを使用して自己教師あり学習または半教師あり学習によって訓練が行われる。"
context.extend([
{
"speaker": "システム",
"text": output
},
{
"speaker": "ユーザー",
"text": input
}
])
prompt = [
f"{uttr['speaker']}: {uttr['text']}"
for uttr in context
]
prompt = "\n".join(prompt)
prompt = (
prompt
+ "\n"
+ "システム: "
)
token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
with torch.no_grad():
output_ids = model.generate(
token_ids.to(model.device),
max_new_tokens=200,
do_sample=True,
temperature=0.5,
pad_token_id=tokenizer.pad_token_id,
bos_token_id=tokenizer.bos_token_id,
eos_token_id=tokenizer.eos_token_id
)
output = tokenizer.decode(output_ids.tolist()[0])
print(output)
"""
設定: 次の日本語を英語に翻訳してください。
ユーザー: 自然言語による指示に基づきタスクが解けるよう学習させることを Instruction tuning と呼びます。
システム: Learning to solve tasks based on natural language instructions is called instruction tuning.
ユーザー: 大規模言語モデル(だいきぼげんごモデル、英: large language model、LLM)は、多数のパラメータ(数千万から数十億)を持つ人工ニューラルネットワークで構成されるコンピュータ言語モデルで、膨大なラベルなしテキストを使用して自己教師あり学習または半教師あり学習によって訓練が行われる。
システム: Large language models (LLMs) are computer language models consisting of a deep artificial neural network with millions to billions of parameters that are trained by self-supervised learning or semi-supervised learning using vast unlabeled text corpora.</s>
"""
~~~~
---
# Tokenization
The model uses the original llama-2 tokenizer.
---
# How to cite
~~~
@misc{rinna-youri-7b-chat,
title = {rinna/youri-7b-chat},
author = {Zhao, Tianyu and Sawada, Kei},
url = {https://huggingface.co/rinna/youri-7b-chat},
}
@inproceedings{sawada2024release,
title = {Release of Pre-Trained Models for the {J}apanese Language},
author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh},
booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
month = {5},
year = {2024},
url = {https://arxiv.org/abs/2404.01657},
}
~~~
---
# License
[The llama2 license](https://ai.meta.com/llama/license/)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_rinna__youri-7b-chat)
| Metric |Value|
|---------------------------------|----:|
|Avg. |48.51|
|AI2 Reasoning Challenge (25-Shot)|51.19|
|HellaSwag (10-Shot) |76.09|
|MMLU (5-Shot) |46.06|
|TruthfulQA (0-shot) |41.17|
|Winogrande (5-shot) |75.06|
|GSM8k (5-shot) | 1.52|
|
mradermacher/Llama-3-70B-Synthia-v3.5-i1-GGUF | mradermacher | 2024-05-28T03:39:36Z | 431 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:migtissera/Llama-3-70B-Synthia-v3.5",
"license:llama3",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-27T04:36:21Z | ---
base_model: migtissera/Llama-3-70B-Synthia-v3.5
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/migtissera/Llama-3-70B-Synthia-v3.5
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
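For the split Q6_K files in the table below, here is a minimal Python sketch of the byte-wise concatenation (a shell `cat part1of2 part2of2 > file.gguf` does the same); it assumes both parts have already been downloaded into the working directory:
```python
import shutil

parts = [
    "Llama-3-70B-Synthia-v3.5.i1-Q6_K.gguf.part1of2",
    "Llama-3-70B-Synthia-v3.5.i1-Q6_K.gguf.part2of2",
]
# Stream the parts into a single GGUF file without loading them into memory
with open("Llama-3-70B-Synthia-v3.5.i1-Q6_K.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as chunk:
            shutil.copyfileobj(chunk, merged)
```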
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-i1-GGUF/resolve/main/Llama-3-70B-Synthia-v3.5.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-i1-GGUF/resolve/main/Llama-3-70B-Synthia-v3.5.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-i1-GGUF/resolve/main/Llama-3-70B-Synthia-v3.5.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-i1-GGUF/resolve/main/Llama-3-70B-Synthia-v3.5.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-i1-GGUF/resolve/main/Llama-3-70B-Synthia-v3.5.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-i1-GGUF/resolve/main/Llama-3-70B-Synthia-v3.5.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-i1-GGUF/resolve/main/Llama-3-70B-Synthia-v3.5.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-i1-GGUF/resolve/main/Llama-3-70B-Synthia-v3.5.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-i1-GGUF/resolve/main/Llama-3-70B-Synthia-v3.5.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-i1-GGUF/resolve/main/Llama-3-70B-Synthia-v3.5.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-i1-GGUF/resolve/main/Llama-3-70B-Synthia-v3.5.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-i1-GGUF/resolve/main/Llama-3-70B-Synthia-v3.5.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-i1-GGUF/resolve/main/Llama-3-70B-Synthia-v3.5.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-i1-GGUF/resolve/main/Llama-3-70B-Synthia-v3.5.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-i1-GGUF/resolve/main/Llama-3-70B-Synthia-v3.5.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-i1-GGUF/resolve/main/Llama-3-70B-Synthia-v3.5.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-i1-GGUF/resolve/main/Llama-3-70B-Synthia-v3.5.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-i1-GGUF/resolve/main/Llama-3-70B-Synthia-v3.5.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-i1-GGUF/resolve/main/Llama-3-70B-Synthia-v3.5.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-i1-GGUF/resolve/main/Llama-3-70B-Synthia-v3.5.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-i1-GGUF/resolve/main/Llama-3-70B-Synthia-v3.5.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-70B-Synthia-v3.5-i1-GGUF/resolve/main/Llama-3-70B-Synthia-v3.5.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
SulthanTriesToCode/Llama3-CoEdit-GGUF | SulthanTriesToCode | 2024-06-03T17:44:45Z | 431 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-30T13:03:00Z | Entry not found |
femboysLover/vqmodel_tf2_spy_sample | femboysLover | 2024-06-07T10:00:06Z | 431 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"region:us"
]
| null | 2024-06-06T09:47:48Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF | mradermacher | 2024-06-08T22:54:42Z | 431 | 0 | transformers | [
"transformers",
"gguf",
"mixtral",
"en",
"dataset:lemonilia/LimaRP",
"base_model:Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-08T15:19:44Z | ---
base_model: Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss
datasets:
- lemonilia/LimaRP
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mixtral
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss.Q2_K.gguf) | Q2_K | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss.IQ3_XS.gguf) | IQ3_XS | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss.IQ3_S.gguf) | IQ3_S | 20.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss.Q3_K_S.gguf) | Q3_K_S | 20.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss.IQ3_M.gguf) | IQ3_M | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss.Q3_K_M.gguf) | Q3_K_M | 22.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss.Q3_K_L.gguf) | Q3_K_L | 24.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss.IQ4_XS.gguf) | IQ4_XS | 25.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss.Q4_K_S.gguf) | Q4_K_S | 26.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss.Q4_K_M.gguf) | Q4_K_M | 28.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss.Q5_K_S.gguf) | Q5_K_S | 32.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss.Q5_K_M.gguf) | Q5_K_M | 33.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss.Q6_K.gguf) | Q6_K | 38.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss.Q8_0.gguf) | Q8_0 | 49.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Oblivionus/LLaMa-3-Stheno-v3.2-15B-Q6_K-GGUF | Oblivionus | 2024-06-10T11:41:00Z | 431 | 4 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:PJMixers/LLaMa-3-Stheno-v3.2-15B",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-10T11:40:28Z | ---
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
base_model: PJMixers/LLaMa-3-Stheno-v3.2-15B
---
# Oblivionus/LLaMa-3-Stheno-v3.2-15B-Q6_K-GGUF
This model was converted to GGUF format from [`PJMixers/LLaMa-3-Stheno-v3.2-15B`](https://huggingface.co/PJMixers/LLaMa-3-Stheno-v3.2-15B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/PJMixers/LLaMa-3-Stheno-v3.2-15B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama --hf-repo Oblivionus/LLaMa-3-Stheno-v3.2-15B-Q6_K-GGUF --hf-file llama-3-stheno-v3.2-15b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Oblivionus/LLaMa-3-Stheno-v3.2-15B-Q6_K-GGUF --hf-file llama-3-stheno-v3.2-15b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./main --hf-repo Oblivionus/LLaMa-3-Stheno-v3.2-15B-Q6_K-GGUF --hf-file llama-3-stheno-v3.2-15b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./server --hf-repo Oblivionus/LLaMa-3-Stheno-v3.2-15B-Q6_K-GGUF --hf-file llama-3-stheno-v3.2-15b-q6_k.gguf -c 2048
```
|
gglabs/TinyLM-Chat-0612-31-epoch | gglabs | 2024-06-12T05:42:41Z | 431 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:gglabs/TinyLM-Chat-0611-11-epoch",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-12T05:36:41Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: gglabs/TinyLM-Chat-0611-11-epoch
---
# Uploaded model
- **Developed by:** gglabs
- **License:** apache-2.0
- **Finetuned from model :** gglabs/TinyLM-Chat-0611-11-epoch
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
lobot/Qwen2_fact_quest_7b_q4 | lobot | 2024-06-23T16:57:46Z | 431 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-06-23T16:06:36Z | Entry not found |
jiyi/hyact-qwen-Q4_K_M-GGUF | jiyi | 2024-07-01T07:21:58Z | 431 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:jiyi/hyact-qwen",
"license:apache-2.0",
"region:us"
]
| null | 2024-07-01T06:33:31Z | ---
base_model: jiyi/hyact-qwen
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# jiyi/hyact-qwen-Q4_K_M-GGUF
This model was converted to GGUF format from [`jiyi/hyact-qwen`](https://huggingface.co/jiyi/hyact-qwen) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/jiyi/hyact-qwen) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo jiyi/hyact-qwen-Q4_K_M-GGUF --hf-file hyact-qwen-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo jiyi/hyact-qwen-Q4_K_M-GGUF --hf-file hyact-qwen-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo jiyi/hyact-qwen-Q4_K_M-GGUF --hf-file hyact-qwen-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo jiyi/hyact-qwen-Q4_K_M-GGUF --hf-file hyact-qwen-q4_k_m.gguf -c 2048
```
|
Helsinki-NLP/opus-mt-uk-ru | Helsinki-NLP | 2023-08-16T12:08:17Z | 430 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"uk",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2022-03-02T23:29:04Z | ---
language:
- uk
- ru
tags:
- translation
license: apache-2.0
---
### ukr-rus
* source group: Ukrainian
* target group: Russian
* OPUS readme: [ukr-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-rus/README.md)
* model: transformer-align
* source language(s): ukr
* target language(s): rus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-rus/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-rus/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-rus/opus-2020-06-17.eval.txt)
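A minimal usage sketch with the Hugging Face `transformers` Marian classes (not part of the original card; the example sentence is arbitrary):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-uk-ru"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a Ukrainian sentence into Russian
batch = tokenizer(["Привіт, як справи?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```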
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.rus | 69.2 | 0.826 |
### System Info:
- hf_name: ukr-rus
- source_languages: ukr
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'ru']
- src_constituents: {'ukr'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-rus/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-rus/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: rus
- short_pair: uk-ru
- chrF2_score: 0.826
- bleu: 69.2
- brevity_penalty: 0.992
- ref_len: 60387.0
- src_name: Ukrainian
- tgt_name: Russian
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: ru
- prefer_old: False
- long_pair: ukr-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
edbeeching/decision-transformer-gym-hopper-medium | edbeeching | 2022-06-29T19:15:16Z | 430 | 2 | transformers | [
"transformers",
"pytorch",
"decision_transformer",
"feature-extraction",
"deep-reinforcement-learning",
"reinforcement-learning",
"decision-transformer",
"gym-continous-control",
"arxiv:2106.01345",
"endpoints_compatible",
"region:us"
]
| reinforcement-learning | 2022-03-16T08:20:31Z | ---
tags:
- deep-reinforcement-learning
- reinforcement-learning
- decision-transformer
- gym-continous-control
pipeline_tag: reinforcement-learning
---
# Decision Transformer model trained on medium trajectories sampled from the Gym Hopper environment
This is a [Decision Transformer](https://arxiv.org/abs/2106.01345) model trained on medium trajectories sampled from the Gym Hopper environment.
The following normalization coefficients are required to use this model:
mean = [ 1.311279, -0.08469521, -0.5382719, -0.07201576, 0.04932366, 2.1066856, -0.15017354, 0.00878345, -0.2848186, -0.18540096, -0.28461286]
std = [0.17790751, 0.05444621, 0.21297139, 0.14530419, 0.6124444, 0.85174465, 1.4515252, 0.6751696, 1.536239, 1.6160746, 5.6072536 ]
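A minimal sketch (not part of the original card) of applying these coefficients with the Hugging Face `transformers` `DecisionTransformerModel`; the placeholder observation, single-step context, and target return are illustrative assumptions:
```python
import numpy as np
import torch
from transformers import DecisionTransformerModel

model = DecisionTransformerModel.from_pretrained("edbeeching/decision-transformer-gym-hopper-medium")

mean = np.array([1.311279, -0.08469521, -0.5382719, -0.07201576, 0.04932366,
                 2.1066856, -0.15017354, 0.00878345, -0.2848186, -0.18540096, -0.28461286])
std = np.array([0.17790751, 0.05444621, 0.21297139, 0.14530419, 0.6124444,
                0.85174465, 1.4515252, 0.6751696, 1.536239, 1.6160746, 5.6072536])

raw_state = np.zeros(11, dtype=np.float32)   # placeholder Hopper observation
state = (raw_state - mean) / std             # normalize with the coefficients above

states = torch.tensor(state, dtype=torch.float32).reshape(1, 1, 11)
actions = torch.zeros(1, 1, 3)               # Hopper actions are 3-dimensional
rewards = torch.zeros(1, 1, 1)
returns_to_go = torch.full((1, 1, 1), 3600.0)  # assumed target return
timesteps = torch.zeros(1, 1, dtype=torch.long)
attention_mask = torch.ones(1, 1)

with torch.no_grad():
    _, action_preds, _ = model(
        states=states, actions=actions, rewards=rewards,
        returns_to_go=returns_to_go, timesteps=timesteps,
        attention_mask=attention_mask, return_dict=False,
    )
print(action_preds[0, -1])  # predicted next action
```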
See our [Blog Post](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing), [Colab notebook](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing) or [Example Script](https://github.com/huggingface/transformers/tree/main/examples/research_projects/decision_transformer) for usage. |
nvidia/tts_hifigan | nvidia | 2022-06-29T21:31:29Z | 430 | 24 | nemo | [
"nemo",
"text-to-speech",
"speech",
"audio",
"Vocoder",
"GAN",
"pytorch",
"NeMo",
"Riva",
"en",
"dataset:ljspeech",
"arxiv:2010.05646",
"license:cc-by-4.0",
"region:us"
]
| text-to-speech | 2022-06-29T01:51:43Z | ---
language:
- en
library_name: nemo
datasets:
- ljspeech
thumbnail: null
tags:
- text-to-speech
- speech
- audio
- Vocoder
- GAN
- pytorch
- NeMo
- Riva
license: cc-by-4.0
---
# NVIDIA Hifigan Vocoder (en-US)
<style>
img {
display: inline;
}
</style>
| [](#model-architecture)
| [](#model-architecture)
| [](#datasets)
| [](#deployment-with-nvidia-riva) |
HiFiGAN [1] is a generative adversarial network (GAN) model that generates audio from mel spectrograms. The generator uses transposed convolutions to upsample mel spectrograms to audio.
## Usage
The model is available for use in the NeMo toolkit [2] and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.
```
pip install nemo_toolkit['all']
```
### Automatically instantiate the model
NOTE: In order to generate audio, you also need a spectrogram generator from NeMo. This example uses the FastPitch model.
```python
# Load FastPitch
from nemo.collections.tts.models import FastPitchModel
spec_generator = FastPitchModel.from_pretrained("nvidia/tts_en_fastpitch")
# Load vocoder
from nemo.collections.tts.models import HifiGanModel
model = HifiGanModel.from_pretrained(model_name="nvidia/tts_hifigan")
```
### Generate audio
```python
import soundfile as sf
parsed = spec_generator.parse("You can type your sentence here to get nemo to produce speech.")
spectrogram = spec_generator.generate_spectrogram(tokens=parsed)
audio = model.convert_spectrogram_to_audio(spec=spectrogram)
```
### Save the generated audio file
```python
# Save the audio to disk in a file called speech.wav
sf.write("speech.wav", audio.to('cpu').numpy(), 22050)
```
### Input
This model accepts batches of mel spectrograms.
### Output
This model outputs audio at 22050Hz.
## Model Architecture
HiFi-GAN [1] consists of one generator and two discriminators: multi-scale and multi-period discriminators. The generator and discriminators are trained adversarially, along with two additional losses for
improving training stability and model performance.
## Training
The NeMo toolkit [2] was used for training the models for several epochs. These models are trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/tts/hifigan.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/tts/conf/hifigan/hifigan.yaml).
### Datasets
This model is trained on LJSpeech sampled at 22050Hz, and has been tested on generating female English voices with an American accent.
## Performance
No performance information is available at this time.
## Limitations
If the spectrogram generator model (for example, FastPitch) is trained or fine-tuned on a new speaker's data, it is recommended to fine-tune HiFi-GAN as well. HiFi-GAN improves when trained on synthesized mel spectrograms, so the first step is to generate mel spectrograms with the fine-tuned FastPitch model and use them as input for fine-tuning HiFi-GAN.
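A minimal sketch (not part of the original card) of producing such synthesized mel spectrograms from a fine-tuned FastPitch checkpoint; the checkpoint path, example text, and output file are assumptions:
```python
import torch
from nemo.collections.tts.models import FastPitchModel

# Hypothetical path to a FastPitch checkpoint fine-tuned on the new speaker's data
spec_generator = FastPitchModel.restore_from("finetuned_fastpitch.nemo")
spec_generator.eval()

texts = ["An example sentence from the new speaker's fine-tuning data."]
mels = []
with torch.no_grad():
    for text in texts:
        tokens = spec_generator.parse(text)
        mels.append(spec_generator.generate_spectrogram(tokens=tokens).cpu())

# Save the synthesized mel spectrograms for use as HiFi-GAN fine-tuning inputs
torch.save(mels, "synthesized_mels.pt")
```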
## Deployment with NVIDIA Riva
For the best real-time accuracy, latency, and throughput, deploy the model with [NVIDIA Riva](https://developer.nvidia.com/riva), an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid, at the edge, and embedded.
Additionally, Riva provides:
* World-class out-of-the-box accuracy for the most common languages with model checkpoints trained on proprietary data with hundreds of thousands of GPU-compute hours
* Best in class accuracy with run-time word boosting (e.g., brand and product names) and customization of acoustic model, language model, and inverse text normalization
* Streaming speech recognition, Kubernetes compatible scaling, and Enterprise-grade support
Check out [Riva live demo](https://developer.nvidia.com/riva#demos).
## References
- [1] [HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis](https://arxiv.org/abs/2010.05646)
- [2] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo) |
fusing/vqgan-dummy | fusing | 2022-08-06T05:13:28Z | 430 | 0 | diffusers | [
"diffusers",
"region:us"
]
| null | 2022-06-29T10:20:24Z | Entry not found |
timm/maxvit_rmlp_tiny_rw_256.sw_in1k | timm | 2023-05-11T00:20:10Z | 430 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2204.01697",
"arxiv:2111.09883",
"license:apache-2.0",
"region:us"
]
| image-classification | 2023-01-20T21:34:39Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for maxvit_rmlp_tiny_rw_256.sw_in1k
A timm specific MaxViT image classification model w/ an MLP Log-CPB (continuous log-coordinate relative position bias, motivated by Swin-V2). Trained in `timm` on ImageNet-1k by Ross Wightman.
ImageNet-1k training done on TPUs thanks to support of the [TRC](https://sites.research.google/trc/about/) program.
### Model Variants in [maxxvit.py](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/maxxvit.py)
MaxxViT covers a number of related model architectures that share a common structure including:
- CoAtNet - Combining MBConv (depthwise-separable) convolutional blocks in early stages with self-attention transformer blocks in later stages.
- MaxViT - Uniform blocks across all stages, each containing a MBConv (depthwise-separable) convolution block followed by two self-attention blocks with different partitioning schemes (window followed by grid).
- CoAtNeXt - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in CoAtNet. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in MaxViT. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT-V2 - A MaxxViT variation that removes the window block attention leaving only ConvNeXt blocks and grid attention w/ more width to compensate.
Aside from the major variants listed above, there are more subtle changes from model to model. Any model name containing the string `rw` denotes a `timm` specific config with modelling adjustments made to favour PyTorch eager use. These were created while training initial reproductions of the models, so there are variations.
All model names containing the string `tf` exactly match TensorFlow-based models released by the original paper authors, with weights ported to PyTorch. This covers a number of MaxViT models. The official CoAtNet models were never released.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 29.1
- GMACs: 6.8
- Activations (M): 46.9
- Image size: 256 x 256
- **Papers:**
- MaxViT: Multi-Axis Vision Transformer: https://arxiv.org/abs/2204.01697
- Swin Transformer V2: Scaling Up Capacity and Resolution: https://arxiv.org/abs/2111.09883
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('maxvit_rmlp_tiny_rw_256.sw_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'maxvit_rmlp_tiny_rw_256.sw_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 128, 128])
# torch.Size([1, 64, 64, 64])
# torch.Size([1, 128, 32, 32])
# torch.Size([1, 256, 16, 16])
# torch.Size([1, 512, 8, 8])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'maxvit_rmlp_tiny_rw_256.sw_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 512, 8, 8) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
### By Top-1
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
### By Throughput (samples / sec)
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
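A rough sanity check of throughput for any single model in the table can be done with a simple timing loop; the sketch below uses arbitrary choices (model, batch size, AMP, iteration counts), so absolute numbers will differ from the table and across hardware:

```python
import time
import torch
import timm

# Rough throughput check for one of the models listed above.
# Assumptions: CUDA device available, batch size 32, native 224x224 input.
model = timm.create_model("coatnext_nano_rw_224.sw_in1k", pretrained=True).cuda().eval()
x = torch.randn(32, 3, 224, 224, device="cuda")

with torch.no_grad(), torch.cuda.amp.autocast():
    for _ in range(10):  # warmup iterations
        model(x)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(50):  # timed iterations
        model(x)
    torch.cuda.synchronize()
    elapsed = time.time() - start

print(f"~{50 * 32 / elapsed:.1f} samples / sec")
```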
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{tu2022maxvit,
title={MaxViT: Multi-Axis Vision Transformer},
author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao},
journal={ECCV},
year={2022},
}
```
```bibtex
@article{dai2021coatnet,
title={CoAtNet: Marrying Convolution and Attention for All Data Sizes},
author={Dai, Zihang and Liu, Hanxiao and Le, Quoc V and Tan, Mingxing},
journal={arXiv preprint arXiv:2106.04803},
year={2021}
}
```
|
sanchit-gandhi/ast-fleurs-langid-dropout-0.2-layers-6 | sanchit-gandhi | 2023-02-10T02:42:56Z | 430 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"dataset:fleurs",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
]
| audio-classification | 2023-02-08T20:46:46Z | ---
license: bsd-3-clause
tags:
- generated_from_trainer
datasets:
- fleurs
metrics:
- accuracy
model-index:
- name: ast-fleurs-langid-dropout-0.2-layers-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ast-fleurs-langid-dropout-0.2-layers-6
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: 7.4304
- Accuracy: 0.1802
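A minimal inference sketch with the `transformers` audio-classification pipeline (the audio file path is a placeholder):

```python
from transformers import pipeline

# Language identification on a local speech clip (path is a placeholder).
classifier = pipeline(
    "audio-classification",
    model="sanchit-gandhi/ast-fleurs-langid-dropout-0.2-layers-6",
)
predictions = classifier("example_speech.wav", top_k=5)
print(predictions)  # list of {"label": ..., "score": ...} dicts
```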
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0405 | 1.0 | 16987 | 6.6986 | 0.1722 |
| 0.0002 | 2.0 | 33974 | 7.1284 | 0.1811 |
| 0.0 | 3.0 | 50961 | 7.4304 | 0.1802 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1
- Datasets 2.9.0
- Tokenizers 0.13.2
|
grammarly/detexd-roberta-base | grammarly | 2023-07-10T17:34:23Z | 430 | 7 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-03-21T18:44:55Z | ---
license: apache-2.0
language:
- en
pipeline_tag: text-classification
---
# DeTexD-RoBERTa-base delicate text detection
This is a baseline RoBERTa-base model for the delicate text detection task.
* Paper: [DeTexD: A Benchmark Dataset for Delicate Text Detection](TODO)
* [GitHub repository](https://github.com/grammarly/detexd)
The meaning of the labels, according to the paper:
- LABEL_0 -> non-delicate (0)
- LABEL_1 -> very low risk (1)
- LABEL_2 -> low risk (2)
- LABEL_3 -> medium risk (3)
- LABEL_4 -> high risk (4)
- LABEL_5 -> very high risk (5)
## Classification example code
Here's a short usage example with the `transformers` pipeline for a binary classification task:
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="grammarly/detexd-roberta-base")
def predict_binary_score(text: str):
# get multiclass probability scores
scores = classifier(text, top_k=None)
# convert to a single score by summing the probability scores
# for the higher-index classes
return sum(score['score']
for score in scores
if score['label'] in ('LABEL_3', 'LABEL_4', 'LABEL_5'))
def predict_delicate(text: str, threshold=0.72496545):
return predict_binary_score(text) > threshold
print(predict_delicate("Time flies like an arrow. Fruit flies like a banana."))
```
Expected output:
```
False
```
## Citation Information
```
@inproceedings{chernodub-etal-2023-detexd,
title = "{D}e{T}ex{D}: A Benchmark Dataset for Delicate Text Detection",
author = "Yavnyi, Serhii and Sliusarenko, Oleksii and Razzaghi, Jade and Mo, Yichen and Hovakimyan, Knar and Chernodub, Artem",
booktitle = "The 7th Workshop on Online Abuse and Harms (WOAH)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.woah-1.2",
pages = "14--28",
abstract = "Over the past few years, much research has been conducted to identify and regulate toxic language. However, few studies have addressed a broader range of sensitive texts that are not necessarily overtly toxic. In this paper, we introduce and define a new category of sensitive text called {``}delicate text.{''} We provide the taxonomy of delicate text and present a detailed annotation scheme. We annotate DeTexD, the first benchmark dataset for delicate text detection. The significance of the difference in the definitions is highlighted by the relative performance deltas between models trained each definitions and corpora and evaluated on the other. We make publicly available the DeTexD Benchmark dataset, annotation guidelines, and baseline model for delicate text detection.",
}
``` |
shibal1/anything-v4.5-clone | shibal1 | 2023-08-06T15:13:02Z | 430 | 13 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-06-12T14:41:31Z | ---
language:
- en
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
duplicated_from: andite/anything-v4.0
---
[UPDATE (August 6, 2023)]
Hi! It may seem that the original repository I forked from [andite/anything-v4.0] is unavailable for some reason.
The original purpose of this forked repo was to train a model in the SD API, but that didn't work, and I left this repo up in hopes of trying again. However, it seems that
Google search results now point to this repository instead.
Upon further investigation, the author of the original repo, andite, removed their Hugging Face repo, and CivitAI now only has the 4.5 models up,
so I think this repo now only serves as an archive (unless asked to be taken down ofc)
Steps to access older models (e.g. 4.0)
1. Go to the 'Files and versions' tab
2. Click on the first commit 'Duplicate from andite/anything-v4.0'
3. 'Browse files'
4. ???
5. Profit
-------
Try out my new model! - [Pastel Mix || Stylized Anime Model](https://huggingface.co/andite/pastel-mix). Thanks.
I also uploaded it in CivitAI! https://civitai.com/models/5414/pastel-mix-stylized-anime-model I'd appreciate the ratings, thank you!
Yes, it's a shameless plug.
Examples:



-------
<font color="grey">
[Linaqruf](https://huggingface.co/Linaqruf) for letting me borrow his model card for reference.
# Anything V4
Welcome to Anything V4 - a latent diffusion model for weebs. The newest version of Anything. This model is intended to produce high-quality, highly detailed anime style with just a few prompts. Like other anime-style Stable Diffusion models, it also supports danbooru tags to generate images.
e.g. **_1girl, white hair, golden eyes, beautiful eyes, detail, flower meadow, cumulonimbus clouds, lighting, detailed sky, garden_**
I think the V4.5 version better though, it's in this repo. feel free 2 try it.
## Yes, this model has [AbyssOrangeMix2](https://huggingface.co/WarriorMama777/OrangeMixs) in it. coz its a very good model. check it out luls ;)
# Gradio
We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run anything-v4.0:
[](https://huggingface.co/spaces/akhaliq/anything-v4.0)
## 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or [FLAX/JAX]().
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "andite/anything-v4.0"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "hatsune_miku"
image = pipe(prompt).images[0]
image.save("./hatsune_miku.png")
```
## Examples
Below are some examples of images generated using this model:
**Anime Girl:**

```
masterpiece, best quality, 1girl, white hair, medium hair, cat ears, closed eyes, looking at viewer, :3, cute, scarf, jacket, outdoors, streets
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7
```
**Anime Boy:**

```
1boy, bishounen, casual, indoors, sitting, coffee shop, bokeh
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7
```
**Scenery:**

```
scenery, village, outdoors, sky, clouds
Steps: 50, Sampler: DPM++ 2S a Karras, CFG scale: 7
```
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
## Big Thanks to
- [Linaqruf](https://huggingface.co/Linaqruf), [NoCrypt](https://huggingface.co/NoCrypt), and Fannovel16#9022 for helping me out a lot regarding my inquiries and concerns about models and other stuff. |
artificialguybr/LineAniRedmond-LinearMangaSDXL-V2 | artificialguybr | 2023-10-07T04:14:25Z | 430 | 6 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-10-07T04:12:13Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: LineAniAF, lineart
widget:
- text: LineAniAF, lineart
---
# LineAni.Redmond V2

LineAni.Redmond V2 is here!
Test all my loras here: https://huggingface.co/spaces/artificialguybr/artificialguybr-demo-lora
I'm grateful for the GPU time from Redmond.AI that allowed me to make this LORA!
This is a Manga Lineart LORA fine-tuned on SD XL 1.0.
The LORA has a high capacity to generate Manga Lineart styles.
It's a versatile LORA.
You can use detailed, minimalist, colorful, or black and white as tags to control the results.
The tag for the model: LineAniAF
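A rough usage sketch with diffusers (the base model and trigger tag are the ones listed above; if the LoRA file in this repo has a custom name, you may need to pass it via `weight_name`):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL base model and attach this LoRA.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# If the repo's LoRA file has a custom name, add: weight_name="<file>.safetensors"
pipe.load_lora_weights("artificialguybr/LineAniRedmond-LinearMangaSDXL-V2")

# Use the trigger tag plus style tags such as "detailed" or "minimalist".
prompt = "LineAniAF, lineart, 1girl, detailed"
image = pipe(prompt).images[0]
image.save("lineani_sample.png")
```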
LORA is not perfect and sometimes needs more than one gen to create good images.
That's my first Anime LORA. Please be patient <3
This is inspired by a good LORA for SD 1.5!
I really hope you like the LORA and use it.
If you like the model and think it's worth it, you can make a donation to my Patreon or Ko-fi.
Follow me on Twitter to be the first to know about new models:
https://twitter.com/artificialguybr/ |
TheBloke/SlimOpenOrca-Mistral-7B-GGUF | TheBloke | 2023-10-18T22:59:05Z | 430 | 7 | transformers | [
"transformers",
"gguf",
"mistral",
"base_model:PulsarAI/SlimOpenOrca-Mistral-7B",
"license:cc-by-nc-4.0",
"text-generation-inference",
"region:us"
]
| null | 2023-10-18T22:53:19Z | ---
base_model: PulsarAI/SlimOpenOrca-Mistral-7B
inference: false
license: cc-by-nc-4.0
model_creator: PulsarAI
model_name: SlimOpenOrca Mistral 7B
model_type: mistral
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# SlimOpenOrca Mistral 7B - GGUF
- Model creator: [PulsarAI](https://huggingface.co/PulsarAI)
- Original model: [SlimOpenOrca Mistral 7B](https://huggingface.co/PulsarAI/SlimOpenOrca-Mistral-7B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [PulsarAI's SlimOpenOrca Mistral 7B](https://huggingface.co/PulsarAI/SlimOpenOrca-Mistral-7B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/SlimOpenOrca-Mistral-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/SlimOpenOrca-Mistral-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/SlimOpenOrca-Mistral-7B-GGUF)
* [PulsarAI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/PulsarAI/SlimOpenOrca-Mistral-7B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [slimopenorca-mistral-7b.Q2_K.gguf](https://huggingface.co/TheBloke/SlimOpenOrca-Mistral-7B-GGUF/blob/main/slimopenorca-mistral-7b.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [slimopenorca-mistral-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/SlimOpenOrca-Mistral-7B-GGUF/blob/main/slimopenorca-mistral-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [slimopenorca-mistral-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/SlimOpenOrca-Mistral-7B-GGUF/blob/main/slimopenorca-mistral-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [slimopenorca-mistral-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/SlimOpenOrca-Mistral-7B-GGUF/blob/main/slimopenorca-mistral-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [slimopenorca-mistral-7b.Q4_0.gguf](https://huggingface.co/TheBloke/SlimOpenOrca-Mistral-7B-GGUF/blob/main/slimopenorca-mistral-7b.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [slimopenorca-mistral-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/SlimOpenOrca-Mistral-7B-GGUF/blob/main/slimopenorca-mistral-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [slimopenorca-mistral-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/SlimOpenOrca-Mistral-7B-GGUF/blob/main/slimopenorca-mistral-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [slimopenorca-mistral-7b.Q5_0.gguf](https://huggingface.co/TheBloke/SlimOpenOrca-Mistral-7B-GGUF/blob/main/slimopenorca-mistral-7b.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [slimopenorca-mistral-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/SlimOpenOrca-Mistral-7B-GGUF/blob/main/slimopenorca-mistral-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [slimopenorca-mistral-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/SlimOpenOrca-Mistral-7B-GGUF/blob/main/slimopenorca-mistral-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [slimopenorca-mistral-7b.Q6_K.gguf](https://huggingface.co/TheBloke/SlimOpenOrca-Mistral-7B-GGUF/blob/main/slimopenorca-mistral-7b.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [slimopenorca-mistral-7b.Q8_0.gguf](https://huggingface.co/TheBloke/SlimOpenOrca-Mistral-7B-GGUF/blob/main/slimopenorca-mistral-7b.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/SlimOpenOrca-Mistral-7B-GGUF and below it, a specific filename to download, such as: slimopenorca-mistral-7b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/SlimOpenOrca-Mistral-7B-GGUF slimopenorca-mistral-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/SlimOpenOrca-Mistral-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/SlimOpenOrca-Mistral-7B-GGUF slimopenorca-mistral-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m slimopenorca-mistral-7b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/SlimOpenOrca-Mistral-7B-GGUF", model_file="slimopenorca-mistral-7b.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)
print(llm("AI is going to"))
```
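For llama-cpp-python, a minimal sketch along the same lines (the file path, context size, GPU layer count, and prompt are examples to adjust for your setup):

```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU (0 for CPU-only).
llm = Llama(
    model_path="./slimopenorca-mistral-7b.Q4_K_M.gguf",
    n_ctx=2048,
    n_gpu_layers=32,
)

# ChatML prompt format, as described in the prompt template section above.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite a haiku about llamas.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
output = llm(prompt, max_tokens=128, stop=["<|im_end|>"])
print(output["choices"][0]["text"])
```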
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: PulsarAI's SlimOpenOrca Mistral 7B
<a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
Merge of [Open-Orca/Mistral-7B-SlimOrca](https://huggingface.co/Open-Orca/Mistral-7B-SlimOrca) and [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) using ties merge.
### *Weights*
- [Open-Orca/Mistral-7B-SlimOrca](https://huggingface.co/Open-Orca/Mistral-7B-SlimOrca): 0.5
- [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca): 0.3
### *Density*
- [Open-Orca/Mistral-7B-SlimOrca](https://huggingface.co/Open-Orca/Mistral-7B-SlimOrca): 0.5
- [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca): 0.5
# Evaluation Results ([Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard))
| Metric | Value |
|-----------------------|-------|
| Avg. | |
| ARC (25-shot) | |
| HellaSwag (10-shot) | |
| MMLU (5-shot) | |
| TruthfulQA (0-shot) | |
<!-- original-model-card end -->
|
HuggingFaceH4/mistral-7b-grok | HuggingFaceH4 | 2024-02-01T12:45:54Z | 430 | 42 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"conversational",
"dataset:HuggingFaceH4/grok-conversation-harmless",
"dataset:HuggingFaceH4/ultrachat_200k",
"base_model:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-01-29T14:13:00Z | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- alignment-handbook
- generated_from_trainer
datasets:
- HuggingFaceH4/grok-conversation-harmless
- HuggingFaceH4/ultrachat_200k
model-index:
- name: mistral-7b-grok
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral 7B Grok
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) that has been aligned via Constitutional AI to mimic the style of [xAI's Grok assistant](https://grok.x.ai/).
It achieves the following results on the evaluation set:
- Loss: 0.9348
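A minimal chat sketch with 🤗 Transformers (assuming the tokenizer ships a chat template; dtype, device map, and generation settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HuggingFaceH4/mistral-7b-grok"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Why is the sky blue?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```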
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9326 | 1.0 | 545 | 0.9348 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Shaleen123/llama3-code-8bit | Shaleen123 | 2024-05-02T08:53:12Z | 430 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2024-04-29T23:20:15Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
adalbertojunior/Qwen1.5-32B-Dolphin-Portuguese-v0.1 | adalbertojunior | 2024-05-23T17:11:19Z | 430 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-30T00:41:59Z | ---
library_name: transformers
tags: []
model-index:
- name: Qwen1.5-32B-Dolphin-Portuguese-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: ENEM Challenge (No Images)
type: eduagarcia/enem_challenge
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 74.74
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=adalbertojunior/Qwen1.5-32B-Dolphin-Portuguese-v0.1
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BLUEX (No Images)
type: eduagarcia-temp/BLUEX_without_images
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 66.34
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=adalbertojunior/Qwen1.5-32B-Dolphin-Portuguese-v0.1
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: OAB Exams
type: eduagarcia/oab_exams
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 53.71
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=adalbertojunior/Qwen1.5-32B-Dolphin-Portuguese-v0.1
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 RTE
type: assin2
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 93.66
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=adalbertojunior/Qwen1.5-32B-Dolphin-Portuguese-v0.1
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 STS
type: eduagarcia/portuguese_benchmark
split: test
args:
num_few_shot: 15
metrics:
- type: pearson
value: 77.7
name: pearson
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=adalbertojunior/Qwen1.5-32B-Dolphin-Portuguese-v0.1
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: FaQuAD NLI
type: ruanchaves/faquad-nli
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 82.14
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=adalbertojunior/Qwen1.5-32B-Dolphin-Portuguese-v0.1
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HateBR Binary
type: ruanchaves/hatebr
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 86.71
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=adalbertojunior/Qwen1.5-32B-Dolphin-Portuguese-v0.1
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: PT Hate Speech Binary
type: hate_speech_portuguese
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 68.68
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=adalbertojunior/Qwen1.5-32B-Dolphin-Portuguese-v0.1
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: tweetSentBR
type: eduagarcia/tweetsentbr_fewshot
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 72.82
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=adalbertojunior/Qwen1.5-32B-Dolphin-Portuguese-v0.1
name: Open Portuguese LLM Leaderboard
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
# Open Portuguese LLM Leaderboard Evaluation Results
Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-temp/llm_pt_leaderboard_raw_results/tree/main/adalbertojunior/Qwen1.5-32B-Dolphin-Portuguese-v0.1) and on the [🚀 Open Portuguese LLM Leaderboard](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard)
| Metric | Value |
|--------------------------|---------|
|Average |**75.17**|
|ENEM Challenge (No Images)| 74.74|
|BLUEX (No Images) | 66.34|
|OAB Exams | 53.71|
|Assin2 RTE | 93.66|
|Assin2 STS | 77.70|
|FaQuAD NLI | 82.14|
|HateBR Binary | 86.71|
|PT Hate Speech Binary | 68.68|
|tweetSentBR | 72.82|
|
dbalasub/ensemble-five-qa | dbalasub | 2024-05-12T22:01:10Z | 430 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text2text-generation | 2024-05-11T02:15:25Z | ---
library_name: transformers
license: mit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/SanjiWatsuki_-_Silicon-Maid-7B-gguf | RichardErkhov | 2024-05-16T03:03:38Z | 430 | 1 | null | [
"gguf",
"region:us"
]
| null | 2024-05-16T01:38:39Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Silicon-Maid-7B - GGUF
- Model creator: https://huggingface.co/SanjiWatsuki/
- Original model: https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Silicon-Maid-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/SanjiWatsuki_-_Silicon-Maid-7B-gguf/blob/main/Silicon-Maid-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [Silicon-Maid-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/SanjiWatsuki_-_Silicon-Maid-7B-gguf/blob/main/Silicon-Maid-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [Silicon-Maid-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/SanjiWatsuki_-_Silicon-Maid-7B-gguf/blob/main/Silicon-Maid-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [Silicon-Maid-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/SanjiWatsuki_-_Silicon-Maid-7B-gguf/blob/main/Silicon-Maid-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Silicon-Maid-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/SanjiWatsuki_-_Silicon-Maid-7B-gguf/blob/main/Silicon-Maid-7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [Silicon-Maid-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/SanjiWatsuki_-_Silicon-Maid-7B-gguf/blob/main/Silicon-Maid-7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [Silicon-Maid-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/SanjiWatsuki_-_Silicon-Maid-7B-gguf/blob/main/Silicon-Maid-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Silicon-Maid-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/SanjiWatsuki_-_Silicon-Maid-7B-gguf/blob/main/Silicon-Maid-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [Silicon-Maid-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/SanjiWatsuki_-_Silicon-Maid-7B-gguf/blob/main/Silicon-Maid-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [Silicon-Maid-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/SanjiWatsuki_-_Silicon-Maid-7B-gguf/blob/main/Silicon-Maid-7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [Silicon-Maid-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/SanjiWatsuki_-_Silicon-Maid-7B-gguf/blob/main/Silicon-Maid-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [Silicon-Maid-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/SanjiWatsuki_-_Silicon-Maid-7B-gguf/blob/main/Silicon-Maid-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [Silicon-Maid-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/SanjiWatsuki_-_Silicon-Maid-7B-gguf/blob/main/Silicon-Maid-7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [Silicon-Maid-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/SanjiWatsuki_-_Silicon-Maid-7B-gguf/blob/main/Silicon-Maid-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [Silicon-Maid-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/SanjiWatsuki_-_Silicon-Maid-7B-gguf/blob/main/Silicon-Maid-7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Silicon-Maid-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/SanjiWatsuki_-_Silicon-Maid-7B-gguf/blob/main/Silicon-Maid-7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Silicon-Maid-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/SanjiWatsuki_-_Silicon-Maid-7B-gguf/blob/main/Silicon-Maid-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Silicon-Maid-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/SanjiWatsuki_-_Silicon-Maid-7B-gguf/blob/main/Silicon-Maid-7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [Silicon-Maid-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/SanjiWatsuki_-_Silicon-Maid-7B-gguf/blob/main/Silicon-Maid-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Silicon-Maid-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/SanjiWatsuki_-_Silicon-Maid-7B-gguf/blob/main/Silicon-Maid-7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Silicon-Maid-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/SanjiWatsuki_-_Silicon-Maid-7B-gguf/blob/main/Silicon-Maid-7B.Q6_K.gguf) | Q6_K | 5.53GB |
| [Silicon-Maid-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/SanjiWatsuki_-_Silicon-Maid-7B-gguf/blob/main/Silicon-Maid-7B.Q8_0.gguf) | Q8_0 | 7.17GB |
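To quickly sanity-check one of these quants from Python, the sketch below uses llama-cpp-python; the chosen file name, context length and sampling settings are illustrative assumptions only, and any other quant from the table can be substituted via `model_path`.

```python
from llama_cpp import Llama

# Minimal sketch: run one of the quantised files listed above with llama-cpp-python.
# The local file name assumes you downloaded Silicon-Maid-7B.Q4_K_M.gguf from this repo.
llm = Llama(model_path="Silicon-Maid-7B.Q4_K_M.gguf", n_ctx=4096)

# Alpaca-style prompt, matching the template documented in the original model card below.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nIntroduce yourself in one sentence.\n\n"
    "### Response:\n"
)
output = llm(prompt, max_tokens=128, temperature=0.7)
print(output["choices"][0]["text"])
```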
Original model description:
---
license: cc-by-4.0
language:
- en
tags:
- merge
- not-for-all-audiences
- nsfw
---
<div style="display: flex; justify-content: center; align-items: center">
<img src="https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B/resolve/main/assets/cybermaid.png">
</div>
<p align="center">
<big><b>Top 1 RP Performer on MT-bench 🤪</b></big>
</p>
<p align="center">
<strong>Next Gen Silicon-Based RP Maid</strong>
</p>
## WTF is This?
Silicon-Maid-7B is another model targeted at being both strong at RP **and** being a smart cookie that can follow character cards very well. As of right now, Silicon-Maid-7B outscores both of my previous 7B RP models in my RP benchmark and I have been impressed by this model's creativity. It is suitable for RP/ERP and general use. Quants can be found [here](https://huggingface.co/collections/SanjiWatsuki/silicon-maid-7b-658d1669292816fe4992daa4).
It's built on [xDAN-AI/xDAN-L1-Chat-RL-v1](https://huggingface.co/xDAN-AI/xDAN-L1-Chat-RL-v1), a 7B model which scores unusually high on MT-Bench, and chargoddard/loyal-piano-m7, an Alpaca format 7B model with surprisingly creative outputs. I was excited to see this model for two main reasons:
* MT-Bench normally correlates well with real world model quality
* It was an Alpaca prompt model with high benches which meant I could try swapping out my Marcoroni frankenmerge used in my previous model.
**MT-Bench Average Turn**
| model | score | size
|--------------------|-----------|--------
| gpt-4 | 8.99 | -
| *xDAN-L1-Chat-RL-v1* | 8.24^1 | 7b
| Starling-7B | 8.09 | 7b
| Claude-2 | 8.06 | -
| **Silicon-Maid** | **7.96** | **7b**
| *Loyal-Macaroni-Maid*| 7.95 | 7b
| gpt-3.5-turbo | 7.94 | 20b?
| Claude-1 | 7.90 | -
| OpenChat-3.5 | 7.81 | -
| vicuna-33b-v1.3 | 7.12 | 33b
| wizardlm-30b | 7.01 | 30b
| Llama-2-70b-chat | 6.86 | 70b
^1 xDAN's testing placed it 8.35 - this number is from my independent MT-Bench run.
<img src="https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B/resolve/main/assets/fig-silicon-loyal.png">
It's unclear to me if xDAN-L1-Chat-RL-v1 is overtly benchmaxxing but it seemed like a solid 7B from my limited testing (although nothing that screams 2nd best model behind GPT-4). Amusingly, the model lost a lot of Reasoning and Coding skills in the merger. This was a much greater MT-Bench dropoff than I expected, perhaps suggesting the Math/Reasoning ability in the original model was rather dense and susceptible to being lost to a DARE TIE merger?
Besides that, the merger is almost identical to the Loyal-Macaroni-Maid merger with a new base "smart cookie" model. If you liked any of my previous RP models, give this one a shot and let me know in the Community tab what you think!
### The Sauce
```
models: # Top-Loyal-Bruins-Maid-DARE-7B
- model: mistralai/Mistral-7B-v0.1
# no parameters necessary for base model
- model: xDAN-AI/xDAN-L1-Chat-RL-v1
parameters:
weight: 0.4
density: 0.8
- model: chargoddard/loyal-piano-m7
parameters:
weight: 0.3
density: 0.8
- model: Undi95/Toppy-M-7B
parameters:
weight: 0.2
density: 0.4
- model: NeverSleep/Noromaid-7b-v0.2
parameters:
weight: 0.2
density: 0.4
- model: athirdpath/NSFW_DPO_vmgb-7b
parameters:
weight: 0.2
density: 0.4
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
```
For more information about why I use this merger, see the [Loyal-Macaroni-Maid repo](https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B#the-sauce-all-you-need-is-dare)
### Prompt Template (Alpaca)
I found the best SillyTavern results from using the Noromaid template but please try other templates! Let me know if you find anything good.
SillyTavern config files: [Context](https://files.catbox.moe/ifmhai.json), [Instruct](https://files.catbox.moe/ttw1l9.json).
Additionally, here is my highly recommended [Text Completion preset](https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B/blob/main/Characters/MinP.json). You can tweak this by adjusting temperature up or dropping min p to boost creativity or raise min p to increase stability. You shouldn't need to touch anything else!
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
### Other Benchmarks
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---:|---:|---:|---:|---:|
| [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218) [📄](https://gist.github.com/mlabonne/36c412889c4acfad7061f269a31f9055) | 56.85 | 44.74 | 75.6 | 59.89 | 47.17 |
| [**Silicon-Maid-7B**](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B) [📄](https://gist.github.com/DHNishi/315ba1abba27af930f5f546af3515735) | **56.45**| 44.74| 74.26| 61.5| 45.32|
| [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) [📄](https://gist.github.com/mlabonne/14687f1eb3425b166db511f31f8e66f6) | 53.51 | 43.67 | 73.24 | 55.37 | 41.76 |
| [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) [📄](https://gist.github.com/mlabonne/88b21dd9698ffed75d6163ebdc2f6cc8) | 52.42 | 42.75 | 72.99 | 52.99 | 40.94 |
| [openchat/openchat_3.5](https://huggingface.co/openchat/openchat_3.5) [📄](https://gist.github.com/mlabonne/e23d7d8418619cf5b1ca10da391ac629) | 51.34 | 42.67 | 72.92 | 47.27 | 42.51 |
| [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) [📄](https://gist.github.com/mlabonne/c31cc46169ef3004c0df250017d5cac9) | 51.16 | 42.06 | 72.72 | 47.33 | 42.53 |
| [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) [📄](https://gist.github.com/mlabonne/32a36f448fd36a3100c325d51d01c0a1) | 50.99 | 37.33 | 71.83 | 55.1 | 39.7 |
|
QuantFactory/internlm2-chat-7b-sft-GGUF | QuantFactory | 2024-06-01T03:13:27Z | 430 | 0 | null | [
"gguf",
"text-generation",
"base_model:internlm/internlm2-chat-7b-sft",
"license:other",
"region:us"
]
| text-generation | 2024-05-31T06:35:09Z | ---
pipeline_tag: text-generation
license: other
base_model: internlm/internlm2-chat-7b-sft
---
# QuantFactory/internlm2-chat-7b-sft-GGUF
This is a quantized version of [internlm/internlm2-chat-7b-sft](https://huggingface.co/internlm/internlm2-chat-7b-sft) created using llama.cpp.
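Note that the Transformers and deployment snippets reproduced below target the original fp16 weights; the GGUF files in this repo are loaded through llama.cpp or one of its bindings instead. A minimal llama-cpp-python sketch is shown here; the file name is a placeholder for whichever quant you downloaded, and whether the embedded chat template is applied automatically depends on your llama-cpp-python version.

```python
from llama_cpp import Llama

# Minimal sketch: load a GGUF quant from this repo (the file name is a placeholder).
llm = Llama(model_path="internlm2-chat-7b-sft.Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Introduce deep learning to me."},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```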
# Model Description
<div align="center">
<img src="https://github.com/InternLM/InternLM/assets/22529082/b9788105-8892-4398-8b47-b513a292378e" width="200"/>
<div> </div>
<div align="center">
<b><font size="5">InternLM</font></b>
<sup>
<a href="https://internlm.intern-ai.org.cn/">
<i><font size="4">HOT</font></i>
</a>
</sup>
<div> </div>
</div>
[OpenCompass evaluation](https://github.com/internLM/OpenCompass/)
</div>
## Introduction
InternLM2 has open-sourced a 7 billion parameter base model and a chat model tailored for practical scenarios. The model has the following characteristics:
- **200K Context window**: Nearly perfect at finding needles in the haystack with 200K-long context, with leading performance on long-context tasks like LongBench and L-Eval. Try it with [LMDeploy](https://github.com/InternLM/lmdeploy) for 200K-context inference.
- **Outstanding comprehensive performance**: Significantly better than the last generation in all dimensions, especially in reasoning, math, code, chat experience, instruction following, and creative writing, with leading performance among open-source models in similar sizes. In some evaluations, InternLM2-Chat-20B may match or even surpass ChatGPT (GPT-3.5).
- **Code interpreter & Data analysis**: With code interpreter, InternLM2-Chat-20B obtains compatible performance with GPT-4 on GSM8K and MATH. InternLM2-Chat also provides data analysis capability.
- **Stronger tool use**: Based on better tool utilization-related capabilities in instruction following, tool selection and reflection, InternLM2 can support more kinds of agents and multi-step tool calling for complex tasks. See [examples](https://github.com/InternLM/lagent).
## InternLM2-Chat-7B-SFT
InternLM2-Chat-7B-SFT is the SFT version based on InternLM2-Base, and InternLM2-Chat-7B is further trained from InternLM2-Chat-7B-SFT by Online RLHF.
We release the SFT version so that the community can study the influence of RLHF deeply.
### Performance Evaluation
We conducted a comprehensive evaluation of InternLM2 using the open-source evaluation tool [OpenCompass](https://github.com/internLM/OpenCompass/). The evaluation covered five dimensions of capabilities: disciplinary competence, language competence, knowledge competence, inference competence, and comprehension competence. Here are some of the evaluation results, and you can visit the [OpenCompass leaderboard](https://opencompass.org.cn/rank) for more evaluation results.
| Dataset\Models | InternLM2-7B | InternLM2-Chat-7B | InternLM2-20B | InternLM2-Chat-20B | ChatGPT | GPT-4 |
| --- | --- | --- | --- | --- | --- | --- |
| MMLU | 65.8 | 63.7 | 67.7 | 66.5 | 69.1 | 83.0 |
| AGIEval | 49.9 | 47.2 | 53.0 | 50.3 | 39.9 | 55.1 |
| BBH | 65.0 | 61.2 | 72.1 | 68.3 | 70.1 | 86.7 |
| GSM8K | 70.8 | 70.7 | 76.1 | 79.6 | 78.2 | 91.4 |
| MATH | 20.2 | 23.0 | 25.5 | 31.9 | 28.0 | 45.8 |
| HumanEval | 43.3 | 59.8 | 48.8 | 67.1 | 73.2 | 74.4 |
| MBPP(Sanitized) | 51.8 | 51.4 | 63.0 | 65.8 | 78.9 | 79.0 |
- The evaluation results were obtained from [OpenCompass](https://github.com/internLM/OpenCompass/) (data marked with * are taken from the original papers), and the evaluation configuration can be found in the configuration files provided by [OpenCompass](https://github.com/internLM/OpenCompass/).
- The evaluation data may have numerical differences due to the version iteration of [OpenCompass](https://github.com/internLM/OpenCompass/), so please refer to the latest evaluation results of [OpenCompass](https://github.com/internLM/OpenCompass/).
**Limitations:** Although we have made efforts to ensure the safety of the model during the training process and to encourage the model to generate text that complies with ethical and legal requirements, the model may still produce unexpected outputs due to its size and probabilistic generation paradigm. For example, the generated responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. We are not responsible for any consequences resulting from the dissemination of harmful information.
### Import from Transformers
To load the InternLM 7B Chat model using Transformers, use the following code:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("internlm/internlm2-chat-7b-sft", trust_remote_code=True)
# Set `torch_dtype=torch.float16` to load model in float16, otherwise it will be loaded as float32 and cause OOM Error.
model = AutoModelForCausalLM.from_pretrained("internlm/internlm2-chat-7b-sft", torch_dtype=torch.float16, trust_remote_code=True).cuda()
model = model.eval()
response, history = model.chat(tokenizer, "hello", history=[])
print(response)
# Hello! How can I help you today?
response, history = model.chat(tokenizer, "please provide three suggestions about time management", history=history)
print(response)
```
The responses can be streamed using `stream_chat`:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "internlm/internlm2-chat-7b-sft"
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16, trust_remote_code=True).cuda()
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = model.eval()
length = 0
for response, history in model.stream_chat(tokenizer, "Hello", history=[]):
print(response[length:], flush=True, end="")
length = len(response)
```
## Deployment
### LMDeploy
LMDeploy is a toolkit for compressing, deploying, and serving LLM, developed by the MMRazor and MMDeploy teams.
```bash
pip install lmdeploy
```
You can run batch inference locally with the following python code:
```python
import lmdeploy
pipe = lmdeploy.pipeline("internlm/internlm2-chat-7b-sft")
response = pipe(["Hi, pls intro yourself", "Shanghai is"])
print(response)
```
Or you can launch an OpenAI compatible server with the following command:
```bash
lmdeploy serve api_server internlm/internlm2-chat-7b-sft --model-name internlm2-chat-7b-sft --server-port 23333
```
Then you can send a chat request to the server:
```bash
curl http://localhost:23333/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "internlm2-chat-7b-sft",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Introduce deep learning to me."}
]
}'
```
Find more details in the [LMDeploy documentation](https://lmdeploy.readthedocs.io/en/latest/)
### vLLM
Launch OpenAI compatible server with `vLLM>=0.3.2`:
```bash
pip install vllm
```
```bash
python -m vllm.entrypoints.openai.api_server --model internlm/internlm2-chat-7b-sft --served-model-name internlm2-chat-7b-sft --trust-remote-code
```
Then you can send a chat request to the server:
```bash
curl http://localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "internlm2-chat-7b-sft",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Introduce deep learning to me."}
]
}'
```
Find more details in the [vLLM documentation](https://docs.vllm.ai/en/latest/index.html)
## Open Source License
The code is licensed under Apache-2.0, while model weights are fully open for academic research and also allow **free** commercial usage. To apply for a commercial license, please fill in the [application form (English)](https://wj.qq.com/s2/12727483/5dba/)/[申请表(中文)](https://wj.qq.com/s2/12725412/f7c1/). For other questions or collaborations, please contact <[email protected]>.
|
RichardErkhov/CultriX_-_NeuralMona_MoE-4x7B-gguf | RichardErkhov | 2024-06-16T07:42:20Z | 430 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-06-15T18:17:47Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
NeuralMona_MoE-4x7B - GGUF
- Model creator: https://huggingface.co/CultriX/
- Original model: https://huggingface.co/CultriX/NeuralMona_MoE-4x7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [NeuralMona_MoE-4x7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/CultriX_-_NeuralMona_MoE-4x7B-gguf/blob/main/NeuralMona_MoE-4x7B.Q2_K.gguf) | Q2_K | 8.24GB |
| [NeuralMona_MoE-4x7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/CultriX_-_NeuralMona_MoE-4x7B-gguf/blob/main/NeuralMona_MoE-4x7B.IQ3_XS.gguf) | IQ3_XS | 9.21GB |
| [NeuralMona_MoE-4x7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/CultriX_-_NeuralMona_MoE-4x7B-gguf/blob/main/NeuralMona_MoE-4x7B.IQ3_S.gguf) | IQ3_S | 9.73GB |
| [NeuralMona_MoE-4x7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/CultriX_-_NeuralMona_MoE-4x7B-gguf/blob/main/NeuralMona_MoE-4x7B.Q3_K_S.gguf) | Q3_K_S | 9.72GB |
| [NeuralMona_MoE-4x7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/CultriX_-_NeuralMona_MoE-4x7B-gguf/blob/main/NeuralMona_MoE-4x7B.IQ3_M.gguf) | IQ3_M | 9.92GB |
| [NeuralMona_MoE-4x7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/CultriX_-_NeuralMona_MoE-4x7B-gguf/blob/main/NeuralMona_MoE-4x7B.Q3_K.gguf) | Q3_K | 10.79GB |
| [NeuralMona_MoE-4x7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/CultriX_-_NeuralMona_MoE-4x7B-gguf/blob/main/NeuralMona_MoE-4x7B.Q3_K_M.gguf) | Q3_K_M | 10.79GB |
| [NeuralMona_MoE-4x7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/CultriX_-_NeuralMona_MoE-4x7B-gguf/blob/main/NeuralMona_MoE-4x7B.Q3_K_L.gguf) | Q3_K_L | 11.68GB |
| [NeuralMona_MoE-4x7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/CultriX_-_NeuralMona_MoE-4x7B-gguf/blob/main/NeuralMona_MoE-4x7B.IQ4_XS.gguf) | IQ4_XS | 12.15GB |
| [NeuralMona_MoE-4x7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/CultriX_-_NeuralMona_MoE-4x7B-gguf/blob/main/NeuralMona_MoE-4x7B.Q4_0.gguf) | Q4_0 | 12.69GB |
| [NeuralMona_MoE-4x7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/CultriX_-_NeuralMona_MoE-4x7B-gguf/blob/main/NeuralMona_MoE-4x7B.IQ4_NL.gguf) | IQ4_NL | 12.81GB |
| [NeuralMona_MoE-4x7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/CultriX_-_NeuralMona_MoE-4x7B-gguf/blob/main/NeuralMona_MoE-4x7B.Q4_K_S.gguf) | Q4_K_S | 12.8GB |
| [NeuralMona_MoE-4x7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/CultriX_-_NeuralMona_MoE-4x7B-gguf/blob/main/NeuralMona_MoE-4x7B.Q4_K.gguf) | Q4_K | 13.61GB |
| [NeuralMona_MoE-4x7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/CultriX_-_NeuralMona_MoE-4x7B-gguf/blob/main/NeuralMona_MoE-4x7B.Q4_K_M.gguf) | Q4_K_M | 13.61GB |
| [NeuralMona_MoE-4x7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/CultriX_-_NeuralMona_MoE-4x7B-gguf/blob/main/NeuralMona_MoE-4x7B.Q4_1.gguf) | Q4_1 | 14.09GB |
| [NeuralMona_MoE-4x7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/CultriX_-_NeuralMona_MoE-4x7B-gguf/blob/main/NeuralMona_MoE-4x7B.Q5_0.gguf) | Q5_0 | 15.48GB |
| [NeuralMona_MoE-4x7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/CultriX_-_NeuralMona_MoE-4x7B-gguf/blob/main/NeuralMona_MoE-4x7B.Q5_K_S.gguf) | Q5_K_S | 15.48GB |
| [NeuralMona_MoE-4x7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/CultriX_-_NeuralMona_MoE-4x7B-gguf/blob/main/NeuralMona_MoE-4x7B.Q5_K.gguf) | Q5_K | 15.96GB |
| [NeuralMona_MoE-4x7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/CultriX_-_NeuralMona_MoE-4x7B-gguf/blob/main/NeuralMona_MoE-4x7B.Q5_K_M.gguf) | Q5_K_M | 15.96GB |
| [NeuralMona_MoE-4x7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/CultriX_-_NeuralMona_MoE-4x7B-gguf/blob/main/NeuralMona_MoE-4x7B.Q5_1.gguf) | Q5_1 | 16.88GB |
| [NeuralMona_MoE-4x7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/CultriX_-_NeuralMona_MoE-4x7B-gguf/blob/main/NeuralMona_MoE-4x7B.Q6_K.gguf) | Q6_K | 18.46GB |
| [NeuralMona_MoE-4x7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/CultriX_-_NeuralMona_MoE-4x7B-gguf/blob/main/NeuralMona_MoE-4x7B.Q8_0.gguf) | Q8_0 | 23.9GB |
Original model description:
---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- CultriX/MonaTrix-v4
- mlabonne/OmniTruthyBeagle-7B-v0
- CultriX/MoNeuTrix-7B-v1
- paulml/OmniBeagleSquaredMBX-v3-7B
base_model:
- CultriX/MonaTrix-v4
- mlabonne/OmniTruthyBeagle-7B-v0
- CultriX/MoNeuTrix-7B-v1
- paulml/OmniBeagleSquaredMBX-v3-7B
---
# NeuralMona_MoE-4x7B
NeuralMona_MoE-4x7B is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [CultriX/MonaTrix-v4](https://huggingface.co/CultriX/MonaTrix-v4)
* [mlabonne/OmniTruthyBeagle-7B-v0](https://huggingface.co/mlabonne/OmniTruthyBeagle-7B-v0)
* [CultriX/MoNeuTrix-7B-v1](https://huggingface.co/CultriX/MoNeuTrix-7B-v1)
* [paulml/OmniBeagleSquaredMBX-v3-7B](https://huggingface.co/paulml/OmniBeagleSquaredMBX-v3-7B)
## 🧩 Configuration
```yaml
base_model: CultriX/MonaTrix-v4
dtype: bfloat16
experts:
- source_model: "CultriX/MonaTrix-v4" # Historical Analysis, Geopolitics, and Economic Evaluation
positive_prompts:
- "Historic analysis"
- "Geopolitical impacts"
- "Evaluate significance"
- "Predict impact"
- "Assess consequences"
- "Discuss implications"
- "Explain geopolitical"
- "Analyze historical"
- "Examine economic"
- "Evaluate role"
- "Analyze importance"
- "Discuss cultural impact"
- "Discuss historical"
negative_prompts:
- "Compose"
- "Translate"
- "Debate"
- "Solve math"
- "Analyze data"
- "Forecast"
- "Predict"
- "Process"
- "Coding"
- "Programming"
- "Code"
- "Datascience"
- "Cryptography"
- source_model: "mlabonne/OmniTruthyBeagle-7B-v0" # Multilingual Communication and Cultural Insights
positive_prompts:
- "Describe cultural"
- "Explain in language"
- "Translate"
- "Compare cultural differences"
- "Discuss cultural impact"
- "Narrate in language"
- "Explain impact on culture"
- "Discuss national identity"
- "Describe cultural significance"
- "Narrate cultural"
- "Discuss folklore"
negative_prompts:
- "Compose"
- "Debate"
- "Solve math"
- "Analyze data"
- "Forecast"
- "Predict"
- "Coding"
- "Programming"
- "Code"
- "Datascience"
- "Cryptography"
- source_model: "CultriX/MoNeuTrix-7B-v1" # Problem Solving, Innovation, and Creative Thinking
positive_prompts:
- "Devise strategy"
- "Imagine society"
- "Invent device"
- "Design concept"
- "Propose theory"
- "Reason math"
- "Develop strategy"
- "Invent"
negative_prompts:
- "Translate"
- "Discuss"
- "Debate"
- "Summarize"
- "Explain"
- "Detail"
- "Compose"
- source_model: "paulml/OmniBeagleSquaredMBX-v3-7B" # Explaining Scientific Phenomena and Principles
positive_prompts:
- "Explain scientific"
- "Discuss impact"
- "Analyze potential"
- "Elucidate significance"
- "Summarize findings"
- "Detail explanation"
negative_prompts:
- "Cultural significance"
- "Engage in creative writing"
- "Perform subjective judgment tasks"
- "Discuss cultural traditions"
- "Write review"
- "Design"
- "Create"
- "Narrate"
- "Discuss"
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "CultriX/NeuralMona_MoE-4x7B"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
Ali-C137/Q15H10M-0000 | Ali-C137 | 2024-06-21T12:40:53Z | 430 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-18T11:08:37Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
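No official snippet has been published yet. Since the repository metadata tags this as a Qwen2 text-generation checkpoint, the following is only a generic loading sketch; the chat-template usage and generation settings are assumptions, not documented behaviour of this model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Generic sketch only: assumes standard Qwen2-style causal-LM weights in this repo.
model_id = "Ali-C137/Q15H10M-0000"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Say hello in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```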
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gagan3012/wav2vec2-xlsr-punjabi | gagan3012 | 2021-07-06T04:21:10Z | 429 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language: pa-IN
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: wav2vec2-xlsr-punjabi
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice pa
type: common_voice
args: pa-IN
metrics:
- name: Test WER
type: wer
value: 58.06
---
# Wav2Vec2-Large-XLSR-53-Punjabi
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Punjabi using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "pa-IN", split="test")
processor = Wav2Vec2Processor.from_pretrained("gagan3012/wav2vec2-xlsr-punjabi")
model = Wav2Vec2ForCTC.from_pretrained("gagan3012/wav2vec2-xlsr-punjabi")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
#### Results:
Prediction: ['ਹਵਾ ਲਾਤ ਵਿੱਚ ਪੰਦ ਛੇ ਇਖਲਾਟਕੀ ਮੁਜਰਮ ਸਨ', 'ਮੈ ਇ ਹਾ ਪੈਸੇ ਲੇਹੜ ਨਹੀਂ ਸੀ ਚੌਨਾ']
Reference: ['ਹਵਾਲਾਤ ਵਿੱਚ ਪੰਜ ਛੇ ਇਖ਼ਲਾਕੀ ਮੁਜਰਮ ਸਨ', 'ਮੈਂ ਇਹ ਪੈਸੇ ਲੈਣੇ ਨਹੀਂ ਸੀ ਚਾਹੁੰਦਾ']
## Evaluation
The model can be evaluated as follows on the Punjabi test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "pa-IN", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("gagan3012/wav2vec2-xlsr-punjabi")
model = Wav2Vec2ForCTC.from_pretrained("gagan3012/wav2vec2-xlsr-punjabi")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'  # adapt this list to include all special characters you removed from the data
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 58.05 %
## Training
The script used for training can be found [here](https://colab.research.google.com/drive/1A7Y20c1QkSHfdOmLXPMiOEpwlTjDZ7m5?usp=sharing) |
DaniloTrotta/TestDeleV2 | DaniloTrotta | 2023-05-23T11:29:42Z | 429 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-05-22T14:51:01Z | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: true
extra_gated_prompt: >-
This model is open access and available to all, with a CreativeML OpenRAIL-M
license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or
harmful outputs or content
2. CompVis claims no rights on the outputs you generate, you are free to use
them and are accountable for their use which must not go against the
provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as
a service. If you do, please be aware you have to include the same use
restrictions as the ones in the license and share a copy of the CreativeML
OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license carefully here:
https://huggingface.co/spaces/CompVis/stable-diffusion-license
extra_gated_heading: Please read the LICENSE to access this model
pipeline_tag: text-to-image
---
# DELIBERATE
#### All in One / Any Case Version
This model gives you the ability to create anything you want.</br>
The more prompting knowledge you have, the better the results you'll get.</br>
It basically means that you'll never get a perfect result with just a few words.</br>
You have to fill out your prompt in extreme detail.
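Since the weights are packaged in the diffusers format, a minimal generation sketch is shown below; the prompt and sampler settings are placeholders rather than the author's recommended settings.

```python
import torch
from diffusers import StableDiffusionPipeline

# Minimal sketch, assuming this repo loads as a standard Stable Diffusion pipeline.
pipe = StableDiffusionPipeline.from_pretrained(
    "DaniloTrotta/TestDeleV2", torch_dtype=torch.float16
).to("cuda")

# Placeholder prompt: the model rewards long, highly detailed prompt lines.
prompt = (
    "ultra detailed portrait of a woman standing in a rain-soaked neon city at night, "
    "cinematic lighting, 85mm lens, shallow depth of field, film grain, intricate details"
)
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("deliberate_sample.png")
```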

#### Who will find this model perfect:
- NSFW masters
- Meticulous anatomy artists
- Creative prompters
- Art designers
Dive into the world of perfect creations with [my prompts](https://civitai.com/models/4823/deliberate "my prompts").</br>
Your research will be appreciated, so feel free to show everyone what you can get with this model
---
license: bigscience-openrail-m
--- |
stablediffusionapi/hentaiflow-eevee | stablediffusionapi | 2023-07-24T13:23:23Z | 429 | 1 | diffusers | [
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-07-24T11:43:26Z | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# Hentaiflow Eevee API Inference

## Get API Key
Get API key from [Stable Diffusion API](http://stablediffusionapi.com/), No Payment needed.
Replace Key in below code, change **model_id** to "hentaiflow-eevee"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Try model for free: [Generate Images](https://stablediffusionapi.com/models/hentaiflow-eevee)
Model link: [View model](https://stablediffusionapi.com/models/hentaiflow-eevee)
Credits: [View credits](https://civitai.com/?query=Hentaiflow%20Eevee)
View all models: [View Models](https://stablediffusionapi.com/models)
import requests
import json
url = "https://stablediffusionapi.com/api/v4/dreambooth"
payload = json.dumps({
"key": "your_api_key",
"model_id": "hentaiflow-eevee",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
> Use this coupon code to get 25% off **DMGG0RBN** |
TheBloke/EverythingLM-13B-V3-16K-GGUF | TheBloke | 2023-09-27T12:54:08Z | 429 | 4 | transformers | [
"transformers",
"gguf",
"llama",
"dataset:totally-not-an-llm/EverythingLM-data-V3",
"base_model:totally-not-an-llm/EverythingLM-13b-V3-16k",
"license:llama2",
"text-generation-inference",
"region:us"
]
| null | 2023-09-23T20:28:50Z | ---
license: llama2
datasets:
- totally-not-an-llm/EverythingLM-data-V3
model_name: EverythingLM 13B V3 16K
base_model: totally-not-an-llm/EverythingLM-13b-V3-16k
inference: false
model_creator: Kai Howard
model_type: llama
prompt_template: 'USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# EverythingLM 13B V3 16K - GGUF
- Model creator: [Kai Howard](https://huggingface.co/totally-not-an-llm)
- Original model: [EverythingLM 13B V3 16K](https://huggingface.co/totally-not-an-llm/EverythingLM-13b-V3-16k)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Kai Howard's EverythingLM 13B V3 16K](https://huggingface.co/totally-not-an-llm/EverythingLM-13b-V3-16k).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/EverythingLM-13B-V3-16K-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/EverythingLM-13B-V3-16K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/EverythingLM-13B-V3-16K-GGUF)
* [Kai Howard's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/totally-not-an-llm/EverythingLM-13b-V3-16k)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: User-Assistant
```
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [everythinglm-13b-v3-16k.Q2_K.gguf](https://huggingface.co/TheBloke/EverythingLM-13B-V3-16K-GGUF/blob/main/everythinglm-13b-v3-16k.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [everythinglm-13b-v3-16k.Q3_K_S.gguf](https://huggingface.co/TheBloke/EverythingLM-13B-V3-16K-GGUF/blob/main/everythinglm-13b-v3-16k.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [everythinglm-13b-v3-16k.Q3_K_M.gguf](https://huggingface.co/TheBloke/EverythingLM-13B-V3-16K-GGUF/blob/main/everythinglm-13b-v3-16k.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [everythinglm-13b-v3-16k.Q3_K_L.gguf](https://huggingface.co/TheBloke/EverythingLM-13B-V3-16K-GGUF/blob/main/everythinglm-13b-v3-16k.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [everythinglm-13b-v3-16k.Q4_0.gguf](https://huggingface.co/TheBloke/EverythingLM-13B-V3-16K-GGUF/blob/main/everythinglm-13b-v3-16k.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [everythinglm-13b-v3-16k.Q4_K_S.gguf](https://huggingface.co/TheBloke/EverythingLM-13B-V3-16K-GGUF/blob/main/everythinglm-13b-v3-16k.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [everythinglm-13b-v3-16k.Q4_K_M.gguf](https://huggingface.co/TheBloke/EverythingLM-13B-V3-16K-GGUF/blob/main/everythinglm-13b-v3-16k.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [everythinglm-13b-v3-16k.Q5_0.gguf](https://huggingface.co/TheBloke/EverythingLM-13B-V3-16K-GGUF/blob/main/everythinglm-13b-v3-16k.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [everythinglm-13b-v3-16k.Q5_K_S.gguf](https://huggingface.co/TheBloke/EverythingLM-13B-V3-16K-GGUF/blob/main/everythinglm-13b-v3-16k.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [everythinglm-13b-v3-16k.Q5_K_M.gguf](https://huggingface.co/TheBloke/EverythingLM-13B-V3-16K-GGUF/blob/main/everythinglm-13b-v3-16k.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [everythinglm-13b-v3-16k.Q6_K.gguf](https://huggingface.co/TheBloke/EverythingLM-13B-V3-16K-GGUF/blob/main/everythinglm-13b-v3-16k.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [everythinglm-13b-v3-16k.Q8_0.gguf](https://huggingface.co/TheBloke/EverythingLM-13B-V3-16K-GGUF/blob/main/everythinglm-13b-v3-16k.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/EverythingLM-13B-V3-16K-GGUF and below it, a specific filename to download, such as: everythinglm-13b-v3-16k.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/EverythingLM-13B-V3-16K-GGUF everythinglm-13b-v3-16k.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/EverythingLM-13B-V3-16K-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/EverythingLM-13B-V3-16K-GGUF everythinglm-13b-v3-16k.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m everythinglm-13b-v3-16k.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "USER: {prompt}\nASSISTANT:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/EverythingLM-13B-V3-16K-GGUF", model_file="everythinglm-13b-v3-16k.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Kai Howard's EverythingLM 13B V3 16K
# EverythingLM-13b-V3-16k
Introducing EverythingLM, a llama-2 based, general-purpose 13b model with 16k context thanks to LlongMa. The model is trained on the EverythingLM-V3 dataset, more info can be found on the dataset page.
The model is completely uncensored.
Despite being "uncensored", the base model may still resist certain requests; you may need to prompt-engineer around this.
### Notable features:
- Automatically triggered CoT reasoning.
- Verbose and detailed replies.
- Creative stories.
- Good prompt understanding.
### Differences from V2:
- Much more uncensored.
- Actual roleplaying ability now!
- General all around improvements thanks to the new dataset. Check out the dataset for more info.
### Prompt format (Alpaca-chat):
```
USER: <prompt>
ASSISTANT:
```
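In Python, a prompt in this format can be assembled with a small helper like the following (illustrative sketch; pair it with any loader):

```python
# A minimal sketch of building prompts in the Alpaca-chat format shown above.
def format_prompt(user_message: str) -> str:
    return f"USER: {user_message}\nASSISTANT:"

print(format_prompt("Summarize the plot of Moby-Dick in two sentences."))
```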
### Future plans:
- Highest priority right now is V3.1 with more optimized training and iterative dataset improvements based on testing.
### Note:
Through testing V2, I realized some alignment data had leaked in, causing the model to be less cooperative than intended. This model should do much better due to stricter filtering.
<!-- original-model-card end -->
|
Yntec/KomowataHaruka | Yntec | 2023-11-24T07:46:26Z | 429 | 3 | diffusers | [
"diffusers",
"safetensors",
"Anime",
"Chibi",
"Cute",
"Cartoon",
"CyberAIchemist",
"s6yx",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-11-24T07:21:20Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Anime
- Chibi
- Cute
- Cartoon
- CyberAIchemist
- s6yx
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
---
# Komowata Haruka Chibi Style
ReVAnimated 1.0 with the Komowata Haruka Chibi Style 1.0 LoRA baked in.
Samples and prompt:


Full picture of a pretty cute girl making food, detailed face, sitting on food, intricate, aesthetic, unreal engine, neat hair, highly detailed, smooth, light novel cover art by artgerm, greg rutkowski, ilya kuvshinov, rossdraws and alphonse mucha. sharp focus, chiaroscuro, manga illustration
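A minimal diffusers sketch for generating with this checkpoint (settings are illustrative; the repo tags indicate it loads as a standard StableDiffusionPipeline):

```python
# A minimal sketch, assuming the checkpoint loads with the standard
# StableDiffusionPipeline (as the repo tags indicate); settings are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/KomowataHaruka", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "Full picture of a pretty cute girl making food, chibi, detailed face",
    num_inference_steps=30,
).images[0]
image.save("chibi_haruka.png")
```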
Original pages:
https://civitai.com/models/9922?modelVersionId=11791 (Komowata Haruka Chibi Style 1.0)
https://civitai.com/models/7371?modelVersionId=8665 (ReVAnimated 1.0) |
pyannote/ci-segmentation | pyannote | 2024-01-25T13:24:40Z | 429 | 0 | pyannote-audio | [
"pyannote-audio",
"pytorch",
"pyannote",
"pyannote-audio-model",
"license:mit",
"region:us"
]
| null | 2024-01-25T13:11:02Z | ---
tags:
- pyannote
- pyannote-audio
- pyannote-audio-model
license: mit
inference: false
---
Dummy segmentation model used for continuous integration and unit tests.
```bash
pyannote-audio-train \
+registry="[REDACTED]/pyannote-audio/tutorials/AMI-diarization-setup/pyannote/database.yml" \
protocol=AMI.SpeakerDiarization.only_words \
model=DebugSegmentation \
task=SpeakerDiarization \
trainer.max_epochs=1
```
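Loading the resulting checkpoint follows the usual pyannote.audio pattern (a minimal sketch; this dummy model is only meant for CI, so expect meaningless outputs):

```python
# A minimal sketch, assuming pyannote.audio 3.x is installed.
from pyannote.audio import Model

model = Model.from_pretrained("pyannote/ci-segmentation")
print(model)
```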
|
hahahafofo/Qwen-1_8B-Stable-Diffusion-Prompt-GGUF | hahahafofo | 2024-01-27T14:31:58Z | 429 | 3 | null | [
"gguf",
"license:apache-2.0",
"region:us"
]
| null | 2024-01-27T14:28:31Z | ---
license: apache-2.0
---
|
Intel/Mistral-7B-v0.1-int4-inc | Intel | 2024-05-31T05:52:51Z | 429 | 4 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"dataset:NeelNanda/pile-10k",
"arxiv:2309.05516",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
]
| text-generation | 2024-02-21T07:45:43Z | ---
license: apache-2.0
datasets:
- NeelNanda/pile-10k
---
## Model Details: Mistral-7B-v0.1-int4-inc
This model is an int4 model with group_size 128 of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) generated by [intel/auto-round](https://github.com/intel/auto-round).
## How To Use
### Reproduce the model
Here is the sample command to reproduce the model
```bash
git clone https://github.com/intel/auto-round
cd auto-round/examples/language-modeling
pip install -r requirements.txt
python3 main.py \
--model_name mistralai/Mistral-7B-v0.1 \
--device 0 \
--group_size 128 \
--bits 4 \
--iters 1000 \
--deployment_device 'gpu' \
--output_dir "./tmp_autoround"
```
### Use the model
### INT4 Inference with ITREX on CPU
Install the latest [intel-extension-for-transformers](https://github.com/intel/intel-extension-for-transformers)
```python
from intel_extension_for_transformers.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer
quantized_model_dir = "Intel/Mistral-7B-v0.1-int4-inc"
model = AutoModelForCausalLM.from_pretrained(quantized_model_dir,
device_map="auto",
trust_remote_code=False,
use_neural_speed=False,
)
tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir, use_fast=True)
print(tokenizer.decode(model.generate(**tokenizer("There is a girl who likes adventure,", return_tensors="pt").to(model.device),max_new_tokens=50)[0]))
"""
<s> There is a girl who likes adventure, and she is a little bit crazy. She is a little bit crazy because she likes to do things that are dangerous. She likes to climb mountains, and she likes to go on long hikes. She also likes to go on long bike rides
"""
```
### INT4 Inference with AutoGPTQ
Install [AutoGPTQ](https://github.com/AutoGPTQ/AutoGPTQ) from source first
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
quantized_model_dir = "Intel/Mistral-7B-v0.1-int4-inc"
model = AutoModelForCausalLM.from_pretrained(quantized_model_dir,
device_map="auto",
trust_remote_code=False,
)
tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir, use_fast=True)
print(tokenizer.decode(model.generate(**tokenizer("There is a girl who likes adventure,", return_tensors="pt").to(model.device),max_new_tokens=50)[0]))
```
### Evaluate the model
Install [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness.git) from source, we used the git id f3b7917091afba325af3980a35d8a6dcba03dc3f
```bash
lm_eval --model hf --model_args pretrained="Intel/Mistral-7B-v0.1-int4-inc",autogptq=True,gptq_use_triton=True --device cuda:0 --tasks lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,rte,arc_easy,arc_challenge,mmlu --batch_size 128
```
| Metric | BF16 | INT4 |
| -------------- | ------ | ------ |
| Avg. | 0.6306 | 0.6308 |
| mmlu | 0.5961 | 0.5880 |
| lambada_openai | 0.7561 | 0.7551 |
| hellaswag | 0.6128 | 0.6079 |
| winogrande | 0.7443 | 0.7451 |
| piqa | 0.8079 | 0.8014 |
| truthfulqa_mc1 | 0.2803 | 0.2889 |
| openbookqa | 0.3280 | 0.3300 |
| boolq | 0.8373 | 0.8278 |
| rte | 0.6643 | 0.6968 |
| arc_easy | 0.8085 | 0.8060 |
| arc_challenge | 0.5009 | 0.4915 |
## Ethical Considerations and Limitations
The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
Therefore, before deploying any applications of the model, developers should perform safety testing.
## Caveats and Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Here are a couple of useful links to learn more about Intel's AI software:
* Intel Neural Compressor [link](https://github.com/intel/neural-compressor)
* Intel Extension for Transformers [link](https://github.com/intel/intel-extension-for-transformers)
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Cite
```bibtex
@article{cheng2023optimize,
  title={Optimize weight rounding via signed gradient descent for the quantization of llms},
  author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao},
  journal={arXiv preprint arXiv:2309.05516},
  year={2023}
}
```
[arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round) |
pyp1/VoiceCraft_giga330M | pyp1 | 2024-04-16T16:07:59Z | 429 | 0 | voicecraft | [
"voicecraft",
"safetensors",
"text-to-speech",
"pytorch_model_hub_mixin",
"model_hub_mixin",
"region:us"
]
| text-to-speech | 2024-04-08T21:14:32Z | ---
library_name: voicecraft
tags:
- text-to-speech
- pytorch_model_hub_mixin
- model_hub_mixin
repo_url: https://github.com/jasonppy/VoiceCraft
---
This model has been pushed to the Hub using VoiceCraft:
- Library: https://github.com/jasonppy/VoiceCraft
- Docs: [More Information Needed] |
Jaward/phi-3-mini-4k-instruct.Q4_0.gguf | Jaward | 2024-04-27T17:21:53Z | 429 | 2 | transformers | [
"transformers",
"gguf",
"phi3",
"text-generation",
"custom_code",
"arxiv:2404.14219",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-04-24T01:51:31Z | ---
license: mit
---
4-bit quantized GGUF weights of Phi-3-mini-4k-instruct, compatible with MLX.
Official model: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf (not supported by mlx)
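A minimal llama-cpp-python sketch for running this file (the exact GGUF file name is an assumption; check the repo's file list):

```python
# A minimal sketch with llama-cpp-python; the filename glob is an assumption,
# so check the repo's file list for the exact name.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Jaward/phi-3-mini-4k-instruct.Q4_0.gguf",
    filename="*Q4_0.gguf",  # glob pattern; replace with the exact file name
    n_ctx=4096,
)
out = llm("<|user|>\nWhat is 4-bit quantization?<|end|>\n<|assistant|>", max_tokens=128)
print(out["choices"][0]["text"])
```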
Please note that the official phi-3-mini-4k-instruct.gguf model uses the Llama-2 architecture, as stated in the paper (https://huggingface.co/papers/2404.14219). |
theprint/tinyllama_alpaca_cthulhu_small | theprint | 2024-05-04T22:39:52Z | 429 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"text-generation",
"conversational",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-04-24T19:54:18Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
- trl
- sft
base_model: unsloth/tinyllama-bnb-4bit
pipeline_tag: text-generation
---
**IA IA! A tiny Cthulhu cultist!**
This TinyLlama variant is fine tuned on Cthulhu Mythos, so you can have your very own cultist AI friend.
**5/3/24 Update:** The model was given a bit more training and several gguf files were uploaded.
This model was mainly created to test a cthulhu-fied data set. This tiny model is a proof of concept, before a larger model is trained on the full data set. At that point, I will also make the data set public.
The Cthulhu Mythos data set is based on alpaca-cleaned, except that all the replies have been rewritten to sound like they were given by a cultist of Cthulhu. Only a subset of the data (10k entries) was used to train this first iteration of the model.
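A minimal transformers sketch for trying the model (the Alpaca-style prompt layout is an assumption based on the alpaca-derived data set described above):

```python
# A minimal sketch; the Alpaca-style prompt is an assumption based on the
# alpaca-cleaned-derived data set described above.
from transformers import pipeline

pipe = pipeline("text-generation", model="theprint/tinyllama_alpaca_cthulhu_small")
prompt = "### Instruction:\nExplain what a neural network is.\n\n### Response:\n"
print(pipe(prompt, max_new_tokens=128)[0]["generated_text"])
```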
# Uploaded model
- **Developed by:** theprint
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
kuleshov-group/PlantCaduceus_l20 | kuleshov-group | 2024-06-18T01:11:59Z | 429 | 0 | transformers | [
"transformers",
"pytorch",
"caduceus",
"feature-extraction",
"custom_code",
"arxiv:2312.00752",
"license:apache-2.0",
"region:us"
]
| feature-extraction | 2024-05-19T16:25:03Z | ---
license: apache-2.0
---
## Model Overview
PlantCaduceus is a DNA language model pre-trained on 16 Angiosperm genomes. Utilizing the [Caduceus](https://caduceus-dna.github.io/) and [Mamba](https://arxiv.org/abs/2312.00752) architectures and a masked language modeling objective, PlantCaduceus is designed to learn evolutionary conservation and DNA sequence grammar from 16 species spanning a history of 160 million years. We have trained a series of PlantCaduceus models with varying parameter sizes:
- **[PlantCaduceus_l20](https://huggingface.co/kuleshov-group/PlantCaduceus_l20)**: 20 layers, 384 hidden size, 20M parameters
- **[PlantCaduceus_l24](https://huggingface.co/kuleshov-group/PlantCaduceus_l24)**: 24 layers, 512 hidden size, 40M parameters
- **[PlantCaduceus_l28](https://huggingface.co/kuleshov-group/PlantCaduceus_l28)**: 28 layers, 768 hidden size, 112M parameters
- **[PlantCaduceus_l32](https://huggingface.co/kuleshov-group/PlantCaduceus_l32)**: 32 layers, 1024 hidden size, 225M parameters
## How to use
```python
from transformers import AutoModel, AutoModelForMaskedLM, AutoTokenizer
import torch
model_path = 'kuleshov-group/PlantCaduceus_l20'
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model = AutoModelForMaskedLM.from_pretrained(model_path, trust_remote_code=True, device_map=device)
model.eval()
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
sequence = "ATGCGTACGATCGTAG"
encoding = tokenizer.encode_plus(
sequence,
return_tensors="pt",
return_attention_mask=False,
return_token_type_ids=False
)
input_ids = encoding["input_ids"].to(device)
with torch.inference_mode():
outputs = model(input_ids=input_ids, output_hidden_states=True)
```
## Citation
```bibtex
@article {Zhai2024.06.04.596709,
author = {Zhai, Jingjing and Gokaslan, Aaron and Schiff, Yair and Berthel, Ana and Liu, Zong-Yan and Miller, Zachary R and Scheben, Armin and Stitzer, Michelle C and Romay, Cinta and Buckler, Edward S. and Kuleshov, Volodymyr},
title = {Cross-species plant genomes modeling at single nucleotide resolution using a pre-trained DNA language model},
elocation-id = {2024.06.04.596709},
year = {2024},
doi = {10.1101/2024.06.04.596709},
URL = {https://www.biorxiv.org/content/early/2024/06/05/2024.06.04.596709},
eprint = {https://www.biorxiv.org/content/early/2024/06/05/2024.06.04.596709.full.pdf},
journal = {bioRxiv}
}
```
## Contact
Jingjing Zhai ([email protected]) |
PekingU/rtdetr_r50vd_coco_o365 | PekingU | 2024-07-01T14:17:52Z | 429 | 4 | transformers | [
"transformers",
"safetensors",
"rt_detr",
"object-detection",
"vision",
"en",
"dataset:coco",
"arxiv:2304.08069",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| object-detection | 2024-05-21T13:21:52Z | ---
library_name: transformers
license: apache-2.0
language:
- en
pipeline_tag: object-detection
tags:
- object-detection
- vision
datasets:
- coco
widget:
- src: >-
https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg
example_title: Savanna
- src: >-
https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg
example_title: Football Match
- src: >-
https://huggingface.co/datasets/mishig/sample_images/resolve/main/airport.jpg
example_title: Airport
---
# Model Card for RT-DETR
## Table of Contents
1. [Model Details](#model-details)
2. [Model Sources](#model-sources)
3. [How to Get Started with the Model](#how-to-get-started-with-the-model)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Model Architecture and Objective](#model-architecture-and-objective)
7. [Citation](#citation)
## Model Details

> The YOLO series has become the most popular framework for real-time object detection due to its reasonable trade-off between speed and accuracy.
However, we observe that the speed and accuracy of YOLOs are negatively affected by the NMS.
Recently, end-to-end Transformer-based detectors (DETRs) have provided an alternative to eliminating NMS.
Nevertheless, the high computational cost limits their practicality and hinders them from fully exploiting the advantage of excluding NMS.
In this paper, we propose the Real-Time DEtection TRansformer (RT-DETR), the first real-time end-to-end object detector to our best knowledge that addresses the above dilemma.
We build RT-DETR in two steps, drawing on the advanced DETR:
first we focus on maintaining accuracy while improving speed, followed by maintaining speed while improving accuracy.
Specifically, we design an efficient hybrid encoder to expeditiously process multi-scale features by decoupling intra-scale interaction and cross-scale fusion to improve speed.
Then, we propose the uncertainty-minimal query selection to provide high-quality initial queries to the decoder, thereby improving accuracy.
In addition, RT-DETR supports flexible speed tuning by adjusting the number of decoder layers to adapt to various scenarios without retraining.
Our RT-DETR-R50 / R101 achieves 53.1% / 54.3% AP on COCO and 108 / 74 FPS on T4 GPU, outperforming previously advanced YOLOs in both speed and accuracy.
We also develop scaled RT-DETRs that outperform the lighter YOLO detectors (S and M models).
Furthermore, RT-DETR-R50 outperforms DINO-R50 by 2.2% AP in accuracy and about 21 times in FPS.
After pre-training with Objects365, RT-DETR-R50 / R101 achieves 55.3% / 56.2% AP. The project page: this [https URL](https://zhao-yian.github.io/RTDETR/).
This is the model card of a 🤗 [transformers](https://huggingface.co/docs/transformers/index) model that has been pushed on the Hub.
- **Developed by:** Yian Zhao and Sangbum Choi
- **Funded by:** National Key R&D Program of China (No.2022ZD0118201), Natural Science Foundation of China (No.61972217, 32071459, 62176249, 62006133, 62271465), and the Shenzhen Medical Research Funds in China (No.B2302037).
- **Shared by:** Sangbum Choi
- **Model type:** [RT-DETR](https://huggingface.co/docs/transformers/main/en/model_doc/rt_detr)
- **License:** Apache-2.0
### Model Sources
<!-- Provide the basic links for the model. -->
- **HF Docs:** [RT-DETR](https://huggingface.co/docs/transformers/main/en/model_doc/rt_detr)
- **Repository:** https://github.com/lyuwenyu/RT-DETR
- **Paper:** https://arxiv.org/abs/2304.08069
- **Demo:** [RT-DETR Tracking](https://huggingface.co/spaces/merve/RT-DETR-tracking-coco)
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import torch
import requests
from PIL import Image
from transformers import RTDetrForObjectDetection, RTDetrImageProcessor
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
image_processor = RTDetrImageProcessor.from_pretrained("PekingU/rtdetr_r50vd_coco_o365")
model = RTDetrForObjectDetection.from_pretrained("PekingU/rtdetr_r50vd_coco_o365")
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
results = image_processor.post_process_object_detection(outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.3)
for result in results:
for score, label_id, box in zip(result["scores"], result["labels"], result["boxes"]):
score, label = score.item(), label_id.item()
box = [round(i, 2) for i in box.tolist()]
print(f"{model.config.id2label[label]}: {score:.2f} {box}")
```
This should output
```
sofa: 0.97 [0.14, 0.38, 640.13, 476.21]
cat: 0.96 [343.38, 24.28, 640.14, 371.5]
cat: 0.96 [13.23, 54.18, 318.98, 472.22]
remote: 0.95 [40.11, 73.44, 175.96, 118.48]
remote: 0.92 [333.73, 76.58, 369.97, 186.99]
```
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The RTDETR model was trained on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
We conduct experiments on COCO and Objects365 datasets, where RT-DETR is trained on COCO train2017 and validated on COCO val2017 dataset.
We report the standard COCO metrics, including AP (averaged over uniformly sampled IoU thresholds ranging from 0.50-0.95 with a step size of 0.05),
AP50, AP75, as well as AP at different scales: APS, APM, APL.
### Preprocessing
Images are resized to 640x640 pixels and normalized with `image_mean=[0.485, 0.456, 0.406]` and `image_std=[0.229, 0.224, 0.225]`.
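A sketch of instantiating an image processor with these values (illustrative; the checkpoint's shipped processor config is authoritative and may differ):

```python
# A minimal sketch matching the preprocessing described above; the checkpoint's
# shipped image processor config is authoritative and may differ.
from transformers import RTDetrImageProcessor

processor = RTDetrImageProcessor(
    size={"height": 640, "width": 640},
    do_normalize=True,
    image_mean=[0.485, 0.456, 0.406],
    image_std=[0.229, 0.224, 0.225],
)
```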
### Training Hyperparameters
- **Training regime:** <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

## Evaluation
| Model | #Epochs | #Params (M) | GFLOPs | FPS_bs=1 | AP (val) | AP50 (val) | AP75 (val) | AP-s (val) | AP-m (val) | AP-l (val) |
|----------------------------|---------|-------------|--------|----------|--------|-----------|-----------|----------|----------|----------|
| RT-DETR-R18 | 72 | 20 | 60.7 | 217 | 46.5 | 63.8 | 50.4 | 28.4 | 49.8 | 63.0 |
| RT-DETR-R34 | 72 | 31 | 91.0 | 172 | 48.5 | 66.2 | 52.3 | 30.2 | 51.9 | 66.2 |
| RT-DETR R50 | 72 | 42 | 136 | 108 | 53.1 | 71.3 | 57.7 | 34.8 | 58.0 | 70.0 |
| RT-DETR R101| 72 | 76 | 259 | 74 | 54.3 | 72.7 | 58.6 | 36.0 | 58.8 | 72.1 |
| RT-DETR-R18 (Objects 365 pretrained) | 60 | 20 | 61 | 217 | 49.2 | 66.6 | 53.5 | 33.2 | 52.3 | 64.8 |
| RT-DETR-R50 (Objects 365 pretrained) | 24 | 42 | 136 | 108 | 55.3 | 73.4 | 60.1 | 37.9 | 59.9 | 71.8 |
| RT-DETR-R101 (Objects 365 pretrained) | 24 | 76 | 259 | 74 | 56.2 | 74.6 | 61.3 | 38.3 | 60.5 | 73.5 |
### Model Architecture and Objective

Overview of RT-DETR. We feed the features from the last three stages of the backbone into the encoder. The efficient hybrid
encoder transforms multi-scale features into a sequence of image features through the Attention-based Intra-scale Feature Interaction (AIFI)
and the CNN-based Cross-scale Feature Fusion (CCFF). Then, the uncertainty-minimal query selection selects a fixed number of encoder
features to serve as initial object queries for the decoder. Finally, the decoder with auxiliary prediction heads iteratively optimizes object
queries to generate categories and boxes.
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@misc{lv2023detrs,
title={DETRs Beat YOLOs on Real-time Object Detection},
author={Yian Zhao and Wenyu Lv and Shangliang Xu and Jinman Wei and Guanzhong Wang and Qingqing Dang and Yi Liu and Jie Chen},
year={2023},
eprint={2304.08069},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
## Model Card Authors
[Sangbum Choi](https://huggingface.co/danelcsb)
[Pavel Iakubovskii](https://huggingface.co/qubvel-hf)
|
QuantFactory/Medichat-Llama3-8B-GGUF | QuantFactory | 2024-05-27T05:18:59Z | 429 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"medical",
"text-generation",
"en",
"dataset:mlabonne/orpo-dpo-mix-40k",
"dataset:Open-Orca/SlimOrca-Dedup",
"dataset:jondurbin/airoboros-3.2",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:m-a-p/Code-Feedback",
"dataset:MaziyarPanahi/WizardLM_evol_instruct_V2_196k",
"dataset:ruslanmv/ai-medical-chatbot",
"base_model:sethuiyer/Medichat-Llama3-8B",
"license:other",
"model-index",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-27T04:19:21Z | ---
base_model: sethuiyer/Medichat-Llama3-8B
library_name: transformers
tags:
- mergekit
- merge
- medical
license: other
datasets:
- mlabonne/orpo-dpo-mix-40k
- Open-Orca/SlimOrca-Dedup
- jondurbin/airoboros-3.2
- microsoft/orca-math-word-problems-200k
- m-a-p/Code-Feedback
- MaziyarPanahi/WizardLM_evol_instruct_V2_196k
- ruslanmv/ai-medical-chatbot
model-index:
- name: Medichat-Llama3-8B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 59.13
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Medichat-Llama3-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 82.9
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Medichat-Llama3-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.35
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Medichat-Llama3-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 49.65
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Medichat-Llama3-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.93
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Medichat-Llama3-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.35
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Medichat-Llama3-8B
name: Open LLM Leaderboard
language:
- en
pipeline_tag: text-generation
---
# Medichat-Llama3-8B-GGUF
This is quantized version of [sethuiyer/Medichat-Llama3-8B](https://huggingface.co/sethuiyer/Medichat-Llama3-8B) created using llama.cpp
# Model Description
Built upon the powerful LLaMa-3 architecture and fine-tuned on an extensive dataset of health information, this model leverages its vast medical knowledge to offer clear, comprehensive answers.
This model is generally better for accurate and informative responses, particularly for users seeking in-depth medical advice.
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Undi95/Llama-3-Unholy-8B
parameters:
weight: [0.25, 0.35, 0.45, 0.35, 0.25]
density: [0.1, 0.25, 0.5, 0.25, 0.1]
- model: Locutusque/llama-3-neural-chat-v1-8b
- model: ruslanmv/Medical-Llama3-8B-16bit
parameters:
weight: [0.55, 0.45, 0.35, 0.45, 0.55]
density: [0.1, 0.25, 0.5, 0.25, 0.1]
merge_method: dare_ties
base_model: Locutusque/llama-3-neural-chat-v1-8b
parameters:
int8_mask: true
dtype: bfloat16
```
# Comparision Against Dr.Samantha 7B
| Subject | Medichat-Llama3-8B Accuracy (%) | Dr. Samantha Accuracy (%) |
|-------------------------|---------------------------------|---------------------------|
| Clinical Knowledge | 71.70 | 52.83 |
| Medical Genetics | 78.00 | 49.00 |
| Human Aging | 70.40 | 58.29 |
| Human Sexuality | 73.28 | 55.73 |
| College Medicine | 62.43 | 38.73 |
| Anatomy | 64.44 | 41.48 |
| College Biology | 72.22 | 52.08 |
| High School Biology | 77.10 | 53.23 |
| Professional Medicine | 63.97 | 38.73 |
| Nutrition | 73.86 | 50.33 |
| Professional Psychology | 68.95 | 46.57 |
| Virology | 54.22 | 41.57 |
| High School Psychology | 83.67 | 66.60 |
| **Average** | **70.33** | **48.85** |
The current model demonstrates a substantial improvement over the previous [Dr. Samantha](https://huggingface.co/sethuiyer/Dr_Samantha-7b) model in terms of subject-specific knowledge and accuracy.
### Usage:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
class MedicalAssistant:
def __init__(self, model_name="sethuiyer/Medichat-Llama3-8B", device="cuda"):
self.device = device
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.model = AutoModelForCausalLM.from_pretrained(model_name).to(self.device)
self.sys_message = '''
You are an AI Medical Assistant trained on a vast dataset of health information. Please be thorough and
provide an informative answer. If you don't know the answer to a specific medical inquiry, advise seeking professional help.
'''
def format_prompt(self, question):
messages = [
{"role": "system", "content": self.sys_message},
{"role": "user", "content": question}
]
prompt = self.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
return prompt
def generate_response(self, question, max_new_tokens=512):
prompt = self.format_prompt(question)
inputs = self.tokenizer(prompt, return_tensors="pt").to(self.device)
with torch.no_grad():
outputs = self.model.generate(**inputs, max_new_tokens=max_new_tokens, use_cache=True)
answer = self.tokenizer.batch_decode(outputs, skip_special_tokens=True)[0].strip()
return answer
if __name__ == "__main__":
assistant = MedicalAssistant()
question = '''
Symptoms:
Dizziness, headache, and nausea.
What is the differential diagnosis?
'''
response = assistant.generate_response(question)
print(response)
```
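Since this repo hosts GGUF quants, the model can also be run directly with llama-cpp-python (a minimal sketch; the exact file name is an assumption, so check the repo's file list):

```python
# A minimal sketch for the GGUF quant with llama-cpp-python; the filename glob
# is an assumption, so check the repo's file list for the exact name.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/Medichat-Llama3-8B-GGUF",
    filename="*Q4_K_M.gguf",  # glob pattern; replace with the exact file name
    n_ctx=4096,
)
out = llm.create_chat_completion(messages=[
    {"role": "system", "content": "You are an AI Medical Assistant. Advise seeking professional help when unsure."},
    {"role": "user", "content": "What are common causes of dizziness and nausea?"},
])
print(out["choices"][0]["message"]["content"])
```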
## Ollama
This model is now also available on Ollama. You can use it by running the command ```ollama run monotykamary/medichat-llama3``` in your
terminal. If you have limited computing resources, check out this [video](https://www.youtube.com/watch?v=Qa1h7ygwQq8) to learn how to run it on
a Google Colab backend. |
not-lain/Finetuned_TinyLlama | not-lain | 2024-05-31T00:56:46Z | 429 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-30T22:37:31Z | ---
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Finetuned_TinyLlama
results: []
---
# Finetuned_TinyLlama
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset.
## Model description
This model was made with [this tutorial by Noa](https://huggingface.co/blog/nroggendorff/finetune-tinyllama); you can find a more complete model and demo at [nroggendorff/mayo](https://huggingface.co/spaces/nroggendorff/mayo).
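A minimal sketch for trying it (assumes the tokenizer ships the chat template of the base TinyLlama chat model):

```python
# A minimal sketch; assumes the tokenizer ships the base TinyLlama chat template.
from transformers import pipeline

pipe = pipeline("text-generation", model="not-lain/Finetuned_TinyLlama")
messages = [{"role": "user", "content": "Tell me a short joke."}]
print(pipe(messages, max_new_tokens=64)[0]["generated_text"])
```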
## Limitations
- The model is easily gaslit
- It is uncensored, and there are no safety features.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
- training_loss=2.0859998975481306
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
BeaverAI/Llama-3SOME-8B-v2a-GGUF | BeaverAI | 2024-06-03T23:41:20Z | 429 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-06-03T23:29:49Z | Entry not found |
tomasonjo/text2cypher-codestral-q4_k_m-gguf | tomasonjo | 2024-06-10T02:20:42Z | 429 | 2 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"dataset:tomasonjo/text2cypher-gpt4o-clean",
"base_model:mistralai/Codestral-22B-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-09T20:37:18Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
base_model: mistralai/Codestral-22B-v0.1
datasets:
- tomasonjo/text2cypher-gpt4o-clean
---
# Uploaded model
- **Developed by:** tomasonjo
- **License:** apache-2.0
- **Finetuned from model :** mistralai/Codestral-22B-v0.1
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
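A minimal llama-cpp-python sketch (the GGUF file name and prompt layout are assumptions; check the repo's file list and the linked data set for the exact format):

```python
# A minimal sketch with llama-cpp-python; the filename glob and prompt layout
# are assumptions, so check the repo's files and the linked data set.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="tomasonjo/text2cypher-codestral-q4_k_m-gguf",
    filename="*.gguf",  # glob pattern; replace with the exact file name
    n_ctx=4096,
)
prompt = (
    "Schema: (:Person)-[:ACTED_IN]->(:Movie {title: STRING})\n"
    "Question: Who acted in the movie Casino?\n"
    "Cypher:"
)
print(llm(prompt, max_tokens=128)["choices"][0]["text"])
```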
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
xX-FANE-Xx/RAFT-mistral-v1-merged-Q5_K_M-GGUF | xX-FANE-Xx | 2024-06-20T07:57:55Z | 429 | 1 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:miha-kac/RAFT-mistral-v1-merged",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-20T07:57:34Z | ---
base_model: miha-kac/RAFT-mistral-v1-merged
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
---
# xX-FANE-Xx/RAFT-mistral-v1-merged-Q5_K_M-GGUF
This model was converted to GGUF format from [`miha-kac/RAFT-mistral-v1-merged`](https://huggingface.co/miha-kac/RAFT-mistral-v1-merged) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/miha-kac/RAFT-mistral-v1-merged) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo xX-FANE-Xx/RAFT-mistral-v1-merged-Q5_K_M-GGUF --hf-file raft-mistral-v1-merged-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo xX-FANE-Xx/RAFT-mistral-v1-merged-Q5_K_M-GGUF --hf-file raft-mistral-v1-merged-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo xX-FANE-Xx/RAFT-mistral-v1-merged-Q5_K_M-GGUF --hf-file raft-mistral-v1-merged-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo xX-FANE-Xx/RAFT-mistral-v1-merged-Q5_K_M-GGUF --hf-file raft-mistral-v1-merged-q5_k_m.gguf -c 2048
```
|