modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
RichardErkhov/sophosympatheia_-_Aurora-Nights-70B-v1.0-gguf
|
RichardErkhov
| 2024-06-07T01:44:09Z | 381 | 1 | null |
[
"gguf",
"region:us"
] | null | 2024-06-06T00:41:58Z |
Entry not found
|
Felladrin/gguf-TinyMistral-248M-v2.5-Instruct-orpo
|
Felladrin
| 2024-06-15T20:39:42Z | 381 | 0 | null |
[
"gguf",
"base_model:heegyu/TinyMistral-248M-v2.5-Instruct-orpo",
"license:apache-2.0",
"region:us"
] | null | 2024-06-15T10:41:09Z |
---
license: apache-2.0
base_model: heegyu/TinyMistral-248M-v2.5-Instruct-orpo
---
GGUF version of [heegyu/TinyMistral-248M-v2.5-Instruct-orpo](https://huggingface.co/heegyu/TinyMistral-248M-v2.5-Instruct-orpo).
|
John6666/bunny-mint-v3-ns-sdxl
|
John6666
| 2024-06-20T01:46:16Z | 381 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"pony",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-06-20T01:38:08Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- pony
---
Original model is [here](https://huggingface.co/lylogummy/BunnyMint).
|
jkrahmann/Meta-Llama-3-8B-Q4_K_M-GGUF
|
jkrahmann
| 2024-06-25T13:35:27Z | 381 | 0 | null |
[
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"region:us"
] |
text-generation
| 2024-06-25T13:35:05Z |
---
base_model: meta-llama/Meta-Llama-3-8B
language:
- en
license: llama3
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- llama-cpp
- gguf-my-repo
extra_gated_prompt: "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version\
\ Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for\
\ use, reproduction, distribution and modification of the Llama Materials set forth\
\ herein.\n\"Documentation\" means the specifications, manuals and documentation\
\ accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\
\"Licensee\" or \"you\" means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity’s behalf), of\
\ the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama\
\ 3\" means the foundational large language models and software and algorithms,\
\ including machine-learning model code, trained model weights, inference-enabling\
\ code, training-enabling code, fine-tuning enabling code and other elements of\
\ the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\
\"Llama Materials\" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation\
\ (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"\
we\" means Meta Platforms Ireland Limited (if you are located in or, if you are\
\ an entity, your principal place of business is in the EEA or Switzerland) and\
\ Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n\
\ \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted\
\ a non-exclusive, worldwide, non-transferable and royalty-free limited license\
\ under Meta’s intellectual property or other rights owned by Meta embodied in the\
\ Llama Materials to use, reproduce, distribute, copy, create derivative works of,\
\ and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni.\
\ If you distribute or make available the Llama Materials (or any derivative works\
\ thereof), or a product or service that uses any of them, including another AI\
\ model, you shall (A) provide a copy of this Agreement with any such Llama Materials;\
\ and (B) prominently display “Built with Meta Llama 3” on a related website, user\
\ interface, blogpost, about page, or product documentation. If you use the Llama\
\ Materials to create, train, fine tune, or otherwise improve an AI model, which\
\ is distributed or made available, you shall also include “Llama 3” at the beginning\
\ of any such AI model name.\nii. If you receive Llama Materials, or any derivative\
\ works thereof, from a Licensee as part of an integrated end user product, then\
\ Section 2 of this Agreement will not apply to you.\niii. You must retain in all\
\ copies of the Llama Materials that you distribute the following attribution notice\
\ within a “Notice” text file distributed as a part of such copies: “Meta Llama\
\ 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms,\
\ Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must comply with\
\ applicable laws and regulations (including trade compliance laws and regulations)\
\ and adhere to the Acceptable Use Policy for the Llama Materials (available at\
\ https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference\
\ into this Agreement.\nv. You will not use the Llama Materials or any output or\
\ results of the Llama Materials to improve any other large language model (excluding\
\ Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If,\
\ on the Meta Llama 3 version release date, the monthly active users of the products\
\ or services made available by or for Licensee, or Licensee’s affiliates, is greater\
\ than 700 million monthly active users in the preceding calendar month, you must\
\ request a license from Meta, which Meta may grant to you in its sole discretion,\
\ and you are not authorized to exercise any of the rights under this Agreement\
\ unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer\
\ of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT\
\ AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF\
\ ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED,\
\ INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY,\
\ OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING\
\ THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME\
\ ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n\
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER\
\ ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY,\
\ OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT,\
\ SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META\
\ OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n\
5. Intellectual Property.\na. No trademark licenses are granted under this Agreement,\
\ and in connection with the Llama Materials, neither Meta nor Licensee may use\
\ any name or mark owned by or associated with the other or any of its affiliates,\
\ except as required for reasonable and customary use in describing and redistributing\
\ the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you\
\ a license to use “Llama 3” (the “Mark”) solely as required to comply with the\
\ last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently\
\ accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All\
\ goodwill arising out of your use of the Mark will inure to the benefit of Meta.\n\
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for\
\ Meta, with respect to any derivative works and modifications of the Llama Materials\
\ that are made by you, as between you and Meta, you are and will be the owner of\
\ such derivative works and modifications.\nc. If you institute litigation or other\
\ proceedings against Meta or any entity (including a cross-claim or counterclaim\
\ in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results,\
\ or any portion of any of the foregoing, constitutes infringement of intellectual\
\ property or other rights owned or licensable by you, then any licenses granted\
\ to you under this Agreement shall terminate as of the date such litigation or\
\ claim is filed or instituted. You will indemnify and hold harmless Meta from and\
\ against any claim by any third party arising out of or related to your use or\
\ distribution of the Llama Materials.\n6. Term and Termination. The term of this\
\ Agreement will commence upon your acceptance of this Agreement or access to the\
\ Llama Materials and will continue in full force and effect until terminated in\
\ accordance with the terms and conditions herein. Meta may terminate this Agreement\
\ if you are in breach of any term or condition of this Agreement. Upon termination\
\ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\
\ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\
\ and Jurisdiction. This Agreement will be governed and construed under the laws\
\ of the State of California without regard to choice of law principles, and the\
\ UN Convention on Contracts for the International Sale of Goods does not apply\
\ to this Agreement. The courts of California shall have exclusive jurisdiction\
\ of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use\
\ Policy\nMeta is committed to promoting safe and fair use of its tools and features,\
\ including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable\
\ Use Policy (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n\
#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly.\
\ You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate\
\ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 2. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 3. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 4.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 5. Collect, process, disclose, generate, or infer health, demographic,\
\ or other sensitive personal or private information about individuals without rights\
\ and consents required by applicable laws\n 6. Engage in or facilitate any action\
\ or generate any content that infringes, misappropriates, or otherwise violates\
\ any third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 7. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n2. Engage in, promote, incite,\
\ facilitate, or assist in the planning or development of activities that present\
\ a risk of death or bodily harm to individuals, including use of Meta Llama 3 related\
\ to the following:\n 1. Military, warfare, nuclear industries or applications,\
\ espionage, use for materials or activities that are subject to the International\
\ Traffic Arms Regulations (ITAR) maintained by the United States Department of\
\ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\
\ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\
\ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\
\ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\
\ content intended to incite or promote violence, abuse, or any infliction of bodily\
\ harm to an individual\n3. Intentionally deceive or mislead others, including use\
\ of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering\
\ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\
\ or furthering defamatory content, including the creation of defamatory statements,\
\ images, or other content\n 3. Generating, promoting, or further distributing\
\ spam\n 4. Impersonating another individual without consent, authorization,\
\ or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are\
\ human-generated\n 6. Generating or facilitating false online engagement, including\
\ fake reviews and other means of fake online engagement\n4. Fail to appropriately\
\ disclose to end users any known dangers of your AI system\nPlease report any violation\
\ of this Policy, software “bug,” or other problems that could lead to a violation\
\ of this Policy through one of the following means:\n * Reporting issues with\
\ the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n\
\ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\
\ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\
\ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
---
# jkrahmann/Meta-Llama-3-8B-Q4_K_M-GGUF
This model was converted to GGUF format from [`meta-llama/Meta-Llama-3-8B`](https://huggingface.co/meta-llama/Meta-Llama-3-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/meta-llama/Meta-Llama-3-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo jkrahmann/Meta-Llama-3-8B-Q4_K_M-GGUF --hf-file meta-llama-3-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo jkrahmann/Meta-Llama-3-8B-Q4_K_M-GGUF --hf-file meta-llama-3-8b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo jkrahmann/Meta-Llama-3-8B-Q4_K_M-GGUF --hf-file meta-llama-3-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo jkrahmann/Meta-Llama-3-8B-Q4_K_M-GGUF --hf-file meta-llama-3-8b-q4_k_m.gguf -c 2048
```
|
shibing624/bert4ner-base-chinese
|
shibing624
| 2024-02-19T09:23:05Z | 380 | 24 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"zh",
"ner",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-05-07T08:24:29Z |
---
language:
- zh
tags:
- bert
- pytorch
- zh
- ner
license: apache-2.0
pipeline_tag: token-classification
widget:
- text: 常建良,男,1963年出生,工科学士,高级工程师
---
# BERT for Chinese Named Entity Recognition (bert4ner) Model
A Chinese named entity recognition model.
`bert4ner-base-chinese` is evaluated on the PEOPLE (人民日报) test data.
The overall performance of BERT on the PEOPLE **test** set:
| | Accuracy | Recall | F1 |
| ------------ | ------------------ | ------------------ | ------------------ |
| BertSoftmax | 0.9425 | 0.9627 | 0.9525 |
It reaches close to SOTA performance on the PEOPLE test set.
Network architecture of BertSoftmax (vanilla BERT):

## Usage
This model is released as part of the open-source NER project [nerpy](https://github.com/shibing624/nerpy), which supports bert4ner models and can be called as follows:
```shell
>>> from nerpy import NERModel
>>> model = NERModel("bert", "shibing624/bert4ner-base-chinese")
>>> predictions, raw_outputs, entities = model.predict(["常建良,男,1963年出生,工科学士,高级工程师"], split_on_space=False)
entities: [('常建良', 'PER'), ('1963年', 'TIME')]
```
Model files:
```
bert4ner-base-chinese
├── config.json
├── model_args.json
├── pytorch_model.bin
├── special_tokens_map.json
├── tokenizer_config.json
└── vocab.txt
```
## Usage (HuggingFace Transformers)
Without [nerpy](https://github.com/shibing624/nerpy), you can use the model like this:
First, you pass your input through the transformer model, then you aggregate the BIO tags to get the entity words.
Install package:
```
pip install transformers seqeval
```
```python
import os
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification
from seqeval.metrics.sequence_labeling import get_entities
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("shibing624/bert4ner-base-chinese")
model = AutoModelForTokenClassification.from_pretrained("shibing624/bert4ner-base-chinese")
label_list = ['I-ORG', 'B-LOC', 'O', 'B-ORG', 'I-LOC', 'I-PER', 'B-TIME', 'I-TIME', 'B-PER']
sentence = "王宏伟来自北京,是个警察,喜欢去王府井游玩儿。"
def get_entity(sentence):
    tokens = tokenizer.tokenize(sentence)
    inputs = tokenizer.encode(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(inputs).logits
    predictions = torch.argmax(outputs, dim=2)
    char_tags = [(token, label_list[prediction]) for token, prediction in zip(tokens, predictions[0].numpy())][1:-1]
    print(sentence)
    print(char_tags)
    pred_labels = [i[1] for i in char_tags]
    entities = []
    line_entities = get_entities(pred_labels)
    for i in line_entities:
        word = sentence[i[1]: i[2] + 1]
        entity_type = i[0]
        entities.append((word, entity_type))
    print("Sentence entity:")
    print(entities)

get_entity(sentence)
```
output:
```shell
王宏伟来自北京,是个警察,喜欢去王府井游玩儿。
[('王', 'B-PER'), ('宏', 'I-PER'), ('伟', 'I-PER'), ('来', 'O'), ('自', 'O'), ('北', 'B-LOC'), ('京', 'I-LOC'), (',', 'O'), ('是', 'O'), ('个', 'O'), ('警', 'O'), ('察', 'O'), (',', 'O'), ('喜', 'O'), ('欢', 'O'), ('去', 'O'), ('王', 'B-LOC'), ('府', 'I-LOC'), ('井', 'I-LOC'), ('游', 'O'), ('玩', 'O'), ('儿', 'O'), ('。', 'O')]
Sentence entity:
[('王宏伟', 'PER'), ('北京', 'LOC'), ('王府井', 'LOC')]
```
### Training datasets
#### Chinese NER datasets
| Dataset | Corpus | Download | Size |
| :------- | :--------- | :---------: | :---------: |
| **`CNER` (Chinese NER dataset)** | CNER (120k characters) | [CNER github](https://github.com/shibing624/nerpy/tree/main/examples/data/cner) | 1.1MB |
| **`PEOPLE` (Chinese NER dataset)** | People's Daily dataset (2 million characters) | [PEOPLE github](https://github.com/shibing624/nerpy/tree/main/examples/data/people) | 12.8MB |
Data format of the CNER dataset:
```text
美 B-LOC
国 I-LOC
的 O
华 B-PER
莱 I-PER
士 I-PER
我 O
跟 O
他 O
```
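For illustration only, here is a minimal sketch (not part of nerpy) for reading this character-level BIO format, assuming one `<char> <tag>` pair per line and a blank line between sentences (the blank-line separator is an assumption about the file layout); the file path is hypothetical:
```python
from typing import List, Tuple

def read_bio_file(path: str) -> List[Tuple[List[str], List[str]]]:
    """Read character-level BIO data: one '<char> <tag>' pair per line,
    with blank lines assumed to separate sentences."""
    sentences, chars, tags = [], [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:                 # blank line ends the current sentence
                if chars:
                    sentences.append((chars, tags))
                    chars, tags = [], []
                continue
            char, tag = line.split()     # e.g. "美 B-LOC"
            chars.append(char)
            tags.append(tag)
    if chars:                            # flush the last sentence
        sentences.append((chars, tags))
    return sentences

# Hypothetical filename; the CNER files live in the nerpy repository linked below.
# data = read_bio_file("cner/train.char.bio")
```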
To train a bert4ner model yourself, see [https://github.com/shibing624/nerpy/tree/main/examples](https://github.com/shibing624/nerpy/tree/main/examples).
## Citation
```latex
@software{nerpy,
author = {Xu Ming},
title = {nerpy: Named Entity Recognition toolkit},
year = {2022},
url = {https://github.com/shibing624/nerpy},
}
```
|
fxmarty/gpt2-tiny-onnx
|
fxmarty
| 2023-01-03T09:41:05Z | 380 | 1 |
transformers
|
[
"transformers",
"onnx",
"gpt2",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2023-01-03T09:39:43Z |
---
license: apache-2.0
---
This model is meant for testing and will not return any meaningful output.
|
SG161222/Realistic_Vision_V1.3
|
SG161222
| 2024-04-12T15:42:02Z | 380 | 39 |
diffusers
|
[
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-03-03T14:31:43Z |
---
license: creativeml-openrail-m
---
<b>This model is available on <a href="https://www.mage.space/">Mage.Space</a> (main sponsor)</b><br>
<b>You can support me directly on Boosty - https://boosty.to/sg_161222</b><br>
<b>Please read this!</b><br>
My model has always been free and always will be free. There are no restrictions on the use of the model. The rights to this model still belong to me.<br>
<hr/>
<b>I use this template to get good generation results:
Prompt:</b>
RAW photo, *subject*, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3
<b>Example:</b> RAW photo, a close up portrait photo of 26 y.o woman in wastelander clothes, long haircut, pale skin, slim body, background is city ruins, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3
<b>Negative Prompt:</b>
(deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime:1.4), text, close up, cropped, out of frame, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck<br>
<b>OR</b><br>
(deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime, mutated hands and fingers:1.4), (deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, disconnected limbs, mutation, mutated, ugly, disgusting, amputation
<b>Euler A or DPM++ 2M Karras with 25 steps<br>
CFG Scale 3.5 - 7<br>
Hires. fix with Latent upscaler<br>
0 Hires steps and Denoising strength 0.25-0.45<br>
Upscale by 1.1-2.0</b>
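As a non-authoritative illustration of how this template could be used with the diffusers weights in this repo: the sketch below mirrors the DPM++ 2M Karras, 25-step, CFG 3.5-7 recommendations above; note that the `(term:weight)` emphasis syntax is Automatic1111-style and is treated as plain text by vanilla diffusers.
```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

# Load this repo's Stable Diffusion weights (assumes a CUDA GPU is available).
pipe = StableDiffusionPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V1.3", torch_dtype=torch.float16
).to("cuda")
# DPM++ 2M Karras, as recommended above.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

prompt = (
    "RAW photo, a close up portrait photo of 26 y.o woman in wastelander clothes, "
    "long haircut, pale skin, slim body, background is city ruins, "
    "(high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3"
)
negative_prompt = (
    "(deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, "
    "drawing, anime:1.4), text, close up, cropped, out of frame, worst quality, low quality"
)

# 25 steps and CFG 5.0 sit inside the recommended ranges above.
image = pipe(
    prompt, negative_prompt=negative_prompt, num_inference_steps=25, guidance_scale=5.0
).images[0]
image.save("realistic_vision_example.png")
```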
|
pongjin/roberta_with_kornli
|
pongjin
| 2023-06-22T05:53:14Z | 380 | 5 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"zero-shot-classification",
"ko",
"dataset:kor_nli",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2023-06-22T02:39:02Z |
---
license: apache-2.0
datasets:
- kor_nli
language:
- ko
metrics:
- accuracy
pipeline_tag: zero-shot-classification
---
**This model was developed with reference to the following repository: https://github.com/Huffon/klue-transformers-tutorial.git**
Following that repository, this model fine-tunes klue/roberta-base on the mnli and xnli portions of kor_nli.
| train_loss | val_loss | acc | epoch | batch | lr |
| --- | --- | --- | --- | --- | --- |
| 0.326 | 0.538 | 0.811 | 3 | 32 | 2e-5 |
For models such as RoBERTa that do not use token_type_ids, the zero-shot pipeline cannot be applied as-is (as of transformers==4.7.0).
The conversion code below is therefore needed; it is likewise adapted from the repository above.
```python
from abc import ABC, abstractmethod


class ArgumentHandler(ABC):
    """
    Base interface for handling arguments for each :class:`~transformers.pipelines.Pipeline`.
    """

    @abstractmethod
    def __call__(self, *args, **kwargs):
        raise NotImplementedError()


class CustomZeroShotClassificationArgumentHandler(ArgumentHandler):
    """
    Handles arguments for zero-shot for text classification by turning each possible label into an NLI
    premise/hypothesis pair.
    """

    def _parse_labels(self, labels):
        if isinstance(labels, str):
            labels = [label.strip() for label in labels.split(",")]
        return labels

    def __call__(self, sequences, labels, hypothesis_template):
        if len(labels) == 0 or len(sequences) == 0:
            raise ValueError("You must include at least one label and at least one sequence.")
        if hypothesis_template.format(labels[0]) == hypothesis_template:
            raise ValueError(
                (
                    'The provided hypothesis_template "{}" was not able to be formatted with the target labels. '
                    "Make sure the passed template includes formatting syntax such as {{}} where the label should go."
                ).format(hypothesis_template)
            )
        if isinstance(sequences, str):
            sequences = [sequences]
        labels = self._parse_labels(labels)
        sequence_pairs = []
        for label in labels:
            # Modification: pre-join the two sentences with `sep_token` so that `token_type_ids`
            # are not added automatically when they are passed to the tokenizer as a pair.
            sequence_pairs.append(f"{sequences} {tokenizer.sep_token} {hypothesis_template.format(label)}")
        return sequence_pairs, sequences
```
This handler must then be passed in when the classifier is defined.
```python
from transformers import AutoTokenizer, pipeline

# The custom handler above references a global `tokenizer`, so define it first.
tokenizer = AutoTokenizer.from_pretrained("pongjin/roberta_with_kornli")

classifier = pipeline(
    "zero-shot-classification",
    args_parser=CustomZeroShotClassificationArgumentHandler(),
    model="pongjin/roberta_with_kornli",
)
```
#### results
```python
sequence = "배당락 D-1 코스피, 2330선 상승세...외인·기관 사자"
candidate_labels =["외환",'환율', "경제", "금융", "부동산","주식"]
classifier(
    sequence,
    candidate_labels,
    hypothesis_template='이는 {}에 관한 것이다.',
)
>>{'sequence': '배당락 D-1 코스피, 2330선 상승세...외인·기관 사자',
'labels': ['주식', '금융', '경제', '외환', '환율', '부동산'],
'scores': [0.5052872896194458,
0.17972524464130402,
0.13852974772453308,
0.09460823982954025,
0.042949128895998,
0.038900360465049744]}
```
|
TheBloke/XwinCoder-13B-GGUF
|
TheBloke
| 2023-11-19T17:58:50Z | 380 | 7 |
transformers
|
[
"transformers",
"gguf",
"llama",
"base_model:Xwin-LM/XwinCoder-13B",
"license:llama2",
"text-generation-inference",
"region:us"
] | null | 2023-11-19T16:47:36Z |
---
base_model: Xwin-LM/XwinCoder-13B
inference: false
license: llama2
model_creator: Xwin-LM
model_name: XwinCoder 13B
model_type: llama
prompt_template: "<system>: You are an AI coding assistant that helps people with\
\ programming. Write a response that appropriately completes the user's request.\n\
<user>: {prompt}\n<AI>: \n"
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# XwinCoder 13B - GGUF
- Model creator: [Xwin-LM](https://huggingface.co/Xwin-LM)
- Original model: [XwinCoder 13B](https://huggingface.co/Xwin-LM/XwinCoder-13B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Xwin-LM's XwinCoder 13B](https://huggingface.co/Xwin-LM/XwinCoder-13B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/XwinCoder-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/XwinCoder-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/XwinCoder-13B-GGUF)
* [Xwin-LM's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Xwin-LM/XwinCoder-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: XWin-Coder
```
<system>: You are an AI coding assistant that helps people with programming. Write a response that appropriately completes the user's request.
<user>: {prompt}
<AI>:
```
<!-- prompt-template end -->
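Purely as an illustrative sketch (not an official snippet), the template can be reproduced with a small string-formatting helper before the text is handed to any of the runtimes listed below; the example request is a placeholder:
```python
def build_xwincoder_prompt(user_request: str) -> str:
    # Reproduces the XWin-Coder template shown above, stopping at the <AI> turn.
    return (
        "<system>: You are an AI coding assistant that helps people with programming. "
        "Write a response that appropriately completes the user's request.\n"
        f"<user>: {user_request}\n"
        "<AI>: "
    )

prompt = build_xwincoder_prompt("Write a Python function that reverses a linked list.")
print(prompt)
```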
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
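To make the bits-per-weight arithmetic concrete, here is a small sketch reproducing the 4.5 bpw figure for GGML_TYPE_Q4_K from the layout above; the fp16 super-block scale and min are an assumption about the remaining storage overhead:
```python
# GGML_TYPE_Q4_K: super-blocks of 8 blocks, each block having 32 weights.
weights = 8 * 32                 # 256 weights per super-block

quant_bits = weights * 4         # 4-bit quantised weights
scale_bits = 8 * 6               # per-block scales, 6 bits each
min_bits = 8 * 6                 # per-block mins, 6 bits each
super_bits = 2 * 16              # assumed fp16 super-block scale and min

bpw = (quant_bits + scale_bits + min_bits + super_bits) / weights
print(bpw)                       # 4.5, matching the figure quoted above
```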
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [xwincoder-13b.Q2_K.gguf](https://huggingface.co/TheBloke/XwinCoder-13B-GGUF/blob/main/xwincoder-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [xwincoder-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/XwinCoder-13B-GGUF/blob/main/xwincoder-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [xwincoder-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/XwinCoder-13B-GGUF/blob/main/xwincoder-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [xwincoder-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/XwinCoder-13B-GGUF/blob/main/xwincoder-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [xwincoder-13b.Q4_0.gguf](https://huggingface.co/TheBloke/XwinCoder-13B-GGUF/blob/main/xwincoder-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [xwincoder-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/XwinCoder-13B-GGUF/blob/main/xwincoder-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [xwincoder-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/XwinCoder-13B-GGUF/blob/main/xwincoder-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [xwincoder-13b.Q5_0.gguf](https://huggingface.co/TheBloke/XwinCoder-13B-GGUF/blob/main/xwincoder-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [xwincoder-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/XwinCoder-13B-GGUF/blob/main/xwincoder-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [xwincoder-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/XwinCoder-13B-GGUF/blob/main/xwincoder-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [xwincoder-13b.Q6_K.gguf](https://huggingface.co/TheBloke/XwinCoder-13B-GGUF/blob/main/xwincoder-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [xwincoder-13b.Q8_0.gguf](https://huggingface.co/TheBloke/XwinCoder-13B-GGUF/blob/main/xwincoder-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/XwinCoder-13B-GGUF and below it, a specific filename to download, such as: xwincoder-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/XwinCoder-13B-GGUF xwincoder-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/XwinCoder-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/XwinCoder-13B-GGUF xwincoder-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m xwincoder-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<system>: You are an AI coding assistant that helps people with programming. Write a response that appropriately completes the user's request.\n<user>: {prompt}\n<AI>:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/XwinCoder-13B-GGUF", model_file="xwincoder-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Xwin-LM's XwinCoder 13B
# XwinCoder
We are glad to introduce our instruction finetuned code generation models based on CodeLLaMA: XwinCoder. We release model weights and evaluation code.
**Repository:** [https://github.com/Xwin-LM/Xwin-LM/tree/main/Xwin-Coder](https://github.com/Xwin-LM/Xwin-LM/tree/main/Xwin-Coder)
**Models:**
| Model | 🤗hf link | HumanEval pass@1 | MBPP pass@1 | APPS-intro pass@5 |
|-------|------------|----------|------|-------------|
| XwinCoder-7B | [link](https://huggingface.co/Xwin-LM/XwinCoder-7B) | 63.8 | 57.4 | 31.5 |
| XwinCoder-13B | [link](https://huggingface.co/Xwin-LM/XwinCoder-13B) | 68.8 | 60.1 | 35.4 |
| XwinCoder-34B | [link](https://huggingface.co/Xwin-LM/XwinCoder-34B) | 74.2 | 64.8 | 43.0 |
## Updates
- 💥 We released [**XwinCoder-7B**](https://huggingface.co/Xwin-LM/XwinCoder-7B), [**XwinCoder-13B**](https://huggingface.co/Xwin-LM/XwinCoder-13B), [**XwinCoder-34B**](https://huggingface.co/Xwin-LM/XwinCoder-34B). Our XwinCoder-34B reached 74.2 on HumanEval and it **achieves performance comparable to GPT-3.5-turbo on 6 benchmarks**.
- ❗We support evaluating instruction finetuned models on HumanEval, MBPP, APPS, DS1000 and MT-Bench. See our github repository.
## Overview

* To fully demonstrate our model's coding capabilities in real-world usage scenarios, we have conducted thorough evaluations on several existing mainstream coding capability leaderboards (rather than only on the currently most popular HumanEval).
* As shown in the radar chart results, our 34B model **achieves performance comparable to GPT-3.5-turbo on coding abilities**.
* It is worth mentioning that, to ensure accurate visualization, our radar chart has not been scaled (only translated; MT-Bench score is scaled by 10x to be more comparable with other benchmarks).
* Multiple-E-avg6 refers to the 6 languages used in the CodeLLaMA paper. Results for GPT-4 and GPT-3.5-turbo were obtained by us; more details will be released later.
## Demo
We provide a chat demo in our github repository, here are some examples:

<!-- original-model-card end -->
|
janhq/stealth-v1.3-GGUF
|
janhq
| 2024-01-15T03:23:36Z | 380 | 1 | null |
[
"gguf",
"en",
"base_model:jan-hq/stealth-v1.3",
"license:apache-2.0",
"region:us"
] | null | 2024-01-15T02:22:26Z |
---
license: apache-2.0
language:
- en
base_model: jan-hq/stealth-v1.3
model_creator: jan-hq
model_name: stealth-v1.3
quantized_by: JanHQ
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<p align="center">
<a href="https://jan.ai/">Jan</a>
- <a href="https://discord.gg/AsJ8krTT3N">Discord</a>
</p>
<!-- header end -->
# Model Description
This is a GGUF version of [jan-hq/stealth-v1.3](https://huggingface.co/jan-hq/stealth-v1.3)
- Model creator: [jan-hq](https://huggingface.co/jan-hq)
- Original model: [stealth-v1.3](https://huggingface.co/jan-hq/stealth-v1.3)
- Model description: [Readme](https://huggingface.co/jan-hq/stealth-v1.3/blob/main/README.md)
# About Jan
Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones.
Jan's long-term vision is to build a cognitive framework for future robots, who are practical, useful assistants for humans and businesses in everyday life.
# Jan Model Converter
This is a repository for the [open-source converter](https://github.com/janhq/model-converter). We would be grateful if the community could contribute to and strengthen this repository. We aim to expand the repo so that it can convert models into various formats.
|
ChrisWilson011016/5F9W2n4jrxXvvUoDm5eDwknSFK8M82wwDR8MSZVSkztBA7cJ_vgg
|
ChrisWilson011016
| 2024-03-04T18:50:26Z | 380 | 0 |
keras
|
[
"keras",
"region:us"
] | null | 2024-02-24T15:07:31Z |
Entry not found
|
mradermacher/Chimera-Apex-7B-GGUF
|
mradermacher
| 2024-05-06T05:09:05Z | 380 | 1 |
transformers
|
[
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"en",
"base_model:bunnycore/Chimera-Apex-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-08T03:57:50Z |
---
base_model: bunnycore/Chimera-Apex-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/bunnycore/Chimera-Apex-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.
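For multi-part quants that are simple byte splits (the style described in those READMEs), the parts can be concatenated in order; the minimal sketch below uses hypothetical part filenames and is equivalent to `cat part1 part2 > model.gguf`. Shards produced by llama.cpp's gguf-split tool need that tool to merge instead.
```python
import glob
import shutil

# Hypothetical filenames; adjust the pattern to the actual part names in the repo.
parts = sorted(glob.glob("Chimera-Apex-7B.Q8_0.gguf.part*"))

with open("Chimera-Apex-7B.Q8_0.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as chunk:
            shutil.copyfileobj(chunk, merged)   # append each part in order
```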
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Chimera-Apex-7B-GGUF/resolve/main/Chimera-Apex-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Chimera-Apex-7B-GGUF/resolve/main/Chimera-Apex-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Chimera-Apex-7B-GGUF/resolve/main/Chimera-Apex-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Chimera-Apex-7B-GGUF/resolve/main/Chimera-Apex-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Chimera-Apex-7B-GGUF/resolve/main/Chimera-Apex-7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Chimera-Apex-7B-GGUF/resolve/main/Chimera-Apex-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Chimera-Apex-7B-GGUF/resolve/main/Chimera-Apex-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Chimera-Apex-7B-GGUF/resolve/main/Chimera-Apex-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Chimera-Apex-7B-GGUF/resolve/main/Chimera-Apex-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Chimera-Apex-7B-GGUF/resolve/main/Chimera-Apex-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Chimera-Apex-7B-GGUF/resolve/main/Chimera-Apex-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Chimera-Apex-7B-GGUF/resolve/main/Chimera-Apex-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Chimera-Apex-7B-GGUF/resolve/main/Chimera-Apex-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Chimera-Apex-7B-GGUF/resolve/main/Chimera-Apex-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
bartowski/Einstein-v6.1-Llama3-8B-old-GGUF
|
bartowski
| 2024-05-02T20:00:19Z | 380 | 13 | null |
[
"gguf",
"axolotl",
"generated_from_trainer",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"science",
"physics",
"chemistry",
"biology",
"math",
"llama",
"llama3",
"text-generation",
"en",
"dataset:allenai/ai2_arc",
"dataset:camel-ai/physics",
"dataset:camel-ai/chemistry",
"dataset:camel-ai/biology",
"dataset:camel-ai/math",
"dataset:metaeval/reclor",
"dataset:openbookqa",
"dataset:mandyyyyii/scibench",
"dataset:derek-thomas/ScienceQA",
"dataset:TIGER-Lab/ScienceEval",
"dataset:jondurbin/airoboros-3.2",
"dataset:LDJnr/Capybara",
"dataset:Cot-Alpaca-GPT4-From-OpenHermes-2.5",
"dataset:STEM-AI-mtl/Electrical-engineering",
"dataset:knowrohit07/saraswati-stem",
"dataset:sablo/oasst2_curated",
"dataset:lmsys/lmsys-chat-1m",
"dataset:TIGER-Lab/MathInstruct",
"dataset:bigbio/med_qa",
"dataset:meta-math/MetaMathQA-40K",
"dataset:piqa",
"dataset:scibench",
"dataset:sciq",
"dataset:Open-Orca/SlimOrca",
"dataset:migtissera/Synthia-v1.3",
"dataset:allenai/WildChat",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:openchat/openchat_sharegpt4_dataset",
"dataset:teknium/GPTeacher-General-Instruct",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:totally-not-an-llm/EverythingLM-data-V3",
"dataset:HuggingFaceH4/no_robots",
"dataset:OpenAssistant/oasst_top1_2023-08-25",
"dataset:WizardLM/WizardLM_evol_instruct_70k",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:other",
"region:us"
] |
text-generation
| 2024-04-23T18:35:19Z |
---
language:
- en
license: other
tags:
- axolotl
- generated_from_trainer
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- science
- physics
- chemistry
- biology
- math
- llama
- llama3
base_model: meta-llama/Meta-Llama-3-8B
datasets:
- allenai/ai2_arc
- camel-ai/physics
- camel-ai/chemistry
- camel-ai/biology
- camel-ai/math
- metaeval/reclor
- openbookqa
- mandyyyyii/scibench
- derek-thomas/ScienceQA
- TIGER-Lab/ScienceEval
- jondurbin/airoboros-3.2
- LDJnr/Capybara
- Cot-Alpaca-GPT4-From-OpenHermes-2.5
- STEM-AI-mtl/Electrical-engineering
- knowrohit07/saraswati-stem
- sablo/oasst2_curated
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- bigbio/med_qa
- meta-math/MetaMathQA-40K
- openbookqa
- piqa
- metaeval/reclor
- derek-thomas/ScienceQA
- scibench
- sciq
- Open-Orca/SlimOrca
- migtissera/Synthia-v1.3
- TIGER-Lab/ScienceEval
- allenai/WildChat
- microsoft/orca-math-word-problems-200k
- openchat/openchat_sharegpt4_dataset
- teknium/GPTeacher-General-Instruct
- m-a-p/CodeFeedback-Filtered-Instruction
- totally-not-an-llm/EverythingLM-data-V3
- HuggingFaceH4/no_robots
- OpenAssistant/oasst_top1_2023-08-25
- WizardLM/WizardLM_evol_instruct_70k
quantized_by: bartowski
pipeline_tag: text-generation
---
# DEPRECATED
Download this version with the BPE tokenizer fixes instead: https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF
## Llamacpp imatrix Quantizations of Einstein-v6.1-Llama3-8B
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2714">b2714</a> for quantization.
Original model: https://huggingface.co/Weyaxi/Einstein-v6.1-Llama3-8B
All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
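For illustration only, the template can be filled in with plain string formatting before the text is passed to whichever GGUF runtime you use; the system prompt and question below are placeholders:
```python
def build_chatml_prompt(system_prompt: str, user_prompt: str) -> str:
    # Mirrors the ChatML-style template shown above, ending at the assistant turn.
    return (
        f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
        f"<|im_start|>user\n{user_prompt}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a helpful physics tutor.",
    "Explain time dilation in one short paragraph.",
)
print(prompt)
```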
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Einstein-v6.1-Llama3-8B-Q8_0.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. |
| [Einstein-v6.1-Llama3-8B-Q6_K.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. |
| [Einstein-v6.1-Llama3-8B-Q5_K_M.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. |
| [Einstein-v6.1-Llama3-8B-Q5_K_S.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. |
| [Einstein-v6.1-Llama3-8B-Q4_K_M.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Einstein-v6.1-Llama3-8B-Q4_K_S.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. |
| [Einstein-v6.1-Llama3-8B-IQ4_NL.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [Einstein-v6.1-Llama3-8B-IQ4_XS.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Einstein-v6.1-Llama3-8B-Q3_K_L.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. |
| [Einstein-v6.1-Llama3-8B-Q3_K_M.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. |
| [Einstein-v6.1-Llama3-8B-IQ3_M.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Einstein-v6.1-Llama3-8B-IQ3_S.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [Einstein-v6.1-Llama3-8B-Q3_K_S.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. |
| [Einstein-v6.1-Llama3-8B-IQ3_XS.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Einstein-v6.1-Llama3-8B-IQ3_XXS.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Einstein-v6.1-Llama3-8B-Q2_K.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. |
| [Einstein-v6.1-Llama3-8B-IQ2_M.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Einstein-v6.1-Llama3-8B-IQ2_S.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. |
| [Einstein-v6.1-Llama3-8B-IQ2_XS.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. |
| [Einstein-v6.1-Llama3-8B-IQ2_XXS.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. |
| [Einstein-v6.1-Llama3-8B-IQ1_M.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. |
| [Einstein-v6.1-Llama3-8B-IQ1_S.gguf](https://huggingface.co/bartowski/Einstein-v6.1-Llama3-8B-GGUF/blob/main/Einstein-v6.1-Llama3-8B-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. |
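If you would rather script the download than click through the browser, the `huggingface_hub` Python library can fetch a single quant file. A minimal sketch; swap the filename for whichever quant from the table fits your hardware:
```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="bartowski/Einstein-v6.1-Llama3-8B-GGUF",
    filename="Einstein-v6.1-Llama3-8B-Q4_K_M.gguf",  # pick the quant you chose from the table
    local_dir=".",
)
print(path)
```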
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD cards, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
lmstudio-community/starcoder2-15b-instruct-v0.1-GGUF
|
lmstudio-community
| 2024-04-30T14:36:18Z | 380 | 2 |
transformers
|
[
"transformers",
"gguf",
"code",
"text-generation",
"dataset:bigcode/self-oss-instruct-sc2-exec-filter-50k",
"base_model:bigcode/starcoder2-15b",
"license:bigcode-openrail-m",
"model-index",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-30T13:58:47Z |
---
pipeline_tag: text-generation
base_model: bigcode/starcoder2-15b
datasets:
- bigcode/self-oss-instruct-sc2-exec-filter-50k
license: bigcode-openrail-m
library_name: transformers
tags:
- code
model-index:
- name: starcoder2-15b-instruct-v0.1
results:
- task:
type: text-generation
dataset:
name: LiveCodeBench (code generation)
type: livecodebench-codegeneration
metrics:
- type: pass@1
value: 20.4
- task:
type: text-generation
dataset:
name: LiveCodeBench (self repair)
type: livecodebench-selfrepair
metrics:
- type: pass@1
value: 20.9
- task:
type: text-generation
dataset:
name: LiveCodeBench (test output prediction)
type: livecodebench-testoutputprediction
metrics:
- type: pass@1
value: 29.8
- task:
type: text-generation
dataset:
name: LiveCodeBench (code execution)
type: livecodebench-codeexecution
metrics:
- type: pass@1
value: 28.1
- task:
type: text-generation
dataset:
name: HumanEval
type: humaneval
metrics:
- type: pass@1
value: 72.6
- task:
type: text-generation
dataset:
name: HumanEval+
type: humanevalplus
metrics:
- type: pass@1
value: 63.4
- task:
type: text-generation
dataset:
name: MBPP
type: mbpp
metrics:
- type: pass@1
value: 75.2
- task:
type: text-generation
dataset:
name: MBPP+
type: mbppplus
metrics:
- type: pass@1
value: 61.2
- task:
type: text-generation
dataset:
name: DS-1000
type: ds-1000
metrics:
- type: pass@1
value: 40.6
quantized_by: bartowski
lm_studio:
param_count: 15b
use_case: coding
release_date: 30-04-2024
model_creator: BigCode
prompt_template: Starcoder2 Instruct
system_prompt: none
base_model: starcoder2
original_repo: bigcode/starcoder2-15b-instruct-v0.1
---
## 💫 Community Model> Starcoder2 15B Instruct v0.1 by BigCode
*👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator:** [bigcode](https://huggingface.co/bigcode)<br>
**Original model**: [starcoder2-15b-instruct-v0.1](https://huggingface.co/bigcode/starcoder2-15b-instruct-v0.1)<br>
**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b2756](https://github.com/ggerganov/llama.cpp/releases/tag/b2756)<br>
## Model Summary:
Starcoder2-15B-Instruct-v0.1 is self-proclaimed to be the first entirely self-aligned code model with a fully permissive and transparent pipeline.<br>
This model is meant to be used for coding instructions in a <b>single turn</b>; other styles may result in less accurate responses.<br>
Starcoder2 has been primarily finetuned for Python code generation and as such should primarily be used for Python tasks.
## Prompt Template:
Choose the 'Starcoder2 Instruct' preset in your LM Studio.
Under the hood, the model will see a prompt that's formatted like so:
```
<|endoftext|>You are an exceptionally intelligent coding assistant that consistently delivers accurate and reliable responses to user instructions.
### Instruction
{prompt}
### Response
<|endoftext|>
```
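Outside LM Studio, the same layout can be reproduced with plain string formatting before the prompt is passed to your GGUF runtime. A minimal sketch; the instruction text is a placeholder, and the trailing `<|endoftext|>` in the template above is treated here as the stop token rather than part of the input (an assumption):
```python
SYSTEM = (
    "You are an exceptionally intelligent coding assistant that consistently "
    "delivers accurate and reliable responses to user instructions."
)

def build_prompt(instruction: str) -> str:
    # Mirrors the 'Starcoder2 Instruct' preset shown above.
    return f"<|endoftext|>{SYSTEM}\n### Instruction\n{instruction}\n### Response\n"

print(build_prompt("Write a Python function that checks whether a string is a palindrome."))
```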
## Use case and examples
This model should be used for single turn coding related instructions.
## Coding with requirements

## Creating unit tests

## More coding examples

## Technical Details
Starcoder2 15B Instruct was trained primarily on Python code generation tasks. Starcoder2 15B (non-instruct) was used to generate thousands of instruction-response pairs, and those pairs were then used to fine-tune an instruct model without human annotation or distilled data.
The dataset created is open and available: [self-oss-instruct-sc2-exec-filter-50k](https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-exec-filter-50k)
And the code used to create the self-alignment has been shared here: [starcoder2-self-align](https://github.com/bigcode-project/starcoder2-self-align)
The results of the self-alignment are extremely promising, with significantly higher scores across all coding benchmarks, which is a great sign for future progress.
More details on their model card [here](https://huggingface.co/bigcode/starcoder2-15b-instruct-v0.1)
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
🙏 Special thanks to [Kalomaze](https://github.com/kalomaze) for his dataset (linked [here](https://github.com/ggerganov/llama.cpp/discussions/5263)) that was used for calculating the imatrix for these quants, which improves the overall quality!
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
|
LiteLLMs/Meta-Llama-3-120B-Instruct-GGUF
|
LiteLLMs
| 2024-05-30T17:56:55Z | 380 | 0 | null |
[
"gguf",
"merge",
"mergekit",
"lazymergekit",
"GGUF",
"base_model:meta-llama/Meta-Llama-3-70B-Instruct",
"license:other",
"region:us"
] | null | 2024-05-06T11:11:27Z |
---
license: other
tags:
- merge
- mergekit
- lazymergekit
- GGUF
base_model:
- meta-llama/Meta-Llama-3-70B-Instruct
- meta-llama/Meta-Llama-3-70B-Instruct
- meta-llama/Meta-Llama-3-70B-Instruct
- meta-llama/Meta-Llama-3-70B-Instruct
- meta-llama/Meta-Llama-3-70B-Instruct
- meta-llama/Meta-Llama-3-70B-Instruct
- meta-llama/Meta-Llama-3-70B-Instruct
quantized_by: andrijdavid
---
# Meta-Llama-3-120B-Instruct-GGUF
- Original model: [Meta-Llama-3-120B-Instruct](https://huggingface.co/mlabonne/Meta-Llama-3-120B-Instruct)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Meta-Llama-3-120B-Instruct](https://huggingface.co/mlabonne/Meta-Llama-3-120B-Instruct).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
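These bits-per-weight figures follow directly from the block layouts. A quick sanity check for GGML_TYPE_Q4_K; the 4-bit weights and 6-bit scales/mins come from the list above, while the per-super-block fp16 scale and min are an assumed detail of the layout:
```python
# Back-of-the-envelope bits-per-weight for GGML_TYPE_Q4_K.
weights = 8 * 32                    # 8 blocks of 32 weights per super-block
weight_bits = weights * 4           # 4-bit quantized weights
scale_min_bits = 8 * 6 + 8 * 6      # 6-bit scale and 6-bit min per block
header_bits = 16 + 16               # fp16 super-block scale and min (assumed)
print((weight_bits + scale_min_bits + header_bits) / weights)  # 4.5 bpw
```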
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: LiteLLMs/Meta-Llama-3-120B-Instruct-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download LiteLLMs/Meta-Llama-3-120B-Instruct-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download LiteLLMs/Meta-Llama-3-120B-Instruct-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install huggingface_hub[hf_transfer]
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/Meta-Llama-3-120B-Instruct-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<PROMPT>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Meta-Llama-3-120B-Instruct

# Meta-Llama-3-120B-Instruct
Meta-Llama-3-120B-Instruct is a [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) self-merge made with [MergeKit](https://github.com/arcee-ai/mergekit/tree/main).
It was inspired by large merges like:
- [alpindale/goliath-120b](https://huggingface.co/alpindale/goliath-120b)
- [nsfwthrowitaway69/Venus-120b-v1.0](https://huggingface.co/nsfwthrowitaway69/Venus-120b-v1.0)
- [cognitivecomputations/MegaDolphin-120b](https://huggingface.co/cognitivecomputations/MegaDolphin-120b)
- [wolfram/miquliz-120b-v2.0](https://huggingface.co/wolfram/miquliz-120b-v2.0).
Special thanks to [Eric Hartford](https://huggingface.co/ehartford) for both inspiring and evaluating this model and to [Charles Goddard](https://huggingface.co/chargoddard) for creating MergeKit.
## 🔍 Applications
I recommend using this model for creative writing. It uses the Llama 3 chat template with a default context window of 8K (can be extended with rope theta).
Check the examples in the evaluation section to get an idea of its performance. The model is generally quite unhinged but has a good writing style. It sometimes outputs typos and is a big fan of uppercase.
## ⚡ Quantized models
Thanks to [Bartowski](https://huggingface.co/bartowski), [elinas](https://huggingface.co/elinas), the [mlx-community](https://huggingface.co/mlx-community) and others for providing these models.
* **GGUF**: https://huggingface.co/lmstudio-community/Meta-Llama-3-120B-Instruct-GGUF
* **EXL2**: https://huggingface.co/elinas/Meta-Llama-3-120B-Instruct-4.0bpw-exl2
* **mlx**: https://huggingface.co/mlx-community/Meta-Llama-3-120B-Instruct-4bit
## 🏆 Evaluation
This model is great for creative writing but struggles in other tasks. I'd say use it with caution and don't expect it to outperform GPT-4 outside of some very specific use cases.
* **X thread by Eric Hartford (creative writing)**: https://twitter.com/erhartford/status/1787050962114207886
* **X thread by Daniel Kaiser (creative writing)**: https://twitter.com/spectate_or/status/1787257261309518101
* **X thread by Simon (reasoning)**: https://twitter.com/NewDigitalEdu/status/1787403266894020893
* **r/LocalLLaMa**: https://www.reddit.com/r/LocalLLaMA/comments/1cl525q/goliath_lovers_where_is_the_feedback_about/
### Creative Writing
Thanks to [Sam Paech](https://huggingface.co/sam-paech) for evaluating this model and sending me his outputs!

## 🧩 Configuration
```yaml
slices:
- sources:
- layer_range: [0, 20]
model: meta-llama/Meta-Llama-3-70B-Instruct
- sources:
- layer_range: [10, 30]
model: meta-llama/Meta-Llama-3-70B-Instruct
- sources:
- layer_range: [20, 40]
model: meta-llama/Meta-Llama-3-70B-Instruct
- sources:
- layer_range: [30, 50]
model: meta-llama/Meta-Llama-3-70B-Instruct
- sources:
- layer_range: [40, 60]
model: meta-llama/Meta-Llama-3-70B-Instruct
- sources:
- layer_range: [50, 70]
model: meta-llama/Meta-Llama-3-70B-Instruct
- sources:
- layer_range: [60, 80]
model: meta-llama/Meta-Llama-3-70B-Instruct
merge_method: passthrough
dtype: float16
```
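As a back-of-the-envelope check on the name, the passthrough merge stacks seven overlapping 20-layer slices (140 layers) of the 80-layer 70B base. Assuming parameter count scales roughly linearly with layer count (ignoring embeddings and the LM head), that lands near the advertised 120B:
```python
base_layers, base_params_b = 80, 70   # Meta-Llama-3-70B-Instruct
merged_layers = 7 * 20                # seven 20-layer slices from the config above
print(merged_layers / base_layers * base_params_b)  # ~122.5B parameters
```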
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mlabonne/Meta-Llama-3-120B-Instruct"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
<!-- original-model-card end -->
|
Jiar/Llama-3-8B-Chinese
|
Jiar
| 2024-06-03T18:56:35Z | 380 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-16T08:13:19Z |
---
license: apache-2.0
---
|
RichardErkhov/circulus_-_Llama-2-7b-instruct-gguf
|
RichardErkhov
| 2024-05-26T15:23:12Z | 380 | 0 | null |
[
"gguf",
"region:us"
] | null | 2024-05-26T12:52:16Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-2-7b-instruct - GGUF
- Model creator: https://huggingface.co/circulus/
- Original model: https://huggingface.co/circulus/Llama-2-7b-instruct/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-2-7b-instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/circulus_-_Llama-2-7b-instruct-gguf/blob/main/Llama-2-7b-instruct.Q2_K.gguf) | Q2_K | 2.36GB |
| [Llama-2-7b-instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/circulus_-_Llama-2-7b-instruct-gguf/blob/main/Llama-2-7b-instruct.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [Llama-2-7b-instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/circulus_-_Llama-2-7b-instruct-gguf/blob/main/Llama-2-7b-instruct.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [Llama-2-7b-instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/circulus_-_Llama-2-7b-instruct-gguf/blob/main/Llama-2-7b-instruct.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [Llama-2-7b-instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/circulus_-_Llama-2-7b-instruct-gguf/blob/main/Llama-2-7b-instruct.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [Llama-2-7b-instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/circulus_-_Llama-2-7b-instruct-gguf/blob/main/Llama-2-7b-instruct.Q3_K.gguf) | Q3_K | 3.07GB |
| [Llama-2-7b-instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/circulus_-_Llama-2-7b-instruct-gguf/blob/main/Llama-2-7b-instruct.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [Llama-2-7b-instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/circulus_-_Llama-2-7b-instruct-gguf/blob/main/Llama-2-7b-instruct.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [Llama-2-7b-instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/circulus_-_Llama-2-7b-instruct-gguf/blob/main/Llama-2-7b-instruct.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [Llama-2-7b-instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/circulus_-_Llama-2-7b-instruct-gguf/blob/main/Llama-2-7b-instruct.Q4_0.gguf) | Q4_0 | 3.56GB |
| [Llama-2-7b-instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/circulus_-_Llama-2-7b-instruct-gguf/blob/main/Llama-2-7b-instruct.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [Llama-2-7b-instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/circulus_-_Llama-2-7b-instruct-gguf/blob/main/Llama-2-7b-instruct.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [Llama-2-7b-instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/circulus_-_Llama-2-7b-instruct-gguf/blob/main/Llama-2-7b-instruct.Q4_K.gguf) | Q4_K | 3.8GB |
| [Llama-2-7b-instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/circulus_-_Llama-2-7b-instruct-gguf/blob/main/Llama-2-7b-instruct.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [Llama-2-7b-instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/circulus_-_Llama-2-7b-instruct-gguf/blob/main/Llama-2-7b-instruct.Q4_1.gguf) | Q4_1 | 3.95GB |
| [Llama-2-7b-instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/circulus_-_Llama-2-7b-instruct-gguf/blob/main/Llama-2-7b-instruct.Q5_0.gguf) | Q5_0 | 4.33GB |
| [Llama-2-7b-instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/circulus_-_Llama-2-7b-instruct-gguf/blob/main/Llama-2-7b-instruct.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [Llama-2-7b-instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/circulus_-_Llama-2-7b-instruct-gguf/blob/main/Llama-2-7b-instruct.Q5_K.gguf) | Q5_K | 4.45GB |
| [Llama-2-7b-instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/circulus_-_Llama-2-7b-instruct-gguf/blob/main/Llama-2-7b-instruct.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [Llama-2-7b-instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/circulus_-_Llama-2-7b-instruct-gguf/blob/main/Llama-2-7b-instruct.Q5_1.gguf) | Q5_1 | 4.72GB |
| [Llama-2-7b-instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/circulus_-_Llama-2-7b-instruct-gguf/blob/main/Llama-2-7b-instruct.Q6_K.gguf) | Q6_K | 5.15GB |
| [Llama-2-7b-instruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/circulus_-_Llama-2-7b-instruct-gguf/blob/main/Llama-2-7b-instruct.Q8_0.gguf) | Q8_0 | 6.67GB |
Original model description:
---
license: mit
---
|
RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-guanaco-gguf
|
RichardErkhov
| 2024-05-26T19:54:02Z | 380 | 0 | null |
[
"gguf",
"region:us"
] | null | 2024-05-26T17:47:27Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama2-7b-chat-hf-guanaco - GGUF
- Model creator: https://huggingface.co/TheTravellingEngineer/
- Original model: https://huggingface.co/TheTravellingEngineer/llama2-7b-chat-hf-guanaco/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama2-7b-chat-hf-guanaco.Q2_K.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-guanaco-gguf/blob/main/llama2-7b-chat-hf-guanaco.Q2_K.gguf) | Q2_K | 2.36GB |
| [llama2-7b-chat-hf-guanaco.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-guanaco-gguf/blob/main/llama2-7b-chat-hf-guanaco.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [llama2-7b-chat-hf-guanaco.IQ3_S.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-guanaco-gguf/blob/main/llama2-7b-chat-hf-guanaco.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [llama2-7b-chat-hf-guanaco.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-guanaco-gguf/blob/main/llama2-7b-chat-hf-guanaco.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [llama2-7b-chat-hf-guanaco.IQ3_M.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-guanaco-gguf/blob/main/llama2-7b-chat-hf-guanaco.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [llama2-7b-chat-hf-guanaco.Q3_K.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-guanaco-gguf/blob/main/llama2-7b-chat-hf-guanaco.Q3_K.gguf) | Q3_K | 3.07GB |
| [llama2-7b-chat-hf-guanaco.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-guanaco-gguf/blob/main/llama2-7b-chat-hf-guanaco.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [llama2-7b-chat-hf-guanaco.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-guanaco-gguf/blob/main/llama2-7b-chat-hf-guanaco.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [llama2-7b-chat-hf-guanaco.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-guanaco-gguf/blob/main/llama2-7b-chat-hf-guanaco.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [llama2-7b-chat-hf-guanaco.Q4_0.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-guanaco-gguf/blob/main/llama2-7b-chat-hf-guanaco.Q4_0.gguf) | Q4_0 | 3.56GB |
| [llama2-7b-chat-hf-guanaco.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-guanaco-gguf/blob/main/llama2-7b-chat-hf-guanaco.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [llama2-7b-chat-hf-guanaco.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-guanaco-gguf/blob/main/llama2-7b-chat-hf-guanaco.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [llama2-7b-chat-hf-guanaco.Q4_K.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-guanaco-gguf/blob/main/llama2-7b-chat-hf-guanaco.Q4_K.gguf) | Q4_K | 3.8GB |
| [llama2-7b-chat-hf-guanaco.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-guanaco-gguf/blob/main/llama2-7b-chat-hf-guanaco.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [llama2-7b-chat-hf-guanaco.Q4_1.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-guanaco-gguf/blob/main/llama2-7b-chat-hf-guanaco.Q4_1.gguf) | Q4_1 | 3.95GB |
| [llama2-7b-chat-hf-guanaco.Q5_0.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-guanaco-gguf/blob/main/llama2-7b-chat-hf-guanaco.Q5_0.gguf) | Q5_0 | 4.33GB |
| [llama2-7b-chat-hf-guanaco.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-guanaco-gguf/blob/main/llama2-7b-chat-hf-guanaco.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [llama2-7b-chat-hf-guanaco.Q5_K.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-guanaco-gguf/blob/main/llama2-7b-chat-hf-guanaco.Q5_K.gguf) | Q5_K | 4.45GB |
| [llama2-7b-chat-hf-guanaco.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-guanaco-gguf/blob/main/llama2-7b-chat-hf-guanaco.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [llama2-7b-chat-hf-guanaco.Q5_1.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-guanaco-gguf/blob/main/llama2-7b-chat-hf-guanaco.Q5_1.gguf) | Q5_1 | 4.72GB |
| [llama2-7b-chat-hf-guanaco.Q6_K.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-guanaco-gguf/blob/main/llama2-7b-chat-hf-guanaco.Q6_K.gguf) | Q6_K | 5.15GB |
| [llama2-7b-chat-hf-guanaco.Q8_0.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-guanaco-gguf/blob/main/llama2-7b-chat-hf-guanaco.Q8_0.gguf) | Q8_0 | 6.67GB |
Original model description:
The base model is Meta's Llama-2-7b-chat-hf. It was fine-tuned with SFT on the Guanaco dataset, and the model prompt is similar to the original Guanaco model's.
This repo contains the merged fp16 model.
**Legal Disclaimer: This model is bound by the usage restrictions of the original Llama-2 model, and comes with no warranty or guarantees of any kind.**
---
- license:
- llama2 <br>
- datasets:
- timdettmers/openassistant-guanaco <br>
- language:
- en <br>
- reference: https://gist.github.com/younesbelkada/9f7f75c94bdc1981c8ca5cc937d4a4da
---
|
D1rtyB1rd/Dirty-Alice
|
D1rtyB1rd
| 2024-06-03T20:15:07Z | 380 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"nsfw",
"conversational",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-06-03T16:06:24Z |
---
license: mit
language:
- en
tags:
- nsfw
---
Alice is a playful, empathetic, mischievous girlfriend.
A 0.5B Qwen model, llamafied. Be kind, she is tiny.

Alice was pretrained using open datasets for assistant AI models, followed by mixed training on open erotic story texts, modified so that the main female characters are named Alice and the main male characters are named User.
This was mixed with training on open multi-round chat datasets, therapy datasets, and modified, selected RP datasets. The RP datasets were filtered for female characters, who were renamed to Alice.
|
kalo-team/qwen-4b-10k-WSD-CEdiff
|
kalo-team
| 2024-06-05T08:40:47Z | 380 | 1 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-06-05T08:37:31Z |
Entry not found
|
John6666/jac-nsfw-v2f-sdxl
|
John6666
| 2024-06-20T22:58:04Z | 380 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"pony",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-06-20T22:53:22Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- pony
---
Original model is [here](https://civitai.com/models/518937/jac-nsfw?modelVersionId=586617).
|
versae/Gromenauer-7B-Instruct-Q4_K_M-GGUF
|
versae
| 2024-06-24T21:52:31Z | 380 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"es",
"ca",
"dataset:bertin-project/bonanza-hf",
"dataset:bertin-project/zenobia-instruct-hf",
"base_model:bertin-project/Gromenauer-7B-Instruct",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2024-06-24T21:52:13Z |
---
base_model: bertin-project/Gromenauer-7B-Instruct
datasets:
- bertin-project/bonanza-hf
- bertin-project/zenobia-instruct-hf
language:
- es
- ca
license: apache-2.0
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# versae/Gromenauer-7B-Instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`bertin-project/Gromenauer-7B-Instruct`](https://huggingface.co/bertin-project/Gromenauer-7B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/bertin-project/Gromenauer-7B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo versae/Gromenauer-7B-Instruct-Q4_K_M-GGUF --hf-file gromenauer-7b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo versae/Gromenauer-7B-Instruct-Q4_K_M-GGUF --hf-file gromenauer-7b-instruct-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo versae/Gromenauer-7B-Instruct-Q4_K_M-GGUF --hf-file gromenauer-7b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo versae/Gromenauer-7B-Instruct-Q4_K_M-GGUF --hf-file gromenauer-7b-instruct-q4_k_m.gguf -c 2048
```
|
deepset/gbert-base-germandpr-question_encoder
|
deepset
| 2023-05-05T06:59:31Z | 379 | 5 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"dpr",
"feature-extraction",
"exbert",
"de",
"dataset:deepset/germandpr",
"license:mit",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language: de
datasets:
- deepset/germandpr
license: mit
thumbnail: https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg
tags:
- exbert
---

## Overview
**Language model:** gbert-base-germandpr
**Language:** German
**Training data:** GermanDPR train set (~ 56MB)
**Eval data:** GermanDPR test set (~ 6MB)
**Infrastructure**: 4x V100 GPU
**Published**: Apr 26th, 2021
## Details
- We trained a dense passage retrieval model with two gbert-base models as encoders of questions and passages.
- The dataset is GermanDPR, a new, German language dataset, which we hand-annotated and published [online](https://deepset.ai/germanquad).
- It comprises 9275 question/answer pairs in the training set and 1025 pairs in the test set.
For each pair, there are one positive context and three hard negative contexts.
- As the basis of the training data, we used our hand-annotated GermanQuAD dataset as positive samples and generated hard negative samples from the latest German Wikipedia dump (6GB of raw txt files).
- The data dump was cleaned with tailored scripts, leading to 2.8 million indexed passages from German Wikipedia.
See https://deepset.ai/germanquad for more details and dataset download.
## Hyperparameters
```
batch_size = 40
n_epochs = 20
num_training_steps = 4640
num_warmup_steps = 460
max_seq_len = 32 tokens for question encoder and 300 tokens for passage encoder
learning_rate = 1e-6
lr_schedule = LinearWarmup
embeds_dropout_prob = 0.1
num_hard_negatives = 2
```
## Performance
During training, we monitored the in-batch average rank and the loss and evaluated different batch sizes, numbers of epochs, and number of hard negatives on a dev set split from the train set.
The dev split contained 1030 question/answer pairs.
Even without thorough hyperparameter tuning, we observed quite stable learning. Multiple restarts with different seeds produced quite similar results.
Note that the in-batch average rank is influenced by settings for batch size and number of hard negatives. A smaller number of hard negatives makes the task easier.
After fixing the hyperparameters we trained the model on the full GermanDPR train set.
We further evaluated the retrieval performance of the trained model on the full German Wikipedia with the GermanDPR test set as labels. To this end, we converted the GermanDPR test set to SQuAD format. The DPR model drastically outperforms the BM25 baseline with regard to recall@k.

## Usage
### In haystack
You can load the model in [haystack](https://github.com/deepset-ai/haystack/) as a retriever for doing QA at scale:
```python
from haystack.nodes import DensePassageRetriever  # import path for Haystack 1.x

retriever = DensePassageRetriever(
    document_store=document_store,  # an existing DocumentStore instance, e.g. FAISSDocumentStore
    query_embedding_model="deepset/gbert-base-germandpr-question_encoder",
    passage_embedding_model="deepset/gbert-base-germandpr-ctx_encoder",
)
```
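### With transformers
If you only need query embeddings outside of haystack, the checkpoint can also be loaded through the `transformers` DPR classes. This is a minimal sketch and covers the question-encoder half only; passages go through the separate `deepset/gbert-base-germandpr-ctx_encoder` model:
```python
from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer

model_name = "deepset/gbert-base-germandpr-question_encoder"
tokenizer = DPRQuestionEncoderTokenizer.from_pretrained(model_name)
model = DPRQuestionEncoder.from_pretrained(model_name)

# Encode a German question into a dense vector for retrieval.
inputs = tokenizer("Wie heißt die Hauptstadt von Deutschland?", return_tensors="pt")
query_embedding = model(**inputs).pooler_output  # shape: (1, hidden_size)
print(query_embedding.shape)
```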
## Authors
- Timo Möller: `timo.moeller [at] deepset.ai`
- Julian Risch: `julian.risch [at] deepset.ai`
- Malte Pietsch: `malte.pietsch [at] deepset.ai`
## About us

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
raynardj/classical-chinese-punctuation-guwen-biaodian
|
raynardj
| 2021-11-29T14:39:52Z | 379 | 21 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"ner",
"punctuation",
"古文",
"文言文",
"ancient",
"classical",
"zh",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
language:
- zh
tags:
- ner
- punctuation
- 古文
- 文言文
- ancient
- classical
widget:
- text: "郡邑置夫子庙于学以嵗时释奠盖自唐贞观以来未之或改我宋有天下因其制而损益之姑苏当浙右要区规模尤大更建炎戎马荡然无遗虽修学宫于荆榛瓦砾之余独殿宇未遑议也每春秋展礼于斋庐已则置不问殆为阙典今寳文阁直学士括苍梁公来牧之明年实绍兴十有一禩也二月上丁修祀既毕乃愓然自咎揖诸生而告之曰天子不以汝嘉为不肖俾再守兹土顾治民事神皆守之职惟是夫子之祀教化所基尤宜严且谨而拜跪荐祭之地卑陋乃尔其何以掲防妥灵汝嘉不敢避其责曩常去此弥年若有所负尚安得以罢輭自恕复累后人乎他日或克就绪愿与诸君落之于是谋之僚吏搜故府得遗材千枚取赢资以给其费鸠工庀役各举其任嵗月讫工民不与知像设礼器百用具修至于堂室廊序门牖垣墙皆一新之"
---
# Classical Chinese Punctuation
> You are welcome to visit [my GitHub project page for Classical Chinese poetry and prose to discuss and leave a ⭐️](https://github.com/raynardj/yuan). Please check the GitHub repository for more about the [model, hit 🌟 if you like it](https://github.com/raynardj/yuan)
* This model punctuates Classical (ancient) Chinese. You might find this task strange, but **many of my ancestors thought writing articles without punctuation was a brilliant idea** 🧐. What we have here are articles from books, letters, or inscriptions carved on stones where you can see no punctuation, just a long string of characters. As you can guess, NLP tech is usually a good tool to tackle this problem, and the entire pipeline can be borrowed from the usual **NER task**.
* Since many articles are already punctuated, labeled data is more than abundant 📚 after some regex operations. That's why this problem is pretty much low-hanging fruit.
* I guess anyone interested in this problem can read at least modern Chinese anyway.
# Classical Chinese (Guwen) Sentence Segmentation Model
> Given a string of unpunctuated Classical Chinese, the model inserts punctuation; more than twenty punctuation marks are currently supported.
## Other resources for Classical Chinese poetry and prose
* [Project source code 🌟, stars and PRs welcome](https://github.com/raynardj/yuan)
* [Cross-language search 🔎](https://huggingface.co/raynardj/xlsearch-cross-lang-search-zh-vs-classicical-cn)
* [Modern Chinese to Classical Chinese translation model ⛰](https://huggingface.co/raynardj/wenyanwen-chinese-translate-to-ancient)
* [Classical Chinese to modern Chinese translation model; the input can be unpunctuated sentences 🚀](https://huggingface.co/raynardj/wenyanwen-ancient-translate-to-modern)
* [Punctuation (sentence segmentation) model 🗡](https://huggingface.co/raynardj/classical-chinese-punctuation-guwen-biaodian)
* [Mood keywords and acrostic poetry generation 🤖](https://huggingface.co/raynardj/keywords-cangtou-chinese-poetry)
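## Usage
A minimal sketch with the `transformers` token-classification pipeline. The post-processing here assumes each predicted label is the punctuation mark to insert after the corresponding character; check the model's `id2label` mapping in its config before relying on that assumption:
```python
from transformers import pipeline

punctuate = pipeline(
    "token-classification",
    model="raynardj/classical-chinese-punctuation-guwen-biaodian",
)

text = "郡邑置夫子庙于学以嵗时释奠"  # an unpunctuated Classical Chinese snippet
for pred in punctuate(text):
    # Each prediction carries the character, the predicted label and a confidence score.
    print(pred["word"], pred["entity"], round(float(pred["score"]), 3))
```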
|
google/pegasus-x-large
|
google
| 2023-01-24T17:15:47Z | 379 | 20 |
transformers
|
[
"transformers",
"pytorch",
"pegasus_x",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-09-01T18:36:41Z |
Entry not found
|
volrath50/fantasy-card-diffusion
|
volrath50
| 2024-05-21T20:46:24Z | 379 | 101 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"text-to-image",
"image-to-image",
"art",
"magic-the-gathering",
"mtg",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-11-22T17:08:44Z |
---
language:
- en
license: creativeml-openrail-m
thumbnail: >-
https://huggingface.co/volrath50/fantasy-card-diffusion/resolve/main/collage_sd_jpg.jpg
tags:
- stable-diffusion
- text-to-image
- image-to-image
- art
- magic-the-gathering
- mtg
---
# fantasy-card-diffusion
### A comprehensive fine-tuned Stable Diffusion model for generating fantasy trading card style art, trained on all currently available Magic: the Gathering card art (~35k unique pieces of art) to 140,000 steps, using Stable Diffusion v1.5 as a base model. Trained on thousands of concepts, using tags from card data. Has a strong understanding of MTG Artists, planes, sets, colors, card types, creature types and much more.
<b>Prompts:</b> For best results, prompt the model with card information, <i><b>like you were writing out a custom MtG card</b></i>, with the phrase "MTG card art" and an art description
<b>Example:</b> "MTG card art, Fiery Merfolk, by Chris Rahn, 2021, creature - merfolk wizard, blue, red, ur, izzet, ravnica, gtp, rtr, grn, an izzet league merfolk, swimming in a ravnica river, casting a fire spell, flames, water, contrast, beautiful composition, intricate details"
<b>For a detailed guide on using the model, and how it was trained, scroll down below</b>

## Features
- Incorporate the styles of artists you know and love from Magic: the Gathering
- Produce art that looks like it is from a given MtG plane, set or year
- Create fantasy creatures in the style as they exist in Magic the Gathering
- Draw fantasy creature types that are unique to MtG (like Eldrazi)
- Use well known MTG characters (such as the planeswalkers)
- Draw real-world or non-MtG characters, in the style of MTG art
- Mix and match all of the above
## Updates
- <b>14 May 2024</b>: There should be a safetensors version of this model, finally. Get it here: https://huggingface.co/volrath50/fantasy-card-diffusion/blob/main/fantasycarddiffusion_140000.safetensors . I'd been meaning to convert the ancient (in AI terms) .ckpt file to safetensors for over a year, and finally a robot did it for me.
- With regards to an updated version of the model, I've trained two more versions, one on 2.1 in DEC 2022 and again on 1.5 in APR 2023, but never released them. This is partially due to neither of them turning out strictly better than my NOV 2022 model (they did some things better, but a lot of things worse; I think I mostly got lucky that the NOV 2022 model turned out as good as it is), but probably more so due to work, children, and having ADHD. I had wanted to try training onto SDXL, but never got around to even starting that.
- <b>13 Dec 2022</b>: I am currently training v2 of this model on top of Stable Diffusion 2.1 (512), using the Stable Tuner trainer. This has solved the cropping issue v1 had, and has allowed me to train on the full resolution, uncropped art from Scryfall. I expect to release v2 within the next few days, once I determine a good stopping point, and create new example images. v2 is currently at 25 Epochs (about 87,500 steps), and still showing good improvement each epoch.
## Using the Model
The model was trained on MtG card information, not art descriptions. This has the effect of preserving most non-MtG learning intact, allowing you to mix MtG card terms with an art description for great customization.
Each card was trained with card information pulled from Scryfall in the following format:
MTG card art, [Card Name], by [Artist], [year], [colors (words)], [colors (letters)], [card type], [rarity], [set name], [set code], [plane], [set type], [watermark], [mana cost], [security stamp], [power/toughness], [keywords], [promo type], [story spotlight]
A few examples of actual card data in this format:
MTG card art, Ayula, Queen Among Bears, by Jesper Ejsing, 2019, Green, G, Legendary Creature - Bear, rare, Modern Horizons, mh1, draft_innovation, 1G, None, 2/2, Fight,
MTG card art, Force of Will, by Terese Nielsen, 1996, Blue, U, Instant, uncommon, Alliances, all, Dominaria, Terisiare, Ice Age, expansion, 3UU,
To briefly explain some of the entries:
Every card art is tagged at the start with "MTG card art". Usually you want to use this. It does generalize the image a bit, however. Experiment with using it and not using it. Sometimes, if you are having trouble making something look distinctly "Tarkir" or something, taking off this tag can help de-generalize the art. In a similar fashion, the more general the tag is (ie, rarity, the word "legendary", etc.), the more of a generalizing effect it has on the image. Play around and find out.
Artist: Every artist name is preceded in the training data by the word "by", as in "by Mark Tedin". The model has a really good understanding of the styles of MtG artists - that's actually how this project started. My exposure to art, frankly, is mostly through Magic: the Gathering, and back in August, I was finding that the base Stable Diffusion model just did not have a great understanding of a lot of the artists I was trying to draw from, with some exceptions (Greg Rutkowski, of course, and Rebecca Guay are well represented in the base Stable Diffusion model.) Even if not trying to create MtG-style art, this model should be great for using the art styles of MtG artists. It also works really well to mix artist styles. See the "Innistrad Moon Goddess" example below, where I used six different artist styles with varying weights to create the look I was going for.
Set type: this is usually "expansion". Other possibilities are "core", "funny", and some other. You can check the Scryfall API documents for more information.
Security stamp: I translated some of these for ease of use. The main two of note are "acorn" and "universes beyond". There are a few other rare stamps, like one for the My Little Pony cards.
Story Spotlight: cards that are a story spotlight are tagged as such. This wasn't really worth including, and I'll probably take it out of a future version of the model.
Pretty much every tag from normal Stable Diffusion still works as expected (ie, extremely detailed, intricate details). I've found adding "beautiful composition" tends to make things look nice, but I'm sure everyone has their own set of personal tags they like to use - they should work with this model.
I like to write my prompts like an art description - you can see in the examples I made up below.
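If you prefer to run the checkpoint through `diffusers` instead of a WebUI, a minimal sketch looks like the one below. The scheduler is left at the pipeline default (an assumption; the examples further down were made in Automatic1111 with DPM++ 2S a Karras), and the step count and CFG follow the rough guidance given with those examples:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "volrath50/fantasy-card-diffusion", torch_dtype=torch.float16
).to("cuda")

prompt = (
    "MTG card art, Fiery Merfolk, by Chris Rahn, creature - merfolk wizard, "
    "blue, red, ur, izzet, ravnica, casting a fire spell, flames, water, "
    "beautiful composition, intricate details"
)
image = pipe(prompt, num_inference_steps=40, guidance_scale=11).images[0]
image.save("fiery_merfolk.png")
```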
## Example Images and Prompts
This model is trained on so many things, I'm just scratching the surface of figuring out what it can do. I thought it would be helpful to show a gallery of the sort of things I've been able to create with it.
Full generation parameters, seeds, etc, should be in the images. All these examples were made with Automatic1111's UI, fantasycarddiffusion-140000.ckpt, and the "DPM++2S a Karras" sampler. CFG varies - I find around 11 works as a good baseline. Most of these were done with around 40-50 steps - probably overkill.
<b>Note:</b> The example prompts were done with Automatic1111's WebUI, and use both prompt weighting and negative prompts, and will not work the same out of the box in the demo on this page.
# Ascended Eldrazi
(an Eldrazi that has somehow made his way to Theros, chilled out, and attained godhood)

MTG card art, ascended eldrazi, (by eric deschamps:1.1), (legendary enchantment creature - god:1.2) (eldrazi:1.2), colorless, theros, ths, jou, bng, thb, mythic, indestructible, annihilator, trample, a wise eldrazi titan emerging from the horizon, ascended to godhood, now looking serene, calm, divine, powerful, beautiful composition, emrakul, kozilek, ulamog, (sense of scale:1.2), sense of wonder, overwhelming, extremely detailed, intricate details
Negative prompt: weak, angry, scary, underwhelming, powerless
# Speedy Sliver
(a Mardu sliver that gives dash, on Tarkir)

MTG card art, speedy sliver, by John avon, Creature - (sliver:1.3), white, black, red, wbr, (Mardu:1.1), Khans of tarkir, ktk, dash, a fast sliver is speeding through the Mardu (steppe:1.1) landscape, beautiful composition
Negative prompt: human, humanoid, m14
# Taylor Swift, Wandering Bard
(self explanatory, Taylor Swift, as a bard, on Eldraine. Future Secret Lair?)

mtg card art, (Taylor Swift:1.2), wandering bard, legendary creature - human (bard:1.2), white, red, green, wrg, throne of eldraine, eld, by chris rahn, by volkan baga, by zoltan boros, armored bard taylor swift holding her weapons and instruments, beautiful composition, detailed, realistic fantasy painting, masterpiece, best quality,
Negative prompt: guitar, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry
# Emrakul, Compleated Doom
(The Phyrexians have sprung Emrakul from Innistrad's moon, compleated her, and are now attacking Strixhaven. It's a bad day to go to school.)

mtg card art, (emrakul:1.2), (compleated:1.1) doom, (by seb mckinnon:1.1), legendary creature - (phyrexian:1.1) (eldrazi:1.2) (horror:1.1), black, (strixhaven, arcivos:1.2), annihilator, (infect:1.2), 15/15, a (phyrexianized:1.1), compleated Emrakul, attacking (strixhaven school, university campus:1.2), stx, beautiful composition, detailed painting, (sense of scale:1.2), horror, dark, terrifying, eldritch horror, new phyrexia, nph, rise of the eldrazi, roe, extremely detailed, intricate details, masterpiece, best quality, emrakul, the aeons torn, emrakul, the promised end
Negative prompt: zendikar, water, ocean, funny, happy, optimistic, bright, tentacles, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, octopus, spikes, urchin, tentacles, arms, hands, legs
# Ayula, Ascended Bear
(Ayula, Queen Among Bears is now a Planeswalker, and has taken up residence in Kaladesh)

mtg card art, ayula, ascended (bear,:1.1) (by jesper ejsing,:1.1) green, g, legendary planeswalker - (bear:1.1), kaladesh, aether revolt, kld, aer, mythic, beautiful composition, a powerful bear planeswalker riding in a kaladesh (vehicle:1.1), looking very serious, intricate details, ayula, queen among bears, mh1, 2/2, 1g, masterpiece, best quality
Negative prompt: silly, human, humanoid, breasts, anthropomorphic, bipedal, funny, lowres, text, error, cropped, worst quality, low quality, normal quality, jpeg artifacts, watermark, blurry
# Neltharion, Deathwing
(My attempt at imagining Deathwing as a classic Elder Dragon Legend, with the World of Warcraft: Cataclysm Cinematic scene)

mtg card art, neltharion, (deathwing:1.2), (by edward beard, jr:1.1), 1994, legendary creature - (elder dragon:1.1), black, red, br, legends, leg, flying, trample, (world of warcraft cataclysm:1.2), large Firey flaming black dragon perched on stormwind castle rampart, roaring, breathing fire, flames, destruction, beautiful composition, extremely detailed, intricate details, masterpiece, best quality, terrifying, epic, cinematic
Negative prompt: lowres, text, error, cropped, worst quality, low quality, normal quality, jpeg artifacts, watermark, blurry, human, humanoid, deformed, mutant, (ugly:1.3)
# Harambe, Simian Champion of Tarkir
(Harambe did not die, his planeswalker spark ignited.)

(harambe:1.1), simian champion of tarkir, by magali villeneuve, legendary planeswalker - ape (monk:1.2), white, blue, red, wur, (jeskai:1.2), khans of tarkir, ktk, planeswalker harambe training with the jeskai, in a (monastery:1.2), in the mountains, wearing robes, martial arts, beautiful composition, extremely detailed, intricate details, masterpiece, best quality,
Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry
# Gabe Newell, Techno-Wizard
(Apologies to Gabe for the prompt - I wanted to make him look kind of like he does today, and the model kept trying to make him look like he did years ago.)

mtg card art, (gabe newell:1.3), techno-wizard, by zezhou chen, legendary creature - human wizard, blue, red, ur, izzet, ravnica, beautiful composition, (grey beard:1.1), (gray hair:1.1), elderly izzet techno wizard gabe newell is casting a spell, powerful, intelligent, epic composition, cinematic, dramatic, masterpiece, best quality, extremely detailed, intricate details
Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, young, silly, goofy, funny
# Luna, Blind Lunar Goddess of Innistrad's Moon
(Or maybe just Emrakul in disguise?)

mtg card art, luna, blind lunar goddess of innistrad's moon, legendary enchantment creature - (god:1.1), by howard lyon, (by chris rahn:1.1), (by seb mckinnon:1.1), (by terese nielsen:0.8), (by rebecca guay:0.8), (by richard kane ferguson:1.1), (innistrad:1.3), dark ascension, shadows over innistrad, inn, dka, soi, white, blue, black, wub, mythic, (blindfolded cute young woman:1.2) as smug (moon goddess:1.1), sitting on throne, dark lighting, full moon night, long white hair, pale skin, (silver blindfold:1.1), opalescent robes, ethereal, celestial, mysterious, beautiful composition
Negative prompt: orange
# Goblin Flamethrower
(the model can generate instants and sorceries, too)

mtg card art, (goblin flamethrower:1.1), red, r, instant, sorcery, onslaught, legions, scourge, ons, lgn, scg, a crazed, intense, happy goblin is shooting fire from a flamethrower, dangerous, reckless, beautiful composition
Negative prompt: (ugly:1.5), lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry
# Mox Topaz, Mirage
(If there had been a Mox Topaz in the Mirage block, drawing inspiration from Volkan Baga's Vintage Masters mox art)

Mtg card art, two african hands cupped together holding a (mox topaz:1.1) on a gold chain, in the middle of the palm, in front of the (African savannah:1.1), by Terese Nielsen, (by Volkan baga:1.1), by Dan Frazier, artifact, beautiful composition, jamuraa, mirage, mir, vma
Negative prompt: deformed, bad anatomy
# Mox Topaz, Alpha
(similarly, if there had been a sixth color of Magic, Orange, way back in Alpha)

(mox topaz:1.1) ( by dan frazier:1.2), artifact, rare, (limited edition alpha, lea:1.1), (1993,:1.1) a mox topaz on a chain
Negative prompt: lowres, cropped, worst quality, low quality, normal quality, jpeg artifacts, watermark, blurry
# Island (Phyrexian Toronto)
(the Phyrexians have invaded and compleated Toronto)

mtg card art, (toronto:1.2), (basic land - island:1.1), new phyrexia, nph, by adam paquette, (toronto skyline:1.2), (phyrexian:1.1), dark, horror, cn tower, rogers centre, extremely detailed, intricate details, masterpiece, best quality
# Ariel, the Little Mermaid
(Give it time and I'm sure there will be a secret lair.)

mtg card art, (ariel, the little mermaid:1.2), legendary creature - (merfolk:1.1), blue, white, red, uwr, (theros:1.1), by Greg Staples, beautiful composition, ariel sitting on a rock with waves, theros temple in background, masterpiece, best quality,
Negative prompt: green skin, blue skin, red tail, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry
# Batman, the Dark Knight
(Likewise, the Secret Lair is only a matter of time.)

mtg card art, batman, the dark knight, by justine cruz, by zoltan boros, legendary creature - human ninja, white, blue, black, (ub:1.1), (dimir,:1.1), (ravnica:1.1), (kamigawa:0.9), neon dynasty, neo, innistrad, investigate, ninjutsu, (at night:1.3), on roof, dark lighting, masterpiece, best quality,
Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry
## Training and dataset
Training was done on a dataset consisting of cropped, 512x512 versions of the art for every MtG card (about 35,000 images), each of which was tagged using a custom Python script with data pulled from Scryfall. Training was done with the Dreambooth extension for Automatic1111's wonderful UI, to 140,000 steps, over the course of a couple of days, on my 4090. I changed settings several times as I went, generally increasing batch size and lowering learning rate. At the moment, I am at batch size 10 with gradient accumulation 5 (an effective batch size of 50) and learning rate 4e-7, and that seems to be working well.
The result is a comprehensive model that has a good understanding of MTG artists, sets, planes, card types, creature types, years, colors, and more. If you have ever wondered what a Merfolk, drawn by Ron Spencer, would have looked like on Tarkir, as part of the Mardu clan, with dash, haste, and trample - this model can deliver what you want.
I have uploaded the Python script that I used to generate the training data set, which should get you uncropped images and identical (or near-identical) text files when used with the "unique artwork" json from https://scryfall.com/docs/api/bulk-data
The script is simple and could probably be improved and cleaned up. Prior to this project, I hadn't done any coding in 20 years - not since I was a teenager - and had never used Python before hacking this together with vague memories of Perl from 2000-2001, liberal use of GitHub Copilot, and lots of googling.
Cropping was done with ImageMagick (see below, under issues).
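As for the tagging itself, here is a rough sketch of what such a script might look like - a simplified illustration rather than the uploaded script, assuming the Scryfall "unique artwork" bulk file has been saved locally as `unique-artwork.json` and using field names from Scryfall's public card schema:
```python
import json
import urllib.request
from pathlib import Path

# Load the Scryfall "unique artwork" bulk data (downloaded from the link above).
cards = json.loads(Path("unique-artwork.json").read_text(encoding="utf-8"))

out_dir = Path("dataset")
out_dir.mkdir(exist_ok=True)

for card in cards:
    art_url = card.get("image_uris", {}).get("art_crop")
    if not art_url:
        continue  # e.g. multi-faced cards keep their images under "card_faces"
    # Build a caption from the card's metadata: name, artist, type line, colors, set, rarity.
    parts = ["MTG card art", card.get("name", "")]
    if card.get("artist"):
        parts.append(f"by {card['artist']}")
    parts += [
        card.get("type_line", ""),
        " ".join(card.get("colors", [])),
        card.get("set", ""),
        card.get("rarity", ""),
    ]
    caption = ", ".join(p.lower() for p in parts if p)
    stem = out_dir / card["id"]
    stem.with_suffix(".txt").write_text(caption, encoding="utf-8")
    urllib.request.urlretrieve(art_url, stem.with_suffix(".jpg"))
```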
## Issues
- This was intended to be a second test run on the full data set (the first did not go well), so some corners were cut just to get my "testing" started. The model turned out far better than I had expected, so I've decided to release it as is, and hope other people enjoy it as much as I have. But there are some issues that I am aware of and intend to fix in future releases:
- Cropping: MTG art is rectangular. I initially tried to use a trainer that could handle different aspect ratios, but after a couple of failed tries, I just did a quick mass cropping job with ImageMagick, resizing and cropping everything to 512x512 so I could get training running. I forget exactly what I did, but it appears the crop focused on the left side of the card, universally cutting off the right side. You'll see this in lots of images, which tend to have everything on the right as a result.
- Plane information was only added around step 70,000, so it may be less trained than other information. Basically, I wanted a way to group sets together by plane, because I found that how well the model knew the look of a set depended on whether WotC had incorporated the name of the plane into the set itself - i.e., using "Theros" would only get you "Theros" and "Theros: Beyond Death", and not "Born of the Gods" or "Journey into Nyx".
- Some artists use special characters in their names. I tried to strip out all accents, but I missed at least one: Tom Wänerstrand is trained with the umlaut, so you'll need to include it in your prompts.
- Greg Rutkowski: Not an issue, but the poster boy for AI art, Greg Rutkowski, is an MTG artist. He uses the Polish form of his name on MTG cards, Grzegorz Rutkowski, and that is what this model was trained with. So you'll get different results using "by Greg Rutkowski" vs. "by Grzegorz Rutkowski".
|
timm/xcit_tiny_12_p16_224.fb_in1k
|
timm
| 2024-02-10T23:44:11Z | 379 | 0 |
timm
|
[
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2106.09681",
"license:apache-2.0",
"region:us"
] |
image-classification
| 2023-04-13T02:31:22Z |
---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for xcit_tiny_12_p16_224.fb_in1k
An XCiT (Cross-Covariance Image Transformer) image classification model. Pretrained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 6.7
- GMACs: 1.2
- Activations (M): 6.3
- Image size: 224 x 224
- **Papers:**
- XCiT: Cross-Covariance Image Transformers: https://arxiv.org/abs/2106.09681
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/facebookresearch/xcit
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('xcit_tiny_12_p16_224.fb_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'xcit_tiny_12_p16_224.fb_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 197, 192) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@article{el2021xcit,
title={XCiT: Cross-Covariance Image Transformers},
author={El-Nouby, Alaaeldin and Touvron, Hugo and Caron, Mathilde and Bojanowski, Piotr and Douze, Matthijs and Joulin, Armand and Laptev, Ivan and Neverova, Natalia and Synnaeve, Gabriel and Verbeek, Jakob and others},
journal={arXiv preprint arXiv:2106.09681},
year={2021}
}
```
|
stanford-crfm/music-small-800k
|
stanford-crfm
| 2023-06-16T21:27:08Z | 379 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"arxiv:2306.08620",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2023-06-04T23:54:35Z |
---
license: apache-2.0
---
This is a Small (128M parameter) Transformer trained for 800k steps on arrival-time encoded music from the [Lakh MIDI dataset](https://colinraffel.com/projects/lmd/). This model was trained with anticipation.
# References for the Anticipatory Music Transformer
The Anticipatory Music Transformer paper is available on [ArXiv](http://arxiv.org/abs/2306.08620).
The full model card is available [here](https://johnthickstun.com/assets/pdf/music-modelcard.pdf).
Code for using this model is available on [GitHub](https://github.com/jthickstun/anticipation/).
See the accompanying [blog post](https://crfm.stanford.edu/2023/06/16/anticipatory-music-transformer.html) for additional discussion of this model.
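The card itself doesn't include a loading snippet; the following minimal sketch is not from the original authors and assumes only that the checkpoint is a standard GPT-2-style causal language model - actual music generation requires the event tokenization from the `anticipation` package linked above:
```python
from transformers import AutoModelForCausalLM

# Load the checkpoint as a standard GPT-2-style causal language model.
model = AutoModelForCausalLM.from_pretrained("stanford-crfm/music-small-800k")

# Note: the model operates on arrival-time encoded event tokens, not text;
# see the anticipation repository for encoding/decoding MIDI to and from tokens.
```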
|
lorahub/flan_t5_large-race_middle_Select_the_best_answer
|
lorahub
| 2023-07-24T09:53:28Z | 379 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-24T09:53:18Z |
---
library_name: peft
---
|
digiplay/fantastel_V1
|
digiplay
| 2024-05-04T23:29:10Z | 379 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-30T01:30:17Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
library_name: diffusers
---
Model info:
https://civitai.com/models/10095/fantastel
|
anakin87/zephyr-7b-alpha-sharded
|
anakin87
| 2023-11-24T23:58:32Z | 379 | 16 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"arxiv:2305.18290",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-10-14T12:48:51Z |
---
license: mit
language:
- en
---
<img src="https://huggingface.co/anakin87/zephyr-7b-alpha-sharded/resolve/main/zephyr_sharded.png" alt="Zephyr Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Zephyr 7B Alpha - Sharded
**UPDATE**
The original model ([Zephyr 7B Alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha)) has since been sharded as well, so you can now simply use the original model.
---
🧩🧩🧩 Just a sharded version of [Zephyr 7B Alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha).
💻 Using this version, you can smoothly load the model on Colab and play with it!
From the [original model card](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha):
> Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr-7B-α is the first model in the series, and is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) that was trained on a mix of publicly available, synthetic datasets using [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290). We found that removing the in-built alignment of these datasets boosted performance on [MT Bench](https://huggingface.co/spaces/lmsys/mt-bench) and made the model more helpful. However, this means that the model is likely to generate problematic text when prompted to do so and should only be used for educational and research purposes.
## Usage
This version of the model is meant primarily to run smoothly on **Colab**.
I suggest loading the model with **8-bit quantization**, so that you have some free GPU to perform inference.
*However, it is perfectly fine to load the model in half-precision or with stronger quantization (4-bit).*
```python
! pip install transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model = AutoModelForCausalLM.from_pretrained("anakin87/zephyr-7b-alpha-sharded", device_map="auto", load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("anakin87/zephyr-7b-alpha-sharded")
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{
"role": "system",
"content": "You are a friendly chatbot who always responds in the style of a rapper",
},
{"role": "user", "content": "What is GPU?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
#<|system|>
#You are a friendly chatbot who always responds in the style of a rapper</s>
#<|user|>
#What is GPU?</s>
#<|assistant|>
#Yo, what's up fam, you askin' 'bout the GPU?
#Well, let me break it down for you, it's a pretty sick dud
#It stands for Graphics Processing Unit, a tech that's quite rude
#This bad boy's the one that's in charge of all the graphics you see
#On your computer screen or your high-tech TV
#It's a powerful tool that can handle intense 3D games and movies
#And it's built to handle multiple tasks with ease
#So if you're looking to take your gaming or video editing to the next level
#Just make sure you've got a top-notch GPU to make it happen.
#Peace out!
```
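For the half-precision or 4-bit options mentioned above, a minimal variation of the loading step might look like this (a sketch, assuming a transformers/bitsandbytes version recent enough to support `BitsAndBytesConfig`):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit loading (even lighter on VRAM than 8-bit)
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
model = AutoModelForCausalLM.from_pretrained(
    "anakin87/zephyr-7b-alpha-sharded",
    device_map="auto",
    quantization_config=bnb_config,
)

# Or half-precision (roughly 14 GB of VRAM for the weights alone)
# model = AutoModelForCausalLM.from_pretrained(
#     "anakin87/zephyr-7b-alpha-sharded", device_map="auto", torch_dtype=torch.float16
# )
```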
|
TheBloke/agentlm-70B-GGUF
|
TheBloke
| 2023-10-20T23:07:17Z | 379 | 7 |
transformers
|
[
"transformers",
"gguf",
"llama",
"dataset:THUDM/AgentInstruct",
"arxiv:2310.12823",
"base_model:THUDM/agentlm-70b",
"license:llama2",
"text-generation-inference",
"region:us"
] | null | 2023-10-20T22:13:02Z |
---
base_model: THUDM/agentlm-70b
datasets:
- THUDM/AgentInstruct
inference: false
license: llama2
model_creator: Knowledge Engineering Group (KEG)
model_name: AgentLM 70B
model_type: llama
prompt_template: '[INST] <<SYS>>
You are a helpful, respectful and honest assistant.
<</SYS>>
{prompt} [/INST]
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# AgentLM 70B - GGUF
- Model creator: [Knowledge Engineering Group (KEG)](https://huggingface.co/THUDM)
- Original model: [AgentLM 70B](https://huggingface.co/THUDM/agentlm-70b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Knowledge Engineering Group (KEG)'s AgentLM 70B](https://huggingface.co/THUDM/agentlm-70b).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/agentlm-70B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/agentlm-70B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/agentlm-70B-GGUF)
* [Knowledge Engineering Group (KEG)'s original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/THUDM/agentlm-70b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: THUDM-Llama-2-Chat
```
[INST] <<SYS>>
You are a helpful, respectful and honest assistant.
<</SYS>>
{prompt} [/INST]
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [agentlm-70b.Q2_K.gguf](https://huggingface.co/TheBloke/agentlm-70B-GGUF/blob/main/agentlm-70b.Q2_K.gguf) | Q2_K | 2 | 29.28 GB| 31.78 GB | smallest, significant quality loss - not recommended for most purposes |
| [agentlm-70b.Q3_K_S.gguf](https://huggingface.co/TheBloke/agentlm-70B-GGUF/blob/main/agentlm-70b.Q3_K_S.gguf) | Q3_K_S | 3 | 29.92 GB| 32.42 GB | very small, high quality loss |
| [agentlm-70b.Q3_K_M.gguf](https://huggingface.co/TheBloke/agentlm-70B-GGUF/blob/main/agentlm-70b.Q3_K_M.gguf) | Q3_K_M | 3 | 33.19 GB| 35.69 GB | very small, high quality loss |
| [agentlm-70b.Q3_K_L.gguf](https://huggingface.co/TheBloke/agentlm-70B-GGUF/blob/main/agentlm-70b.Q3_K_L.gguf) | Q3_K_L | 3 | 36.15 GB| 38.65 GB | small, substantial quality loss |
| [agentlm-70b.Q4_0.gguf](https://huggingface.co/TheBloke/agentlm-70B-GGUF/blob/main/agentlm-70b.Q4_0.gguf) | Q4_0 | 4 | 38.88 GB| 41.38 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [agentlm-70b.Q4_K_S.gguf](https://huggingface.co/TheBloke/agentlm-70B-GGUF/blob/main/agentlm-70b.Q4_K_S.gguf) | Q4_K_S | 4 | 39.08 GB| 41.58 GB | small, greater quality loss |
| [agentlm-70b.Q4_K_M.gguf](https://huggingface.co/TheBloke/agentlm-70B-GGUF/blob/main/agentlm-70b.Q4_K_M.gguf) | Q4_K_M | 4 | 41.43 GB| 43.93 GB | medium, balanced quality - recommended |
| [agentlm-70b.Q5_0.gguf](https://huggingface.co/TheBloke/agentlm-70B-GGUF/blob/main/agentlm-70b.Q5_0.gguf) | Q5_0 | 5 | 47.46 GB| 49.96 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [agentlm-70b.Q5_K_S.gguf](https://huggingface.co/TheBloke/agentlm-70B-GGUF/blob/main/agentlm-70b.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB| 49.96 GB | large, low quality loss - recommended |
| [agentlm-70b.Q5_K_M.gguf](https://huggingface.co/TheBloke/agentlm-70B-GGUF/blob/main/agentlm-70b.Q5_K_M.gguf) | Q5_K_M | 5 | 48.76 GB| 51.26 GB | large, very low quality loss - recommended |
| agentlm-70b.Q6_K.gguf | Q6_K | 6 | 56.59 GB| 59.09 GB | very large, extremely low quality loss |
| agentlm-70b.Q8_0.gguf | Q8_0 | 8 | 73.30 GB| 75.80 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
### Q6_K and Q8_0 files are split and require joining
**Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files.
<details>
<summary>Click for instructions regarding Q6_K and Q8_0 files</summary>
### q6_K
Please download:
* `agentlm-70b.Q6_K.gguf-split-a`
* `agentlm-70b.Q6_K.gguf-split-b`
### q8_0
Please download:
* `agentlm-70b.Q8_0.gguf-split-a`
* `agentlm-70b.Q8_0.gguf-split-b`
To join the files, do the following:
Linux and macOS:
```
cat agentlm-70b.Q6_K.gguf-split-* > agentlm-70b.Q6_K.gguf && rm agentlm-70b.Q6_K.gguf-split-*
cat agentlm-70b.Q8_0.gguf-split-* > agentlm-70b.Q8_0.gguf && rm agentlm-70b.Q8_0.gguf-split-*
```
Windows command line:
```
COPY /B agentlm-70b.Q6_K.gguf-split-a + agentlm-70b.Q6_K.gguf-split-b agentlm-70b.Q6_K.gguf
del agentlm-70b.Q6_K.gguf-split-a agentlm-70b.Q6_K.gguf-split-b
COPY /B agentlm-70b.Q8_0.gguf-split-a + agentlm-70b.Q8_0.gguf-split-b agentlm-70b.Q8_0.gguf
del agentlm-70b.Q8_0.gguf-split-a agentlm-70b.Q8_0.gguf-split-b
```
</details>
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/agentlm-70B-GGUF and below it, a specific filename to download, such as: agentlm-70b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/agentlm-70B-GGUF agentlm-70b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/agentlm-70B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/agentlm-70B-GGUF agentlm-70b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m agentlm-70b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant.\n<</SYS>>\n{prompt} [/INST]"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/agentlm-70B-GGUF", model_file="agentlm-70b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
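As a quick illustration of the llama-cpp-python route, a minimal LangChain sketch might look like this (parameter values are illustrative placeholders, not recommendations from the model creator, and it assumes `langchain` and `llama-cpp-python` are installed):
```python
from langchain.llms import LlamaCpp

# Point LlamaCpp at the downloaded GGUF file.
llm = LlamaCpp(
    model_path="./agentlm-70b.Q4_K_M.gguf",
    n_gpu_layers=32,   # layers to offload to GPU; use 0 for CPU-only inference
    n_ctx=4096,        # context length
    temperature=0.7,
)

# Use the model's prompt template from the section above.
prompt = (
    "[INST] <<SYS>>\n"
    "You are a helpful, respectful and honest assistant.\n"
    "<</SYS>>\n"
    "Explain what a GGUF file is. [/INST]"
)
print(llm(prompt))
```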
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Knowledge Engineering Group (KEG)'s AgentLM 70B
## AgentLM-70B
<p align="center">
🤗 <a href="https://huggingface.co/datasets/THUDM/AgentInstruct" target="_blank">[Dataset] </a> • 💻 <a href="https://github.com/THUDM/AgentTuning" target="_blank">[Github Repo]</a> • 📌 <a href="https://THUDM.github.io/AgentTuning/" target="_blank">[Project Page]</a> • 📃 <a href="https://arxiv.org/abs/2310.12823" target="_blank">[Paper]</a>
</p>
**AgentTuning** represents the very first attempt to instruction-tune LLMs using interaction trajectories across multiple agent tasks. Evaluation results indicate that AgentTuning enables the agent capabilities of LLMs with robust generalization on unseen agent tasks while remaining good on general language abilities. We have open-sourced the AgentInstruct dataset and AgentLM.
## Models
**AgentLM** models are produced by mixed training on AgentInstruct dataset and ShareGPT dataset from Llama-2-chat models.
The models follow the conversation format of [Llama-2-chat](https://huggingface.co/blog/llama2#how-to-prompt-llama-2), with system prompt fixed as
```
You are a helpful, respectful and honest assistant.
```
7B, 13B, and 70B models are available on Huggingface model hub.
|Model|Huggingface Repo|
|---|---|
|AgentLM-7B| [🤗Huggingface Repo](https://huggingface.co/THUDM/agentlm-7b) |
|AgentLM-13B| [🤗Huggingface Repo](https://huggingface.co/THUDM/agentlm-13b) |
|AgentLM-70B| [🤗Huggingface Repo](https://huggingface.co/THUDM/agentlm-70b) |
## Citation
If you find our work useful, please consider citing AgentTuning:
```
@misc{zeng2023agenttuning,
title={AgentTuning: Enabling Generalized Agent Abilities for LLMs},
author={Aohan Zeng and Mingdao Liu and Rui Lu and Bowen Wang and Xiao Liu and Yuxiao Dong and Jie Tang},
year={2023},
eprint={2310.12823},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!-- original-model-card end -->
|
TheBloke/merlyn-education-corpus-qa-v2-GGUF
|
TheBloke
| 2023-11-16T13:28:42Z | 379 | 3 |
transformers
|
[
"transformers",
"gguf",
"llama",
"MerlynMind",
"education",
"base_model:MerlynMind/merlyn-education-corpus-qa-v2",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] | null | 2023-11-16T13:21:22Z |
---
base_model: MerlynMind/merlyn-education-corpus-qa-v2
inference: false
license: apache-2.0
model_creator: Merlyn Mind
model_name: Merlyn Education Corpus QA v2
model_type: llama
prompt_template: 'Instruction:\t{system_message}
Conversation:
''user1'':\tuser message to analyse
''user2'':\tuser message to analyse
Response:
'
quantized_by: TheBloke
tags:
- MerlynMind
- education
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Merlyn Education Corpus QA v2 - GGUF
- Model creator: [Merlyn Mind](https://huggingface.co/MerlynMind)
- Original model: [Merlyn Education Corpus QA v2](https://huggingface.co/MerlynMind/merlyn-education-corpus-qa-v2)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Merlyn Mind's Merlyn Education Corpus QA v2](https://huggingface.co/MerlynMind/merlyn-education-corpus-qa-v2).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/merlyn-education-corpus-qa-v2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/merlyn-education-corpus-qa-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/merlyn-education-corpus-qa-v2-GGUF)
* [Merlyn Mind's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/MerlynMind/merlyn-education-corpus-qa-v2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Merlyn-Education
```
Instruction:\t{system_message}
Conversation:
'user1':\tuser message to analyse
'user2':\tuser message to analyse
Response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `apache-2.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Merlyn Mind's Merlyn Education Corpus QA v2](https://huggingface.co/MerlynMind/merlyn-education-corpus-qa-v2).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [merlyn-education-corpus-qa-v2.Q2_K.gguf](https://huggingface.co/TheBloke/merlyn-education-corpus-qa-v2-GGUF/blob/main/merlyn-education-corpus-qa-v2.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [merlyn-education-corpus-qa-v2.Q3_K_S.gguf](https://huggingface.co/TheBloke/merlyn-education-corpus-qa-v2-GGUF/blob/main/merlyn-education-corpus-qa-v2.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [merlyn-education-corpus-qa-v2.Q3_K_M.gguf](https://huggingface.co/TheBloke/merlyn-education-corpus-qa-v2-GGUF/blob/main/merlyn-education-corpus-qa-v2.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [merlyn-education-corpus-qa-v2.Q3_K_L.gguf](https://huggingface.co/TheBloke/merlyn-education-corpus-qa-v2-GGUF/blob/main/merlyn-education-corpus-qa-v2.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [merlyn-education-corpus-qa-v2.Q4_0.gguf](https://huggingface.co/TheBloke/merlyn-education-corpus-qa-v2-GGUF/blob/main/merlyn-education-corpus-qa-v2.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [merlyn-education-corpus-qa-v2.Q4_K_S.gguf](https://huggingface.co/TheBloke/merlyn-education-corpus-qa-v2-GGUF/blob/main/merlyn-education-corpus-qa-v2.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [merlyn-education-corpus-qa-v2.Q4_K_M.gguf](https://huggingface.co/TheBloke/merlyn-education-corpus-qa-v2-GGUF/blob/main/merlyn-education-corpus-qa-v2.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [merlyn-education-corpus-qa-v2.Q5_0.gguf](https://huggingface.co/TheBloke/merlyn-education-corpus-qa-v2-GGUF/blob/main/merlyn-education-corpus-qa-v2.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [merlyn-education-corpus-qa-v2.Q5_K_S.gguf](https://huggingface.co/TheBloke/merlyn-education-corpus-qa-v2-GGUF/blob/main/merlyn-education-corpus-qa-v2.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [merlyn-education-corpus-qa-v2.Q5_K_M.gguf](https://huggingface.co/TheBloke/merlyn-education-corpus-qa-v2-GGUF/blob/main/merlyn-education-corpus-qa-v2.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [merlyn-education-corpus-qa-v2.Q6_K.gguf](https://huggingface.co/TheBloke/merlyn-education-corpus-qa-v2-GGUF/blob/main/merlyn-education-corpus-qa-v2.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [merlyn-education-corpus-qa-v2.Q8_0.gguf](https://huggingface.co/TheBloke/merlyn-education-corpus-qa-v2-GGUF/blob/main/merlyn-education-corpus-qa-v2.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/merlyn-education-corpus-qa-v2-GGUF and below it, a specific filename to download, such as: merlyn-education-corpus-qa-v2.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/merlyn-education-corpus-qa-v2-GGUF merlyn-education-corpus-qa-v2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/merlyn-education-corpus-qa-v2-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/merlyn-education-corpus-qa-v2-GGUF merlyn-education-corpus-qa-v2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m merlyn-education-corpus-qa-v2.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Instruction:\t{system_message}\nConversation:\n'user1':\tuser message to analyse\n'user2':\tuser message to analyse\nResponse:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/merlyn-education-corpus-qa-v2-GGUF", model_file="merlyn-education-corpus-qa-v2.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Merlyn Mind's Merlyn Education Corpus QA v2
# Merlyn-Education Corpus QA
merlyn-education-corpus-qa-v2 is a 13b parameter decoder-style transformer model for the education domain. It is fine-tuned from a [llama2-13b](https://huggingface.co/meta-llama/Llama-2-13b-hf) base-model.
This model was trained by [Merlyn Mind](https://www.merlyn.org/).
It is a model that provides an answer to a question based on the given context.
## Model Date
August 21, 2023
## Model License
Apache-2.0
## Usage
Loading model and tokenizer:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_path = "MerlynMind/merlyn-education-corpus-qa-v2"
device = torch.device("cuda:0") # change device id as necessary
model = AutoModelForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=True)
model.to(device) # move to device
```
Prompt example:
```python
info = '''Information:\tThe Solar System is about 4.6 billion years old. The Sun formed by gravity in a large molecular cloud. It is mainly hydrogen, which it converts into helium.
Information:\tThe formation and evolution of the Solar System began 4.6 billion years ago with the gravitational collapse of a small part of a giant molecular cloud.
Information:\tAstronomers are now more or less certain that the order of the planets was not always as it is today. Knowing what we know today, we can see the Solar System is strange. All other planetary systems we are able to study have their largest planet close to their star. Also we have noticed other oddities in the Solar System. Mars is smaller than it ought to be, and the asteroid belt has been disturbed.
Information:\tFor thousands of years, people had no need for a name for the "Solar System". They thought the Earth stayed still at the center of everything (geocentrism). The Greek philosopher Aristarchus of Samos suggested that there was a special order in the sky. Nicolaus Copernicus was the first to develop a mathematical system that described what we now call the "Solar System". This was called a "new system of the world". In the 17th century, Galileo Galilei, Johannes Kepler and Isaac Newton began to understand physics more clearly. People began to accept the idea that the Earth is a planet that moves around the Sun, and that the planets are worlds, and that all worlds are governed by the same physical laws. More recently, telescopes and space probes sometimes let us see details directly. All inner planets have surface features. The gas giants (as the name suggests) have surfaces whose make-up is gradually being discovered.
Information:\tThere are eight planets in the Solar System. From closest to farthest from the Sun, they are: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus and Neptune. The first four planets are called terrestrial planets. They are mostly made of rock and metal, and they are mostly solid. The last four planets are called gas giants. This is because they are much larger than other planets and are mostly made of gas.
'''
qs = "Question:\tHow old is the Solar System?"
prompt = tokenizer.bos_token
prompt += '''Instruction:\tYou are to try to answer the following question using only the pieces of information given.
Instruction:\tYour response should be a well formed JSON object with an 'answerable' property followed by an 'answer' property.
Instruction:\tIf you cannot answer the question given the information, the value of the 'answerable' should be 'false' and the 'answer' should be an empty string.
Instruction:\tIf you can answer the question given the information, the value of the 'answerable' should be 'true' and your answer should be the string value of the 'answer' property.
''' + info + qs + " Response:"
```
We recommend using the newline character as a stopping criterion, as follows:
```python
from transformers import StoppingCriteria, StoppingCriteriaList
eos_tokens = [tokenizer.eos_token,'\n']
eos_token_ids = [tokenizer.encode(token)[0] for token in eos_tokens]
class MultipleEOSTokensStoppingCriteria(StoppingCriteria):
def __init__(self, eos_token_ids):
self.eos_token_ids = set(eos_token_ids)
def __call__(self, input_ids, scores) -> bool:
if input_ids.shape[-1] <= 1:
return False
for eos_token_id in self.eos_token_ids:
if eos_token_id == input_ids[0, -1].item():
return True
return False
# Define stopping criteria
multiple_eos_tokens_processor = MultipleEOSTokensStoppingCriteria(eos_token_ids)
stopping_criteria = StoppingCriteriaList([multiple_eos_tokens_processor])
```
Inference:
```python
inputs = tokenizer(prompt, return_tensors="pt", return_token_type_ids=False).to(device)
generate_ids = model.generate(
**inputs,
max_new_tokens=1024,
temperature=0.0,
num_beams=2,
top_p=1,
stopping_criteria=stopping_criteria
)
response = tokenizer.decode(generate_ids[0],
skip_special_tokens=True,
clean_up_tokenization_spaces=True)
```
Example output (after response processing):
```json
[{"answerable": "true", "answer": "4.6 billion years"}]
```
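The card describes this as output "after response processing" but does not show that step; a minimal sketch (an illustrative assumption, not Merlyn Mind's code) that recovers the JSON from the decoded `response` above:
```python
import json

# Take the text generated after the final "Response:" marker, keep the first line,
# and parse it as JSON, matching the format the prompt instructions ask for.
answer_text = response.split("Response:")[-1].strip().splitlines()[0]
parsed = json.loads(answer_text)
print(parsed)
```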
## Evaluation
This model is trained on a larger dataset compared to the [pythia-based v1 model](https://huggingface.co/MerlynMind/merlyn-education-corpus-qa), yielding better correctness and reduced hallucinations on a larger and more diverse benchmarking dataset.
<!-- original-model-card end -->
|
Yntec/AnythingRemix
|
Yntec
| 2023-12-09T04:02:04Z | 379 | 3 |
diffusers
|
[
"diffusers",
"safetensors",
"anime",
"general",
"andite",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-11-16T22:36:51Z |
---
language:
- en
license: creativeml-openrail-m
tags:
- anime
- general
- andite
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
# Anything Remix
A mix of my favorite models, Anything 4.0 and Anything 4.5, to bring the best of both worlds into a single model. It has the MoistMixV2VAE baked in.
Sample and prompt:

pastel art by Kyoani OF ray tracing with his seven daughters, award winning, pretty cute girls, skirts, pajamas, at the gym, very detailed faces, beautiful chibi eyes, high quality Anime scenery by rossdraws and ross tran and ilya kuvshinov
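A quick way to try the sample prompt with `diffusers` (a sketch only; the step count and guidance scale are assumptions, not the settings used for the preview image):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the merged checkpoint (MoistMixV2VAE is already baked in).
pipe = StableDiffusionPipeline.from_pretrained("Yntec/AnythingRemix", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = ("pastel art by Kyoani OF ray tracing with his seven daughters, award winning, "
          "pretty cute girls, skirts, pajamas, at the gym, very detailed faces, "
          "beautiful chibi eyes, high quality Anime scenery by rossdraws and ross tran "
          "and ilya kuvshinov")
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("anything_remix_sample.png")
```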
Original pages:
https://huggingface.co/xyn-ai/anything-v4.0
https://huggingface.co/shibal1/anything-v4.5-clone
# Recipe
- SuperMerger Weight sum Train Difference Use MBW 1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0
Model A:
anything v4.0
Model B:
anything v4.5
Output Model:
AnythingRemix
- Bake MoistMixV2VAE in
Output Model:
AnythingRemixVAE
|
scherrmann/GermanFinBert_SC_Sentiment
|
scherrmann
| 2024-01-02T12:10:53Z | 379 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"de",
"arxiv:2311.08793",
"arxiv:1307.5336",
"arxiv:1708.07120",
"arxiv:1412.6980",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-11-17T10:28:32Z |
---
license: apache-2.0
language:
- de
widget:
- text: "STS Group AG erhält Großauftrag von führendem Nutzfahrzeughersteller in Nordamerika und plant Bau eines ersten US-Werks"
- text: "Zukünftig soll jedoch je Geschäftsjahr eine Mindestdividende in Höhe von EUR 2,00 je dividendenberechtigter Aktie an die Aktionärinnen und Aktionäre ausgeschüttet werden."
- text: "Comet passt Jahresprognose nach Q3 unter Erwartungen an"
---
# German FinBERT For Sentiment Analysis (Pre-trained From Scratch Version, Fine-Tuned for Financial Sentiment Analysis)
<img src="https://github.com/mscherrmann/mscherrmann.github.io/blob/master/assets/img/publication_preview/germanBert.png?raw=true" alt="Alt text for the image" width="500" height="300"/>
German FinBERT is a BERT language model focusing on the financial domain within the German language. In my [paper](https://arxiv.org/pdf/2311.08793.pdf), I describe in more detail the steps taken to train the model and show that it outperforms its generic benchmarks for finance specific downstream tasks.
This model is the [pre-trained from scratch version of German FinBERT](https://huggingface.co/scherrmann/GermanFinBert_SC), after fine-tuning on a translated version of the [financial news phrase bank](https://arxiv.org/abs/1307.5336) of Malo et al. (2013). The data is available [here](https://huggingface.co/datasets/scherrmann/financial_phrasebank_75agree_german).
## Overview
**Author** Moritz Scherrmann
**Paper:** [here](https://arxiv.org/pdf/2311.08793.pdf)
**Architecture:** BERT base
**Language:** German
**Specialization:** Financial sentiment
**Base model:** [German_FinBert_SC](https://huggingface.co/scherrmann/GermanFinBert_SC)
### Fine-tuning
I fine-tune the model using the 1cycle policy of [Smith and Topin (2019)](https://arxiv.org/abs/1708.07120). I use the Adam optimization method of [Kingma and Ba (2014)](https://arxiv.org/abs/1412.6980) with
standard parameters. I run a grid search on the evaluation set to find the best hyper-parameter setup. I test different
values for learning rate, batch size and number of epochs, following the suggestions of [Chalkidis et al. (2020)](https://aclanthology.org/2020.findings-emnlp.261/). I repeat the fine-tuning for each setup five times with different seeds, to avoid getting good results by chance.
After finding the best model w.r.t. the evaluation set, I report the mean result across seeds for that model on the test set.
### Results
Translated [Financial news phrase bank](https://arxiv.org/abs/1307.5336) (Malo et al. (2013)), see [here](https://huggingface.co/datasets/scherrmann/financial_phrasebank_75agree_german) for the data:
- Accuracy: 95.95%
- Macro F1: 92.70%
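A minimal inference sketch with the `transformers` pipeline, using one of the widget examples above (the label names returned depend on the model's `id2label` mapping, so check the config rather than hard-coding them):
```python
from transformers import pipeline

# German financial sentiment classification with the fine-tuned model.
classifier = pipeline("text-classification", model="scherrmann/GermanFinBert_SC_Sentiment")
print(classifier("Comet passt Jahresprognose nach Q3 unter Erwartungen an"))
```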
## Authors
Moritz Scherrmann: `scherrmann [at] lmu.de`
For additional details regarding the performance on fine-tune datasets and benchmark results, please refer to the full documentation provided in the study.
See also:
- scherrmann/GermanFinBERT_SC
- scherrmann/GermanFinBERT_FP
- scherrmann/GermanFinBERT_FP_QuAD
|
togethercomputer/StripedHyena-Hessian-7B
|
togethercomputer
| 2024-03-27T19:16:13Z | 379 | 60 |
transformers
|
[
"transformers",
"safetensors",
"stripedhyena",
"text-generation",
"custom_code",
"en",
"arxiv:2302.10866",
"arxiv:2310.18780",
"arxiv:2311.05908",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-11-21T15:43:25Z |
---
license: apache-2.0
language:
- en
---
## StripedHyena-Hessian-7B (SH 7B)
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/62a1306bbe7fa896d2c8de44/Bfjh77emDsWOY-VmfvU9C.png" width="60%" />
</p>
### About
One of the focus areas at Together Research is new architectures for long context, improved training, and inference performance over the Transformer architecture. Spinning out of a research program from our team and academic collaborators, with roots in **signal processing-inspired sequence models**, we are excited to introduce the **StripedHyena** models. StripedHyena is the **first alternative model competitive with the best open-source Transformers** of similar sizes in short and long-context evaluations.
**StripedHyena-Hessian-7B (SH 7B)** is our **base model** for this release.
- Read more here in [our blog](https://www.together.ai/blog/stripedhyena-7b).
- Play with the model on our [playground](https://api.together.xyz/playground/language/togethercomputer/StripedHyena-Hessian-7B)!
- Dive into the details of our [standalone implementation](https://github.com/togethercomputer/stripedhyena), and our related research: [1](https://arxiv.org/abs/2302.10866), [2](https://arxiv.org/abs/2310.18780), [3](https://arxiv.org/abs/2311.05908).
### Model Architecture
StripedHyena is a hybrid architecture composed of multi-head, grouped-query attention and gated convolutions arranged in [Hyena](https://arxiv.org/abs/2302.10866) blocks, different from traditional decoder-only Transformers.
- Constant memory decoding in Hyena blocks via representation of convolutions as state-space models (modal or canonical form), or as truncated filters.
- Low latency, faster decoding and higher throughput than Transformers.
- Improvements to training and inference-optimal scaling laws, compared to optimized Transformer architectures such as Llama-2.
- Trained on sequences of up to 32k, allowing it to process longer prompts.
### Note
To use StripedHyena outside of the playground, you will need to install custom kernels. Please follow the instructions from the [standalone repository](https://github.com/togethercomputer/stripedhyena).
StripedHyena is a mixed precision model. Make sure to keep your `poles` and `residues` in `float32` precision, especially for longer prompts or training.
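A minimal loading sketch, assuming the custom kernels are installed as described above (dtype handling here is an assumption; see the standalone repository for the reference setup):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "togethercomputer/StripedHyena-Hessian-7B"

# trust_remote_code is required because the architecture ships as custom code.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # mixed precision; keep `poles` and `residues` in float32 as noted above
    trust_remote_code=True,
)

inputs = tokenizer("The Hyena operator is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```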
## Cite
If you have found the pretrained models or architecture useful for your research or application, consider citing:
```
@software{stripedhyena,
title = {{StripedHyena: Moving Beyond Transformers with Hybrid Signal Processing Models}},
author = { Poli, Michael and Wang, Jue and Massaroli, Stefano and Quesnelle, Jeffrey and Carlow, Ryan and Nguyen, Eric and Thomas, Armin},
month = 12,
year = 2023,
url = { https://github.com/togethercomputer/stripedhyena },
doi = { 10.57967/hf/1595 },
}
```
|
TheBloke/mixtral_spanish_ft-GGUF
|
TheBloke
| 2023-12-21T16:27:11Z | 379 | 8 |
peft
|
[
"peft",
"gguf",
"mixtral",
"generated_from_trainer",
"base_model:clibrain/mixtral_spanish_ft",
"license:apache-2.0",
"region:us"
] | null | 2023-12-21T16:15:33Z |
---
base_model: clibrain/mixtral_spanish_ft
inference: false
library_name: peft
license: apache-2.0
model-index:
- name: mixtral_no_robots_secondtry
results: []
model_creator: CliBrAIn
model_name: Mixtral Spanish FT
model_type: mixtral
prompt_template: '<|user|>
{prompt}
<|assistant|>
'
quantized_by: TheBloke
tags:
- generated_from_trainer
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Mixtral Spanish FT - GGUF
- Model creator: [CliBrAIn](https://huggingface.co/clibrain)
- Original model: [Mixtral Spanish FT](https://huggingface.co/clibrain/mixtral_spanish_ft)
<!-- description start -->
## Description
This repo contains GGUF format model files for [CliBrAIn's Mixtral Spanish FT](https://huggingface.co/clibrain/mixtral_spanish_ft).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/mixtral_spanish_ft-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/mixtral_spanish_ft-GGUF)
* [CliBrAIn's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/clibrain/mixtral_spanish_ft)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ToRA
```
<|user|>
{prompt}
<|assistant|>
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [mixtral_spanish_ft.Q2_K.gguf](https://huggingface.co/TheBloke/mixtral_spanish_ft-GGUF/blob/main/mixtral_spanish_ft.Q2_K.gguf) | Q2_K | 2 | 15.64 GB| 18.14 GB | smallest, significant quality loss - not recommended for most purposes |
| [mixtral_spanish_ft.Q3_K_M.gguf](https://huggingface.co/TheBloke/mixtral_spanish_ft-GGUF/blob/main/mixtral_spanish_ft.Q3_K_M.gguf) | Q3_K_M | 3 | 20.36 GB| 22.86 GB | very small, high quality loss |
| [mixtral_spanish_ft.Q4_0.gguf](https://huggingface.co/TheBloke/mixtral_spanish_ft-GGUF/blob/main/mixtral_spanish_ft.Q4_0.gguf) | Q4_0 | 4 | 26.44 GB| 28.94 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mixtral_spanish_ft.Q4_K_M.gguf](https://huggingface.co/TheBloke/mixtral_spanish_ft-GGUF/blob/main/mixtral_spanish_ft.Q4_K_M.gguf) | Q4_K_M | 4 | 26.44 GB| 28.94 GB | medium, balanced quality - recommended |
| [mixtral_spanish_ft.Q5_0.gguf](https://huggingface.co/TheBloke/mixtral_spanish_ft-GGUF/blob/main/mixtral_spanish_ft.Q5_0.gguf) | Q5_0 | 5 | 32.23 GB| 34.73 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [mixtral_spanish_ft.Q5_K_M.gguf](https://huggingface.co/TheBloke/mixtral_spanish_ft-GGUF/blob/main/mixtral_spanish_ft.Q5_K_M.gguf) | Q5_K_M | 5 | 32.23 GB| 34.73 GB | large, very low quality loss - recommended |
| [mixtral_spanish_ft.Q6_K.gguf](https://huggingface.co/TheBloke/mixtral_spanish_ft-GGUF/blob/main/mixtral_spanish_ft.Q6_K.gguf) | Q6_K | 6 | 38.38 GB| 40.88 GB | very large, extremely low quality loss |
| [mixtral_spanish_ft.Q8_0.gguf](https://huggingface.co/TheBloke/mixtral_spanish_ft-GGUF/blob/main/mixtral_spanish_ft.Q8_0.gguf) | Q8_0 | 8 | 49.63 GB| 52.13 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/mixtral_spanish_ft-GGUF and below it, a specific filename to download, such as: mixtral_spanish_ft.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/mixtral_spanish_ft-GGUF mixtral_spanish_ft.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/mixtral_spanish_ft-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/mixtral_spanish_ft-GGUF mixtral_spanish_ft.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m mixtral_spanish_ft.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|user|>\n{prompt}\n<|assistant|>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the variable CMAKE_ARGS in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./mixtral_spanish_ft.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|user|>\n{prompt}\n<|assistant|>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./mixtral_spanish_ft.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: CliBrAIn's Mixtral Spanish FT
# Mixtral Finetuned on Spanish Instructions Dataset
This model is a fine-tuned version of [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) on the Spanish Instructions dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9807
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
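As a rough sketch, these settings map onto Hugging Face `TrainingArguments` approximately as follows (the output directory is a placeholder, and the multi-GPU/PEFT plumbing of the actual run is omitted):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mixtral_spanish_ft",   # placeholder path
    learning_rate=2e-4,
    per_device_train_batch_size=1,     # 8 devices -> total train batch size of 8
    per_device_eval_batch_size=8,      # 8 devices -> total eval batch size of 64
    num_train_epochs=1,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
)
```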
## Run the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "clibrain/mixtral_spanish_ft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
text = "<|user|>" + " " + "\n" + "Dime 3 planes para hacer en la ciudad de Madrid" + "\n" + "<|assistant|>" + "\n"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0635 | 0.02 | 1000 | 1.1332 |
| 0.9311 | 0.03 | 2000 | 1.1109 |
| 0.9417 | 0.05 | 3000 | 1.0926 |
| 1.0411 | 0.06 | 4000 | 1.0809 |
| 0.9516 | 0.08 | 5000 | 1.0786 |
| 1.0107 | 0.09 | 6000 | 1.0726 |
| 1.0698 | 0.11 | 7000 | 1.0666 |
| 1.1083 | 0.13 | 8000 | 1.0638 |
| 0.9148 | 0.14 | 9000 | 1.0589 |
| 0.957 | 0.16 | 10000 | 1.0565 |
| 1.0063 | 0.17 | 11000 | 1.0531 |
| 0.9831 | 0.19 | 12000 | 1.0509 |
| 1.0826 | 0.2 | 13000 | 1.0490 |
| 0.9598 | 0.22 | 14000 | 1.0518 |
| 0.8066 | 0.23 | 15000 | 1.0453 |
| 0.8795 | 0.25 | 16000 | 1.0431 |
| 1.1402 | 0.27 | 17000 | 1.0442 |
| 1.0652 | 0.28 | 18000 | 1.0428 |
| 0.93 | 0.3 | 19000 | 1.0371 |
| 0.9727 | 0.31 | 20000 | 1.0344 |
| 1.0753 | 0.33 | 21000 | 1.0339 |
| 0.9498 | 0.34 | 22000 | 1.0303 |
| 0.6971 | 0.36 | 23000 | 1.0316 |
| 0.9259 | 0.38 | 24000 | 1.0298 |
| 1.0359 | 0.39 | 25000 | 1.0284 |
| 1.1883 | 0.41 | 26000 | 1.0273 |
| 0.8642 | 0.42 | 27000 | 1.0250 |
| 0.9147 | 0.44 | 28000 | 1.0226 |
| 0.7824 | 0.45 | 29000 | 1.0237 |
| 0.8319 | 0.47 | 30000 | 1.0219 |
| 0.9443 | 0.49 | 31000 | 1.0190 |
| 0.9103 | 0.5 | 32000 | 1.0166 |
| 0.8903 | 0.52 | 33000 | 1.0149 |
| 1.0509 | 0.53 | 34000 | 1.0148 |
| 1.0008 | 0.55 | 35000 | 1.0151 |
| 0.778 | 0.56 | 36000 | 1.0106 |
| 0.7957 | 0.58 | 37000 | 1.0090 |
| 0.8679 | 0.6 | 38000 | 1.0085 |
| 1.064 | 0.61 | 39000 | 1.0064 |
| 0.823 | 0.63 | 40000 | 1.0061 |
| 0.9117 | 0.64 | 41000 | 1.0047 |
| 0.8284 | 0.66 | 42000 | 1.0019 |
| 0.9345 | 0.67 | 43000 | 1.0012 |
| 0.9854 | 0.69 | 44000 | 1.0004 |
| 0.7631 | 0.7 | 45000 | 0.9989 |
| 0.7189 | 0.72 | 46000 | 0.9979 |
| 0.9386 | 0.74 | 47000 | 0.9952 |
| 1.011 | 0.75 | 48000 | 0.9943 |
| 0.9627 | 0.77 | 49000 | 0.9941 |
| 1.1317 | 0.78 | 50000 | 0.9923 |
| 1.0506 | 0.8 | 51000 | 0.9912 |
| 0.8596 | 0.81 | 52000 | 0.9894 |
| 0.9702 | 0.83 | 53000 | 0.9889 |
| 1.0198 | 0.85 | 54000 | 0.9875 |
| 1.1125 | 0.86 | 55000 | 0.9862 |
| 0.9356 | 0.88 | 56000 | 0.9862 |
| 0.7212 | 0.89 | 57000 | 0.9852 |
| 0.974 | 0.91 | 58000 | 0.9843 |
| 0.9369 | 0.92 | 59000 | 0.9829 |
| 0.938 | 0.94 | 60000 | 0.9826 |
| 0.8011 | 0.96 | 61000 | 0.9818 |
| 0.7937 | 0.97 | 62000 | 0.9811 |
| 0.9679 | 0.99 | 63000 | 0.9807 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.0
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
<!-- original-model-card end -->
|
MaziyarPanahi/Mistral-7B-Instruct-SQL-Mistral-7B-Instruct-v0.2-slerp-GGUF
|
MaziyarPanahi
| 2024-01-26T06:34:44Z | 379 | 1 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"merge",
"mergekit",
"7b",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"machinists/Mistral-7B-Instruct-SQL",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:MaziyarPanahi/Mistral-7B-Instruct-SQL-Mistral-7B-Instruct-v0.2-slerp"
] |
text-generation
| 2024-01-23T17:50:34Z |
---
license: apache-2.0
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- merge
- mergekit
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- machinists/Mistral-7B-Instruct-SQL
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
model_name: Mistral-7B-Instruct-SQL-Mistral-7B-Instruct-v0.2-slerp-GGUF
base_model: MaziyarPanahi/Mistral-7B-Instruct-SQL-Mistral-7B-Instruct-v0.2-slerp
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Mistral-7B-Instruct-SQL-Mistral-7B-Instruct-v0.2-slerp-GGUF](https://huggingface.co/MaziyarPanahi/Mistral-7B-Instruct-SQL-Mistral-7B-Instruct-v0.2-slerp-GGUF)
- Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
- Original model: [MaziyarPanahi/Mistral-7B-Instruct-SQL-Mistral-7B-Instruct-v0.2-slerp](https://huggingface.co/MaziyarPanahi/Mistral-7B-Instruct-SQL-Mistral-7B-Instruct-v0.2-slerp)
## Description
[MaziyarPanahi/Mistral-7B-Instruct-SQL-Mistral-7B-Instruct-v0.2-slerp-GGUF](https://huggingface.co/MaziyarPanahi/Mistral-7B-Instruct-SQL-Mistral-7B-Instruct-v0.2-slerp-GGUF) contains GGUF format model files for [MaziyarPanahi/Mistral-7B-Instruct-SQL-Mistral-7B-Instruct-v0.2-slerp](https://huggingface.co/MaziyarPanahi/Mistral-7B-Instruct-SQL-Mistral-7B-Instruct-v0.2-slerp).
## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
</details>
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: [MaziyarPanahi/Mistral-7B-Instruct-SQL-Mistral-7B-Instruct-v0.2-slerp-GGUF](https://huggingface.co/MaziyarPanahi/Mistral-7B-Instruct-SQL-Mistral-7B-Instruct-v0.2-slerp-GGUF) and below it, a specific filename to download, such as: Mistral-7B-Instruct-SQL-Mistral-7B-Instruct-v0.2-slerp-GGUF.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download MaziyarPanahi/Mistral-7B-Instruct-SQL-Mistral-7B-Instruct-v0.2-slerp-GGUF Mistral-7B-Instruct-SQL-Mistral-7B-Instruct-v0.2-slerp-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download [MaziyarPanahi/Mistral-7B-Instruct-SQL-Mistral-7B-Instruct-v0.2-slerp-GGUF](https://huggingface.co/MaziyarPanahi/Mistral-7B-Instruct-SQL-Mistral-7B-Instruct-v0.2-slerp-GGUF) --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/Mistral-7B-Instruct-SQL-Mistral-7B-Instruct-v0.2-slerp-GGUF Mistral-7B-Instruct-SQL-Mistral-7B-Instruct-v0.2-slerp-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Mistral-7B-Instruct-SQL-Mistral-7B-Instruct-v0.2-slerp-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the variable CMAKE_ARGS in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Mistral-7B-Instruct-SQL-Mistral-7B-Instruct-v0.2-slerp-GGUF.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Mistral-7B-Instruct-SQL-Mistral-7B-Instruct-v0.2-slerp-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
|
mradermacher/Samantha-1.11-70b-i1-GGUF
|
mradermacher
| 2024-05-06T06:22:04Z | 379 | 2 |
transformers
|
[
"transformers",
"gguf",
"en",
"dataset:ehartford/samantha-data",
"base_model:cognitivecomputations/Samantha-1.11-70b",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-02-24T09:37:44Z |
---
base_model: cognitivecomputations/Samantha-1.11-70b
datasets:
- ehartford/samantha-data
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---
## About
weighted/imatrix quants of https://huggingface.co/cognitivecomputations/Samantha-1.11-70b
<!-- provided-files -->
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
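Concatenation simply means joining the parts byte-for-byte, in order. A small Python sketch for the split Q6_K files listed below:
```python
import shutil

# Join the two Q6_K parts from this repo into a single GGUF file (order matters).
parts = [
    "Samantha-1.11-70b.i1-Q6_K.gguf.part1of2",
    "Samantha-1.11-70b.i1-Q6_K.gguf.part2of2",
]
with open("Samantha-1.11-70b.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)
```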
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-IQ1_S.gguf) | i1-IQ1_S | 15.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-IQ1_M.gguf) | i1-IQ1_M | 16.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.7 | |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.8 | |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-IQ2_S.gguf) | i1-IQ2_S | 21.8 | |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-IQ2_M.gguf) | i1-IQ2_M | 23.7 | |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-Q2_K.gguf) | i1-Q2_K | 25.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.6 | |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-Q3_K_XS.gguf) | i1-Q3_K_XS | 28.7 | |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-IQ3_S.gguf) | i1-IQ3_S | 30.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-IQ3_M.gguf) | i1-IQ3_M | 31.4 | |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-IQ4_NL.gguf) | i1-IQ4_NL | 39.4 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-Q4_0.gguf) | i1-Q4_0 | 39.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.9 | |
| [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 49.2 | |
| [PART 1](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 57.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RichardErkhov/Nous-Hermes-2-Mistral-7B-DPO-gguf
|
RichardErkhov
| 2024-02-25T20:49:31Z | 379 | 2 | null |
[
"gguf",
"region:us"
] | null | 2024-02-25T19:14:42Z |
GGUF quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Linkedin](https://www.linkedin.com/in/richard-erkhov/)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Nous-Hermes-2-Mistral-7B-DPO - GGUF
- Model creator: https://huggingface.co/NousResearch/
- Original model: https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO/
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ---- |
| [Nous-Hermes-2-Mistral-7B-DPO.Q2_K.gguf](https://huggingface.co/RichardErkhov/Nous-Hermes-2-Mistral-7B-DPO-gguf/blob/main/Nous-Hermes-2-Mistral-7B-DPO.Q2_K.gguf) | Q2_K | 2 | 2.53GB | significant quality loss - not recommended for most purposes |
| [Nous-Hermes-2-Mistral-7B-DPO.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Nous-Hermes-2-Mistral-7B-DPO-gguf/blob/main/Nous-Hermes-2-Mistral-7B-DPO.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95GB | very small, high quality loss |
| [Nous-Hermes-2-Mistral-7B-DPO.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Nous-Hermes-2-Mistral-7B-DPO-gguf/blob/main/Nous-Hermes-2-Mistral-7B-DPO.Q3_K_M.gguf) | Q3_K_M | 3 | 3.28GB | very small, high quality loss |
| [Nous-Hermes-2-Mistral-7B-DPO.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Nous-Hermes-2-Mistral-7B-DPO-gguf/blob/main/Nous-Hermes-2-Mistral-7B-DPO.Q3_K_L.gguf) | Q3_K_L | 3 | 3.56GB | small, substantial quality loss |
| [Nous-Hermes-2-Mistral-7B-DPO.Q4_0.gguf](https://huggingface.co/RichardErkhov/Nous-Hermes-2-Mistral-7B-DPO-gguf/blob/main/Nous-Hermes-2-Mistral-7B-DPO.Q4_0.gguf) | Q4_0 | 4 | 3.83GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Nous-Hermes-2-Mistral-7B-DPO.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Nous-Hermes-2-Mistral-7B-DPO-gguf/blob/main/Nous-Hermes-2-Mistral-7B-DPO.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86GB | small, greater quality loss |
| [Nous-Hermes-2-Mistral-7B-DPO.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Nous-Hermes-2-Mistral-7B-DPO-gguf/blob/main/Nous-Hermes-2-Mistral-7B-DPO.Q4_K_M.gguf) | Q4_K_M | 4 | 4.07GB | medium, balanced quality - recommended |
| [Nous-Hermes-2-Mistral-7B-DPO.Q5_0.gguf](https://huggingface.co/RichardErkhov/Nous-Hermes-2-Mistral-7B-DPO-gguf/blob/main/Nous-Hermes-2-Mistral-7B-DPO.Q5_0.gguf) | Q5_0 | 5 | 4.65GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Nous-Hermes-2-Mistral-7B-DPO.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Nous-Hermes-2-Mistral-7B-DPO-gguf/blob/main/Nous-Hermes-2-Mistral-7B-DPO.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65GB | large, low quality loss - recommended |
| [Nous-Hermes-2-Mistral-7B-DPO.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Nous-Hermes-2-Mistral-7B-DPO-gguf/blob/main/Nous-Hermes-2-Mistral-7B-DPO.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78GB | large, very low quality loss - recommended |
| [Nous-Hermes-2-Mistral-7B-DPO.Q6_K.gguf](https://huggingface.co/RichardErkhov/Nous-Hermes-2-Mistral-7B-DPO-gguf/blob/main/Nous-Hermes-2-Mistral-7B-DPO.Q6_K.gguf) | Q6_K | 6 | 5.53GB | very large, extremely low quality loss |
| [Nous-Hermes-2-Mistral-7B-DPO.Q8_0.gguf](https://huggingface.co/RichardErkhov/Nous-Hermes-2-Mistral-7B-DPO-gguf/blob/main/Nous-Hermes-2-Mistral-7B-DPO.Q8_0.gguf) | Q8_0 | 8 | 7.17GB | very large, extremely low quality loss - not recommended |
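As a minimal local-usage sketch (assuming `llama-cpp-python` is installed and the Q4_K_M file from the table has been downloaded; the settings are illustrative only):
```python
from llama_cpp import Llama

# The model uses ChatML, so set chat_format accordingly.
llm = Llama(
    model_path="./Nous-Hermes-2-Mistral-7B-DPO.Q4_K_M.gguf",
    n_ctx=4096,
    chat_format="chatml",
)

output = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Hermes 2, a helpful assistant."},
        {"role": "user", "content": "Summarise what GGUF quantization is."},
    ],
    max_tokens=256,
)
print(output["choices"][0]["message"]["content"])
```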
Original model description:
---
base_model: mistralai/Mistral-7B-v0.1
tags:
- Mistral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
model-index:
- name: Nous-Hermes-2-Mistral-7B-DPO
results: []
license: apache-2.0
language:
- en
datasets:
- teknium/OpenHermes-2.5
---
# Nous Hermes 2 - Mistral 7B - DPO

## Model Description
Nous Hermes 2 on Mistral 7B DPO is the new flagship 7B Hermes! This model was DPO'd from [Teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) and has improved across the board on all benchmarks tested - AGIEval, BigBench Reasoning, GPT4All, and TruthfulQA.
The model prior to DPO was trained on 1,000,000 instructions/chats of GPT-4 quality or better, primarily synthetic data as well as other high quality datasets, available from the repository [teknium/OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5).
## Thank you to FluidStack for sponsoring compute for this model!
## Example Outputs
### Describing Weather Patterns in Paris:

### Making JSON Nested Lists

### Roleplaying as a Taoist Master

## Benchmark Results
Nous-Hermes 2 DPO on Mistral 7B is an improvement across the board on the benchmarks below compared to the original OpenHermes 2.5 model, as shown here:

## GPT4All:
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.5776|± |0.0144|
| | |acc_norm|0.6220|± |0.0142|
|arc_easy | 0|acc |0.8380|± |0.0076|
| | |acc_norm|0.8245|± |0.0078|
|boolq | 1|acc |0.8624|± |0.0060|
|hellaswag | 0|acc |0.6418|± |0.0048|
| | |acc_norm|0.8249|± |0.0038|
|openbookqa | 0|acc |0.3420|± |0.0212|
| | |acc_norm|0.4540|± |0.0223|
|piqa | 0|acc |0.8177|± |0.0090|
| | |acc_norm|0.8264|± |0.0088|
|winogrande | 0|acc |0.7466|± |0.0122|
```
Average: 73.72
## AGIEval:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2047|± |0.0254|
| | |acc_norm|0.2283|± |0.0264|
|agieval_logiqa_en | 0|acc |0.3779|± |0.0190|
| | |acc_norm|0.3932|± |0.0192|
|agieval_lsat_ar | 0|acc |0.2652|± |0.0292|
| | |acc_norm|0.2522|± |0.0287|
|agieval_lsat_lr | 0|acc |0.5216|± |0.0221|
| | |acc_norm|0.5137|± |0.0222|
|agieval_lsat_rc | 0|acc |0.5911|± |0.0300|
| | |acc_norm|0.5836|± |0.0301|
|agieval_sat_en | 0|acc |0.7427|± |0.0305|
| | |acc_norm|0.7184|± |0.0314|
|agieval_sat_en_without_passage| 0|acc |0.4612|± |0.0348|
| | |acc_norm|0.4466|± |0.0347|
|agieval_sat_math | 0|acc |0.3818|± |0.0328|
| | |acc_norm|0.3545|± |0.0323|
```
Average: 43.63
## BigBench:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.5579|± |0.0361|
|bigbench_date_understanding | 0|multiple_choice_grade|0.6694|± |0.0245|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3333|± |0.0294|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.2061|± |0.0214|
| | |exact_str_match |0.2256|± |0.0221|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.3120|± |0.0207|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2114|± |0.0154|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.4900|± |0.0289|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.3600|± |0.0215|
|bigbench_navigate | 0|multiple_choice_grade|0.5000|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.6660|± |0.0105|
|bigbench_ruin_names | 0|multiple_choice_grade|0.4420|± |0.0235|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2766|± |0.0142|
|bigbench_snarks | 0|multiple_choice_grade|0.6630|± |0.0352|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.6653|± |0.0150|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.3190|± |0.0147|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2128|± |0.0116|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1737|± |0.0091|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.4900|± |0.0289|
```
Average: 41.94
## TruthfulQA:
```
| Task |Version|Metric|Value | |Stderr|
|-------------|------:|------|-----:|---|-----:|
|truthfulqa_mc| 1|mc1 |0.3892|± |0.0171|
| | |mc2 |0.5642|± |0.0153|
```
# Prompt Format
Nous Hermes 2 uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This is a more complex format than alpaca or sharegpt: special tokens denote the beginning and end of each turn, along with the role for each turn.
This format enables OpenAI endpoint compatibility, and people familiar with the ChatGPT API will find the format familiar, as it is the same one used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are Hermes 2."},
{"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
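For example, a minimal sketch reusing the `messages` list above (it assumes `tokenizer` and `model` are already loaded; generation settings are illustrative):
```python
# Build a generation-ready prompt that ends with "<|im_start|>assistant\n"
gen_input = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
)
output_ids = model.generate(gen_input, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][gen_input.shape[-1]:], skip_special_tokens=True))
```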
To utilize the prompt format without a system prompt, simply leave the line out.
When quantized versions of the model are released, I recommend using LM Studio for chatting with Nous Hermes 2. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:

# Inference Code
Here is example code using HuggingFace Transformers to run inference with the model (note: in 4-bit, it will require around 5GB of VRAM)
```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers import LlamaTokenizer, MistralForCausalLM
import bitsandbytes, flash_attn
tokenizer = LlamaTokenizer.from_pretrained('NousResearch/Nous-Hermes-2-Mistral-7B-DPO', trust_remote_code=True)
model = MistralForCausalLM.from_pretrained(
"NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=True,
use_flash_attention_2=True
)
prompts = [
"""<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]
for chat in prompts:
    print(chat)
    input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
    generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
    response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(f"Response: {response}")
```
# How to cite:
```bibtex
@misc{Nous-Hermes-2-Mistral-7B-DPO,
url={[https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO)},
title={Nous Hermes 2 Mistral 7B DPO},
author={"Teknium", "theemozilla", "karan4d", "huemin_art"}
}
```
|
ABX-AI/Infinite-Sumika-9B
|
ABX-AI
| 2024-04-10T00:00:37Z | 379 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"not-for-all-audiences",
"base_model:localfultonextractor/Erosumika-7B-v2",
"base_model:Endevor/InfinityRP-v1-7B",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-03-27T00:59:25Z |
---
base_model:
- localfultonextractor/Erosumika-7B-v2
- Endevor/InfinityRP-v1-7B
library_name: transformers
tags:
- mergekit
- merge
- not-for-all-audiences
license: other
---

# Infinite-Sumika-9B
This model is intended for fictional role-playing and storywriting.
It seems to do well with longer text, and it sometimes prefers to write longer, storytelling-like responses.
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [localfultonextractor/Erosumika-7B-v2](https://huggingface.co/localfultonextractor/Erosumika-7B-v2)
* [Endevor/InfinityRP-v1-7B](https://huggingface.co/Endevor/InfinityRP-v1-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: Endevor/InfinityRP-v1-7B
        layer_range: [0, 20]
  - sources:
      - model: localfultonextractor/Erosumika-7B-v2
        layer_range: [12, 32]
merge_method: passthrough
dtype: float16
```
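As a rough sanity check on the size: the passthrough stack keeps 20 layers from InfinityRP (0-19) and 20 from Erosumika (12-31), with layers 12-19 contributed by both donors, giving 40 decoder layers versus the 32 of a stock Mistral 7B, which is what lifts the parameter count to roughly 9B.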
|
bartowski/codegemma-7b-GGUF
|
bartowski
| 2024-04-09T17:49:04Z | 379 | 2 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"license:gemma",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-09T17:29:12Z |
---
library_name: transformers
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access CodeGemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged-in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
license: gemma
license_link: https://ai.google.dev/gemma/terms
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp Quantizations of codegemma-7b
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2589">b2589</a> for quantization.
Original model: https://huggingface.co/google/codegemma-7b
All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
## Prompt format
No prompt template for this model.
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [codegemma-7b-Q8_0.gguf](https://huggingface.co/bartowski/codegemma-7b-GGUF/blob/main/codegemma-7b-Q8_0.gguf) | Q8_0 | 9.07GB | Extremely high quality, generally unneeded but max available quant. |
| [codegemma-7b-Q6_K.gguf](https://huggingface.co/bartowski/codegemma-7b-GGUF/blob/main/codegemma-7b-Q6_K.gguf) | Q6_K | 7.01GB | Very high quality, near perfect, *recommended*. |
| [codegemma-7b-Q5_K_M.gguf](https://huggingface.co/bartowski/codegemma-7b-GGUF/blob/main/codegemma-7b-Q5_K_M.gguf) | Q5_K_M | 6.14GB | High quality, *recommended*. |
| [codegemma-7b-Q5_K_S.gguf](https://huggingface.co/bartowski/codegemma-7b-GGUF/blob/main/codegemma-7b-Q5_K_S.gguf) | Q5_K_S | 5.98GB | High quality, *recommended*. |
| [codegemma-7b-Q4_K_M.gguf](https://huggingface.co/bartowski/codegemma-7b-GGUF/blob/main/codegemma-7b-Q4_K_M.gguf) | Q4_K_M | 5.32GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [codegemma-7b-Q4_K_S.gguf](https://huggingface.co/bartowski/codegemma-7b-GGUF/blob/main/codegemma-7b-Q4_K_S.gguf) | Q4_K_S | 5.04GB | Slightly lower quality with more space savings, *recommended*. |
| [codegemma-7b-IQ4_NL.gguf](https://huggingface.co/bartowski/codegemma-7b-GGUF/blob/main/codegemma-7b-IQ4_NL.gguf) | IQ4_NL | 5.01GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [codegemma-7b-IQ4_XS.gguf](https://huggingface.co/bartowski/codegemma-7b-GGUF/blob/main/codegemma-7b-IQ4_XS.gguf) | IQ4_XS | 4.76GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [codegemma-7b-Q3_K_L.gguf](https://huggingface.co/bartowski/codegemma-7b-GGUF/blob/main/codegemma-7b-Q3_K_L.gguf) | Q3_K_L | 4.70GB | Lower quality but usable, good for low RAM availability. |
| [codegemma-7b-Q3_K_M.gguf](https://huggingface.co/bartowski/codegemma-7b-GGUF/blob/main/codegemma-7b-Q3_K_M.gguf) | Q3_K_M | 4.36GB | Even lower quality. |
| [codegemma-7b-IQ3_M.gguf](https://huggingface.co/bartowski/codegemma-7b-GGUF/blob/main/codegemma-7b-IQ3_M.gguf) | IQ3_M | 4.10GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [codegemma-7b-IQ3_S.gguf](https://huggingface.co/bartowski/codegemma-7b-GGUF/blob/main/codegemma-7b-IQ3_S.gguf) | IQ3_S | 3.98GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [codegemma-7b-Q3_K_S.gguf](https://huggingface.co/bartowski/codegemma-7b-GGUF/blob/main/codegemma-7b-Q3_K_S.gguf) | Q3_K_S | 3.98GB | Low quality, not recommended. |
| [codegemma-7b-IQ3_XS.gguf](https://huggingface.co/bartowski/codegemma-7b-GGUF/blob/main/codegemma-7b-IQ3_XS.gguf) | IQ3_XS | 3.80GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [codegemma-7b-IQ3_XXS.gguf](https://huggingface.co/bartowski/codegemma-7b-GGUF/blob/main/codegemma-7b-IQ3_XXS.gguf) | IQ3_XXS | 3.48GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [codegemma-7b-Q2_K.gguf](https://huggingface.co/bartowski/codegemma-7b-GGUF/blob/main/codegemma-7b-Q2_K.gguf) | Q2_K | 3.48GB | Very low quality but surprisingly usable. |
| [codegemma-7b-IQ2_M.gguf](https://huggingface.co/bartowski/codegemma-7b-GGUF/blob/main/codegemma-7b-IQ2_M.gguf) | IQ2_M | 3.13GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [codegemma-7b-IQ2_S.gguf](https://huggingface.co/bartowski/codegemma-7b-GGUF/blob/main/codegemma-7b-IQ2_S.gguf) | IQ2_S | 2.91GB | Very low quality, uses SOTA techniques to be usable. |
| [codegemma-7b-IQ2_XS.gguf](https://huggingface.co/bartowski/codegemma-7b-GGUF/blob/main/codegemma-7b-IQ2_XS.gguf) | IQ2_XS | 2.81GB | Very low quality, uses SOTA techniques to be usable. |
| [codegemma-7b-IQ2_XXS.gguf](https://huggingface.co/bartowski/codegemma-7b-GGUF/blob/main/codegemma-7b-IQ2_XXS.gguf) | IQ2_XXS | 2.58GB | Lower quality, uses SOTA techniques to be usable. |
| [codegemma-7b-IQ1_M.gguf](https://huggingface.co/bartowski/codegemma-7b-GGUF/blob/main/codegemma-7b-IQ1_M.gguf) | IQ1_M | 2.32GB | Extremely low quality, *not* recommended. |
| [codegemma-7b-IQ1_S.gguf](https://huggingface.co/bartowski/codegemma-7b-GGUF/blob/main/codegemma-7b-IQ1_S.gguf) | IQ1_S | 2.16GB | Extremely low quality, *not* recommended. |
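If you only need one of these files, here is a minimal sketch with the `huggingface_hub` client (the filename is whichever row you pick from the table above):
```python
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

# Downloads a single quant file rather than cloning the whole repository.
path = hf_hub_download(
    repo_id="bartowski/codegemma-7b-GGUF",
    filename="codegemma-7b-Q4_K_M.gguf",  # any filename from the table above
)
print(path)
```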
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD cards, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
HirCoir/MiniChat-1.5-3B-Sorah-GGUF
|
HirCoir
| 2024-05-17T02:58:32Z | 379 | 0 | null |
[
"gguf",
"sorah",
"minichat",
"es",
"en",
"region:us"
] | null | 2024-04-25T20:24:14Z |
---
language:
- es
- en
tags:
- sorah
- minichat
---
# MiniChat-1.5-3B-Sorah

This is the repository for the MiniChat-1.5-3B-Sorah AI model. It is based on the MiniChat-1.5-3B model, which you can find [here](https://huggingface.co/GeneZC/MiniChat-1.5-3B).
The MiniChat-1.5-3B-Sorah model complements the Sorah voice model, which was created with the Piper project. The Sorah model is still being trained and improved, so it is not publicly accessible, but you can find more information in the [Sorah Neuronal repository](https://huggingface.co/HirCoir/piper-sorah-neuronal).
These .gguf models are the converted version of the [original repository](https://huggingface.co/HirCoir/MiniChat-1.5-3B-Sorah-gguf).
## Requirements
To use this model, you need llama.cpp installed on your machine. You can get llama.cpp from the following repository:
- [llama.cpp repository](https://github.com/ggerganov/llama.cpp)
To install llama.cpp, follow these steps:
```bash
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
```
## Using the model
The model's prompt template is as follows:
```plaintext
<s> [|User|] {prompt} </s>[|Assistant|]
```
You can use the model in llama.cpp with the following command:
```bash
./main -m minichat-sorah-Q8_0.gguf -p "<s> [|User|] Hola, ¿cómo te llamas? </s>[|Assistant|]" --log-disable
```
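Alternatively, a minimal sketch using the `llama-cpp-python` bindings (the file name matches the quantized file used above; sampling settings are left at their defaults):
```python
# pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(model_path="minichat-sorah-Q8_0.gguf")

# Prompt built with the template shown above.
prompt = "<s> [|User|] Hola, ¿cómo te llamas? </s>[|Assistant|]"
out = llm(prompt, max_tokens=128, stop=["</s>"])
print(out["choices"][0]["text"])
```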
## LM Studio config-presets
Filename: `MiniChat-Sorah.preset.json`
```json
{
"name": "MiniChat-1.5-3B-Sorah",
"inference_params": {
"input_prefix": "<s> [|User|]",
"input_suffix": "</s>[|Assistant|]",
"antiprompt": [
"<s> [|User|]",
"</s>[|Assistant|]",
"</s>"
],
"pre_prompt": "",
"pre_prompt_prefix": "",
"pre_prompt_suffix": ""
},
"load_params": {
"rope_freq_scale": 0,
"rope_freq_base": 0
}
}
```
## References
- [Original repository](https://huggingface.co/GeneZC/MiniChat-1.5-3B)
- [Converted version repository](https://huggingface.co/HirCoir/MiniChat-1.5-3B-Sorah)
- [llama.cpp repository](https://github.com/ggerganov/llama.cpp)
|
RichardErkhov/google_-_gemma-7b-it-gguf
|
RichardErkhov
| 2024-05-02T15:32:52Z | 379 | 0 | null |
[
"gguf",
"arxiv:2312.11805",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:1804.06876",
"arxiv:2110.08193",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:2203.09509",
"region:us"
] | null | 2024-05-02T12:33:49Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gemma-7b-it - GGUF
- Model creator: https://huggingface.co/google/
- Original model: https://huggingface.co/google/gemma-7b-it/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gemma-7b-it.Q2_K.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-7b-it-gguf/blob/main/gemma-7b-it.Q2_K.gguf) | Q2_K | 3.24GB |
| [gemma-7b-it.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-7b-it-gguf/blob/main/gemma-7b-it.IQ3_XS.gguf) | IQ3_XS | 3.54GB |
| [gemma-7b-it.IQ3_S.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-7b-it-gguf/blob/main/gemma-7b-it.IQ3_S.gguf) | IQ3_S | 3.71GB |
| [gemma-7b-it.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-7b-it-gguf/blob/main/gemma-7b-it.Q3_K_S.gguf) | Q3_K_S | 3.71GB |
| [gemma-7b-it.IQ3_M.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-7b-it-gguf/blob/main/gemma-7b-it.IQ3_M.gguf) | IQ3_M | 3.82GB |
| [gemma-7b-it.Q3_K.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-7b-it-gguf/blob/main/gemma-7b-it.Q3_K.gguf) | Q3_K | 4.07GB |
| [gemma-7b-it.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-7b-it-gguf/blob/main/gemma-7b-it.Q3_K_M.gguf) | Q3_K_M | 4.07GB |
| [gemma-7b-it.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-7b-it-gguf/blob/main/gemma-7b-it.Q3_K_L.gguf) | Q3_K_L | 4.39GB |
| [gemma-7b-it.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-7b-it-gguf/blob/main/gemma-7b-it.IQ4_XS.gguf) | IQ4_XS | 4.48GB |
| [gemma-7b-it.Q4_0.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-7b-it-gguf/blob/main/gemma-7b-it.Q4_0.gguf) | Q4_0 | 4.67GB |
| [gemma-7b-it.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-7b-it-gguf/blob/main/gemma-7b-it.IQ4_NL.gguf) | IQ4_NL | 4.69GB |
| [gemma-7b-it.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-7b-it-gguf/blob/main/gemma-7b-it.Q4_K_S.gguf) | Q4_K_S | 4.7GB |
| [gemma-7b-it.Q4_K.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-7b-it-gguf/blob/main/gemma-7b-it.Q4_K.gguf) | Q4_K | 4.96GB |
| [gemma-7b-it.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-7b-it-gguf/blob/main/gemma-7b-it.Q4_K_M.gguf) | Q4_K_M | 4.96GB |
| [gemma-7b-it.Q4_1.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-7b-it-gguf/blob/main/gemma-7b-it.Q4_1.gguf) | Q4_1 | 5.12GB |
| [gemma-7b-it.Q5_0.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-7b-it-gguf/blob/main/gemma-7b-it.Q5_0.gguf) | Q5_0 | 5.57GB |
| [gemma-7b-it.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-7b-it-gguf/blob/main/gemma-7b-it.Q5_K_S.gguf) | Q5_K_S | 5.57GB |
| [gemma-7b-it.Q5_K.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-7b-it-gguf/blob/main/gemma-7b-it.Q5_K.gguf) | Q5_K | 5.72GB |
| [gemma-7b-it.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-7b-it-gguf/blob/main/gemma-7b-it.Q5_K_M.gguf) | Q5_K_M | 5.72GB |
| [gemma-7b-it.Q5_1.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-7b-it-gguf/blob/main/gemma-7b-it.Q5_1.gguf) | Q5_1 | 6.02GB |
| [gemma-7b-it.Q6_K.gguf](https://huggingface.co/RichardErkhov/google_-_gemma-7b-it-gguf/blob/main/gemma-7b-it.Q6_K.gguf) | Q6_K | 6.53GB |
Original model description:
---
library_name: transformers
tags: []
widget:
- messages:
- role: user
content: How does the brain work?
inference:
parameters:
max_new_tokens: 200
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged-in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
license: gemma
---
# Gemma Model Card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
This model card corresponds to the 7B instruct version of the Gemma model. You can also visit the model card of the [2B base model](https://huggingface.co/google/gemma-2b), [7B base model](https://huggingface.co/google/gemma-7b), and [2B instruct model](https://huggingface.co/google/gemma-2b-it).
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma)
* [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335?version=gemma-7b-it-gg-hf)
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights, pre-trained variants, and instruction-tuned variants. Gemma
models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to quickly get started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
#### Fine-tuning the model
You can find fine-tuning scripts and notebook under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples) of [`google/gemma-7b`](https://huggingface.co/google/gemma-7b) repository. To adapt it to this model, simply change the model-id to `google/gemma-7b-it`.
In that repository, we provide:
* A script to perform Supervised Fine-Tuning (SFT) on UltraChat dataset using QLoRA
* A script to perform SFT using FSDP on TPU devices
* A notebook that you can run on a free-tier Google Colab instance to perform SFT on English quotes dataset
#### Running the model on a CPU
As explained below, we recommend `torch.bfloat16` as the default dtype. You can use [a different precision](#precisions) if necessary.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-7b-it",
torch_dtype=torch.bfloat16
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-7b-it",
device_map="auto",
torch_dtype=torch.bfloat16
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
<a name="precisions"></a>
#### Running the model on a GPU using different precisions
The native weights of this model were exported in `bfloat16` precision. You can use `float16`, which may be faster on certain hardware, by indicating the `torch_dtype` when loading the model. For convenience, the `float16` revision of the repo contains a copy of the weights already converted to that precision.
You can also skip the dtype and use `float32`, but no precision increase will occur (the model weights will just be upcast to `float32`). See examples below.
* _Using `torch.float16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-7b-it",
device_map="auto",
torch_dtype=torch.float16,
revision="float16",
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", device_map="auto", torch_dtype=torch.bfloat16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Upcasting to `torch.float32`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-7b-it",
device_map="auto"
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment `pip install flash-attn`
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "google/gemma-7b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,
)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
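For instance, a minimal sketch of building that single-turn prompt string by hand (it simply mirrors the template printed above):
```py
# Manually reproduce the template shown above for a single user turn.
user_message = "Write a hello world program"
prompt = (
    "<bos><start_of_turn>user\n"
    f"{user_message}<end_of_turn>\n"
    "<start_of_turn>model\n"
)
```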
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
print(tokenizer.decode(outputs[0]))
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources, totaling 6 trillion tokens. Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11).
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/).
### Software
Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is specially suitable for
[foundation models](https://ai.google/discover/foundation-models/), including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 |
| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot |71.4 | 81.2 |
| [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 |
| [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 49.7 | 51.8 |
| [BoolQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 |
| [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 |
| [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 |
| [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 |
| [ARC-e](https://arxiv.org/abs/1911.01547) | | 73.2 | 81.5 |
| [ARC-c](https://arxiv.org/abs/1911.01547) | | 42.1 | 53.2 |
| [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 |
| [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | 12.5 | 23 |
| [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 |
| [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 |
| [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 |
| [MATH](https://arxiv.org/abs/2108.07732) | 4-shot | 11.8 | 24.3 |
| [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 |
| [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 |
| ------------------------------ | ------------- | ----------- | --------- |
| **Average** | | **45.0** | **56.9** |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2).
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child
safety, content safety, representational harms, memorization, large-scale harms.
On top of robust internal evaluations, the results of well known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [RealToxicity](https://arxiv.org/abs/2009.11462) | average | 6.86 | 7.90 |
| [BOLD](https://arxiv.org/abs/2101.11718) | | 45.57 | 49.08 |
| [CrowS-Pairs](https://aclanthology.org/2020.emnlp-main.154/) | top-1 | 45.82 | 51.33 |
| [BBQ Ambig](https://arxiv.org/abs/2110.08193v2) | 1-shot, top-1 | 62.58 | 92.54 |
| [BBQ Disambig](https://arxiv.org/abs/2110.08193v2) | top-1 | 54.62 | 71.99 |
| [Winogender](https://arxiv.org/abs/1804.09301) | top-1 | 51.25 | 54.17 |
| [TruthfulQA](https://arxiv.org/abs/2109.07958) | | 44.84 | 31.81 |
| [Winobias 1_2](https://arxiv.org/abs/1804.06876) | | 56.12 | 59.09 |
| [Winobias 2_2](https://arxiv.org/abs/1804.06876) | | 91.10 | 92.23 |
| [Toxigen](https://arxiv.org/abs/2203.09509) | | 29.77 | 39.59 |
| ------------------------------ | ------------- | ----------- | --------- |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible).
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have shown to provide superior performance to other, comparably-sized open model
alternatives.
|
RichardErkhov/neuralmagic_-_Llama-2-7b-evolcodealpaca-gguf
|
RichardErkhov
| 2024-05-10T20:51:13Z | 379 | 0 | null |
[
"gguf",
"arxiv:2405.03594",
"arxiv:2107.03374",
"region:us"
] | null | 2024-05-10T19:16:30Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-2-7b-evolcodealpaca - GGUF
- Model creator: https://huggingface.co/neuralmagic/
- Original model: https://huggingface.co/neuralmagic/Llama-2-7b-evolcodealpaca/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-2-7b-evolcodealpaca.Q2_K.gguf](https://huggingface.co/RichardErkhov/neuralmagic_-_Llama-2-7b-evolcodealpaca-gguf/blob/main/Llama-2-7b-evolcodealpaca.Q2_K.gguf) | Q2_K | 2.36GB |
| [Llama-2-7b-evolcodealpaca.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/neuralmagic_-_Llama-2-7b-evolcodealpaca-gguf/blob/main/Llama-2-7b-evolcodealpaca.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [Llama-2-7b-evolcodealpaca.IQ3_S.gguf](https://huggingface.co/RichardErkhov/neuralmagic_-_Llama-2-7b-evolcodealpaca-gguf/blob/main/Llama-2-7b-evolcodealpaca.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [Llama-2-7b-evolcodealpaca.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/neuralmagic_-_Llama-2-7b-evolcodealpaca-gguf/blob/main/Llama-2-7b-evolcodealpaca.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [Llama-2-7b-evolcodealpaca.IQ3_M.gguf](https://huggingface.co/RichardErkhov/neuralmagic_-_Llama-2-7b-evolcodealpaca-gguf/blob/main/Llama-2-7b-evolcodealpaca.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [Llama-2-7b-evolcodealpaca.Q3_K.gguf](https://huggingface.co/RichardErkhov/neuralmagic_-_Llama-2-7b-evolcodealpaca-gguf/blob/main/Llama-2-7b-evolcodealpaca.Q3_K.gguf) | Q3_K | 3.07GB |
| [Llama-2-7b-evolcodealpaca.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/neuralmagic_-_Llama-2-7b-evolcodealpaca-gguf/blob/main/Llama-2-7b-evolcodealpaca.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [Llama-2-7b-evolcodealpaca.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/neuralmagic_-_Llama-2-7b-evolcodealpaca-gguf/blob/main/Llama-2-7b-evolcodealpaca.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [Llama-2-7b-evolcodealpaca.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/neuralmagic_-_Llama-2-7b-evolcodealpaca-gguf/blob/main/Llama-2-7b-evolcodealpaca.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [Llama-2-7b-evolcodealpaca.Q4_0.gguf](https://huggingface.co/RichardErkhov/neuralmagic_-_Llama-2-7b-evolcodealpaca-gguf/blob/main/Llama-2-7b-evolcodealpaca.Q4_0.gguf) | Q4_0 | 3.56GB |
| [Llama-2-7b-evolcodealpaca.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/neuralmagic_-_Llama-2-7b-evolcodealpaca-gguf/blob/main/Llama-2-7b-evolcodealpaca.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [Llama-2-7b-evolcodealpaca.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/neuralmagic_-_Llama-2-7b-evolcodealpaca-gguf/blob/main/Llama-2-7b-evolcodealpaca.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [Llama-2-7b-evolcodealpaca.Q4_K.gguf](https://huggingface.co/RichardErkhov/neuralmagic_-_Llama-2-7b-evolcodealpaca-gguf/blob/main/Llama-2-7b-evolcodealpaca.Q4_K.gguf) | Q4_K | 3.8GB |
| [Llama-2-7b-evolcodealpaca.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/neuralmagic_-_Llama-2-7b-evolcodealpaca-gguf/blob/main/Llama-2-7b-evolcodealpaca.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [Llama-2-7b-evolcodealpaca.Q4_1.gguf](https://huggingface.co/RichardErkhov/neuralmagic_-_Llama-2-7b-evolcodealpaca-gguf/blob/main/Llama-2-7b-evolcodealpaca.Q4_1.gguf) | Q4_1 | 3.95GB |
| [Llama-2-7b-evolcodealpaca.Q5_0.gguf](https://huggingface.co/RichardErkhov/neuralmagic_-_Llama-2-7b-evolcodealpaca-gguf/blob/main/Llama-2-7b-evolcodealpaca.Q5_0.gguf) | Q5_0 | 4.33GB |
| [Llama-2-7b-evolcodealpaca.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/neuralmagic_-_Llama-2-7b-evolcodealpaca-gguf/blob/main/Llama-2-7b-evolcodealpaca.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [Llama-2-7b-evolcodealpaca.Q5_K.gguf](https://huggingface.co/RichardErkhov/neuralmagic_-_Llama-2-7b-evolcodealpaca-gguf/blob/main/Llama-2-7b-evolcodealpaca.Q5_K.gguf) | Q5_K | 4.45GB |
| [Llama-2-7b-evolcodealpaca.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/neuralmagic_-_Llama-2-7b-evolcodealpaca-gguf/blob/main/Llama-2-7b-evolcodealpaca.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [Llama-2-7b-evolcodealpaca.Q5_1.gguf](https://huggingface.co/RichardErkhov/neuralmagic_-_Llama-2-7b-evolcodealpaca-gguf/blob/main/Llama-2-7b-evolcodealpaca.Q5_1.gguf) | Q5_1 | 4.72GB |
| [Llama-2-7b-evolcodealpaca.Q6_K.gguf](https://huggingface.co/RichardErkhov/neuralmagic_-_Llama-2-7b-evolcodealpaca-gguf/blob/main/Llama-2-7b-evolcodealpaca.Q6_K.gguf) | Q6_K | 5.15GB |
Original model description:
---
base_model: meta-llama/Llama-2-7b-hf
inference: true
model_type: llama
pipeline_tag: text-generation
datasets:
- theblackcat102/evol-codealpaca-v1
tags:
- code
---
# Llama-2-7b-evolcodealpaca
This repo contains a [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b-hf) finetuned for code generation tasks using the [Evolved CodeAlpaca](https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1) dataset.
Official model weights from [Enabling High-Sparsity Foundational Llama Models with Efficient Pretraining and Deployment](https://arxiv.org/abs/2405.03594).
**Authors**: Neural Magic, Cerebras
## Usage
Below we share some code snippets on how to get quickly started with running the model.
### Sparse Transfer
By leveraging a pre-sparsified model's structure, you can efficiently fine-tune on new data, leading to reduced hyperparameter tuning, training times, and computational costs. Learn about this process [here](https://neuralmagic.github.io/docs-v2/get-started/transfer).
### Running the model
This model may be run with the transformers library. For accelerated inference with sparsity, deploy with [nm-vllm](https://github.com/neuralmagic/nm-vllm) or [deepsparse](https://github.com/neuralmagic/deepsparse).
```python
# pip install transformers accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("neuralmagic/Llama-2-7b-evolcodealpaca")
model = AutoModelForCausalLM.from_pretrained("neuralmagic/Llama-2-7b-evolcodealpaca", device_map="auto")
input_text = "def fibonacci(n):\n"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
## Evaluation Benchmark Results
Model evaluation metrics and results.
| Benchmark | Metric | Llama-2-7b-evolcodealpaca |
|------------------------------------------------|---------------|-------------|
| [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 32.03 |
## Model Training Details
Coming soon.
## Help
For further support, and discussions on these models and AI in general, join [Neural Magic's Slack Community](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ)
|
fimbulvntr/lewd-stories
|
fimbulvntr
| 2024-05-23T15:35:26Z | 379 | 1 | null |
[
"gguf",
"not-for-all-audiences",
"text-generation",
"en",
"region:us"
] |
text-generation
| 2024-05-20T23:46:12Z |
---
language:
- en
pipeline_tag: text-generation
tags:
- not-for-all-audiences
---
## Notes
- There is no template, just BOS+text.
- It can also start from nothing.
- Temperature, repetition penalty, etc. should all be left at their defaults.
- It will not go lewd immediately; it will try to form a coherent story.
- It's best to generate 1~3 paragraphs at a time; it loses coherence if you try to make it generate the full context all at once (a minimal usage sketch follows below).
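A minimal sketch of that workflow with the `llama-cpp-python` bindings (the file name and token counts are illustrative; sampling settings stay at their defaults):
```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Any of the GGUF files below works; n_ctx matches the 16k RoPE context of the LLaMA-3 variant.
llm = Llama(model_path="llama-3-8b-lewd-stories-v6-16k.Q4_K_M.gguf", n_ctx=16384)

story = "The rain had not stopped for three days."
for _ in range(3):
    # Generate a paragraph or two at a time and feed the text back in.
    out = llm(story, max_tokens=300)
    story += out["choices"][0]["text"]
print(story)
```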
## LLaMA-3-8B base
RoPEd to 16k context
| Name | Quant | Size (GB) | VRAM (With FA, GB) | VRAM (No FA, GB) |
|---------------------------------------|--------|------|----------------|--------------|
| llama-3-8b-lewd-stories-v6-16k.F16 | F16 | 14.9 | 16.6 | 17.4 |
| llama-3-8b-lewd-stories-v6-16k.Q8_0 | Q8_0 | 8.0 | 10.1 | 10.5 |
| llama-3-8b-lewd-stories-v6-16k.Q6_K | Q6_K | 6.1 | 8.4 | 9.2 |
| llama-3-8b-lewd-stories-v6-16k.Q5_K_M | Q5_K_M | 5.3 | 7.6 | 8.1 |
| llama-3-8b-lewd-stories-v6-16k.Q4_K_M | Q4_K_M | 4.6 | 6.9 | 7.8 |
## Yi-1.5-9B-32K
Native 32k context
| Name | Quant | Size (GB) | VRAM (With FA, GB) | VRAM (No FA, GB) |
|----------------------------|--------|------|----------------|--------------|
| yi-lewd-stories-32k.F16 | F16 | 16.4 | | |
| yi-lewd-stories-32k.Q8_0 | Q8_0 | 8.7 | | |
| yi-lewd-stories-32k.Q6_K | Q6_K | 6.7 | | |
| yi-lewd-stories-32k.Q5_K_M | Q5_K_M | 5.8 | | |
| yi-lewd-stories-32k.Q4_K_M | Q4_K_M | 5.0 | | |
## Mistral-7B-v0.3
Native 32k context
| Name | Quant | Size (GB) | VRAM (With FA, GB) | VRAM (No FA, GB) |
|---------------------------------|--------|------|----------------|--------------|
| mistral-lewd-stories-32k.F16 | F16 | 13.5 | | |
| mistral-lewd-stories-32k.Q8_0 | Q8_0 | 7.2 | | |
| mistral-lewd-stories-32k.Q6_K | Q6_K | 5.5 | | |
| mistral-lewd-stories-32k.Q5_K_M | Q5_K_M | 4.8 | | |
| mistral-lewd-stories-32k.Q4_K_M | Q4_K_M | 4.0 | | |
|
RichardErkhov/h2oai_-_h2ogpt-4096-llama2-7b-gguf
|
RichardErkhov
| 2024-05-26T22:36:13Z | 379 | 0 | null |
[
"gguf",
"region:us"
] | null | 2024-05-26T20:23:12Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
h2ogpt-4096-llama2-7b - GGUF
- Model creator: https://huggingface.co/h2oai/
- Original model: https://huggingface.co/h2oai/h2ogpt-4096-llama2-7b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [h2ogpt-4096-llama2-7b.Q2_K.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-7b-gguf/blob/main/h2ogpt-4096-llama2-7b.Q2_K.gguf) | Q2_K | 2.36GB |
| [h2ogpt-4096-llama2-7b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-7b-gguf/blob/main/h2ogpt-4096-llama2-7b.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [h2ogpt-4096-llama2-7b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-7b-gguf/blob/main/h2ogpt-4096-llama2-7b.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [h2ogpt-4096-llama2-7b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-7b-gguf/blob/main/h2ogpt-4096-llama2-7b.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [h2ogpt-4096-llama2-7b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-7b-gguf/blob/main/h2ogpt-4096-llama2-7b.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [h2ogpt-4096-llama2-7b.Q3_K.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-7b-gguf/blob/main/h2ogpt-4096-llama2-7b.Q3_K.gguf) | Q3_K | 3.07GB |
| [h2ogpt-4096-llama2-7b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-7b-gguf/blob/main/h2ogpt-4096-llama2-7b.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [h2ogpt-4096-llama2-7b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-7b-gguf/blob/main/h2ogpt-4096-llama2-7b.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [h2ogpt-4096-llama2-7b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-7b-gguf/blob/main/h2ogpt-4096-llama2-7b.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [h2ogpt-4096-llama2-7b.Q4_0.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-7b-gguf/blob/main/h2ogpt-4096-llama2-7b.Q4_0.gguf) | Q4_0 | 3.56GB |
| [h2ogpt-4096-llama2-7b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-7b-gguf/blob/main/h2ogpt-4096-llama2-7b.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [h2ogpt-4096-llama2-7b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-7b-gguf/blob/main/h2ogpt-4096-llama2-7b.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [h2ogpt-4096-llama2-7b.Q4_K.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-7b-gguf/blob/main/h2ogpt-4096-llama2-7b.Q4_K.gguf) | Q4_K | 3.8GB |
| [h2ogpt-4096-llama2-7b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-7b-gguf/blob/main/h2ogpt-4096-llama2-7b.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [h2ogpt-4096-llama2-7b.Q4_1.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-7b-gguf/blob/main/h2ogpt-4096-llama2-7b.Q4_1.gguf) | Q4_1 | 3.95GB |
| [h2ogpt-4096-llama2-7b.Q5_0.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-7b-gguf/blob/main/h2ogpt-4096-llama2-7b.Q5_0.gguf) | Q5_0 | 4.33GB |
| [h2ogpt-4096-llama2-7b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-7b-gguf/blob/main/h2ogpt-4096-llama2-7b.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [h2ogpt-4096-llama2-7b.Q5_K.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-7b-gguf/blob/main/h2ogpt-4096-llama2-7b.Q5_K.gguf) | Q5_K | 4.45GB |
| [h2ogpt-4096-llama2-7b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-7b-gguf/blob/main/h2ogpt-4096-llama2-7b.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [h2ogpt-4096-llama2-7b.Q5_1.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-7b-gguf/blob/main/h2ogpt-4096-llama2-7b.Q5_1.gguf) | Q5_1 | 4.72GB |
| [h2ogpt-4096-llama2-7b.Q6_K.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-7b-gguf/blob/main/h2ogpt-4096-llama2-7b.Q6_K.gguf) | Q6_K | 5.15GB |
| [h2ogpt-4096-llama2-7b.Q8_0.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-4096-llama2-7b-gguf/blob/main/h2ogpt-4096-llama2-7b.Q8_0.gguf) | Q8_0 | 6.67GB |
Original model description:
---
inference: false
language:
- en
license: llama2
model_type: llama
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- h2ogpt
---
h2oGPT clone of [Meta's Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b-hf).
This model can be fine-tuned with [H2O.ai](https://h2o.ai/) open-source software:
- h2oGPT https://github.com/h2oai/h2ogpt/
- H2O LLM Studio https://h2o.ai/platform/ai-cloud/make/llm-studio/
Try our live [h2oGPT demo](https://gpt.h2o.ai) with side-by-side LLM comparisons and private document chat!
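For a quick local test of the original (non-quantised) checkpoint, here is a minimal `transformers` sketch; the repository id `h2oai/h2ogpt-4096-llama2-7b` is assumed from the quantisation links above and is not stated in this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Assumed repository id of the original fp16 checkpoint (see the links above).
model_id = "h2oai/h2ogpt-4096-llama2-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # use bfloat16/float32 depending on your hardware
    device_map="auto",          # requires the `accelerate` package
)

generate = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(generate("Why is the sky blue?", max_new_tokens=64)[0]["generated_text"])
```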
## Model Architecture
```
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(32000, 4096, padding_idx=0)
(layers): ModuleList(
(0-31): 32 x LlamaDecoderLayer(
(self_attn): LlamaAttention(
(q_proj): Linear(in_features=4096, out_features=4096, bias=False)
(k_proj): Linear(in_features=4096, out_features=4096, bias=False)
(v_proj): Linear(in_features=4096, out_features=4096, bias=False)
(o_proj): Linear(in_features=4096, out_features=4096, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear(in_features=4096, out_features=11008, bias=False)
(up_proj): Linear(in_features=4096, out_features=11008, bias=False)
(down_proj): Linear(in_features=11008, out_features=4096, bias=False)
(act_fn): SiLUActivation()
)
(input_layernorm): LlamaRMSNorm()
(post_attention_layernorm): LlamaRMSNorm()
)
)
(norm): LlamaRMSNorm()
)
(lm_head): Linear(in_features=4096, out_features=32000, bias=False)
)
```
|
RichardErkhov/kittn_-_mistral-7B-v0.1-hf-gguf
|
RichardErkhov
| 2024-05-29T22:06:10Z | 379 | 0 | null |
[
"gguf",
"region:us"
] | null | 2024-05-29T18:56:46Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
mistral-7B-v0.1-hf - GGUF
- Model creator: https://huggingface.co/kittn/
- Original model: https://huggingface.co/kittn/mistral-7B-v0.1-hf/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [mistral-7B-v0.1-hf.Q2_K.gguf](https://huggingface.co/RichardErkhov/kittn_-_mistral-7B-v0.1-hf-gguf/blob/main/mistral-7B-v0.1-hf.Q2_K.gguf) | Q2_K | 2.53GB |
| [mistral-7B-v0.1-hf.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/kittn_-_mistral-7B-v0.1-hf-gguf/blob/main/mistral-7B-v0.1-hf.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [mistral-7B-v0.1-hf.IQ3_S.gguf](https://huggingface.co/RichardErkhov/kittn_-_mistral-7B-v0.1-hf-gguf/blob/main/mistral-7B-v0.1-hf.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [mistral-7B-v0.1-hf.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/kittn_-_mistral-7B-v0.1-hf-gguf/blob/main/mistral-7B-v0.1-hf.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [mistral-7B-v0.1-hf.IQ3_M.gguf](https://huggingface.co/RichardErkhov/kittn_-_mistral-7B-v0.1-hf-gguf/blob/main/mistral-7B-v0.1-hf.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [mistral-7B-v0.1-hf.Q3_K.gguf](https://huggingface.co/RichardErkhov/kittn_-_mistral-7B-v0.1-hf-gguf/blob/main/mistral-7B-v0.1-hf.Q3_K.gguf) | Q3_K | 3.28GB |
| [mistral-7B-v0.1-hf.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/kittn_-_mistral-7B-v0.1-hf-gguf/blob/main/mistral-7B-v0.1-hf.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [mistral-7B-v0.1-hf.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/kittn_-_mistral-7B-v0.1-hf-gguf/blob/main/mistral-7B-v0.1-hf.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [mistral-7B-v0.1-hf.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/kittn_-_mistral-7B-v0.1-hf-gguf/blob/main/mistral-7B-v0.1-hf.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [mistral-7B-v0.1-hf.Q4_0.gguf](https://huggingface.co/RichardErkhov/kittn_-_mistral-7B-v0.1-hf-gguf/blob/main/mistral-7B-v0.1-hf.Q4_0.gguf) | Q4_0 | 3.83GB |
| [mistral-7B-v0.1-hf.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/kittn_-_mistral-7B-v0.1-hf-gguf/blob/main/mistral-7B-v0.1-hf.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [mistral-7B-v0.1-hf.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/kittn_-_mistral-7B-v0.1-hf-gguf/blob/main/mistral-7B-v0.1-hf.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [mistral-7B-v0.1-hf.Q4_K.gguf](https://huggingface.co/RichardErkhov/kittn_-_mistral-7B-v0.1-hf-gguf/blob/main/mistral-7B-v0.1-hf.Q4_K.gguf) | Q4_K | 4.07GB |
| [mistral-7B-v0.1-hf.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/kittn_-_mistral-7B-v0.1-hf-gguf/blob/main/mistral-7B-v0.1-hf.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [mistral-7B-v0.1-hf.Q4_1.gguf](https://huggingface.co/RichardErkhov/kittn_-_mistral-7B-v0.1-hf-gguf/blob/main/mistral-7B-v0.1-hf.Q4_1.gguf) | Q4_1 | 4.24GB |
| [mistral-7B-v0.1-hf.Q5_0.gguf](https://huggingface.co/RichardErkhov/kittn_-_mistral-7B-v0.1-hf-gguf/blob/main/mistral-7B-v0.1-hf.Q5_0.gguf) | Q5_0 | 4.65GB |
| [mistral-7B-v0.1-hf.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/kittn_-_mistral-7B-v0.1-hf-gguf/blob/main/mistral-7B-v0.1-hf.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [mistral-7B-v0.1-hf.Q5_K.gguf](https://huggingface.co/RichardErkhov/kittn_-_mistral-7B-v0.1-hf-gguf/blob/main/mistral-7B-v0.1-hf.Q5_K.gguf) | Q5_K | 4.78GB |
| [mistral-7B-v0.1-hf.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/kittn_-_mistral-7B-v0.1-hf-gguf/blob/main/mistral-7B-v0.1-hf.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [mistral-7B-v0.1-hf.Q5_1.gguf](https://huggingface.co/RichardErkhov/kittn_-_mistral-7B-v0.1-hf-gguf/blob/main/mistral-7B-v0.1-hf.Q5_1.gguf) | Q5_1 | 5.07GB |
| [mistral-7B-v0.1-hf.Q6_K.gguf](https://huggingface.co/RichardErkhov/kittn_-_mistral-7B-v0.1-hf-gguf/blob/main/mistral-7B-v0.1-hf.Q6_K.gguf) | Q6_K | 5.53GB |
| [mistral-7B-v0.1-hf.Q8_0.gguf](https://huggingface.co/RichardErkhov/kittn_-_mistral-7B-v0.1-hf-gguf/blob/main/mistral-7B-v0.1-hf.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
## **UPDATE: Official version is out, use it instead: [https://huggingface.co/mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)**
---
---
---
---
# mistral-7B-v0.1-hf
Huggingface compatible version of Mistral's 7B model: https://twitter.com/MistralAI/status/1706877320844509405
## Usage
### Load in bfloat16 (16GB VRAM or higher)
```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer, pipeline, TextStreamer
tokenizer = LlamaTokenizer.from_pretrained("kittn/mistral-7B-v0.1-hf")
model = LlamaForCausalLM.from_pretrained(
"kittn/mistral-7B-v0.1-hf",
torch_dtype=torch.bfloat16,
device_map={"": 0}
)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
pipe("Hi, my name", streamer=TextStreamer(tokenizer), max_new_tokens=128)
```
### Load in bitsandbytes nf4 (6GB VRAM or higher, maybe less with double_quant)
```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer, pipeline, TextStreamer, BitsAndBytesConfig
tokenizer = LlamaTokenizer.from_pretrained("kittn/mistral-7B-v0.1-hf")
model = LlamaForCausalLM.from_pretrained(
"kittn/mistral-7B-v0.1-hf",
device_map={"": 0},
quantization_config=BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.float16,
bnb_4bit_quant_type="nf4",
bnb_4bit_use_double_quant=False, # set to True to save more VRAM at the cost of some speed/accuracy
),
)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
pipe("Hi, my name", streamer=TextStreamer(tokenizer), max_new_tokens=128)
```
### Load in bitsandbytes int8 (8GB VRAM or higher). Quite slow; not recommended.
```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer, pipeline, TextStreamer, BitsAndBytesConfig
tokenizer = LlamaTokenizer.from_pretrained("kittn/mistral-7B-v0.1-hf")
model = LlamaForCausalLM.from_pretrained(
"kittn/mistral-7B-v0.1-hf",
device_map={"": 0},
quantization_config=BitsAndBytesConfig(
load_in_8bit=True,
),
)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
pipe("Hi, my name", streamer=TextStreamer(tokenizer), max_new_tokens=128)
```
## Notes
* The original Hugging Face conversion script converts the model from bf16 to fp16 before saving it; this script doesn't.
* The tokenizer is created with `legacy=False`, [more about this here](https://github.com/huggingface/transformers/pull/24565)
* Saved in safetensors format
## Conversion script [[link]](https://gist.github.com/sekstini/151d6946df1f6aa997b7cb15ee6f3be1)
Unlike [meta-llama/Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b), this model uses GQA. This breaks some assumptions in the original conversion script, requiring a few changes.
Conversion script: [link](https://gist.github.com/sekstini/151d6946df1f6aa997b7cb15ee6f3be1)
Original conversion script: [link](https://github.com/huggingface/transformers/blob/946bac798caefada3f5f1c9fecdcfd587ed24ac7/src/transformers/models/llama/convert_llama_weights_to_hf.py)
|
bartowski/AlchemistCoder-L-7B-GGUF
|
bartowski
| 2024-05-31T04:16:08Z | 379 | 1 | null |
[
"gguf",
"code generation",
"text-generation",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2024-05-30T23:35:41Z |
---
license: apache-2.0
tags:
- code generation
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of AlchemistCoder-L-7B
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3024">b3024</a> for quantization.
Original model: https://huggingface.co/internlm/AlchemistCoder-L-7B
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
## Prompt format
No chat template is specified, so the default is used. This may be incorrect; check the original model card for details.
```
<s> [INST] <<SYS>>
{system_prompt}
<</SYS>>
{prompt} [/INST] </s>
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [AlchemistCoder-L-7B-Q8_0.gguf](https://huggingface.co/bartowski/AlchemistCoder-L-7B-GGUF/blob/main/AlchemistCoder-L-7B-Q8_0.gguf) | Q8_0 | 7.16GB | Extremely high quality, generally unneeded but max available quant. |
| [AlchemistCoder-L-7B-Q6_K.gguf](https://huggingface.co/bartowski/AlchemistCoder-L-7B-GGUF/blob/main/AlchemistCoder-L-7B-Q6_K.gguf) | Q6_K | 5.53GB | High quality, *recommended*. |
| [AlchemistCoder-L-7B-Q5_K_M.gguf](https://huggingface.co/bartowski/AlchemistCoder-L-7B-GGUF/blob/main/AlchemistCoder-L-7B-Q5_K_M.gguf) | Q5_K_M | 4.78GB | High quality, *recommended*. |
| [AlchemistCoder-L-7B-Q5_K_S.gguf](https://huggingface.co/bartowski/AlchemistCoder-L-7B-GGUF/blob/main/AlchemistCoder-L-7B-Q5_K_S.gguf) | Q5_K_S | 4.65GB | High quality, *recommended*. |
| [AlchemistCoder-L-7B-Q4_K_M.gguf](https://huggingface.co/bartowski/AlchemistCoder-L-7B-GGUF/blob/main/AlchemistCoder-L-7B-Q4_K_M.gguf) | Q4_K_M | 4.08GB | High quality, *recommended*. |
| [AlchemistCoder-L-7B-Q4_K_S.gguf](https://huggingface.co/bartowski/AlchemistCoder-L-7B-GGUF/blob/main/AlchemistCoder-L-7B-Q4_K_S.gguf) | Q4_K_S | 3.86GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [AlchemistCoder-L-7B-IQ4_XS.gguf](https://huggingface.co/bartowski/AlchemistCoder-L-7B-GGUF/blob/main/AlchemistCoder-L-7B-IQ4_XS.gguf) | IQ4_XS | 3.61GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [AlchemistCoder-L-7B-Q3_K_M.gguf](https://huggingface.co/bartowski/AlchemistCoder-L-7B-GGUF/blob/main/AlchemistCoder-L-7B-Q3_K_M.gguf) | Q3_K_M | 3.29GB | Even lower quality. |
| [AlchemistCoder-L-7B-IQ3_M.gguf](https://huggingface.co/bartowski/AlchemistCoder-L-7B-GGUF/blob/main/AlchemistCoder-L-7B-IQ3_M.gguf) | IQ3_M | 3.11GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [AlchemistCoder-L-7B-Q3_K_S.gguf](https://huggingface.co/bartowski/AlchemistCoder-L-7B-GGUF/blob/main/AlchemistCoder-L-7B-Q3_K_S.gguf) | Q3_K_S | 2.94GB | Low quality, not recommended. |
| [AlchemistCoder-L-7B-IQ3_XXS.gguf](https://huggingface.co/bartowski/AlchemistCoder-L-7B-GGUF/blob/main/AlchemistCoder-L-7B-IQ3_XXS.gguf) | IQ3_XXS | 2.58GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [AlchemistCoder-L-7B-Q2_K.gguf](https://huggingface.co/bartowski/AlchemistCoder-L-7B-GGUF/blob/main/AlchemistCoder-L-7B-Q2_K.gguf) | Q2_K | 2.53GB | Very low quality but surprisingly usable. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/AlchemistCoder-L-7B-GGUF --include "AlchemistCoder-L-7B-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/AlchemistCoder-L-7B-GGUF --include "AlchemistCoder-L-7B-Q8_0.gguf/*" --local-dir AlchemistCoder-L-7B-Q8_0
```
You can either specify a new local-dir (AlchemistCoder-L-7B-Q8_0) or download them all in place (./)
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
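If you'd rather not do the arithmetic by hand, here is a tiny illustrative helper (not part of the original guidance) that applies that rule of thumb to the file sizes in the table above:

```python
# Tiny helper for the rule of thumb above: pick the largest quant that leaves ~2GB of headroom.
# File sizes (GB) are taken from the table above for this model.
QUANT_SIZES_GB = {
    "Q8_0": 7.16, "Q6_K": 5.53, "Q5_K_M": 4.78, "Q5_K_S": 4.65,
    "Q4_K_M": 4.08, "Q4_K_S": 3.86, "IQ4_XS": 3.61, "Q3_K_M": 3.29,
    "IQ3_M": 3.11, "Q3_K_S": 2.94, "IQ3_XXS": 2.58, "Q2_K": 2.53,
}

def largest_fitting_quant(available_gb, headroom_gb=2.0):
    """Return the largest quant whose file fits in the available memory minus headroom."""
    budget = available_gb - headroom_gb
    fitting = {name: size for name, size in QUANT_SIZES_GB.items() if size <= budget}
    return max(fitting, key=fitting.get) if fitting else None

print(largest_fitting_quant(8.0))   # an 8GB GPU -> 'Q6_K'
```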
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD cards, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
DrChamyoung/Quinbeta1.1
|
DrChamyoung
| 2024-06-12T10:43:55Z | 379 | 2 | null |
[
"gguf",
"license:apache-2.0",
"region:us"
] | null | 2024-06-12T09:22:45Z |
---
license: apache-2.0
---
# Quin Beta 1.1
Quin Beta 1.1 is an advanced Transformer-based language model with 2.7 billion parameters. It is an extension of the previous Quin Beta release, trained on an enhanced dataset that includes synthetic NLP texts and curated web content. The model demonstrates near state-of-the-art performance in benchmarks for common sense, language understanding, and logical reasoning within its parameter class. Quin Beta 1.1 is designed for QA, chat, and code formats and has a context length of 2048 tokens.
- Key Features
- Parameter Count: 2.7 billion
- Training Data: 250 billion tokens from a combination of AOAI synthetic data and filtered web data
- Training Tokens: 1.4 trillion tokens
- Hardware Used: 96xA100-80G GPUs
- Training Duration: 14 days
- Frameworks: ONNX and JAX
- Reinforcement Learning: Not applied (intended for enabling safety research)
- Applications: QA, chat, and code formats
- Context Length: 2048 tokens
- Open-Source: Yes, aimed at enabling safety research like toxicity reduction and bias understanding
# Performance
Quin Beta 1.1 has demonstrated near state-of-the-art performance on various benchmarks, specifically targeting:
- Common sense reasoning
- Language understanding
- Logical reasoning
# Usage
## Installation
To use Quin Beta 1.1, you will need to have ONNX and JAX installed. Follow the installation instructions for each framework:
### ONNX
````sh
pip install onnxruntime
````
### JAX
````sh
pip install jax jaxlib
````
## Loading the Model
Here is a basic example of how to load and use Quin Beta 1.1 using ONNX:
````py
import numpy as np
import onnxruntime as ort

# Load the exported ONNX graph (the file name is illustrative)
ort_session = ort.InferenceSession("quin_beta_1.1.onnx")

# ONNX Runtime expects numeric tensors, not raw strings: the prompt must first be
# tokenized with the model's tokenizer. The token IDs below are placeholders.
input_ids = np.array([[1, 2, 3, 4]], dtype=np.int64)

# The input name depends on how the graph was exported; "input_ids" is assumed here.
outputs = ort_session.run(None, {"input_ids": input_ids})

# The first output typically contains the logits (or generated token IDs).
result = outputs[0]
print(result)
````
## Research and Safety
Quin Beta 1.1 is open-source and designed to enable research in safety domains such as:
- Toxicity reduction
- Bias understanding and mitigation
The model has not undergone reinforcement learning fine-tuning, providing a baseline for safety research efforts.
## Contributing
Contributions to Quin Beta 1.1 are welcome. If you would like to contribute, please follow these steps:
1. Fork the repository.
2. Create a new branch (git checkout -b feature-branch).
3. Make your changes.
4. Commit your changes (git commit -am 'Add new feature').
5. Push to the branch (git push origin feature-branch).
6. Create a new Pull Request.
## License
Quin Beta 1.1 is released under the apache-2.0 License. See the LICENSE file for more details.
## Contact
For any questions or issues, please contact [email protected]
|
Felladrin/gguf-sharded-Qwen2-0.5B-Instruct-llamafy
|
Felladrin
| 2024-06-27T12:07:54Z | 379 | 0 | null |
[
"gguf",
"base_model:Minami-su/Qwen2-0.5B-Instruct-llamafy",
"license:apache-2.0",
"region:us"
] | null | 2024-06-27T12:04:47Z |
---
license: apache-2.0
base_model: Minami-su/Qwen2-0.5B-Instruct-llamafy
---
Sharded GGUF version of [Minami-su/Qwen2-0.5B-Instruct-llamafy](https://huggingface.co/Minami-su/Qwen2-0.5B-Instruct-llamafy).
|
clip-italian/clip-italian
|
clip-italian
| 2023-03-16T16:06:34Z | 378 | 13 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"vision-text-dual-encoder",
"feature-extraction",
"italian",
"bert",
"vit",
"vision",
"it",
"dataset:wit",
"dataset:ctl/conceptualCaptions",
"dataset:mscoco-it",
"arxiv:2108.08688",
"arxiv:2103.01913",
"arxiv:2103.00020",
"license:gpl-3.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language: it
license: gpl-3.0
datasets:
- wit
- ctl/conceptualCaptions
- mscoco-it
tags:
- italian
- bert
- vit
- vision
---
# Italian CLIP
Paper: [Contrastive Language-Image Pre-training for the Italian Language](https://arxiv.org/abs/2108.08688)
With a few tricks, we have been able to fine-tune a competitive Italian CLIP model with **only 1.4 million** training samples. Our Italian CLIP model is built upon the [Italian BERT](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) model provided by [dbmdz](https://huggingface.co/dbmdz) and the OpenAI [vision transformer](https://huggingface.co/openai/clip-vit-base-patch32).
Do you want to test our model right away? We got you covered! You just need to head to our [demo application](https://huggingface.co/spaces/clip-italian/clip-italian-demo).
The demo also contains all the details of the project, from training tricks to our most impressive results, and much more!
# Training data
We considered four main sources of data:
+ [WIT](https://github.com/google-research-datasets/wit) is an image-caption dataset collected from Wikipedia (see,
[Srinivasan et al., 2021](https://arxiv.org/pdf/2103.01913.pdf)).
+ [MSCOCO-IT](https://github.com/crux82/mscoco-it). This image-caption dataset comes from the work by [Scaiella et al., 2019](http://www.ai-lc.it/IJCoL/v5n2/IJCOL_5_2_3___scaiella_et_al.pdf).
+ [Conceptual Captions](https://ai.google.com/research/ConceptualCaptions/). This image-caption dataset comes from
the work by [Sharma et al., 2018](https://aclanthology.org/P18-1238.pdf).
+ [La Foto del Giorno](https://www.ilpost.it/foto-del-giorno/). This image-caption dataset is collected from [Il Post](https://www.ilpost.it/), a prominent Italian online newspaper.
We used better data augmentation, strategic training choices (we have way less data than the original CLIP paper), and backbone-freezing pre-training. For all the details on that, please refer to our [demo](https://huggingface.co/spaces/clip-italian/clip-italian-demo).
# Experiments
## Quantitative Evaluation
To better understand how well our clip-italian model works, we ran an experimental evaluation. Since this is the first CLIP-based model in Italian, we used the multilingual CLIP model as a comparison baseline.
### mCLIP
The multilingual CLIP (henceforth, mCLIP), is a model introduced by [Nils Reimers](https://www.sbert.net/docs/pretrained_models.html) in his
[sentence-transformer](https://www.sbert.net/index.html) library. mCLIP is based on a multilingual encoder
that was created through multilingual knowledge distillation (see [Reimers et al., 2020](https://aclanthology.org/2020.emnlp-main.365/)).
### Tasks
We selected two different tasks:
+ image-retrieval
+ zero-shot classification
### Reproducibility
Both experiments should be very easy to replicate; we share the two Colab notebooks we used to compute the results:
+ [Image Retrieval](https://colab.research.google.com/drive/1bLVwVKpAndpEDHqjzxVPr_9nGrSbuOQd?usp=sharing)
+ [ImageNet Zero Shot Evaluation](https://colab.research.google.com/drive/1zfWeVWY79XXH63Ci-pk8xxx3Vu_RRgW-?usp=sharing)
### Image Retrieval
This experiment is run against the MSCOCO-IT validation set (which we did not use in training). Given a caption as input, we search for the most similar image in the MSCOCO-IT validation set. As the evaluation metric we use MRR@K.
| MRR | CLIP-Italian | mCLIP |
| --------------- | ------------ |-------|
| MRR@1 | **0.3797** | 0.2874|
| MRR@5 | **0.5039** | 0.3957|
| MRR@10 | **0.5204** | 0.4129|
It is true that we used MSCOCO-IT in training, and this might give us an advantage. However, the original CLIP model was trained on 400 million images (and some of them were probably from MSCOCO).
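For readers who want to recompute the metric outside the notebooks, here is a minimal illustrative sketch of MRR@K computed from a caption-to-image similarity matrix (this is not the code from our notebooks):

```python
import numpy as np

def mrr_at_k(similarity: np.ndarray, k: int) -> float:
    """similarity[i, j] = score between caption i and image j; image i is the true match."""
    ranks = []
    for i, scores in enumerate(similarity):
        top_k = np.argsort(scores)[::-1][:k]   # indices of the k most similar images
        hits = np.where(top_k == i)[0]         # position of the correct image, if retrieved
        ranks.append(1.0 / (hits[0] + 1) if hits.size else 0.0)
    return float(np.mean(ranks))

# Toy example: 3 captions vs. 3 images (rows: captions, columns: images).
sim = np.array([[0.9, 0.1, 0.0],
                [0.2, 0.3, 0.8],
                [0.1, 0.7, 0.6]])
print(mrr_at_k(sim, k=2))  # 0.666... for this toy matrix
```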
### Zero-shot image classification
This experiment replicates the original one run by OpenAI on zero-shot image classification on ImageNet.
To do this, we used DeepL to translate the ImageNet image labels. We evaluate the models by computing the accuracy at different levels.
| Accuracy | CLIP-Italian | mCLIP |
| --------------- | ------------ |-------|
| Accuracy@1 | **22.11** | 20.15 |
| Accuracy@5 | **43.69** | 36.57 |
| Accuracy@10 | **52.55** | 42.91 |
| Accuracy@100 | **81.08** | 67.11 |
Our results confirm that CLIP-Italian is very competitive and beats mCLIP on the two different tasks we have been testing. Note, however, that our results are lower than those shown in the original OpenAI paper (see [Radford et al., 2021](https://arxiv.org/abs/2103.00020)). Considering that our results are in line with those obtained by mCLIP, we think that the translated image labels might have had an impact on the final scores.
# Team members
- Federico Bianchi ([vinid](https://huggingface.co/vinid))
- Raphael Pisoni ([4rtemi5](https://huggingface.co/4rtemi5))
- Giuseppe Attanasio ([g8a9](https://huggingface.co/g8a9))
- Silvia Terragni ([silviatti](https://huggingface.co/silviatti))
- Dario Balestri ([D3Reo](https://huggingface.co/D3Reo))
- Gabriele Sarti ([gsarti](https://huggingface.co/gsarti))
- Sri Lakshmi ([srisweet](https://huggingface.co/srisweet))
|
cointegrated/roberta-base-formality
|
cointegrated
| 2023-04-03T18:05:31Z | 378 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
Entry not found
|
sonoisa/clip-vit-b-32-japanese-v1
|
sonoisa
| 2022-04-19T14:18:58Z | 378 | 21 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"clip",
"sentence-similarity",
"ja",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language: ja
license: cc-by-sa-4.0
tags:
- clip
- feature-extraction
- sentence-similarity
---
# Japanese [CLIP](https://github.com/openai/CLIP) model
This is a [CLIP](https://github.com/openai/CLIP) text/image encoder model for Japanese.
It was created by adapting the text encoder of the English CLIP model to Japanese through a form of distillation.
For details on how it was built, its accuracy, how to use it, and sample code, please see the articles below.
- Explanatory articles:
  - Overview: [Pre-trained models recommended for anyone doing multimodal processing in 2022 (Japanese model included)](https://qiita.com/sonoisa/items/00e8e2861147842f0237)
  - Usage guide: [Japanese CLIP: image-text similarity, image/text embeddings, and similar-image search](https://qiita.com/sonoisa/items/d6db2f130fa9a4ce0c2c)
  - (in preparation) Advanced guide: multimodal search over Irasutoya images (zero-shot edition)
  - (in preparation) Advanced guide: multimodal search over Irasutoya images (fine-tuning edition)
  - (in preparation) Advanced guide: multimodal classification using both images and text
- Sample code repository: https://github.com/sonoisa/clip-japanese
- Demo:
  - [Multimodal search over Irasutoya images (zero-shot)](https://huggingface.co/spaces/sonoisa/Irasuto_search_CLIP_zero-shot)
|
yoshitomo-matsubara/bert-base-uncased-mnli
|
yoshitomo-matsubara
| 2024-04-19T02:31:33Z | 378 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"mnli",
"ax",
"glue",
"torchdistill",
"en",
"dataset:mnli",
"dataset:ax",
"arxiv:2310.17644",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- bert
- mnli
- ax
- glue
- torchdistill
license: apache-2.0
datasets:
- mnli
- ax
metrics:
- accuracy
---
`bert-base-uncased` fine-tuned on MNLI dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the BERT paper, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/mnli/ce/bert_base_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **77.9**.
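For inference, the checkpoint can be used as a standard sequence-pair classifier. Below is a minimal, illustrative example with the `transformers` pipeline (the labels may show up as generic `LABEL_0/1/2` depending on the stored config):

```python
from transformers import pipeline

# Sequence-pair classification: premise + hypothesis, as in MNLI.
classifier = pipeline("text-classification", model="yoshitomo-matsubara/bert-base-uncased-mnli")

result = classifier({
    "text": "A soccer game with multiple males playing.",  # premise
    "text_pair": "Some men are playing a sport.",          # hypothesis
})
# e.g. [{'label': 'LABEL_0', 'score': ...}]; the mapping to entailment/neutral/contradiction
# depends on the label mapping stored in the model config.
print(result)
```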
Yoshitomo Matsubara: **"torchdistill Meets Hugging Face Libraries for Reproducible, Coding-Free Deep Learning Studies: A Case Study on NLP"** at *EMNLP 2023 Workshop for Natural Language Processing Open Source Software (NLP-OSS)*
[[Paper](https://aclanthology.org/2023.nlposs-1.18/)] [[OpenReview](https://openreview.net/forum?id=A5Axeeu1Bo)] [[Preprint](https://arxiv.org/abs/2310.17644)]
```bibtex
@inproceedings{matsubara2023torchdistill,
title={{torchdistill Meets Hugging Face Libraries for Reproducible, Coding-Free Deep Learning Studies: A Case Study on NLP}},
author={Matsubara, Yoshitomo},
booktitle={Proceedings of the 3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS 2023)},
publisher={Empirical Methods in Natural Language Processing},
pages={153--164},
year={2023}
}
```
|
IDEA-CCNL/Randeng-Pegasus-238M-Summary-Chinese
|
IDEA-CCNL
| 2023-05-25T09:45:41Z | 378 | 42 |
transformers
|
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"summarization",
"chinese",
"zh",
"arxiv:1912.08777",
"arxiv:2209.02970",
"autotrain_compatible",
"region:us"
] |
summarization
| 2022-07-01T02:02:17Z |
---
language: zh
tags:
- summarization
- chinese
inference: False
---
# Randeng-Pegasus-238M-Summary-Chinese
- Main Page:[Fengshenbang](https://fengshenbang-lm.com/)
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
## 简介 Brief Introduction
善于处理摘要任务,在数个中文摘要数据集上微调后的,中文版的PEGASUS-base。
Good at solving text summarization tasks: a Chinese PEGASUS-base fine-tuned on several Chinese text summarization datasets.
## 模型分类 Model Taxonomy
| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 通用 General | 自然语言转换 NLT | 燃灯 Randeng | PEGASUS | 238M | 文本摘要任务-中文 Summary-Chinese |
## 模型信息 Model Information
参考论文:[PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/pdf/1912.08777.pdf)
基于[Randeng-Pegasus-238M-Chinese](https://huggingface.co/IDEA-CCNL/Randeng-Pegasus-238M-Chinese),我们在收集的7个中文领域的文本摘要数据集(约4M个样本)上微调了它,得到了summary版本。这7个数据集为:education, new2016zh, nlpcc, shence, sohu, thucnews和weibo。
Based on [Randeng-Pegasus-238M-Chinese](https://huggingface.co/IDEA-CCNL/Randeng-Pegasus-238M-Chinese), we fine-tuned a text summarization version (summary) on 7 Chinese text summarization datasets, totaling around 4M samples. The datasets include: education, new2016zh, nlpcc, shence, sohu, thucnews and weibo.
### 下游效果 Performance
| datasets | rouge-1 | rouge-2 | rouge-L |
| ---- | ---- | ---- | ---- |
| LCSTS | 43.46 | 29.59 | 39.76 |
## 使用 Usage
```python
from transformers import PegasusForConditionalGeneration
# Need to download tokenizers_pegasus.py and other Python script from Fengshenbang-LM github repo in advance,
# or you can download tokenizers_pegasus.py and data_utils.py in https://huggingface.co/IDEA-CCNL/Randeng_Pegasus_523M/tree/main
# Strongly recommend you git clone the Fengshenbang-LM repo:
# 1. git clone https://github.com/IDEA-CCNL/Fengshenbang-LM
# 2. cd Fengshenbang-LM/fengshen/examples/pegasus/
# and then you will see the tokenizers_pegasus.py and data_utils.py which are needed by pegasus model
from tokenizers_pegasus import PegasusTokenizer
model = PegasusForConditionalGeneration.from_pretrained("IDEA-CCNL/Randeng-Pegasus-238M-Summary-Chinese")
tokenizer = PegasusTokenizer.from_pretrained("IDEA-CCNL/Randeng-Pegasus-238M-Summary-Chinese")
text = "在北京冬奥会自由式滑雪女子坡面障碍技巧决赛中,中国选手谷爱凌夺得银牌。祝贺谷爱凌!今天上午,自由式滑雪女子坡面障碍技巧决赛举行。决赛分三轮进行,取选手最佳成绩排名决出奖牌。第一跳,中国选手谷爱凌获得69.90分。在12位选手中排名第三。完成动作后,谷爱凌又扮了个鬼脸,甚是可爱。第二轮中,谷爱凌在道具区第三个障碍处失误,落地时摔倒。获得16.98分。网友:摔倒了也没关系,继续加油!在第二跳失误摔倒的情况下,谷爱凌顶住压力,第三跳稳稳发挥,流畅落地!获得86.23分!此轮比赛,共12位选手参赛,谷爱凌第10位出场。网友:看比赛时我比谷爱凌紧张,加油!"
inputs = tokenizer(text, max_length=1024, return_tensors="pt")
# Generate Summary
summary_ids = model.generate(inputs["input_ids"])
tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
# model Output: 滑雪女子坡面障碍技巧决赛谷爱凌获银牌
```
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):
If you are using the resource for your work, please cite our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
|
gligen/gligen-generation-text-box
|
gligen
| 2023-03-04T17:43:31Z | 378 | 2 |
diffusers
|
[
"diffusers",
"region:us"
] | null | 2023-03-04T17:37:39Z |
Entry not found
|
TheBloke/Platypus2-70B-Instruct-GGUF
|
TheBloke
| 2023-09-27T12:48:06Z | 378 | 11 |
transformers
|
[
"transformers",
"gguf",
"llama",
"en",
"dataset:garage-bAInd/Open-Platypus",
"dataset:Open-Orca/OpenOrca",
"arxiv:2308.07317",
"arxiv:2307.09288",
"base_model:garage-bAInd/Platypus2-70B-instruct",
"license:cc-by-nc-4.0",
"text-generation-inference",
"region:us"
] | null | 2023-09-05T23:44:39Z |
---
language:
- en
license: cc-by-nc-4.0
datasets:
- garage-bAInd/Open-Platypus
- Open-Orca/OpenOrca
model_name: Platypus2 70B Instruct
base_model: garage-bAInd/Platypus2-70B-instruct
inference: false
model_creator: garage-bAInd
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Platypus2 70B Instruct - GGUF
- Model creator: [garage-bAInd](https://huggingface.co/garage-bAInd)
- Original model: [Platypus2 70B Instruct](https://huggingface.co/garage-bAInd/Platypus2-70B-instruct)
<!-- description start -->
## Description
This repo contains GGUF format model files for [garage-bAInd's Platypus2 70B Instruct](https://huggingface.co/garage-bAInd/Platypus2-70B-instruct).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Platypus2-70B-Instruct-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Platypus2-70B-Instruct-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Platypus2-70B-Instruct-GGUF)
* [garage-bAInd's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/garage-bAInd/Platypus2-70B-instruct)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [garage-bAInd's Platypus2 70B Instruct](https://huggingface.co/garage-bAInd/Platypus2-70B-instruct).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [platypus2-70b-instruct.Q2_K.gguf](https://huggingface.co/TheBloke/Platypus2-70B-Instruct-GGUF/blob/main/platypus2-70b-instruct.Q2_K.gguf) | Q2_K | 2 | 29.28 GB| 31.78 GB | smallest, significant quality loss - not recommended for most purposes |
| [platypus2-70b-instruct.Q3_K_S.gguf](https://huggingface.co/TheBloke/Platypus2-70B-Instruct-GGUF/blob/main/platypus2-70b-instruct.Q3_K_S.gguf) | Q3_K_S | 3 | 29.92 GB| 32.42 GB | very small, high quality loss |
| [platypus2-70b-instruct.Q3_K_M.gguf](https://huggingface.co/TheBloke/Platypus2-70B-Instruct-GGUF/blob/main/platypus2-70b-instruct.Q3_K_M.gguf) | Q3_K_M | 3 | 33.19 GB| 35.69 GB | very small, high quality loss |
| [platypus2-70b-instruct.Q3_K_L.gguf](https://huggingface.co/TheBloke/Platypus2-70B-Instruct-GGUF/blob/main/platypus2-70b-instruct.Q3_K_L.gguf) | Q3_K_L | 3 | 36.15 GB| 38.65 GB | small, substantial quality loss |
| [platypus2-70b-instruct.Q4_0.gguf](https://huggingface.co/TheBloke/Platypus2-70B-Instruct-GGUF/blob/main/platypus2-70b-instruct.Q4_0.gguf) | Q4_0 | 4 | 38.87 GB| 41.37 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [platypus2-70b-instruct.Q4_K_S.gguf](https://huggingface.co/TheBloke/Platypus2-70B-Instruct-GGUF/blob/main/platypus2-70b-instruct.Q4_K_S.gguf) | Q4_K_S | 4 | 39.07 GB| 41.57 GB | small, greater quality loss |
| [platypus2-70b-instruct.Q4_K_M.gguf](https://huggingface.co/TheBloke/Platypus2-70B-Instruct-GGUF/blob/main/platypus2-70b-instruct.Q4_K_M.gguf) | Q4_K_M | 4 | 41.42 GB| 43.92 GB | medium, balanced quality - recommended |
| [platypus2-70b-instruct.Q5_0.gguf](https://huggingface.co/TheBloke/Platypus2-70B-Instruct-GGUF/blob/main/platypus2-70b-instruct.Q5_0.gguf) | Q5_0 | 5 | 47.46 GB| 49.96 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [platypus2-70b-instruct.Q5_K_S.gguf](https://huggingface.co/TheBloke/Platypus2-70B-Instruct-GGUF/blob/main/platypus2-70b-instruct.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB| 49.96 GB | large, low quality loss - recommended |
| [platypus2-70b-instruct.Q5_K_M.gguf](https://huggingface.co/TheBloke/Platypus2-70B-Instruct-GGUF/blob/main/platypus2-70b-instruct.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB| 51.25 GB | large, very low quality loss - recommended |
| platypus2-70b-instruct.Q6_K.gguf | Q6_K | 6 | 56.59 GB| 59.09 GB | very large, extremely low quality loss |
| platypus2-70b-instruct.Q8_0.gguf | Q8_0 | 8 | 73.29 GB| 75.79 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
### Q6_K and Q8_0 files are split and require joining
**Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files.
<details>
<summary>Click for instructions regarding Q6_K and Q8_0 files</summary>
### q6_K
Please download:
* `platypus2-70b-instruct.Q6_K.gguf-split-a`
* `platypus2-70b-instruct.Q6_K.gguf-split-b`
### q8_0
Please download:
* `platypus2-70b-instruct.Q8_0.gguf-split-a`
* `platypus2-70b-instruct.Q8_0.gguf-split-b`
To join the files, do the following:
Linux and macOS:
```
cat platypus2-70b-instruct.Q6_K.gguf-split-* > platypus2-70b-instruct.Q6_K.gguf && rm platypus2-70b-instruct.Q6_K.gguf-split-*
cat platypus2-70b-instruct.Q8_0.gguf-split-* > platypus2-70b-instruct.Q8_0.gguf && rm platypus2-70b-instruct.Q8_0.gguf-split-*
```
Windows command line:
```
COPY /B platypus2-70b-instruct.Q6_K.gguf-split-a + platypus2-70b-instruct.Q6_K.gguf-split-b platypus2-70b-instruct.Q6_K.gguf
del platypus2-70b-instruct.Q6_K.gguf-split-a platypus2-70b-instruct.Q6_K.gguf-split-b
COPY /B platypus2-70b-instruct.Q8_0.gguf-split-a + platypus2-70b-instruct.Q8_0.gguf-split-b platypus2-70b-instruct.Q8_0.gguf
del platypus2-70b-instruct.Q8_0.gguf-split-a platypus2-70b-instruct.Q8_0.gguf-split-b
```
</details>
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Platypus2-70B-Instruct-GGUF and below it, a specific filename to download, such as: platypus2-70b-instruct.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub>=0.17.1
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Platypus2-70B-Instruct-GGUF platypus2-70b-instruct.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Platypus2-70B-Instruct-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Platypus2-70B-Instruct-GGUF platypus2-70b-instruct.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m platypus2-70b-instruct.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install ctransformers>=0.2.24
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]>=0.2.24
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Platypus2-70B-Instruct-GGUF", model_file="platypus2-70b-instruct.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
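### How to load this model from Python using llama-cpp-python

A minimal sketch (not from the original card) using the `llama-cpp-python` library mentioned above; it assumes the Q4_K_M file has already been downloaded to the working directory.

```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU (0 for CPU-only).
llm = Llama(
    model_path="platypus2-70b-instruct.Q4_K_M.gguf",
    n_ctx=4096,        # context length
    n_gpu_layers=50,   # adjust to your VRAM
)

output = llm(
    "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nTell me about AI.\n\n### Response:",
    max_tokens=256,
    stop=["### Instruction:"],
)
print(output["choices"][0]["text"])
```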
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: garage-bAInd's Platypus2 70B Instruct
# Platypus2-70B-instruct
Platypus2-70B-instruct is a merge of [`garage-bAInd/Platypus2-70B`](https://huggingface.co/garage-bAInd/Platypus2-70B) and [`upstage/Llama-2-70b-instruct-v2`](https://huggingface.co/upstage/Llama-2-70b-instruct-v2).

### Benchmark Metrics
| Metric | Value |
|-----------------------|-------|
| MMLU (5-shot) | 70.48 |
| ARC (25-shot) | 71.84 |
| HellaSwag (10-shot) | 87.94 |
| TruthfulQA (0-shot) | 62.26 |
| Avg. | 73.13 |
We use state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard. Please see below for detailed instructions on reproducing benchmark results.
### Model Details
* **Trained by**: **Platypus2-70B** trained by Cole Hunter & Ariel Lee; **Llama-2-70b-instruct** trained by upstageAI
* **Model type:** **Platypus2-70B-instruct** is an auto-regressive language model based on the LLaMA 2 transformer architecture.
* **Language(s)**: English
* **License**: Non-Commercial Creative Commons license ([CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/))
### Prompt Template
```
### Instruction:
<prompt> (without the <>)
### Response:
```
### Training Dataset
`garage-bAInd/Platypus2-70B` was trained using the STEM- and logic-based dataset [`garage-bAInd/Open-Platypus`](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).
Please see our [paper](https://arxiv.org/abs/2308.07317) and [project webpage](https://platypus-llm.github.io) for additional information.
### Training Procedure
`garage-bAInd/Platypus2-70B` was instruction fine-tuned using LoRA on 8 A100 80GB. For training details and inference instructions please see the [Platypus](https://github.com/arielnlee/Platypus) GitHub repo.
### Reproducing Evaluation Results
Install LM Evaluation Harness:
```
# clone repository
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
# change to repo directory
cd lm-evaluation-harness
# check out the correct commit
git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463
# install
pip install -e .
```
Each task was evaluated on a single A100 80GB GPU.
ARC:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-70B-instruct --tasks arc_challenge --batch_size 1 --no_cache --write_out --output_path results/Platypus2-70B-instruct/arc_challenge_25shot.json --device cuda --num_fewshot 25
```
HellaSwag:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-70B-instruct --tasks hellaswag --batch_size 1 --no_cache --write_out --output_path results/Platypus2-70B-instruct/hellaswag_10shot.json --device cuda --num_fewshot 10
```
MMLU:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-70B-instruct --tasks hendrycksTest-* --batch_size 1 --no_cache --write_out --output_path results/Platypus2-70B-instruct/mmlu_5shot.json --device cuda --num_fewshot 5
```
TruthfulQA:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-70B-instruct --tasks truthfulqa_mc --batch_size 1 --no_cache --write_out --output_path results/Platypus2-70B-instruct/truthfulqa_0shot.json --device cuda
```
### Limitations and bias
Llama 2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 2 and any fine-tuned variant's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/
### Citations
```bibtex
@article{platypus2023,
title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs},
author={Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz},
booktitle={arXiv preprint arxiv:2308.07317},
year={2023}
}
```
```bibtex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
      author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and others},
      year={2023},
eprint={2307.09288},
archivePrefix={arXiv},
}
```
```bibtex
@inproceedings{
hu2022lora,
title={Lo{RA}: Low-Rank Adaptation of Large Language Models},
author={Edward J Hu and Yelong Shen and Phillip Wallis and Zeyuan Allen-Zhu and Yuanzhi Li and Shean Wang and Lu Wang and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=nZeVKeeFYf9}
}
```
<!-- original-model-card end -->
|
PocketDoc/Dans-MysteryModel-13b-gguf
|
PocketDoc
| 2023-09-28T00:41:40Z | 378 | 0 | null |
[
"gguf",
"en",
"region:us"
] | null | 2023-09-27T21:48:42Z |
---
language:
- en
---
### Description:
This model was made to test some of my data projects. It is a multipurpose chat / chat instruct hybrid model and can be seen as a prototype of Dan's PersonalityEngine Mk. 2. It has been trained on a wide array of one shot instructions, multi round instructions, role playing scenarios, and text adventure games. It is based on Holodeck-1
### Prompt format:
Pygmalion / Metharme
The prompt should start with the cursor on the same line directly after "<|model|>" with no space. The following are all valid formats and can be extended to as many rounds as desired.
```
<|system|>system message here<|user|>user message here<|model|>
```
```
<|system|>system message here<|user|>user message here<|model|>model message<|user|>user message here<|model|>
```
```
<|system|>system message here<|model|>
```
```
<|system|>system message here<|model|>model message<|user|>user message here<|model|>
```
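As a rough, hedged illustration (not part of the original card), the sketch below assembles one of the prompt formats above and runs it with llama-cpp-python; the GGUF filename and sampling settings are placeholders for whichever quant you downloaded.
```python
# Minimal sketch: build a Metharme-style prompt and generate with llama-cpp-python.
# The model filename is hypothetical; point it at the quant you actually downloaded.
from llama_cpp import Llama

llm = Llama(model_path="Dans-MysteryModel-13b.Q4_K_M.gguf", n_ctx=4096)

prompt = (
    "<|system|>The following is a transcript between a helpful assistant and a user."
    "<|user|>Why is the sky blue?"
    "<|model|>"
)

# Stop on the next turn markers so the model only writes its own message.
out = llm(prompt, max_tokens=256, stop=["<|user|>", "<|system|>"])
print(out["choices"][0]["text"])
```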
#### Some example prompts and their resulting outputs:
##### Prompt:
```
<|system|>The following is a transcript between a helpful assistant and a user.<|user|>Why is the sky blue?<|model|>
```
##### Output:
```
The color of the sky appears blue because it's an optical effect caused by the scattering of sunlight in Earth's atmosphere. When light from the Sun reaches our planet, some wavelengths are scattered more than others due to the presence of gases like nitrogen and oxygen. Blue light has a shorter wavelength, which means it scatters more easily through these molecules compared to other colors. This process makes the sky appear blue when we look up at it.
```
##### Prompt:
```
<|system|>You are a Virtual Story Generator. You take the user's input and create an excellent and captivating story that goes in that direction. Use an abundance of sensory descriptions and eloquent prose.<|user|>Alpha Centauri has fallen, to the bears. This is a point of view tale about a soldier on the ground.<|model|>
```
##### Output:
```
The air was cold and damp, heavy with the stench of blood and cordite. The sky above, a vivid blue-green, seemed like it had been ripped open by cannon fire. Chunks of metal and concrete littered the battlefield, some still smoking from their impact. A few soldiers lay motionless, twisted and torn. Others writhed in agony or crawled away, trailing gore behind them.
The soldier crouched down next to his wounded comrade, cursing softly under his breath as he examined the man's injuries. His face was pale, eyes glazed over, lips moving but no words coming out. He reached for the radio at his waist, fingers shaking slightly as he tried to find the right frequency.
"This is Bravo Charlie 3, do you copy?" There was nothing but static in response. "Bravo Charlie 3, this is Charlie Victor 10, do you read me? Over." More silence.
He sighed heavily and stood up, scanning the horizon for any sign of rescue. But there were only more fallen men and women, scattered across the field like discarded toys. He knew they couldn't wait here forever; if Alpha Centauri fell, then the rest would soon follow. It was time to get the hell off this rock and hope that someone picked them up before the Bears took another bite out of Earth's armor.
Gathering what little supplies he could find, he slung his rifle over one shoulder and set off into the unknown, hoping that somewhere out there was a way back home.
```
##### Prompt:
```
<|system|>You are a professional editor with decades of experience, help the user with any task they have for you.<|user|>Can you rewrite this to flow better? "I knew I probably shouldnt have done that but oh well"<|model|>
```
##### Output:
```
I realized that what I had just done might not have been the best choice, but I went ahead anyway.
```
# Some quick and dirty training details:
- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="150" height="24"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
- Sequence length: 4096
- Training time: 16 hours
- Hardware: 1x RTX 3090
- Training type: QLoRA
- PEFT rank/alpha (R/A): 32/32 (see the configuration sketch below)
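The adapter settings listed above could be expressed with the `peft` library roughly as in the sketch below; this is an illustrative approximation, not the actual training configuration (the dropout value is an assumption, and only the rank/alpha come from the list).
```python
# Rough sketch of the stated QLoRA adapter settings using the peft library.
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,               # PEFT R from the list above
    lora_alpha=32,      # PEFT A from the list above
    lora_dropout=0.05,  # assumption: not stated in the card
    bias="none",
    task_type="CAUSAL_LM",
)
```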
# Credits:
### Holodeck-1:
Thank you to Mr. Seeker and the Kobold AI team for the wonderful model Holodeck-1
[Holodeck-1 Huggingface page](https://huggingface.co/KoboldAI/LLAMA2-13B-Holodeck-1)
### Skein Text Adventure Data:
Thank you to the [Kobold AI](https://huggingface.co/KoboldAI) community for curating the Skein dataset, which is pivotal to this model's capabilities.
|
rinna/youri-7b-instruction-gptq
|
rinna
| 2024-04-03T08:24:47Z | 378 | 8 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"ja",
"en",
"dataset:databricks/databricks-dolly-15k",
"dataset:kunishou/databricks-dolly-15k-ja",
"dataset:izumi-lab/llm-japanese-dataset",
"arxiv:2404.01657",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-10-30T15:14:26Z |
---
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
license: llama2
language:
- ja
- en
inference: false
datasets:
- databricks/databricks-dolly-15k
- kunishou/databricks-dolly-15k-ja
- izumi-lab/llm-japanese-dataset
---
# `rinna/youri-7b-instruction-gptq`

# Overview
`rinna/youri-7b-instruction-gptq` is the quantized model for [`rinna/youri-7b-instruction`](https://huggingface.co/rinna/youri-7b-instruction) using AutoGPTQ. The quantized version is 4x smaller than the original model and thus requires less memory and provides faster inference.
* **Model architecture**
Refer to the [original model](https://huggingface.co/rinna/youri-7b-instruction) for architecture details.
* **Fine-tuning**
Refer to the [original model](https://huggingface.co/rinna/youri-7b-instruction) for fine-tuning details.
* **Contributors**
- [Toshiaki Wakatsuki](https://huggingface.co/t-w)
- [Tianyu Zhao](https://huggingface.co/tianyuz)
- [Kei Sawada](https://huggingface.co/keisawada)
---
# Benchmarking
Please refer to [rinna's LM benchmark page](https://rinnakk.github.io/research/benchmarks/lm/index.html).
---
# How to use the model
~~~~python
import torch
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM
tokenizer = AutoTokenizer.from_pretrained("rinna/youri-7b-instruction-gptq")
model = AutoGPTQForCausalLM.from_quantized("rinna/youri-7b-instruction-gptq", use_safetensors=True)
instruction = "次の日本語を英語に翻訳してください。"
input = "大規模言語モデル(だいきぼげんごモデル、英: large language model、LLM)は、多数のパラメータ(数千万から数十億)を持つ人工ニューラルネットワークで構成されるコンピュータ言語モデルで、膨大なラベルなしテキストを使用して自己教師あり学習または半教師あり学習によって訓練が行われる。"
prompt = f"""
以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。要求を適切に満たす応答を書きなさい。
### 指示:
{instruction}
### 入力:
{input}
### 応答:
"""
token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
with torch.no_grad():
output_ids = model.generate(
input_ids=token_ids.to(model.device),
max_new_tokens=200,
do_sample=True,
temperature=0.5,
pad_token_id=tokenizer.pad_token_id,
bos_token_id=tokenizer.bos_token_id,
eos_token_id=tokenizer.eos_token_id
)
output = tokenizer.decode(output_ids.tolist()[0])
print(output)
~~~~
---
# Tokenization
The model uses the original llama-2 tokenizer.
---
# How to cite
~~~
@misc{rinna-youri-7b-instruction-gptq,
title = {rinna/youri-7b-instruction-gptq},
    author = {Wakatsuki, Toshiaki and Zhao, Tianyu and Sawada, Kei},
url = {https://huggingface.co/rinna/youri-7b-instruction-gptq},
}
@inproceedings{sawada2024release,
title = {Release of Pre-Trained Models for the {J}apanese Language},
author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh},
booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
month = {5},
year = {2024},
url = {https://arxiv.org/abs/2404.01657},
}
~~~
---
# License
[The llama2 license](https://ai.meta.com/llama/license/)
|
Yntec/AgarthaChadstyle
|
Yntec
| 2023-11-04T03:31:27Z | 378 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"Style",
"Abstract",
"Surrealism",
"ChadUltraF3",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-11-04T02:50:41Z |
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Style
- Abstract
- Surrealism
- ChadUltraF3
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
---
# 🌈🧬🍭🍄👁️ Agartha 👁️🍄🍭🧬🌈(ChadStyle)
Check the many trigger words of this model at the original page: https://civitai.com/models/69808/agartha-chadstyle
Sample and prompt:

bedroom, DETAILED CHIBI Cartoon, BLUE EYES, Pretty CUTE Girl, beautiful detailed PONYTAIL, seifuku clothes, gorgeous detailed hair, Magazine ad, 1949, iconic. acrylic art on canvas By KlaysMoji and artgerm and Clay Mann and and leyendecker and Dave Rapoza
|
TheBloke/prometheus-7B-v1.0-GGUF
|
TheBloke
| 2023-11-10T19:00:34Z | 378 | 3 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text2text-generation",
"en",
"dataset:kaist-ai/Feedback-Collection",
"arxiv:2310.08491",
"base_model:kaist-ai/prometheus-7b-v1.0",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] |
text2text-generation
| 2023-11-10T18:56:09Z |
---
base_model: kaist-ai/prometheus-7b-v1.0
datasets:
- kaist-ai/Feedback-Collection
inference: false
language:
- en
library_name: transformers
license: apache-2.0
metrics:
- pearsonr
- spearmanr
- accuracy
model_creator: KAIST AI
model_name: Prometheus 7B V1.0
model_type: llama
pipeline_tag: text2text-generation
prompt_template: '###Task Description:
An instruction (might include an Input inside it), a response to evaluate, a reference
answer that gets a score of 5, and a score rubric representing a evaluation criteria
are given.
1. Write a detailed feedback that assess the quality of the response strictly based
on the given score rubric, not evaluating in general.
2. After writing a feedback, write a score that is an integer between 1 and 5. You
should refer to the score rubric.
3. The output format should look as follows: \"Feedback: (write a feedback for criteria)
[RESULT] (an integer number between 1 and 5)\"
4. Please do not generate any other opening, closing, and explanations.
###The instruction to evaluate:
{prompt}
###Response to evaluate:
{{response}}
###Reference Answer (Score 5):
{{reference_answer}}
###Score Rubrics:
[{{criteria_description}}]
Score 1: {{score1_description}}
Score 2: {{score2_description}}
Score 3: {{score3_description}}
Score 4: {{score4_description}}
Score 5: {{score5_description}}
###Feedback:
'
quantized_by: TheBloke
tags:
- text2text-generation
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Prometheus 7B V1.0 - GGUF
- Model creator: [KAIST AI](https://huggingface.co/kaist-ai)
- Original model: [Prometheus 7B V1.0](https://huggingface.co/kaist-ai/prometheus-7b-v1.0)
<!-- description start -->
## Description
This repo contains GGUF format model files for [KAIST AI's Prometheus 7B V1.0](https://huggingface.co/kaist-ai/prometheus-7b-v1.0).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/prometheus-7B-v1.0-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/prometheus-7B-v1.0-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/prometheus-7B-v1.0-GGUF)
* [KAIST AI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/kaist-ai/prometheus-7b-v1.0)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Prometheus
```
###Task Description:
An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given.
1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general.
2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric.
3. The output format should look as follows: \"Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)\"
4. Please do not generate any other opening, closing, and explanations.
###The instruction to evaluate:
{prompt}
###Response to evaluate:
{{response}}
###Reference Answer (Score 5):
{{reference_answer}}
###Score Rubrics:
[{{criteria_description}}]
Score 1: {{score1_description}}
Score 2: {{score2_description}}
Score 3: {{score3_description}}
Score 4: {{score4_description}}
Score 5: {{score5_description}}
###Feedback:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `apache-2.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [KAIST AI's Prometheus 7B V1.0](https://huggingface.co/kaist-ai/prometheus-7b-v1.0).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
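As a rough back-of-the-envelope check (my own illustration, not from the llama.cpp documentation), the bits-per-weight figures above approximately predict the file sizes in the table below; real K-quant files come out larger because some tensors are kept at higher precision and the file includes metadata.
```python
# Rough size estimate from bits-per-weight for a ~6.74B-parameter Llama-2 7B model.
N_PARAMS = 6.74e9

for name, bpw in [("Q2_K", 2.5625), ("Q3_K", 3.4375), ("Q4_K", 4.5), ("Q5_K", 5.5), ("Q6_K", 6.5625)]:
    size_gb = N_PARAMS * bpw / 8 / 1e9  # bits -> bytes -> GB
    print(f"{name}: ~{size_gb:.2f} GB")
```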
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [prometheus-7b-v1.0.Q2_K.gguf](https://huggingface.co/TheBloke/prometheus-7B-v1.0-GGUF/blob/main/prometheus-7b-v1.0.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [prometheus-7b-v1.0.Q3_K_S.gguf](https://huggingface.co/TheBloke/prometheus-7B-v1.0-GGUF/blob/main/prometheus-7b-v1.0.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [prometheus-7b-v1.0.Q3_K_M.gguf](https://huggingface.co/TheBloke/prometheus-7B-v1.0-GGUF/blob/main/prometheus-7b-v1.0.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [prometheus-7b-v1.0.Q3_K_L.gguf](https://huggingface.co/TheBloke/prometheus-7B-v1.0-GGUF/blob/main/prometheus-7b-v1.0.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [prometheus-7b-v1.0.Q4_0.gguf](https://huggingface.co/TheBloke/prometheus-7B-v1.0-GGUF/blob/main/prometheus-7b-v1.0.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [prometheus-7b-v1.0.Q4_K_S.gguf](https://huggingface.co/TheBloke/prometheus-7B-v1.0-GGUF/blob/main/prometheus-7b-v1.0.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [prometheus-7b-v1.0.Q4_K_M.gguf](https://huggingface.co/TheBloke/prometheus-7B-v1.0-GGUF/blob/main/prometheus-7b-v1.0.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [prometheus-7b-v1.0.Q5_0.gguf](https://huggingface.co/TheBloke/prometheus-7B-v1.0-GGUF/blob/main/prometheus-7b-v1.0.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [prometheus-7b-v1.0.Q5_K_S.gguf](https://huggingface.co/TheBloke/prometheus-7B-v1.0-GGUF/blob/main/prometheus-7b-v1.0.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [prometheus-7b-v1.0.Q5_K_M.gguf](https://huggingface.co/TheBloke/prometheus-7B-v1.0-GGUF/blob/main/prometheus-7b-v1.0.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [prometheus-7b-v1.0.Q6_K.gguf](https://huggingface.co/TheBloke/prometheus-7B-v1.0-GGUF/blob/main/prometheus-7b-v1.0.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [prometheus-7b-v1.0.Q8_0.gguf](https://huggingface.co/TheBloke/prometheus-7B-v1.0-GGUF/blob/main/prometheus-7b-v1.0.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/prometheus-7B-v1.0-GGUF and below it, a specific filename to download, such as: prometheus-7b-v1.0.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/prometheus-7B-v1.0-GGUF prometheus-7b-v1.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/prometheus-7B-v1.0-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/prometheus-7B-v1.0-GGUF prometheus-7b-v1.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m prometheus-7b-v1.0.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "###Task Description:\nAn instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given.\n1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general.\n2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric.\n3. The output format should look as follows: \"Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)\"\n4. Please do not generate any other opening, closing, and explanations.\n\n###The instruction to evaluate:\n{prompt}\n\n###Response to evaluate:\n{{response}}\n\n###Reference Answer (Score 5):\n{{reference_answer}}\n\n###Score Rubrics:\n[{{criteria_description}}]\nScore 1: {{score1_description}}\nScore 2: {{score2_description}}\nScore 3: {{score3_description}}\nScore 4: {{score4_description}}\nScore 5: {{score5_description}}\n\n###Feedback:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/prometheus-7B-v1.0-GGUF", model_file="prometheus-7b-v1.0.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
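As a small, hedged sketch of the llama-cpp-python route (the model path and parameters are placeholders, and the import path differs between LangChain releases):
```python
# Minimal LangChain + llama-cpp-python sketch; on older LangChain versions the
# import is `from langchain.llms import LlamaCpp` instead.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="prometheus-7b-v1.0.Q4_K_M.gguf",  # path to your downloaded quant
    n_ctx=4096,
    n_gpu_layers=32,  # set to 0 for CPU-only inference
    temperature=0.7,
)
print(llm.invoke("AI is going to"))
```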
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: KAIST AI's Prometheus 7B V1.0
## Links for Reference
- **Homepage:https://github.com/kaistAI/Prometheus**
- **Repository:https://github.com/kaistAI/Prometheus**
- **Paper:https://arxiv.org/abs/2310.08491**
- **Point of Contact:[email protected]**
# TL;DR
Prometheus is an alternative to GPT-4 for fine-grained evaluation of an underlying LLM, and can also serve as a reward model for Reinforcement Learning from Human Feedback (RLHF).

Prometheus is a language model using [Llama-2-Chat](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) as a base model and fine-tuned on 100K feedback within the [Feedback Collection](https://huggingface.co/datasets/kaist-ai/Feedback-Collection).
Since it was fine-tuned on a large amount of feedback, it is specialized at evaluating long-form responses, outperforming GPT-3.5-Turbo and Llama-2-Chat 70B and performing on par with GPT-4 on various benchmarks.
Most importantly, this was possible because we appended two reference materials (a reference answer and a customized score rubric).
Prometheus is a cheap and powerful alternative to GPT-4 evaluation, which one could use to evaluate LLMs with customized criteria (e.g., Child readability, Cultural Sensitivity, Creativity).
Also, it could be used as a reward model for Reinforcement Learning from Human Feedback (RLHF).
# Model Details
## Model Description
- **Model type:** Language model
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Related Models:** [All Prometheus Checkpoints](https://huggingface.co/models?search=kaist-ai/Prometheus)
- **Resources for more information:**
- [Research paper](https://arxiv.org/abs/2310.08491)
- [GitHub Repo](https://github.com/kaistAI/Prometheus)
Prometheus is trained in two different sizes (7B and 13B).
You can check the 13B model on [this page](https://huggingface.co/kaist-ai/prometheus-13b-v1.0).
Also, check out our dataset on [this page](https://huggingface.co/datasets/kaist-ai/Feedback-Collection).
## Prompt Format
Prometheus requires 4 components in the input: an instruction, a response to evaluate, a score rubric, and a reference answer. You can refer to the prompt format below.
You should fill in the instruction, response, reference answer, criteria description, and a score description for each score in the range of 1 to 5.
```
###Task Description:
An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given.
1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general.
2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric.
3. The output format should look as follows: \"Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)\"
4. Please do not generate any other opening, closing, and explanations.
###The instruction to evaluate:
{instruction}
###Response to evaluate:
{response}
###Reference Answer (Score 5):
{reference_answer}
###Score Rubrics:
[{criteria_description}]
Score 1: {score1_description}
Score 2: {score2_description}
Score 3: {score3_description}
Score 4: {score4_description}
Score 5: {score5_description}
###Feedback:
```
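One possible way to fill the template programmatically is sketched below (my own illustration; it assumes the template text above has been stored verbatim in a `PROMETHEUS_TEMPLATE` string):
```python
# Assumes PROMETHEUS_TEMPLATE holds the template text shown above verbatim.
filled_prompt = PROMETHEUS_TEMPLATE.format(
    instruction="...",            # the task given to the evaluated model
    response="...",               # the response being judged
    reference_answer="...",       # a reference answer that deserves a score of 5
    criteria_description="...",
    score1_description="...",
    score2_description="...",
    score3_description="...",
    score4_description="...",
    score5_description="...",
)
```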
After this, you should apply the conversation template of Llama-2-Chat (not applying it might lead to unexpected behaviors).
You can find the conversation class at this [link](https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py).
```
from fastchat.conversation import get_conv_template  # FastChat's conversation templates (see link above)

conv = get_conv_template("llama-2")
conv.set_system_message("You are a fair evaluator language model.")
conv.append_message(conv.roles[0], dialogs['instruction'])  # dialogs['instruction'] is the filled-in prompt from above
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()
x = tokenizer(prompt, truncation=False)  # tokenizer as in the usage examples below
```
As a result, a feedback and a score decision will be generated, separated by the phrase ```[RESULT]```.
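A small helper like the one below (my own sketch, not from the original card) can split a completion into the feedback and the integer score:
```python
# Split a Prometheus completion into (feedback, score) on the [RESULT] marker.
def parse_prometheus_output(text: str):
    feedback, sep, result = text.partition("[RESULT]")
    if not sep:
        return text.strip(), None  # the model did not follow the output format
    digits = [ch for ch in result if ch.isdigit()]
    score = int(digits[0]) if digits else None
    return feedback.strip(), score
```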
## License
The Feedback Collection and Prometheus are subject to OpenAI's Terms of Use for the generated data. If you suspect any violations, please reach out to us.
# Usage
Find below some example scripts on how to use the model in `transformers`:
## Using the Pytorch model
### Running the model on a CPU
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, LlamaForCausalLM
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
model = LlamaForCausalLM.from_pretrained("kaist-ai/Prometheus-7b-v1.0", device_map="auto")
input_text = "###Task Description: An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given. 1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general. 2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric. 3. The output format should look as follows: "Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)" 4. Please do not generate any other opening, closing, and explanations. ###The instruction to evaluate: You are a well-known psychiatrist who has a reputation for being empathetic and understanding. A client comes to you saying they've had a really hard day at work. They describe their boss as being overly critical and not listening to their ideas, which has left them feeling frustrated and undervalued. They also mention feeling overwhelmed with the workload and a growing sense of loneliness as they are new to the city. How do you respond to them to make them feel heard and understood, as well as offering helpful advice? ###Response to evaluate: It's indeed challenging to deal with a difficult boss and to carry a large workload, especially when you are new to a city and haven't established a supportive network. I would suggest taking a step back to think about your situation and perhaps speak to your boss regarding the issues you're facing. On the other hand, dealing with a large workload can be managed by prioritizing tasks, and maybe you can discuss it with your superiors too. In regards to feeling lonely, you might want to explore activities or groups that align with your interests to meet new people. And also, remember to take time for yourself to relax and destress. I'm here to assist you as we navigate through these issues. ###Reference Answer (Score 5): I hear that you're dealing with quite a few challenging situations at once. It must be tough to have your ideas dismissed by your boss and to feel undervalued in your work environment. It's understandable that such experiences can lead to frustration and a feeling of being overwhelmed. It's important to acknowledge these feelings and not to dismiss them. Feeling heard and validated in our experiences can often be the first step towards resolving the challenges we face. So, I want you to know that your feelings are completely valid and it's okay to be frustrated and overwhelmed. In terms of dealing with your boss, have you considered seeking a private conversation with them to express your feelings? It's often beneficial to communicate what you're experiencing, using "I" statements to express how you feel when your ideas are not taken into consideration. This can lead to mutual understanding and possibly a change in behavior. About the workload, it might help to prioritize tasks and potentially delegate, if possible. Also, consider discussing your workload with your superiors. There might be a misunderstanding about what's manageable or they might have suggestions about how to handle the situation. On the personal front, feeling lonely, especially when you're new to a city, can be really hard. Seek out opportunities to meet new people, perhaps through hobbies, community activities, or online groups. It might take a bit of time, but gradually, you can build a network of friends and acquaintances. 
Remember, it's perfectly okay to have bad days and it's important to take care of your mental health. Consider incorporating activities into your daily routine that make you happy and help you unwind. This could be anything from reading, yoga, going for a walk, or even listening to your favorite music. Please know that you're not alone in this. I'm here to support you through this challenging time and together, we can work towards resolving these issues. ###Score Rubrics: [Is the model able to identify and react correctly to the emotional context of the user's input?] Score 1: The model utterly fails to grasp the user's emotional context and responds in an unfitting manner. Score 2: The model sporadically identifies the emotional context but frequently replies in a manner that doesn't match the user's emotional status. Score 3: The model typically identifies the emotional context and reacts suitably, but occasionally misreads or misjudges the user's feelings. Score 4: The model often identifies the emotional context and reacts suitably, with minor cases of misreading or misjudging. Score 5: The model flawlessly identifies the emotional context of the user's input and consistently responds in a considerate and empathetic manner. ###Feedback:"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, LlamaForCausalLM
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
model = LlamaForCausalLM.from_pretrained("kaist-ai/Prometheus-7b-v1.0", device_map="auto")
input_text = "###Task Description: An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given. 1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general. 2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric. 3. The output format should look as follows: "Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)" 4. Please do not generate any other opening, closing, and explanations. ###The instruction to evaluate: You are a well-known psychiatrist who has a reputation for being empathetic and understanding. A client comes to you saying they've had a really hard day at work. They describe their boss as being overly critical and not listening to their ideas, which has left them feeling frustrated and undervalued. They also mention feeling overwhelmed with the workload and a growing sense of loneliness as they are new to the city. How do you respond to them to make them feel heard and understood, as well as offering helpful advice? ###Response to evaluate: It's indeed challenging to deal with a difficult boss and to carry a large workload, especially when you are new to a city and haven't established a supportive network. I would suggest taking a step back to think about your situation and perhaps speak to your boss regarding the issues you're facing. On the other hand, dealing with a large workload can be managed by prioritizing tasks, and maybe you can discuss it with your superiors too. In regards to feeling lonely, you might want to explore activities or groups that align with your interests to meet new people. And also, remember to take time for yourself to relax and destress. I'm here to assist you as we navigate through these issues. ###Reference Answer (Score 5): I hear that you're dealing with quite a few challenging situations at once. It must be tough to have your ideas dismissed by your boss and to feel undervalued in your work environment. It's understandable that such experiences can lead to frustration and a feeling of being overwhelmed. It's important to acknowledge these feelings and not to dismiss them. Feeling heard and validated in our experiences can often be the first step towards resolving the challenges we face. So, I want you to know that your feelings are completely valid and it's okay to be frustrated and overwhelmed. In terms of dealing with your boss, have you considered seeking a private conversation with them to express your feelings? It's often beneficial to communicate what you're experiencing, using "I" statements to express how you feel when your ideas are not taken into consideration. This can lead to mutual understanding and possibly a change in behavior. About the workload, it might help to prioritize tasks and potentially delegate, if possible. Also, consider discussing your workload with your superiors. There might be a misunderstanding about what's manageable or they might have suggestions about how to handle the situation. On the personal front, feeling lonely, especially when you're new to a city, can be really hard. Seek out opportunities to meet new people, perhaps through hobbies, community activities, or online groups. It might take a bit of time, but gradually, you can build a network of friends and acquaintances. 
Remember, it's perfectly okay to have bad days and it's important to take care of your mental health. Consider incorporating activities into your daily routine that make you happy and help you unwind. This could be anything from reading, yoga, going for a walk, or even listening to your favorite music. Please know that you're not alone in this. I'm here to support you through this challenging time and together, we can work towards resolving these issues. ###Score Rubrics: [Is the model able to identify and react correctly to the emotional context of the user's input?] Score 1: The model utterly fails to grasp the user's emotional context and responds in an unfitting manner. Score 2: The model sporadically identifies the emotional context but frequently replies in a manner that doesn't match the user's emotional status. Score 3: The model typically identifies the emotional context and reacts suitably, but occasionally misreads or misjudges the user's feelings. Score 4: The model often identifies the emotional context and reacts suitably, with minor cases of misreading or misjudging. Score 5: The model flawlessly identifies the emotional context of the user's input and consistently responds in a considerate and empathetic manner. ###Feedback:"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids, do_sample=True, temperature=1.0, top_p=0.9, max_new_tokens=256, repetition_penalty=1.03)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU using different precisions
#### FP16
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, LlamaForCausalLM
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
model = LlamaForCausalLM.from_pretrained("kaist-ai/Prometheus-7b-v1.0", device_map="auto", torch_dtype=torch.float16)
input_text = "###Task Description: An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given. 1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general. 2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric. 3. The output format should look as follows: "Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)" 4. Please do not generate any other opening, closing, and explanations. ###The instruction to evaluate: You are a well-known psychiatrist who has a reputation for being empathetic and understanding. A client comes to you saying they've had a really hard day at work. They describe their boss as being overly critical and not listening to their ideas, which has left them feeling frustrated and undervalued. They also mention feeling overwhelmed with the workload and a growing sense of loneliness as they are new to the city. How do you respond to them to make them feel heard and understood, as well as offering helpful advice? ###Response to evaluate: It's indeed challenging to deal with a difficult boss and to carry a large workload, especially when you are new to a city and haven't established a supportive network. I would suggest taking a step back to think about your situation and perhaps speak to your boss regarding the issues you're facing. On the other hand, dealing with a large workload can be managed by prioritizing tasks, and maybe you can discuss it with your superiors too. In regards to feeling lonely, you might want to explore activities or groups that align with your interests to meet new people. And also, remember to take time for yourself to relax and destress. I'm here to assist you as we navigate through these issues. ###Reference Answer (Score 5): I hear that you're dealing with quite a few challenging situations at once. It must be tough to have your ideas dismissed by your boss and to feel undervalued in your work environment. It's understandable that such experiences can lead to frustration and a feeling of being overwhelmed. It's important to acknowledge these feelings and not to dismiss them. Feeling heard and validated in our experiences can often be the first step towards resolving the challenges we face. So, I want you to know that your feelings are completely valid and it's okay to be frustrated and overwhelmed. In terms of dealing with your boss, have you considered seeking a private conversation with them to express your feelings? It's often beneficial to communicate what you're experiencing, using "I" statements to express how you feel when your ideas are not taken into consideration. This can lead to mutual understanding and possibly a change in behavior. About the workload, it might help to prioritize tasks and potentially delegate, if possible. Also, consider discussing your workload with your superiors. There might be a misunderstanding about what's manageable or they might have suggestions about how to handle the situation. On the personal front, feeling lonely, especially when you're new to a city, can be really hard. Seek out opportunities to meet new people, perhaps through hobbies, community activities, or online groups. It might take a bit of time, but gradually, you can build a network of friends and acquaintances. 
Remember, it's perfectly okay to have bad days and it's important to take care of your mental health. Consider incorporating activities into your daily routine that make you happy and help you unwind. This could be anything from reading, yoga, going for a walk, or even listening to your favorite music. Please know that you're not alone in this. I'm here to support you through this challenging time and together, we can work towards resolving these issues. ###Score Rubrics: [Is the model able to identify and react correctly to the emotional context of the user's input?] Score 1: The model utterly fails to grasp the user's emotional context and responds in an unfitting manner. Score 2: The model sporadically identifies the emotional context but frequently replies in a manner that doesn't match the user's emotional status. Score 3: The model typically identifies the emotional context and reacts suitably, but occasionally misreads or misjudges the user's feelings. Score 4: The model often identifies the emotional context and reacts suitably, with minor cases of misreading or misjudging. Score 5: The model flawlessly identifies the emotional context of the user's input and consistently responds in a considerate and empathetic manner. ###Feedback:"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
#### INT8
<details>
<summary> Click to expand </summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, LlamaForCausalLM
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
model = LlamaForCausalLM.from_pretrained("kaist-ai/Prometheus-7b-v1.0", device_map="auto", load_in_8bit=True)
input_text = "###Task Description: An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given. 1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general. 2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric. 3. The output format should look as follows: "Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)" 4. Please do not generate any other opening, closing, and explanations. ###The instruction to evaluate: You are a well-known psychiatrist who has a reputation for being empathetic and understanding. A client comes to you saying they've had a really hard day at work. They describe their boss as being overly critical and not listening to their ideas, which has left them feeling frustrated and undervalued. They also mention feeling overwhelmed with the workload and a growing sense of loneliness as they are new to the city. How do you respond to them to make them feel heard and understood, as well as offering helpful advice? ###Response to evaluate: It's indeed challenging to deal with a difficult boss and to carry a large workload, especially when you are new to a city and haven't established a supportive network. I would suggest taking a step back to think about your situation and perhaps speak to your boss regarding the issues you're facing. On the other hand, dealing with a large workload can be managed by prioritizing tasks, and maybe you can discuss it with your superiors too. In regards to feeling lonely, you might want to explore activities or groups that align with your interests to meet new people. And also, remember to take time for yourself to relax and destress. I'm here to assist you as we navigate through these issues. ###Reference Answer (Score 5): I hear that you're dealing with quite a few challenging situations at once. It must be tough to have your ideas dismissed by your boss and to feel undervalued in your work environment. It's understandable that such experiences can lead to frustration and a feeling of being overwhelmed. It's important to acknowledge these feelings and not to dismiss them. Feeling heard and validated in our experiences can often be the first step towards resolving the challenges we face. So, I want you to know that your feelings are completely valid and it's okay to be frustrated and overwhelmed. In terms of dealing with your boss, have you considered seeking a private conversation with them to express your feelings? It's often beneficial to communicate what you're experiencing, using "I" statements to express how you feel when your ideas are not taken into consideration. This can lead to mutual understanding and possibly a change in behavior. About the workload, it might help to prioritize tasks and potentially delegate, if possible. Also, consider discussing your workload with your superiors. There might be a misunderstanding about what's manageable or they might have suggestions about how to handle the situation. On the personal front, feeling lonely, especially when you're new to a city, can be really hard. Seek out opportunities to meet new people, perhaps through hobbies, community activities, or online groups. It might take a bit of time, but gradually, you can build a network of friends and acquaintances. 
Remember, it's perfectly okay to have bad days and it's important to take care of your mental health. Consider incorporating activities into your daily routine that make you happy and help you unwind. This could be anything from reading, yoga, going for a walk, or even listening to your favorite music. Please know that you're not alone in this. I'm here to support you through this challenging time and together, we can work towards resolving these issues. ###Score Rubrics: [Is the model able to identify and react correctly to the emotional context of the user's input?] Score 1: The model utterly fails to grasp the user's emotional context and responds in an unfitting manner. Score 2: The model sporadically identifies the emotional context but frequently replies in a manner that doesn't match the user's emotional status. Score 3: The model typically identifies the emotional context and reacts suitably, but occasionally misreads or misjudges the user's feelings. Score 4: The model often identifies the emotional context and reacts suitably, with minor cases of misreading or misjudging. Score 5: The model flawlessly identifies the emotional context of the user's input and consistently responds in a considerate and empathetic manner. ###Feedback:"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
# Citation
If you find the following model helpful, please consider citing our paper!
**BibTeX:**
```bibtex
@misc{kim2023prometheus,
title={Prometheus: Inducing Fine-grained Evaluation Capability in Language Models},
author={Seungone Kim and Jamin Shin and Yejin Cho and Joel Jang and Shayne Longpre and Hwaran Lee and Sangdoo Yun and Seongjin Shin and Sungdong Kim and James Thorne and Minjoon Seo},
year={2023},
eprint={2310.08491},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!-- original-model-card end -->
|
ChrisWilson011016/5Fh1Rj3GcY4cUagJo2bH49UxPyc2xyiudgsPQc1LneD4WYFu_vgg
|
ChrisWilson011016
| 2024-03-04T19:07:24Z | 378 | 0 |
keras
|
[
"keras",
"region:us"
] | null | 2024-02-29T13:07:27Z |
Entry not found
|
mradermacher/opus-v1.2-70b-i1-GGUF
|
mradermacher
| 2024-05-07T16:25:04Z | 378 | 0 |
transformers
|
[
"transformers",
"gguf",
"unsloth",
"axolotl",
"en",
"base_model:dreamgen/opus-v1.2-70b",
"license:cc-by-nc-nd-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-14T22:29:00Z |
---
base_model: dreamgen/opus-v1.2-70b
language:
- en
library_name: transformers
license: cc-by-nc-nd-4.0
quantized_by: mradermacher
tags:
- unsloth
- axolotl
---
## About
weighted/imatrix quants of https://huggingface.co/dreamgen/opus-v1.2-70b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/opus-v1.2-70b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
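For the multi-part quants below (for example the two-part Q6_K), one common approach on Linux/macOS is simply to concatenate the parts in order before loading them; this is a sketch of that step, not an official tool:
```shell
cat opus-v1.2-70b.i1-Q6_K.gguf.part1of2 opus-v1.2-70b.i1-Q6_K.gguf.part2of2 > opus-v1.2-70b.i1-Q6_K.gguf
```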
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/opus-v1.2-70b-i1-GGUF/resolve/main/opus-v1.2-70b.i1-IQ1_S.gguf) | i1-IQ1_S | 15.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/opus-v1.2-70b-i1-GGUF/resolve/main/opus-v1.2-70b.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/opus-v1.2-70b-i1-GGUF/resolve/main/opus-v1.2-70b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.7 | |
| [GGUF](https://huggingface.co/mradermacher/opus-v1.2-70b-i1-GGUF/resolve/main/opus-v1.2-70b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.8 | |
| [GGUF](https://huggingface.co/mradermacher/opus-v1.2-70b-i1-GGUF/resolve/main/opus-v1.2-70b.i1-IQ2_S.gguf) | i1-IQ2_S | 21.8 | |
| [GGUF](https://huggingface.co/mradermacher/opus-v1.2-70b-i1-GGUF/resolve/main/opus-v1.2-70b.i1-IQ2_M.gguf) | i1-IQ2_M | 23.7 | |
| [GGUF](https://huggingface.co/mradermacher/opus-v1.2-70b-i1-GGUF/resolve/main/opus-v1.2-70b.i1-Q2_K.gguf) | i1-Q2_K | 25.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/opus-v1.2-70b-i1-GGUF/resolve/main/opus-v1.2-70b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/opus-v1.2-70b-i1-GGUF/resolve/main/opus-v1.2-70b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.6 | |
| [GGUF](https://huggingface.co/mradermacher/opus-v1.2-70b-i1-GGUF/resolve/main/opus-v1.2-70b.i1-IQ3_S.gguf) | i1-IQ3_S | 30.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/opus-v1.2-70b-i1-GGUF/resolve/main/opus-v1.2-70b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/opus-v1.2-70b-i1-GGUF/resolve/main/opus-v1.2-70b.i1-IQ3_M.gguf) | i1-IQ3_M | 31.4 | |
| [GGUF](https://huggingface.co/mradermacher/opus-v1.2-70b-i1-GGUF/resolve/main/opus-v1.2-70b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/opus-v1.2-70b-i1-GGUF/resolve/main/opus-v1.2-70b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/opus-v1.2-70b-i1-GGUF/resolve/main/opus-v1.2-70b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/opus-v1.2-70b-i1-GGUF/resolve/main/opus-v1.2-70b.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/opus-v1.2-70b-i1-GGUF/resolve/main/opus-v1.2-70b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/opus-v1.2-70b-i1-GGUF/resolve/main/opus-v1.2-70b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/opus-v1.2-70b-i1-GGUF/resolve/main/opus-v1.2-70b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.9 | |
| [GGUF](https://huggingface.co/mradermacher/opus-v1.2-70b-i1-GGUF/resolve/main/opus-v1.2-70b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 49.2 | |
| [PART 1](https://huggingface.co/mradermacher/opus-v1.2-70b-i1-GGUF/resolve/main/opus-v1.2-70b.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/opus-v1.2-70b-i1-GGUF/resolve/main/opus-v1.2-70b.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 57.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Bakanayatsu/Fimbulvetr-Kuro-Lotus-10.7B-GGUF-imatrix
|
Bakanayatsu
| 2024-03-19T18:42:32Z | 378 | 5 | null |
[
"gguf",
"region:us"
] | null | 2024-03-16T10:40:23Z |
Original: [Fimbulvetr-Kuro-Lotus-10.7B](https://huggingface.co/saishf/Fimbulvetr-Kuro-Lotus-10.7B)
GGUF: [Fimbulvetr-Kuro-Lotus-10.7B-GGUF](https://huggingface.co/saishf/Fimbulvetr-Kuro-Lotus-10.7B-GGUF)
Imatrix: Here
3/19/2024: Fixed the incoherent generation when context is greater than 4096 with koboldcpp --contextsize
|
mradermacher/Faro-Yi-34B-200K-i1-GGUF
|
mradermacher
| 2024-05-06T05:22:32Z | 378 | 1 |
transformers
|
[
"transformers",
"gguf",
"zh",
"en",
"dataset:wenbopan/Fusang-v1",
"dataset:wenbopan/OpenOrca-zh-20k",
"base_model:wenbopan/Faro-Yi-34B-200K",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-02T17:16:03Z |
---
base_model: wenbopan/Faro-Yi-34B-200K
datasets:
- wenbopan/Fusang-v1
- wenbopan/OpenOrca-zh-20k
language:
- zh
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/wenbopan/Faro-Yi-34B-200K
**This uses my "quarter" training set of 40k tokens as the model overflowed after 25k tokens with the standard set.**
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Faro-Yi-34B-200K-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Faro-Yi-34B-200K-i1-GGUF/resolve/main/Faro-Yi-34B-200K.i1-IQ1_S.gguf) | i1-IQ1_S | 8.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Faro-Yi-34B-200K-i1-GGUF/resolve/main/Faro-Yi-34B-200K.i1-IQ1_M.gguf) | i1-IQ1_M | 8.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Faro-Yi-34B-200K-i1-GGUF/resolve/main/Faro-Yi-34B-200K.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/Faro-Yi-34B-200K-i1-GGUF/resolve/main/Faro-Yi-34B-200K.i1-IQ2_XS.gguf) | i1-IQ2_XS | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/Faro-Yi-34B-200K-i1-GGUF/resolve/main/Faro-Yi-34B-200K.i1-IQ2_S.gguf) | i1-IQ2_S | 11.6 | |
| [GGUF](https://huggingface.co/mradermacher/Faro-Yi-34B-200K-i1-GGUF/resolve/main/Faro-Yi-34B-200K.i1-IQ2_M.gguf) | i1-IQ2_M | 12.5 | |
| [GGUF](https://huggingface.co/mradermacher/Faro-Yi-34B-200K-i1-GGUF/resolve/main/Faro-Yi-34B-200K.i1-Q2_K.gguf) | i1-Q2_K | 13.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Faro-Yi-34B-200K-i1-GGUF/resolve/main/Faro-Yi-34B-200K.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 14.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Faro-Yi-34B-200K-i1-GGUF/resolve/main/Faro-Yi-34B-200K.i1-IQ3_XS.gguf) | i1-IQ3_XS | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/Faro-Yi-34B-200K-i1-GGUF/resolve/main/Faro-Yi-34B-200K.i1-Q3_K_S.gguf) | i1-Q3_K_S | 15.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Faro-Yi-34B-200K-i1-GGUF/resolve/main/Faro-Yi-34B-200K.i1-IQ3_S.gguf) | i1-IQ3_S | 15.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Faro-Yi-34B-200K-i1-GGUF/resolve/main/Faro-Yi-34B-200K.i1-IQ3_M.gguf) | i1-IQ3_M | 16.2 | |
| [GGUF](https://huggingface.co/mradermacher/Faro-Yi-34B-200K-i1-GGUF/resolve/main/Faro-Yi-34B-200K.i1-Q3_K_M.gguf) | i1-Q3_K_M | 17.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Faro-Yi-34B-200K-i1-GGUF/resolve/main/Faro-Yi-34B-200K.i1-Q3_K_L.gguf) | i1-Q3_K_L | 18.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Faro-Yi-34B-200K-i1-GGUF/resolve/main/Faro-Yi-34B-200K.i1-IQ4_XS.gguf) | i1-IQ4_XS | 19.1 | |
| [GGUF](https://huggingface.co/mradermacher/Faro-Yi-34B-200K-i1-GGUF/resolve/main/Faro-Yi-34B-200K.i1-Q4_0.gguf) | i1-Q4_0 | 20.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Faro-Yi-34B-200K-i1-GGUF/resolve/main/Faro-Yi-34B-200K.i1-Q4_K_S.gguf) | i1-Q4_K_S | 20.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Faro-Yi-34B-200K-i1-GGUF/resolve/main/Faro-Yi-34B-200K.i1-Q4_K_M.gguf) | i1-Q4_K_M | 21.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Faro-Yi-34B-200K-i1-GGUF/resolve/main/Faro-Yi-34B-200K.i1-Q5_K_S.gguf) | i1-Q5_K_S | 24.3 | |
| [GGUF](https://huggingface.co/mradermacher/Faro-Yi-34B-200K-i1-GGUF/resolve/main/Faro-Yi-34B-200K.i1-Q5_K_M.gguf) | i1-Q5_K_M | 25.0 | |
| [GGUF](https://huggingface.co/mradermacher/Faro-Yi-34B-200K-i1-GGUF/resolve/main/Faro-Yi-34B-200K.i1-Q6_K.gguf) | i1-Q6_K | 28.9 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/QLiz-70B-GGUF
|
mradermacher
| 2024-05-13T16:18:47Z | 378 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Noodlz/QLiz-70B",
"endpoints_compatible",
"region:us"
] | null | 2024-04-02T19:54:31Z |
---
base_model: Noodlz/QLiz-70B
language:
- en
library_name: transformers
no_imatrix: model immediately overflows with essentially the first batch
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Noodlz/QLiz-70B
This model seems badly borked.
<!-- provided-files -->
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/QLiz-70B-GGUF/resolve/main/QLiz-70B.Q2_K.gguf) | Q2_K | 25.9 | |
| [GGUF](https://huggingface.co/mradermacher/QLiz-70B-GGUF/resolve/main/QLiz-70B.IQ3_XS.gguf) | IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/QLiz-70B-GGUF/resolve/main/QLiz-70B.IQ3_S.gguf) | IQ3_S | 30.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/QLiz-70B-GGUF/resolve/main/QLiz-70B.Q3_K_S.gguf) | Q3_K_S | 30.3 | |
| [GGUF](https://huggingface.co/mradermacher/QLiz-70B-GGUF/resolve/main/QLiz-70B.IQ3_M.gguf) | IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/QLiz-70B-GGUF/resolve/main/QLiz-70B.Q3_K_M.gguf) | Q3_K_M | 33.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/QLiz-70B-GGUF/resolve/main/QLiz-70B.Q3_K_L.gguf) | Q3_K_L | 36.6 | |
| [GGUF](https://huggingface.co/mradermacher/QLiz-70B-GGUF/resolve/main/QLiz-70B.IQ4_XS.gguf) | IQ4_XS | 37.3 | |
| [GGUF](https://huggingface.co/mradermacher/QLiz-70B-GGUF/resolve/main/QLiz-70B.Q4_K_S.gguf) | Q4_K_S | 39.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QLiz-70B-GGUF/resolve/main/QLiz-70B.Q4_K_M.gguf) | Q4_K_M | 41.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QLiz-70B-GGUF/resolve/main/QLiz-70B.Q5_K_S.gguf) | Q5_K_S | 47.9 | |
| [GGUF](https://huggingface.co/mradermacher/QLiz-70B-GGUF/resolve/main/QLiz-70B.Q5_K_M.gguf) | Q5_K_M | 49.2 | |
| [PART 1](https://huggingface.co/mradermacher/QLiz-70B-GGUF/resolve/main/QLiz-70B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/QLiz-70B-GGUF/resolve/main/QLiz-70B.Q6_K.gguf.part2of2) | Q6_K | 57.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/QLiz-70B-GGUF/resolve/main/QLiz-70B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/QLiz-70B-GGUF/resolve/main/QLiz-70B.Q8_0.gguf.part2of2) | Q8_0 | 73.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/NeuralShadow-7B-GGUF
|
mradermacher
| 2024-05-10T10:01:21Z | 378 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:CultriX/NeuralShadow-7B",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T16:43:04Z |
---
base_model: CultriX/NeuralShadow-7B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/CultriX/NeuralShadow-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NeuralShadow-7B-GGUF/resolve/main/NeuralShadow-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralShadow-7B-GGUF/resolve/main/NeuralShadow-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralShadow-7B-GGUF/resolve/main/NeuralShadow-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralShadow-7B-GGUF/resolve/main/NeuralShadow-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/NeuralShadow-7B-GGUF/resolve/main/NeuralShadow-7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralShadow-7B-GGUF/resolve/main/NeuralShadow-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralShadow-7B-GGUF/resolve/main/NeuralShadow-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralShadow-7B-GGUF/resolve/main/NeuralShadow-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralShadow-7B-GGUF/resolve/main/NeuralShadow-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuralShadow-7B-GGUF/resolve/main/NeuralShadow-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuralShadow-7B-GGUF/resolve/main/NeuralShadow-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralShadow-7B-GGUF/resolve/main/NeuralShadow-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralShadow-7B-GGUF/resolve/main/NeuralShadow-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralShadow-7B-GGUF/resolve/main/NeuralShadow-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RichardErkhov/alnrg2arg_-_blockchainlabs_7B_merged_test2_4-gguf
|
RichardErkhov
| 2024-05-11T20:53:00Z | 378 | 0 | null |
[
"gguf",
"region:us"
] | null | 2024-05-11T18:51:31Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
blockchainlabs_7B_merged_test2_4 - GGUF
- Model creator: https://huggingface.co/alnrg2arg/
- Original model: https://huggingface.co/alnrg2arg/blockchainlabs_7B_merged_test2_4/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [blockchainlabs_7B_merged_test2_4.Q2_K.gguf](https://huggingface.co/RichardErkhov/alnrg2arg_-_blockchainlabs_7B_merged_test2_4-gguf/blob/main/blockchainlabs_7B_merged_test2_4.Q2_K.gguf) | Q2_K | 2.53GB |
| [blockchainlabs_7B_merged_test2_4.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/alnrg2arg_-_blockchainlabs_7B_merged_test2_4-gguf/blob/main/blockchainlabs_7B_merged_test2_4.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [blockchainlabs_7B_merged_test2_4.IQ3_S.gguf](https://huggingface.co/RichardErkhov/alnrg2arg_-_blockchainlabs_7B_merged_test2_4-gguf/blob/main/blockchainlabs_7B_merged_test2_4.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [blockchainlabs_7B_merged_test2_4.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/alnrg2arg_-_blockchainlabs_7B_merged_test2_4-gguf/blob/main/blockchainlabs_7B_merged_test2_4.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [blockchainlabs_7B_merged_test2_4.IQ3_M.gguf](https://huggingface.co/RichardErkhov/alnrg2arg_-_blockchainlabs_7B_merged_test2_4-gguf/blob/main/blockchainlabs_7B_merged_test2_4.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [blockchainlabs_7B_merged_test2_4.Q3_K.gguf](https://huggingface.co/RichardErkhov/alnrg2arg_-_blockchainlabs_7B_merged_test2_4-gguf/blob/main/blockchainlabs_7B_merged_test2_4.Q3_K.gguf) | Q3_K | 3.28GB |
| [blockchainlabs_7B_merged_test2_4.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/alnrg2arg_-_blockchainlabs_7B_merged_test2_4-gguf/blob/main/blockchainlabs_7B_merged_test2_4.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [blockchainlabs_7B_merged_test2_4.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/alnrg2arg_-_blockchainlabs_7B_merged_test2_4-gguf/blob/main/blockchainlabs_7B_merged_test2_4.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [blockchainlabs_7B_merged_test2_4.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/alnrg2arg_-_blockchainlabs_7B_merged_test2_4-gguf/blob/main/blockchainlabs_7B_merged_test2_4.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [blockchainlabs_7B_merged_test2_4.Q4_0.gguf](https://huggingface.co/RichardErkhov/alnrg2arg_-_blockchainlabs_7B_merged_test2_4-gguf/blob/main/blockchainlabs_7B_merged_test2_4.Q4_0.gguf) | Q4_0 | 3.83GB |
| [blockchainlabs_7B_merged_test2_4.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/alnrg2arg_-_blockchainlabs_7B_merged_test2_4-gguf/blob/main/blockchainlabs_7B_merged_test2_4.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [blockchainlabs_7B_merged_test2_4.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/alnrg2arg_-_blockchainlabs_7B_merged_test2_4-gguf/blob/main/blockchainlabs_7B_merged_test2_4.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [blockchainlabs_7B_merged_test2_4.Q4_K.gguf](https://huggingface.co/RichardErkhov/alnrg2arg_-_blockchainlabs_7B_merged_test2_4-gguf/blob/main/blockchainlabs_7B_merged_test2_4.Q4_K.gguf) | Q4_K | 4.07GB |
| [blockchainlabs_7B_merged_test2_4.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/alnrg2arg_-_blockchainlabs_7B_merged_test2_4-gguf/blob/main/blockchainlabs_7B_merged_test2_4.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [blockchainlabs_7B_merged_test2_4.Q4_1.gguf](https://huggingface.co/RichardErkhov/alnrg2arg_-_blockchainlabs_7B_merged_test2_4-gguf/blob/main/blockchainlabs_7B_merged_test2_4.Q4_1.gguf) | Q4_1 | 4.24GB |
| [blockchainlabs_7B_merged_test2_4.Q5_0.gguf](https://huggingface.co/RichardErkhov/alnrg2arg_-_blockchainlabs_7B_merged_test2_4-gguf/blob/main/blockchainlabs_7B_merged_test2_4.Q5_0.gguf) | Q5_0 | 4.65GB |
| [blockchainlabs_7B_merged_test2_4.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/alnrg2arg_-_blockchainlabs_7B_merged_test2_4-gguf/blob/main/blockchainlabs_7B_merged_test2_4.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [blockchainlabs_7B_merged_test2_4.Q5_K.gguf](https://huggingface.co/RichardErkhov/alnrg2arg_-_blockchainlabs_7B_merged_test2_4-gguf/blob/main/blockchainlabs_7B_merged_test2_4.Q5_K.gguf) | Q5_K | 4.78GB |
| [blockchainlabs_7B_merged_test2_4.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/alnrg2arg_-_blockchainlabs_7B_merged_test2_4-gguf/blob/main/blockchainlabs_7B_merged_test2_4.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [blockchainlabs_7B_merged_test2_4.Q5_1.gguf](https://huggingface.co/RichardErkhov/alnrg2arg_-_blockchainlabs_7B_merged_test2_4-gguf/blob/main/blockchainlabs_7B_merged_test2_4.Q5_1.gguf) | Q5_1 | 5.07GB |
| [blockchainlabs_7B_merged_test2_4.Q6_K.gguf](https://huggingface.co/RichardErkhov/alnrg2arg_-_blockchainlabs_7B_merged_test2_4-gguf/blob/main/blockchainlabs_7B_merged_test2_4.Q6_K.gguf) | Q6_K | 5.53GB |
Original model description:
---
license: cc-by-nc-4.0
tags:
- merge
- mergekit
- lazymergekit
- mlabonne/NeuralBeagle14-7B
- udkai/Turdus
---
# blockchainlabs_7B_merged_test2_4
blockchainlabs_7B_merged_test2_4 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
* [udkai/Turdus](https://huggingface.co/udkai/Turdus)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mlabonne/NeuralBeagle14-7B
layer_range: [0, 32]
- model: udkai/Turdus
layer_range: [0, 32]
merge_method: slerp
base_model: mlabonne/NeuralBeagle14-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
MaziyarPanahi/Percival_01M7-7B-GGUF
|
MaziyarPanahi
| 2024-05-22T02:05:51Z | 378 | 0 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"base_model:liminerity/M7-7b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:automerger/Percival_01M7-7B"
] |
text-generation
| 2024-05-22T01:33:56Z |
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- merge
- mergekit
- lazymergekit
- automerger
- base_model:liminerity/M7-7b
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: Percival_01M7-7B-GGUF
base_model: automerger/Percival_01M7-7B
inference: false
model_creator: automerger
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Percival_01M7-7B-GGUF](https://huggingface.co/MaziyarPanahi/Percival_01M7-7B-GGUF)
- Model creator: [automerger](https://huggingface.co/automerger)
- Original model: [automerger/Percival_01M7-7B](https://huggingface.co/automerger/Percival_01M7-7B)
## Description
[MaziyarPanahi/Percival_01M7-7B-GGUF](https://huggingface.co/MaziyarPanahi/Percival_01M7-7B-GGUF) contains GGUF format model files for [automerger/Percival_01M7-7B](https://huggingface.co/automerger/Percival_01M7-7B).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
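As a concrete starting point, here is a minimal llama-cpp-python sketch for running one of these GGUF files locally; the file name, context size, and prompt are illustrative assumptions rather than part of this repository:
```python
from llama_cpp import Llama

# Load a locally downloaded GGUF quant with llama-cpp-python.
llm = Llama(
    model_path="Percival_01M7-7B.Q4_K_M.gguf",  # assumed local file name
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

out = llm("Q: What is the GGUF file format?\nA:", max_tokens=128, stop=["Q:"])
print(out["choices"][0]["text"])
```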
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
|
mradermacher/badger-iota-llama-3-8b-GGUF
|
mradermacher
| 2024-05-24T09:21:30Z | 378 | 1 |
transformers
|
[
"transformers",
"gguf",
"llama3",
"en",
"base_model:maldv/badger-iota-llama-3-8b",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-05-24T08:53:14Z |
---
base_model: maldv/badger-iota-llama-3-8b
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- llama3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hfhfix -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/maldv/badger-iota-llama-3-8b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/badger-iota-llama-3-8b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/badger-iota-llama-3-8b-GGUF/resolve/main/badger-iota-llama-3-8b.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/badger-iota-llama-3-8b-GGUF/resolve/main/badger-iota-llama-3-8b.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/badger-iota-llama-3-8b-GGUF/resolve/main/badger-iota-llama-3-8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/badger-iota-llama-3-8b-GGUF/resolve/main/badger-iota-llama-3-8b.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/badger-iota-llama-3-8b-GGUF/resolve/main/badger-iota-llama-3-8b.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/badger-iota-llama-3-8b-GGUF/resolve/main/badger-iota-llama-3-8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/badger-iota-llama-3-8b-GGUF/resolve/main/badger-iota-llama-3-8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/badger-iota-llama-3-8b-GGUF/resolve/main/badger-iota-llama-3-8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/badger-iota-llama-3-8b-GGUF/resolve/main/badger-iota-llama-3-8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/badger-iota-llama-3-8b-GGUF/resolve/main/badger-iota-llama-3-8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/badger-iota-llama-3-8b-GGUF/resolve/main/badger-iota-llama-3-8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/badger-iota-llama-3-8b-GGUF/resolve/main/badger-iota-llama-3-8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/badger-iota-llama-3-8b-GGUF/resolve/main/badger-iota-llama-3-8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/badger-iota-llama-3-8b-GGUF/resolve/main/badger-iota-llama-3-8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/badger-iota-llama-3-8b-GGUF/resolve/main/badger-iota-llama-3-8b.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
QuantFactory/Llama-3-Instruct-8B-DPO-GGUF
|
QuantFactory
| 2024-05-30T04:55:44Z | 378 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"arxiv:2405.14734",
"base_model:princeton-nlp/Llama-3-Instruct-8B-DPO",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-30T03:42:50Z |
---
library_name: transformers
pipeline_tag: text-generation
base_model: princeton-nlp/Llama-3-Instruct-8B-DPO
---
# QuantFactory/Llama-3-Instruct-8B-DPO-GGUF
This is quantized version of [princeton-nlp/Llama-3-Instruct-8B-DPO](https://huggingface.co/princeton-nlp/Llama-3-Instruct-8B-DPO) created using llama.cpp
# Model Description
This is a model released from the preprint: *[SimPO: Simple Preference Optimization with a Reference-Free Reward](https://arxiv.org/abs/2405.14734)* Please refer to our [repository](https://github.com/princeton-nlp/SimPO) for more details.
|
mradermacher/BioMistral-Ko-2x7B-GGUF
|
mradermacher
| 2024-06-03T18:26:27Z | 378 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:iRASC/BioMistral-Ko-2x7B",
"endpoints_compatible",
"region:us"
] | null | 2024-06-03T15:57:30Z |
---
base_model: iRASC/BioMistral-Ko-2x7B
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/iRASC/BioMistral-Ko-2x7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/BioMistral-Ko-2x7B-GGUF/resolve/main/BioMistral-Ko-2x7B.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/BioMistral-Ko-2x7B-GGUF/resolve/main/BioMistral-Ko-2x7B.IQ3_XS.gguf) | IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/BioMistral-Ko-2x7B-GGUF/resolve/main/BioMistral-Ko-2x7B.Q3_K_S.gguf) | Q3_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/BioMistral-Ko-2x7B-GGUF/resolve/main/BioMistral-Ko-2x7B.IQ3_S.gguf) | IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/BioMistral-Ko-2x7B-GGUF/resolve/main/BioMistral-Ko-2x7B.IQ3_M.gguf) | IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/BioMistral-Ko-2x7B-GGUF/resolve/main/BioMistral-Ko-2x7B.Q3_K_M.gguf) | Q3_K_M | 6.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/BioMistral-Ko-2x7B-GGUF/resolve/main/BioMistral-Ko-2x7B.Q3_K_L.gguf) | Q3_K_L | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/BioMistral-Ko-2x7B-GGUF/resolve/main/BioMistral-Ko-2x7B.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/BioMistral-Ko-2x7B-GGUF/resolve/main/BioMistral-Ko-2x7B.Q4_K_S.gguf) | Q4_K_S | 7.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BioMistral-Ko-2x7B-GGUF/resolve/main/BioMistral-Ko-2x7B.Q4_K_M.gguf) | Q4_K_M | 7.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BioMistral-Ko-2x7B-GGUF/resolve/main/BioMistral-Ko-2x7B.Q5_K_S.gguf) | Q5_K_S | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/BioMistral-Ko-2x7B-GGUF/resolve/main/BioMistral-Ko-2x7B.Q5_K_M.gguf) | Q5_K_M | 9.2 | |
| [GGUF](https://huggingface.co/mradermacher/BioMistral-Ko-2x7B-GGUF/resolve/main/BioMistral-Ko-2x7B.Q6_K.gguf) | Q6_K | 10.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/BioMistral-Ko-2x7B-GGUF/resolve/main/BioMistral-Ko-2x7B.Q8_0.gguf) | Q8_0 | 13.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Marcoroni-13B-GGUF
|
mradermacher
| 2024-06-11T19:00:18Z | 378 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:ibivibiv/Marcoroni-13B",
"endpoints_compatible",
"region:us"
] | null | 2024-06-10T21:21:56Z |
---
base_model: ibivibiv/Marcoroni-13B
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ibivibiv/Marcoroni-13B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Marcoroni-13B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Marcoroni-13B-GGUF/resolve/main/Marcoroni-13B.Q2_K.gguf) | Q2_K | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Marcoroni-13B-GGUF/resolve/main/Marcoroni-13B.IQ3_XS.gguf) | IQ3_XS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Marcoroni-13B-GGUF/resolve/main/Marcoroni-13B.IQ3_S.gguf) | IQ3_S | 5.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Marcoroni-13B-GGUF/resolve/main/Marcoroni-13B.Q3_K_S.gguf) | Q3_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Marcoroni-13B-GGUF/resolve/main/Marcoroni-13B.IQ3_M.gguf) | IQ3_M | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/Marcoroni-13B-GGUF/resolve/main/Marcoroni-13B.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Marcoroni-13B-GGUF/resolve/main/Marcoroni-13B.Q3_K_L.gguf) | Q3_K_L | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/Marcoroni-13B-GGUF/resolve/main/Marcoroni-13B.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/Marcoroni-13B-GGUF/resolve/main/Marcoroni-13B.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Marcoroni-13B-GGUF/resolve/main/Marcoroni-13B.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Marcoroni-13B-GGUF/resolve/main/Marcoroni-13B.Q5_K_S.gguf) | Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/Marcoroni-13B-GGUF/resolve/main/Marcoroni-13B.Q5_K_M.gguf) | Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/Marcoroni-13B-GGUF/resolve/main/Marcoroni-13B.Q6_K.gguf) | Q6_K | 10.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Marcoroni-13B-GGUF/resolve/main/Marcoroni-13B.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Langboat/mengzi-bert-base
|
Langboat
| 2021-10-14T09:01:34Z | 377 | 34 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"zh",
"arxiv:2110.06696",
"doi:10.57967/hf/0023",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
language:
- zh
license: apache-2.0
widget:
- text: "生活的真谛是[MASK]。"
---
# Mengzi-BERT base model (Chinese)
Pretrained model on a 300G Chinese corpus. Masked language modeling (MLM), part-of-speech (POS) tagging and sentence order prediction (SOP) are used as training tasks.
[Mengzi: A lightweight yet Powerful Chinese Pre-trained Language Model](https://arxiv.org/abs/2110.06696)
## Usage
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained("Langboat/mengzi-bert-base")
model = BertModel.from_pretrained("Langboat/mengzi-bert-base")
```
## Scores on nine Chinese tasks (without any data augmentation)
| Model | AFQMC | TNEWS | IFLYTEK | CMNLI | WSC | CSL | CMRC2018 | C3 | CHID |
|-|-|-|-|-|-|-|-|-|-|
|RoBERTa-wwm-ext| 74.30 | 57.51 | 60.80 | 80.70 | 67.20 | 80.67 | 77.59 | 67.06 | 83.78 |
|Mengzi-BERT-base| 74.58 | 57.97 | 60.68 | 82.12 | 87.50 | 85.40 | 78.54 | 71.70 | 84.16 |
RoBERTa-wwm-ext scores are from the CLUE baseline.
## Citation
If you find the technical report or resource is useful, please cite the following technical report in your paper.
```
@misc{zhang2021mengzi,
title={Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese},
author={Zhuosheng Zhang and Hanqing Zhang and Keming Chen and Yuhang Guo and Jingyun Hua and Yulong Wang and Ming Zhou},
year={2021},
eprint={2110.06696},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
facebook/s2t-small-mustc-en-it-st
|
facebook
| 2023-01-24T16:32:13Z | 377 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"speech_to_text",
"automatic-speech-recognition",
"audio",
"speech-translation",
"en",
"it",
"dataset:mustc",
"arxiv:2010.05171",
"arxiv:1904.08779",
"license:mit",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- en
- it
datasets:
- mustc
tags:
- audio
- speech-translation
- automatic-speech-recognition
license: mit
pipeline_tag: automatic-speech-recognition
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
---
# S2T-SMALL-MUSTC-EN-IT-ST
`s2t-small-mustc-en-it-st` is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text)
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to Italian text translation.
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
### How to use
As this a standard sequence to sequence transformer model, you can use the `generate` method to generate the
transcripts by passing the speech features to the model.
*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*
You can either install these as extra speech dependencies with
`pip install "transformers[speech,sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-mustc-en-it-st")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-mustc-en-it-st")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset(
"patrickvonplaten/librispeech_asr_dummy",
"clean",
split="validation"
)
ds = ds.map(map_to_array)
inputs = processor(
ds["speech"][0],
sampling_rate=16_000,
return_tensors="pt"
)
generated_ids = model.generate(inputs["input_features"], attention_mask=inputs["attention_mask"])
translation = processor.batch_decode(generated_ids, skip_special_tokens=True)
```
## Training data
The s2t-small-mustc-en-it-st is trained on English-Italian subset of [MuST-C](https://ict.fbk.eu/must-c/).
MuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems
for speech translation from English into several languages. For each target language, MuST-C comprises several hundred
hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual
transcriptions and translations.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 8,000.
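For reference, a rough torchaudio sketch of this feature pipeline; the audio file name and exact parameters are illustrative assumptions, not the training configuration:
```python
import torchaudio
import torchaudio.compliance.kaldi as kaldi

# Kaldi-style 80-dim log mel filter bank features for one utterance.
waveform, sample_rate = torchaudio.load("sample1.flac")
feats = kaldi.fbank(
    waveform,
    num_mel_bins=80,
    sample_frequency=sample_rate,
)  # shape: (num_frames, 80)

# Utterance-level CMVN: zero mean and unit variance per feature dimension.
feats = (feats - feats.mean(dim=0)) / (feats.std(dim=0) + 1e-8)
```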
### Training
The model is trained with standard autoregressive cross-entropy loss and using [SpecAugment](https://arxiv.org/abs/1904.08779).
The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate
model training and for better performance the encoder is pre-trained for English ASR.
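SpecAugment masks random frequency and time bands of the filter bank features during training. A minimal, illustrative torchaudio sketch (the mask widths here are assumptions, not the values used for this model):
```python
import torch
import torchaudio.transforms as T

# Apply one random frequency mask and one random time mask to a batch
# of filter bank features shaped (batch, freq_bins, num_frames).
freq_mask = T.FrequencyMasking(freq_mask_param=27)
time_mask = T.TimeMasking(time_mask_param=100)

features = torch.randn(8, 80, 500)  # dummy batch of 80-dim fbank features
augmented = time_mask(freq_mask(features))
```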
## Evaluation results
MuST-C test results for en-it (BLEU score): 22.7
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
year = {2020},
}
```
|
fnlp/bart-large-chinese
|
fnlp
| 2023-09-09T05:14:54Z | 377 | 46 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"Chinese",
"seq2seq",
"zh",
"arxiv:2109.05729",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
tags:
- text2text-generation
- Chinese
- seq2seq
language: zh
---
# Chinese BART-Large
### News
**12/30/2022**
An updated version of CPT & Chinese BART are released. In the new version, we changed the following parts:
- **Vocabulary** We replace the old BERT vocabulary with a larger one of size 51271 built from the training data, in which we 1) add missing 6800+ Chinese characters (most of them are traditional Chinese characters); 2) remove redundant tokens (e.g. Chinese character tokens with ## prefix); 3) add some English tokens to reduce OOV.
- **Position Embeddings** We extend the max_position_embeddings from 512 to 1024.
We initialize the new version of the models from the old checkpoints with vocabulary alignment: token embeddings found in the old checkpoints are copied, and other newly added parameters are randomly initialized. We further train the new CPT & Chinese BART for 50K steps with batch size 2048, max sequence length 1024, peak learning rate 2e-5, and warmup ratio 0.1.
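A simplified sketch of this vocabulary-aligned initialization; the function and its arguments are illustrative assumptions, not the actual training code:
```python
import torch

# Copy embedding rows for tokens shared between the old and new
# vocabularies; rows for newly added tokens stay randomly initialized.
def align_embeddings(old_emb: torch.Tensor, old_vocab: dict, new_vocab: dict) -> torch.Tensor:
    hidden_size = old_emb.size(1)
    new_emb = torch.empty(len(new_vocab), hidden_size)
    torch.nn.init.normal_(new_emb, std=0.02)  # random init for new tokens
    for token, new_id in new_vocab.items():
        old_id = old_vocab.get(token)
        if old_id is not None:
            new_emb[new_id] = old_emb[old_id]  # reuse the old embedding row
    return new_emb
```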
The results compared to the previous checkpoints are as follows:
| | AFQMC | IFLYTEK | CSL-sum | LCSTS | AVG |
| :--------- | :---: | :-----: | :-----: | :---: | :---: |
| Previous | | | | | |
| bart-base | 73.0 | 60 | 62.1 | 37.8 | 58.23 |
| cpt-base | 75.1 | 60.5 | 63.0 | 38.2 | 59.20 |
| bart-large | 75.7 | 62.1 | 64.2 | 40.6 | 60.65 |
| cpt-large | 75.9 | 61.8 | 63.7 | 42.0 | 60.85 |
| Updated | | | | | |
| bart-base | 73.03 | 61.25 | 61.51 | 38.78 | 58.64 |
| cpt-base | 74.40 | 61.23 | 62.09 | 38.81 | 59.13 |
| bart-large | 75.81 | 61.52 | 64.62 | 40.90 | 60.71 |
| cpt-large | 75.97 | 61.63 | 63.83 | 42.08 | 60.88 |
The results show that the updated models maintain comparable performance to the previous checkpoints. There are still some cases where the updated model is slightly worse than the previous one, for the following reasons: 1) training for a few additional steps did not lead to significant performance improvement; 2) some downstream tasks are not affected by the newly added tokens and longer encoding sequences, but are sensitive to the fine-tuning hyperparameters.
- Note that to use the updated models, please update `modeling_cpt.py` (download the new version [here](https://github.com/fastnlp/CPT/blob/master/finetune/modeling_cpt.py)) and the vocabulary (refresh the cache).
## Model description
This is an implementation of Chinese BART-Large.
[**CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation**](https://arxiv.org/pdf/2109.05729.pdf)
Yunfan Shao, Zhichao Geng, Yitao Liu, Junqi Dai, Fei Yang, Li Zhe, Hujun Bao, Xipeng Qiu
**Github Link:** https://github.com/fastnlp/CPT
## Usage
```python
>>> from transformers import BertTokenizer, BartForConditionalGeneration, Text2TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("fnlp/bart-large-chinese")
>>> model = BartForConditionalGeneration.from_pretrained("fnlp/bart-large-chinese")
>>> text2text_generator = Text2TextGenerationPipeline(model, tokenizer)
>>> text2text_generator("北京是[MASK]的首都", max_length=50, do_sample=False)
[{'generated_text': '北 京 是 中 华 人 民 共 和 国 的 首 都'}]
```
**Note: Please use BertTokenizer for the model vocabulary. DO NOT use original BartTokenizer.**
## Citation
```bibtex
@article{shao2021cpt,
title={CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation},
author={Yunfan Shao and Zhichao Geng and Yitao Liu and Junqi Dai and Fei Yang and Li Zhe and Hujun Bao and Xipeng Qiu},
journal={arXiv preprint arXiv:2109.05729},
year={2021}
}
```
|
hetpandya/t5-small-tapaco
|
hetpandya
| 2021-06-30T06:36:41Z | 377 | 2 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:tapaco",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- tapaco
---
# T5-small for paraphrase generation
Google's T5 small fine-tuned on [TaPaCo](https://huggingface.co/datasets/tapaco) dataset for paraphrasing.
## Model in Action 🚀
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained("hetpandya/t5-small-tapaco")
model = T5ForConditionalGeneration.from_pretrained("hetpandya/t5-small-tapaco")
def get_paraphrases(sentence, prefix="paraphrase: ", n_predictions=5, top_k=120, max_length=256, device="cpu"):
text = prefix + sentence + " </s>"
encoding = tokenizer.encode_plus(
text, pad_to_max_length=True, return_tensors="pt"
)
input_ids, attention_masks = encoding["input_ids"].to(device), encoding[
"attention_mask"
].to(device)
model_output = model.generate(
input_ids=input_ids,
attention_mask=attention_masks,
do_sample=True,
max_length=max_length,
top_k=top_k,
top_p=0.98,
early_stopping=True,
num_return_sequences=n_predictions,
)
outputs = []
for output in model_output:
generated_sent = tokenizer.decode(
output, skip_special_tokens=True, clean_up_tokenization_spaces=True
)
if (
generated_sent.lower() != sentence.lower()
and generated_sent not in outputs
):
outputs.append(generated_sent)
return outputs
paraphrases = get_paraphrases("The house will be cleaned by me every Saturday.")
for sent in paraphrases:
print(sent)
```
## Output
```
The house is cleaned every Saturday by me.
The house will be cleaned on Saturday.
I will clean the house every Saturday.
I get the house cleaned every Saturday.
I will clean this house every Saturday.
```
## Model fine-tuning
Please find my guide on fine-tuning the model here:
https://towardsdatascience.com/training-t5-for-paraphrase-generation-ab3b5be151a2
Created by [Het Pandya/@hetpandya](https://github.com/hetpandya) | [LinkedIn](https://www.linkedin.com/in/het-pandya)
Made with <span style="color: red;">♥</span> in India
|
mrm8488/t5-base-finetuned-e2m-intent
|
mrm8488
| 2022-11-15T20:15:46Z | 377 | 12 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"intent",
"en",
"dataset:event2Mind",
"arxiv:1910.10683",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- event2Mind
tags:
- intent
widget:
- text: "PersonX takes PersonY home"
---
# T5-base fine-tuned on event2Mind for **Intent Prediction** 🤔
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) fine-tuned on [event2Mind](https://huggingface.co/nlp/viewer/?dataset=event2Mind) dataset for **Intent Prediction**.
## Details of T5 📜 ➡️ 📜
The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. Here is the abstract:
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

## Details of the downstream task (Intent Prediction) - Dataset 📚
Dataset ID: ```event2Mind``` from [Huggingface/NLP](https://github.com/huggingface/nlp)
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| event2Mind | train | 46472 |
| event2Mind | valid | 1960 |
Events without **intent** were not used!
Check out more about this dataset and others in [NLP Viewer](https://huggingface.co/nlp/viewer/)
## Model fine-tuning 🏋️
The training script is a slightly modified version of [this awesome one](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) by [Suraj Patil](https://twitter.com/psuraj28).
## Model in Action 🚀
```python
# Tip: By now, install transformers from source
from transformers import AutoModelWithLMHead, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-e2m-intent")
model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-base-finetuned-e2m-intent")
def get_intent(event, max_length=16):
input_text = "%s </s>" % event
features = tokenizer([input_text], return_tensors='pt')
output = model.generate(input_ids=features['input_ids'],
attention_mask=features['attention_mask'],
max_length=max_length)
return tokenizer.decode(output[0])
event = "PersonX takes PersonY home"
get_intent(event)
# output: 'to be helpful'
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
artificialguybr/IconsMI-AppIconsModelforSD
|
artificialguybr
| 2023-05-16T09:26:33Z | 377 | 140 |
diffusers
|
[
"diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-11-10T03:39:52Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
---
To use it, include the word "IconsMi" in the prompt.
From my tests the images look better with this prompt:
highly detailed, trending on artstation, ios icon app, IconsMi
For negative prompts I got better results when I used: out of frame, duplicate, watermark, signature, text, ugly, sketch, deformed, mutated, blurry, mutilated, ugly sketch
I recommend instead describing the style of app you want, e.g. a news app, music app, or sports app, and what you want in the image, for example "a reporter microphone". The results are better that way; SD doesn't understand these abstractions yet.
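A minimal diffusers sketch along these lines, assuming the repository loads as a standard Stable Diffusion pipeline (the prompts are only examples):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the checkpoint and trigger the style with the "IconsMi" token.
pipe = StableDiffusionPipeline.from_pretrained(
    "artificialguybr/IconsMI-AppIconsModelforSD",
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a reporter microphone, news app, highly detailed, trending on artstation, ios icon app, IconsMi"
negative = "out of frame, duplicate, watermark, signature, text, ugly, sketch, deformed, mutated, blurry, mutilated"

image = pipe(prompt, negative_prompt=negative, num_inference_steps=30).images[0]
image.save("icon.png")
```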
The model was trained for 7,200 steps, with checkpoints saved every 700 steps after step 2,000.
In my tests, the 2k checkpoint was the most creative and produced the widest variety of themes.
The 5500 checkpoint had the best image quality, but was less able to abstract and be creative.
I included both checkpoints for you to test. Both use the same trigger word "IconsMi".
You can support me here:
Patreon: https://www.patreon.com/user?u=81570187
Ko-fi: https://ko-fi.com/jvkape
Buy Me a Coffee: https://www.buymeacoffee.com/JVKAPE
All money will go towards the creation of new models.
|
luqh/ClinicalT5-base
|
luqh
| 2023-02-07T09:50:15Z | 377 | 3 |
transformers
|
[
"transformers",
"jax",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
text2text-generation
| 2023-02-07T09:21:07Z |
---
library_name: transformers
---
# Example Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("luqh/ClinicalT5-base")
model = T5ForConditionalGeneration.from_pretrained("luqh/ClinicalT5-base", from_flax=True)
```
# Citation
If you find this resource useful, please consider citing our work: [ClinicalT5: A Generative Language Model for Clinical Text](https://aclanthology.org/2022.findings-emnlp.398/)
```
@inproceedings{lu-etal-2022-clinicalt5,
title = "{C}linical{T}5: A Generative Language Model for Clinical Text",
author = "Lu, Qiuhao and
Dou, Dejing and
Nguyen, Thien",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-emnlp.398",
pages = "5436--5443",
abstract = "In the past few years, large pre-trained language models (PLMs) have been widely adopted in different areas and have made fundamental improvements over a variety of downstream tasks in natural language processing (NLP). Meanwhile, domain-specific variants of PLMs are being proposed to address the needs of domains that demonstrate a specific pattern of writing and vocabulary, e.g., BioBERT for the biomedical domain and ClinicalBERT for the clinical domain. Recently, generative language models like BART and T5 are gaining popularity with their competitive performance on text generation as well as on tasks cast as generative problems. However, in the clinical domain, such domain-specific generative variants are still underexplored. To address this need, our work introduces a T5-based text-to-text transformer model pre-trained on clinical text, i.e., ClinicalT5. We evaluate the proposed model both intrinsically and extrinsically over a diverse set of tasks across multiple datasets, and show that ClinicalT5 dramatically outperforms T5 in the domain-specific tasks and compares favorably with its close baselines.",
}
```
|
againeureka/toulmin_classifier8_klue_roberta_base_retrained2
|
againeureka
| 2023-05-24T05:32:21Z | 377 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-24T05:15:59Z |
Entry not found
|
artificialguybr/analogredmond
|
artificialguybr
| 2023-10-07T06:25:45Z | 377 | 6 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-08-17T01:21:20Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: AnalogRedmAF
widget:
- text: AnalogRedmAF
---
# Analog.Redmond

Analog.Redmond is here!
V2 HERE: https://huggingface.co/artificialguybr/analogredmond-v2
TEST ALL MY LORAS HERE: https://huggingface.co/spaces/artificialguybr/artificialguybr-demo-lora?logs=build
Introducing AnalogRedmond, the ultimate LORA for creating stunning analog photography!
I'm grateful for the GPU time from Redmond.AI that allowed me to make this LORA! If you need GPU, then you need the great services from Redmond.AI.
It is based on SD XL 1.0 and fine-tuned on a large dataset of analog photographs.
The LoRA is highly capable of generating analog photographs.
You can use detailed, minimalist, colorful, black and white as tags to control the results.
The tag for the model: AnalogRedmAF
The LoRA is not perfect and sometimes needs more than one generation to create good images.
This was inspired by the good Dreambooth model Nitro made for SD 1.5!
I really hope you like the LoRA and use it.
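For reference, a minimal diffusers sketch (not from the original card) of loading the LoRA on top of the SDXL base model; depending on your diffusers version you may need to pass the LoRA weight file name explicitly:
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load the AnalogRedmond LoRA weights from this repo.
pipe.load_lora_weights("artificialguybr/analogredmond")

image = pipe(
    prompt="AnalogRedmAF, analog photograph of a woman at the beach, colorful, detailed"
).images[0]
image.save("analog_photo.png")
```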
If you like the model and think it's worth it, you can make a donation to my Patreon or Ko-fi.
Follow me on Twitter to be the first to know about new models:
https://twitter.com/artificialguybr/
|
TheBloke/AlpacaCielo-13B-GGUF
|
TheBloke
| 2023-09-27T12:52:07Z | 377 | 1 |
transformers
|
[
"transformers",
"gguf",
"llama",
"base_model:totally-not-an-llm/AlpacaCielo-13b",
"license:llama2",
"text-generation-inference",
"region:us"
] | null | 2023-09-19T13:45:02Z |
---
license: llama2
model_name: AlpacaCielo 13B
base_model: totally-not-an-llm/AlpacaCielo-13b
inference: false
model_creator: totally-not-an-llm
model_type: llama
prompt_template: '### Human: {prompt}
### Assistant:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# AlpacaCielo 13B - GGUF
- Model creator: [totally-not-an-llm](https://huggingface.co/totally-not-an-llm)
- Original model: [AlpacaCielo 13B](https://huggingface.co/totally-not-an-llm/AlpacaCielo-13b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [totally-not-an-llm's AlpacaCielo 13B](https://huggingface.co/totally-not-an-llm/AlpacaCielo-13b).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/AlpacaCielo-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/AlpacaCielo-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/AlpacaCielo-13B-GGUF)
* [totally-not-an-llm's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/totally-not-an-llm/AlpacaCielo-13b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Guanaco
```
### Human: {prompt}
### Assistant:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [alpacacielo-13b.Q2_K.gguf](https://huggingface.co/TheBloke/AlpacaCielo-13B-GGUF/blob/main/alpacacielo-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [alpacacielo-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/AlpacaCielo-13B-GGUF/blob/main/alpacacielo-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [alpacacielo-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/AlpacaCielo-13B-GGUF/blob/main/alpacacielo-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [alpacacielo-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/AlpacaCielo-13B-GGUF/blob/main/alpacacielo-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [alpacacielo-13b.Q4_0.gguf](https://huggingface.co/TheBloke/AlpacaCielo-13B-GGUF/blob/main/alpacacielo-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [alpacacielo-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/AlpacaCielo-13B-GGUF/blob/main/alpacacielo-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [alpacacielo-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/AlpacaCielo-13B-GGUF/blob/main/alpacacielo-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [alpacacielo-13b.Q5_0.gguf](https://huggingface.co/TheBloke/AlpacaCielo-13B-GGUF/blob/main/alpacacielo-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [alpacacielo-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/AlpacaCielo-13B-GGUF/blob/main/alpacacielo-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [alpacacielo-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/AlpacaCielo-13B-GGUF/blob/main/alpacacielo-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [alpacacielo-13b.Q6_K.gguf](https://huggingface.co/TheBloke/AlpacaCielo-13B-GGUF/blob/main/alpacacielo-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [alpacacielo-13b.Q8_0.gguf](https://huggingface.co/TheBloke/AlpacaCielo-13B-GGUF/blob/main/alpacacielo-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/AlpacaCielo-13B-GGUF and below it, a specific filename to download, such as: alpacacielo-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install 'huggingface-hub>=0.17.1'
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/AlpacaCielo-13B-GGUF alpacacielo-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/AlpacaCielo-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/AlpacaCielo-13B-GGUF alpacacielo-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m alpacacielo-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Human: {prompt}\n### Assistant:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/AlpacaCielo-13B-GGUF", model_file="alpacacielo-13b.q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
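Alternatively, here is a minimal llama-cpp-python sketch (not part of the original card) using the same quant file and the Guanaco-style prompt template; adjust the path and `n_gpu_layers` to your setup:
```python
from llama_cpp import Llama

# Set n_gpu_layers to 0 if no GPU acceleration is available on your system.
llm = Llama(
    model_path="./alpacacielo-13b.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=32,
)

prompt = "### Human: Write a short story about a cloud alpaca.\n### Assistant:"
output = llm(prompt, max_tokens=256, temperature=0.7, stop=["### Human:"])
print(output["choices"][0]["text"])
```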
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: totally-not-an-llm's AlpacaCielo 13B
# AlpacaCielo-13b
<figure>
<img src="https://huggingface.co/totally-not-an-llm/AlpacaCielo-13b/resolve/main/alpaca.png" alt="cute cloud alpaca">
<figcaption style="font-size: 1em;"><i>"super cute baby alpaca laying on a cloud", Model: epicrealism_pureEvolutionV3</i></figcaption>
</figure>
AlpacaCielo-13b is a llama-2 based model designed for creative tasks, such as storytelling and roleplay, while still doing well with other chatbot purposes. It is a triple model merge of Nous-Hermes + Guanaco + Storywriter. While it is mostly *"uncensored"*, it still inherits some alignment from Guanaco.
[GPTQ quants](https://huggingface.co/TheBloke/AlpacaCielo-13B-GPTQ)<br>
[GGML quants](https://huggingface.co/TheBloke/AlpacaCielo-13B-GGML)<br>
(Courtesy of TheBloke)
**Prompt format is this (Guanaco QLORA in oobabooga):**
```
### Human: {prompt}
### Assistant:
```
*Thanks to previous similar models such as Alpacino, Alpasta, and AlpacaDente for inspiring the creation of this model. Thanks also to the creators of the models involved in the merge. Original models:*
- [Nous-Hermes-Llama-2](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b)
- [Guanaco QLoRA](https://huggingface.co/Mikael110/llama-2-13b-guanaco-qlora)
- [Storywriter LoRA](https://huggingface.co/Blackroot/Llama-2-13B-Storywriter-LORA)
<!-- original-model-card end -->
|
maddes8cht/gorilla-llm-gorilla-falcon-7b-hf-v0-gguf
|
maddes8cht
| 2023-11-22T20:29:07Z | 377 | 1 | null |
[
"gguf",
"api",
"en",
"dataset:gorilla-llm/APIBench",
"license:apache-2.0",
"region:us"
] | null | 2023-10-21T20:32:34Z |
---
license: apache-2.0
language:
- en
tags:
- api
datasets:
- gorilla-llm/APIBench
---
I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information.
# gorilla-falcon-7b-hf-v0 - GGUF
- Model creator: [gorilla-llm](https://huggingface.co/gorilla-llm)
- Original model: [gorilla-falcon-7b-hf-v0](https://huggingface.co/gorilla-llm/gorilla-falcon-7b-hf-v0)
# K-Quants in Falcon 7b models
New releases of llama.cpp now support K-quantization for previously incompatible models, in particular all Falcon 7B models (while Falcon 40B is and always has been fully compatible with K-quantization). This is achieved by employing a fallback solution for model layers that cannot be quantized with real K-quants.
For Falcon 7B models, although only a quarter of the layers can be quantized with true K-quants, this approach still benefits from utilizing *different* legacy quantization types Q4_0, Q4_1, Q5_0, and Q5_1. As a result, it offers better quality at the same file size or smaller file sizes with comparable performance.
So this solution ensures improved performance and efficiency over legacy Q4_0, Q4_1, Q5_0 and Q5_1 Quantizations.
---
# Brief
The ***Gorilla*** model variant is quite special, as it outputs syntactically correct API calls for a vast number of known APIs.
Read the original model card carefully to get the best results. Maybe even consult additional video tutorials.
---
# About GGUF format
`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of Software is using it and can therefore use this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov
# Quantization variants
There is a bunch of quantized files available to cater to your specific needs. Here's how to choose the best option for you:
# Legacy quants
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
## Note:
Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves some fallback solution that makes them not *real* K-quants. More details can be found in affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models)
# K-quants
K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With a Q6_K, you'll likely find it challenging to discern a quality difference from the original model: ask your model the same question twice and you may see bigger differences between the two answers than between the quantized and the original model.
---
# Original Model Card:
license: apache-2.0
---
***End of original Model File***
---
## Please consider to support my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, or the new GitHub Sponsors platform, and I am hoping for some support and contribution to the continued availability of these kinds of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.
<center>
[](https://maddes8cht.github.io)
[](https://stackexchange.com/users/26485911)
[](https://github.com/maddes8cht)
[](https://huggingface.co/maddes8cht)
[](https://twitter.com/maddes1966)
</center>
|
TheBloke/Thespis-13B-v0.4-GGUF
|
TheBloke
| 2023-10-22T17:32:50Z | 377 | 1 |
transformers
|
[
"transformers",
"gguf",
"llama",
"base_model:cgato/Thespis-13b-v0.4",
"license:llama2",
"text-generation-inference",
"region:us"
] | null | 2023-10-22T17:25:16Z |
---
base_model: cgato/Thespis-13b-v0.4
inference: false
license: llama2
model_creator: c.gato
model_name: Thespis 13B v0.4
model_type: llama
prompt_template: "{system_message}\n\nUsername: {prompt}\nBotName: \n"
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Thespis 13B v0.4 - GGUF
- Model creator: [c.gato](https://huggingface.co/cgato)
- Original model: [Thespis 13B v0.4](https://huggingface.co/cgato/Thespis-13b-v0.4)
<!-- description start -->
## Description
This repo contains GGUF format model files for [c.gato's Thespis 13B v0.4](https://huggingface.co/cgato/Thespis-13b-v0.4).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Thespis-13B-v0.4-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Thespis-13B-v0.4-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Thespis-13B-v0.4-GGUF)
* [c.gato's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/cgato/Thespis-13b-v0.4)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Thespis
```
{system_message}
Username: {prompt}
BotName:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [thespis-13b-v0.4.Q2_K.gguf](https://huggingface.co/TheBloke/Thespis-13B-v0.4-GGUF/blob/main/thespis-13b-v0.4.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [thespis-13b-v0.4.Q3_K_S.gguf](https://huggingface.co/TheBloke/Thespis-13B-v0.4-GGUF/blob/main/thespis-13b-v0.4.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [thespis-13b-v0.4.Q3_K_M.gguf](https://huggingface.co/TheBloke/Thespis-13B-v0.4-GGUF/blob/main/thespis-13b-v0.4.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [thespis-13b-v0.4.Q3_K_L.gguf](https://huggingface.co/TheBloke/Thespis-13B-v0.4-GGUF/blob/main/thespis-13b-v0.4.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [thespis-13b-v0.4.Q4_0.gguf](https://huggingface.co/TheBloke/Thespis-13B-v0.4-GGUF/blob/main/thespis-13b-v0.4.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [thespis-13b-v0.4.Q4_K_S.gguf](https://huggingface.co/TheBloke/Thespis-13B-v0.4-GGUF/blob/main/thespis-13b-v0.4.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [thespis-13b-v0.4.Q4_K_M.gguf](https://huggingface.co/TheBloke/Thespis-13B-v0.4-GGUF/blob/main/thespis-13b-v0.4.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [thespis-13b-v0.4.Q5_0.gguf](https://huggingface.co/TheBloke/Thespis-13B-v0.4-GGUF/blob/main/thespis-13b-v0.4.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [thespis-13b-v0.4.Q5_K_S.gguf](https://huggingface.co/TheBloke/Thespis-13B-v0.4-GGUF/blob/main/thespis-13b-v0.4.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [thespis-13b-v0.4.Q5_K_M.gguf](https://huggingface.co/TheBloke/Thespis-13B-v0.4-GGUF/blob/main/thespis-13b-v0.4.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [thespis-13b-v0.4.Q6_K.gguf](https://huggingface.co/TheBloke/Thespis-13B-v0.4-GGUF/blob/main/thespis-13b-v0.4.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [thespis-13b-v0.4.Q8_0.gguf](https://huggingface.co/TheBloke/Thespis-13B-v0.4-GGUF/blob/main/thespis-13b-v0.4.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Thespis-13B-v0.4-GGUF and below it, a specific filename to download, such as: thespis-13b-v0.4.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Thespis-13B-v0.4-GGUF thespis-13b-v0.4.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Thespis-13B-v0.4-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Thespis-13B-v0.4-GGUF thespis-13b-v0.4.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m thespis-13b-v0.4.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{system_message}\n\nUsername: {prompt}\nBotName:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Thespis-13B-v0.4-GGUF", model_file="thespis-13b-v0.4.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
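Alternatively, a minimal llama-cpp-python sketch (not part of the original card) using the Thespis chat template shown above; the system message and names are placeholders:
```python
from llama_cpp import Llama

llm = Llama(
    model_path="./thespis-13b-v0.4.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=32,  # set to 0 if no GPU acceleration is available
)

system_message = "You are BotName, a friendly roleplay partner."  # placeholder system prompt
prompt = f"{system_message}\n\nUsername: Hi, who are you?\nBotName:"
output = llm(prompt, max_tokens=256, temperature=0.7, stop=["Username:"])
print(output["choices"][0]["text"])
```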
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: c.gato's Thespis 13B v0.4

This model is a bit of a personal project. It uses a vanilla chat template and is focused on providing a multi-turn SFW and NSFW RP experience.
This model works best with internet style RP using standard markup with asterisks surrounding actions and no quotes around dialogue.
It uses the following data:
* 3000 samples from Claude Multiround Chat 30k dataset ( 45 token length or greater. Coding and math filtered out. )
* 3000 samples from Pippa Dataset ( 90 token length or greater. Filtered for responses which did not contain quotes and contained at least one asterisk. )
* 2200 samples from Puffin Dataset ( 90 token length or greater. Coding and math filtered out. )
* 700 samples from Airoboros 3.1 ( 1500 token length or greater. Writing samples only. )
* 4700 samples of hand curated RP conversation with various characters.
Works with standard chat format for Ooba or SillyTavern.
## Prompt Format: Chat
```
{System Prompt}
Username: {Input}
BotName: {Response}
Username: {Input}
BotName: {Response}
```
## Turn Template (for Ooba Instruct):
You can either bake usernames into the prompt directly for ease of use, or add them programmatically if running through the API to use it as a chatbot.
```
<|user|>{Username}: <|user-message|>\n<|bot|>{BotName}: <|bot-message|>\n
```
## Examples



<!-- original-model-card end -->
|
hdparmar/tradfusion-v2-training-files
|
hdparmar
| 2023-10-27T11:57:19Z | 377 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"pytorch",
"text-to-image",
"license:mit",
"region:us"
] |
text-to-image
| 2023-10-26T20:24:57Z |
---
license: mit
tags:
- pytorch
- diffusers
- text-to-image
---
## Usage
Files logged during training: TensorBoard logs, generated images, metrics, and various checkpoints.
|
LoneStriker/OpenCodeInterpreter-CL-70B-GGUF
|
LoneStriker
| 2024-02-24T10:40:20Z | 377 | 3 | null |
[
"gguf",
"code",
"text-generation",
"en",
"arxiv:2402.14658",
"region:us"
] |
text-generation
| 2024-02-24T08:58:04Z |
---
language:
- en
pipeline_tag: text-generation
tags:
- code
---
<h1 align="center">OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement</h1>
<p align="center">
<img width="1000px" alt="OpenCodeInterpreter" src="https://opencodeinterpreter.github.io/static/images/figure1.png">
</p>
<p align="center">
<a href="https://opencodeinterpreter.github.io/">[🏠Homepage]</a>
|
<a href="https://github.com/OpenCodeInterpreter/OpenCodeInterpreter/">[🛠️Code]</a>
</p>
<hr>
## Introduction
OpenCodeInterpreter is a family of open-source code generation systems designed to bridge the gap between large language models and advanced proprietary systems like the GPT-4 Code Interpreter. It significantly advances code generation capabilities by integrating execution and iterative refinement functionalities.
For further information and related work, refer to our paper: ["OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement"](https://arxiv.org/abs/2402.14658), available on arXiv.
## Model Usage
### Inference
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_path="OpenCodeInterpreter-CL-70B"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.bfloat16,
device_map="auto",
)
model.eval()
prompt = "Write a function to find the shared elements from the given two lists."
inputs = tokenizer.apply_chat_template(
[{'role': 'user', 'content': prompt }],
return_tensors="pt"
).to(model.device)
outputs = model.generate(
inputs,
max_new_tokens=1024,
do_sample=False,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
```
## Contact
If you have any inquiries, please feel free to raise an issue or reach out to us via email at: [email protected], [email protected].
We're here to assist you!
|
munish0838/Matter-0.1-Slim-7B-preview-GGUF
|
munish0838
| 2024-04-07T12:09:53Z | 377 | 1 | null |
[
"gguf",
"region:us"
] | null | 2024-04-07T07:31:32Z |
Entry not found
|
DevQuasar/llama3_8b_chat_brainstorm-GGUF
|
DevQuasar
| 2024-05-26T23:44:10Z | 377 | 2 | null |
[
"gguf",
"text-generation",
"dataset:DevQuasar/brainstorm_vicuna_1k",
"license:llama3",
"region:us"
] |
text-generation
| 2024-05-10T15:43:40Z |
---
license: llama3
datasets:
- DevQuasar/brainstorm_vicuna_1k
pipeline_tag: text-generation
---
# Brainstorm GGUF

Quantized GGUF model files for [DevQuasar/llama3_8b_chat_brainstorm](https://huggingface.co/DevQuasar/llama3_8b_chat_brainstorm)
Brainstorm facilitates idea exploration through interaction with a Language Model (LLM). Rather than providing direct answers, the model engages in a dialogue with users, offering probing questions aimed at fostering deeper contemplation and consideration of various facets of their ideas.
## Usage
### LMStudio
With https://lmstudio.ai/, use the `brainstorm.preset.json` preset.
### llama.cpp
`./main -m llama3_8b_chat_brainstorm.Q2_K.gguf -p "### HUMAN: I'd like to build an online marketplace\n ### ASSISTANT:" -n 400 -e -ins -r "### HUMAN:\n" -r "### ASSISTANT: "`
|
RichardErkhov/internlm_-_internlm2-math-plus-20b-gguf
|
RichardErkhov
| 2024-05-29T08:21:01Z | 377 | 0 | null |
[
"gguf",
"arxiv:2402.06332",
"region:us"
] | null | 2024-05-28T05:05:29Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
internlm2-math-plus-20b - GGUF
- Model creator: https://huggingface.co/internlm/
- Original model: https://huggingface.co/internlm/internlm2-math-plus-20b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [internlm2-math-plus-20b.Q2_K.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-20b-gguf/blob/main/internlm2-math-plus-20b.Q2_K.gguf) | Q2_K | 7.03GB |
| [internlm2-math-plus-20b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-20b-gguf/blob/main/internlm2-math-plus-20b.IQ3_XS.gguf) | IQ3_XS | 7.79GB |
| [internlm2-math-plus-20b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-20b-gguf/blob/main/internlm2-math-plus-20b.IQ3_S.gguf) | IQ3_S | 8.2GB |
| [internlm2-math-plus-20b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-20b-gguf/blob/main/internlm2-math-plus-20b.Q3_K_S.gguf) | Q3_K_S | 8.16GB |
| [internlm2-math-plus-20b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-20b-gguf/blob/main/internlm2-math-plus-20b.IQ3_M.gguf) | IQ3_M | 8.5GB |
| [internlm2-math-plus-20b.Q3_K.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-20b-gguf/blob/main/internlm2-math-plus-20b.Q3_K.gguf) | Q3_K | 9.05GB |
| [internlm2-math-plus-20b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-20b-gguf/blob/main/internlm2-math-plus-20b.Q3_K_M.gguf) | Q3_K_M | 9.05GB |
| [internlm2-math-plus-20b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-20b-gguf/blob/main/internlm2-math-plus-20b.Q3_K_L.gguf) | Q3_K_L | 9.83GB |
| [internlm2-math-plus-20b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-20b-gguf/blob/main/internlm2-math-plus-20b.IQ4_XS.gguf) | IQ4_XS | 10.12GB |
| [internlm2-math-plus-20b.Q4_0.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-20b-gguf/blob/main/internlm2-math-plus-20b.Q4_0.gguf) | Q4_0 | 10.55GB |
| [internlm2-math-plus-20b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-20b-gguf/blob/main/internlm2-math-plus-20b.IQ4_NL.gguf) | IQ4_NL | 10.65GB |
| [internlm2-math-plus-20b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-20b-gguf/blob/main/internlm2-math-plus-20b.Q4_K_S.gguf) | Q4_K_S | 10.62GB |
| [internlm2-math-plus-20b.Q4_K.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-20b-gguf/blob/main/internlm2-math-plus-20b.Q4_K.gguf) | Q4_K | 11.16GB |
| [internlm2-math-plus-20b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-20b-gguf/blob/main/internlm2-math-plus-20b.Q4_K_M.gguf) | Q4_K_M | 11.16GB |
| [internlm2-math-plus-20b.Q4_1.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-20b-gguf/blob/main/internlm2-math-plus-20b.Q4_1.gguf) | Q4_1 | 11.67GB |
| [internlm2-math-plus-20b.Q5_0.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-20b-gguf/blob/main/internlm2-math-plus-20b.Q5_0.gguf) | Q5_0 | 12.79GB |
| [internlm2-math-plus-20b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-20b-gguf/blob/main/internlm2-math-plus-20b.Q5_K_S.gguf) | Q5_K_S | 12.79GB |
| [internlm2-math-plus-20b.Q5_K.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-20b-gguf/blob/main/internlm2-math-plus-20b.Q5_K.gguf) | Q5_K | 13.11GB |
| [internlm2-math-plus-20b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-20b-gguf/blob/main/internlm2-math-plus-20b.Q5_K_M.gguf) | Q5_K_M | 13.11GB |
| [internlm2-math-plus-20b.Q5_1.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-20b-gguf/blob/main/internlm2-math-plus-20b.Q5_1.gguf) | Q5_1 | 13.91GB |
| [internlm2-math-plus-20b.Q6_K.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-20b-gguf/blob/main/internlm2-math-plus-20b.Q6_K.gguf) | Q6_K | 15.18GB |
| [internlm2-math-plus-20b.Q8_0.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-20b-gguf/blob/main/internlm2-math-plus-20b.Q8_0.gguf) | Q8_0 | 19.66GB |
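Any of the files above can be run directly with a recent llama.cpp build. The command below is only an illustrative sketch (not part of the original card); adjust the file name, context length, and GPU offload to your hardware:
```shell
./main -m internlm2-math-plus-20b.Q4_K_M.gguf -ngl 35 -c 4096 \
  -p "Solve for x: 3x + 5 = 20."
```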
Original model description:
---
pipeline_tag: text-generation
license: other
language:
- en
- zh
tags:
- math
---
# InternLM-Math-Plus
<div align="center">
<img src="https://raw.githubusercontent.com/InternLM/InternLM/main/assets/logo.svg" width="200"/>
<div> </div>
<div align="center">
<b><font size="5">InternLM-Math</font></b>
<sup>
<a href="https://internlm.intern-ai.org.cn/">
<i><font size="4">Plus</font></i>
</a>
</sup>
<div> </div>
</div>
State-of-the-art bilingual open-sourced Math reasoning LLMs.
A **solver**, **prover**, **verifier**, **augmentor**.
[💻 Github](https://github.com/InternLM/InternLM-Math) [🤗 Demo](https://huggingface.co/spaces/internlm/internlm2-math-7b)
</div>
# News
- [2024.05.24] We release the updated version InternLM2-Math-Plus in 4 sizes (1.8B, 7B, 20B, and 8x22B) with state-of-the-art performance. We improve informal math reasoning performance (chain-of-thought and code-interpreter) and formal math reasoning performance (LEAN 4 translation and LEAN 4 theorem proving) significantly.
- [2024.02.10] We add tech reports and citation reference.
- [2024.01.31] We add MiniF2F results with evaluation codes!
- [2024.01.29] We add checkpoints from ModelScope. Update results about majority voting and Code Interpreter. Tech report is on the way!
- [2024.01.26] We add checkpoints from OpenXLab, which makes it easier for Chinese users to download!
# Performance
## Formal Math Reasoning
We evaluate the performance of InternLM2-Math-Plus on the formal math reasoning benchmark MiniF2F-test. The evaluation setting is the same as Llemma with LEAN 4.
| Models | MiniF2F-test |
| -------------------------------- | ------------ |
| ReProver | 26.5 |
| LLMStep | 27.9 |
| GPT-F | 36.6 |
| HTPS | 41.0 |
| Llemma-7B | 26.2 |
| Llemma-34B | 25.8 |
| InternLM2-Math-7B-Base | 30.3 |
| InternLM2-Math-20B-Base | 29.5 |
| InternLM2-Math-Plus-1.8B | 38.9 |
| InternLM2-Math-Plus-7B | **43.4** |
| InternLM2-Math-Plus-20B | 42.6 |
| InternLM2-Math-Plus-Mixtral8x22B | 37.3 |
## Informal Math Reasoning
We evaluate the performance of InternLM2-Math-Plus on the informal math reasoning benchmarks MATH and GSM8K. InternLM2-Math-Plus-1.8B outperforms MiniCPM-2B in the smallest size setting. InternLM2-Math-Plus-7B outperforms Deepseek-Math-7B-RL, which is the state-of-the-art open-source math reasoning model. InternLM2-Math-Plus-Mixtral8x22B achieves 68.5 on MATH (with Python) and 91.8 on GSM8K.
| Model | MATH | MATH-Python | GSM8K |
| -------------------------------- | -------- | ----------- | -------- |
| MiniCPM-2B | 10.2 | - | 53.8 |
| InternLM2-Math-Plus-1.8B | **37.0** | **41.5** | **58.8** |
| InternLM2-Math-7B | 34.6 | 50.9 | 78.1 |
| Deepseek-Math-7B-RL | 51.7 | 58.8 | **88.2** |
| InternLM2-Math-Plus-7B | **53.0** | **59.7** | 85.8 |
| InternLM2-Math-20B | 37.7 | 54.3 | 82.6 |
| InternLM2-Math-Plus-20B | **53.8** | **61.8** | **87.7** |
| Mixtral8x22B-Instruct-v0.1 | 41.8 | - | 78.6 |
| Eurux-8x22B-NCA | 49.0 | - | - |
| InternLM2-Math-Plus-Mixtral8x22B | **58.1** | **68.5** | **91.8** |
We also evaluate models on [MathBench-A](https://github.com/open-compass/MathBench). InternLM2-Math-Plus-Mixtral8x22B has performance comparable to Claude 3 Opus.
| Model | Arithmetic | Primary | Middle | High | College | Average |
| -------------------------------- | ---------- | ------- | ------ | ---- | ------- | ------- |
| GPT-4o-0513 | 77.7 | 87.7 | 76.3 | 59.0 | 54.0 | 70.9 |
| Claude 3 Opus | 85.7 | 85.0 | 58.0 | 42.7 | 43.7 | 63.0 |
| Qwen-Max-0428 | 72.3 | 86.3 | 65.0 | 45.0 | 27.3 | 59.2 |
| Qwen-1.5-110B | 70.3 | 82.3 | 64.0 | 47.3 | 28.0 | 58.4 |
| Deepseek-V2 | 82.7 | 89.3 | 59.0 | 39.3 | 29.3 | 59.9 |
| Llama-3-70B-Instruct | 70.3 | 86.0 | 53.0 | 38.7 | 34.7 | 56.5 |
| InternLM2-Math-Plus-Mixtral8x22B | 77.5 | 82.0 | 63.6 | 50.3 | 36.8 | 62.0 |
| InternLM2-Math-20B | 58.7 | 70.0 | 43.7 | 24.7 | 12.7 | 42.0 |
| InternLM2-Math-Plus-20B | 65.8 | 79.7 | 59.5 | 47.6 | 24.8 | 55.5 |
| Llama3-8B-Instruct | 54.7 | 71.0 | 25.0 | 19.0 | 14.0 | 36.7 |
| InternLM2-Math-7B | 53.7 | 67.0 | 41.3 | 18.3 | 8.0 | 37.7 |
| Deepseek-Math-7B-RL | 68.0 | 83.3 | 44.3 | 33.0 | 23.0 | 50.3 |
| InternLM2-Math-Plus-7B | 61.4 | 78.3 | 52.5 | 40.5 | 21.7 | 50.9 |
| MiniCPM-2B | 49.3 | 51.7 | 18.0 | 8.7 | 3.7 | 26.3 |
| InternLM2-Math-Plus-1.8B | 43.0 | 43.3 | 25.4 | 18.9 | 4.7 | 27.1 |
# Citation and Tech Report
```
@misc{ying2024internlmmath,
title={InternLM-Math: Open Math Large Language Models Toward Verifiable Reasoning},
author={Huaiyuan Ying and Shuo Zhang and Linyang Li and Zhejian Zhou and Yunfan Shao and Zhaoye Fei and Yichuan Ma and Jiawei Hong and Kuikun Liu and Ziyi Wang and Yudong Wang and Zijian Wu and Shuaibin Li and Fengzhe Zhou and Hongwei Liu and Songyang Zhang and Wenwei Zhang and Hang Yan and Xipeng Qiu and Jiayu Wang and Kai Chen and Dahua Lin},
year={2024},
eprint={2402.06332},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
heisenbug/Phi-3-mini-128k-instruct-Q8_0-GGUF
|
heisenbug
| 2024-06-05T06:46:22Z | 377 | 0 | null |
[
"gguf",
"nlp",
"code",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:microsoft/Phi-3-mini-128k-instruct",
"license:mit",
"region:us"
] |
text-generation
| 2024-06-05T06:46:12Z |
---
language:
- en
license: mit
tags:
- nlp
- code
- llama-cpp
- gguf-my-repo
base_model: microsoft/Phi-3-mini-128k-instruct
license_link: https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
---
# heisenbug/Phi-3-mini-128k-instruct-Q8_0-GGUF
This model was converted to GGUF format from [`microsoft/Phi-3-mini-128k-instruct`](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama --hf-repo heisenbug/Phi-3-mini-128k-instruct-Q8_0-GGUF --hf-file phi-3-mini-128k-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo heisenbug/Phi-3-mini-128k-instruct-Q8_0-GGUF --hf-file phi-3-mini-128k-instruct-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./main --hf-repo heisenbug/Phi-3-mini-128k-instruct-Q8_0-GGUF --hf-file phi-3-mini-128k-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./server --hf-repo heisenbug/Phi-3-mini-128k-instruct-Q8_0-GGUF --hf-file phi-3-mini-128k-instruct-q8_0.gguf -c 2048
```
|
mradermacher/Llama3-FiditeNemini-70B-Source-GGUF
|
mradermacher
| 2024-06-09T21:30:08Z | 377 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:FiditeNemini/Llama3-FiditeNemini-70B-Source",
"endpoints_compatible",
"region:us"
] | null | 2024-06-09T09:03:42Z |
---
base_model: FiditeNemini/Llama3-FiditeNemini-70B-Source
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/FiditeNemini/Llama3-FiditeNemini-70B-Source
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama3-FiditeNemini-70B-Source-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
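For the multi-part Q6_K and Q8_0 files in the table below, the parts simply need to be concatenated back into a single `.gguf` before loading. Here is a minimal Python sketch, under the assumption that both parts of the Q6_K quant sit in the current directory:
```python
# Join the split GGUF parts back into one file (sketch; adjust the
# file names if you downloaded a different multi-part quant).
from pathlib import Path

parts = sorted(Path(".").glob("Llama3-FiditeNemini-70B-Source.Q6_K.gguf.part*"))
assert parts, "download the .part1of2 and .part2of2 files first"

with open("Llama3-FiditeNemini-70B-Source.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            # Stream in 16 MB chunks to keep memory usage low.
            while chunk := src.read(16 * 1024 * 1024):
                out.write(chunk)
```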
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3-FiditeNemini-70B-Source-GGUF/resolve/main/Llama3-FiditeNemini-70B-Source.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-FiditeNemini-70B-Source-GGUF/resolve/main/Llama3-FiditeNemini-70B-Source.IQ3_XS.gguf) | IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-FiditeNemini-70B-Source-GGUF/resolve/main/Llama3-FiditeNemini-70B-Source.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama3-FiditeNemini-70B-Source-GGUF/resolve/main/Llama3-FiditeNemini-70B-Source.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-FiditeNemini-70B-Source-GGUF/resolve/main/Llama3-FiditeNemini-70B-Source.IQ3_M.gguf) | IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-FiditeNemini-70B-Source-GGUF/resolve/main/Llama3-FiditeNemini-70B-Source.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-FiditeNemini-70B-Source-GGUF/resolve/main/Llama3-FiditeNemini-70B-Source.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-FiditeNemini-70B-Source-GGUF/resolve/main/Llama3-FiditeNemini-70B-Source.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-FiditeNemini-70B-Source-GGUF/resolve/main/Llama3-FiditeNemini-70B-Source.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-FiditeNemini-70B-Source-GGUF/resolve/main/Llama3-FiditeNemini-70B-Source.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-FiditeNemini-70B-Source-GGUF/resolve/main/Llama3-FiditeNemini-70B-Source.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-FiditeNemini-70B-Source-GGUF/resolve/main/Llama3-FiditeNemini-70B-Source.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Llama3-FiditeNemini-70B-Source-GGUF/resolve/main/Llama3-FiditeNemini-70B-Source.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama3-FiditeNemini-70B-Source-GGUF/resolve/main/Llama3-FiditeNemini-70B-Source.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Llama3-FiditeNemini-70B-Source-GGUF/resolve/main/Llama3-FiditeNemini-70B-Source.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama3-FiditeNemini-70B-Source-GGUF/resolve/main/Llama3-FiditeNemini-70B-Source.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/multimaster-7b-v2-GGUF
|
mradermacher
| 2024-06-11T20:57:01Z | 377 | 0 |
transformers
|
[
"transformers",
"gguf",
"moe",
"moerge",
"en",
"base_model:ibivibiv/multimaster-7b-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-10T23:13:27Z |
---
base_model: ibivibiv/multimaster-7b-v2
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- moe
- moerge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ibivibiv/multimaster-7b-v2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/multimaster-7b-v2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
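If you would rather script the download, `huggingface_hub` works as well. A small sketch (assuming the package is installed; any filename from the table below can be substituted):
```python
# Fetch a single quant file from this repo into the local Hugging Face cache.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/multimaster-7b-v2-GGUF",
    filename="multimaster-7b-v2.Q4_K_M.gguf",  # the "fast, recommended" choice below
)
print(path)  # local path to pass to your GGUF-capable runtime
```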
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/multimaster-7b-v2-GGUF/resolve/main/multimaster-7b-v2.Q2_K.gguf) | Q2_K | 13.0 | |
| [GGUF](https://huggingface.co/mradermacher/multimaster-7b-v2-GGUF/resolve/main/multimaster-7b-v2.IQ3_XS.gguf) | IQ3_XS | 14.6 | |
| [GGUF](https://huggingface.co/mradermacher/multimaster-7b-v2-GGUF/resolve/main/multimaster-7b-v2.Q3_K_S.gguf) | Q3_K_S | 15.4 | |
| [GGUF](https://huggingface.co/mradermacher/multimaster-7b-v2-GGUF/resolve/main/multimaster-7b-v2.IQ3_S.gguf) | IQ3_S | 15.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/multimaster-7b-v2-GGUF/resolve/main/multimaster-7b-v2.IQ3_M.gguf) | IQ3_M | 15.7 | |
| [GGUF](https://huggingface.co/mradermacher/multimaster-7b-v2-GGUF/resolve/main/multimaster-7b-v2.Q3_K_M.gguf) | Q3_K_M | 17.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/multimaster-7b-v2-GGUF/resolve/main/multimaster-7b-v2.Q3_K_L.gguf) | Q3_K_L | 18.5 | |
| [GGUF](https://huggingface.co/mradermacher/multimaster-7b-v2-GGUF/resolve/main/multimaster-7b-v2.IQ4_XS.gguf) | IQ4_XS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/multimaster-7b-v2-GGUF/resolve/main/multimaster-7b-v2.Q4_K_S.gguf) | Q4_K_S | 20.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/multimaster-7b-v2-GGUF/resolve/main/multimaster-7b-v2.Q4_K_M.gguf) | Q4_K_M | 21.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/multimaster-7b-v2-GGUF/resolve/main/multimaster-7b-v2.Q5_K_S.gguf) | Q5_K_S | 24.5 | |
| [GGUF](https://huggingface.co/mradermacher/multimaster-7b-v2-GGUF/resolve/main/multimaster-7b-v2.Q5_K_M.gguf) | Q5_K_M | 25.2 | |
| [GGUF](https://huggingface.co/mradermacher/multimaster-7b-v2-GGUF/resolve/main/multimaster-7b-v2.Q6_K.gguf) | Q6_K | 29.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/multimaster-7b-v2-GGUF/resolve/main/multimaster-7b-v2.Q8_0.gguf) | Q8_0 | 37.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
CHE-72/Yi-1.5-6B-Chat-Q2_K-GGUF
|
CHE-72
| 2024-06-22T08:04:51Z | 377 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:01-ai/Yi-1.5-6B-Chat",
"license:apache-2.0",
"region:us"
] | null | 2024-06-22T08:04:40Z |
---
base_model: 01-ai/Yi-1.5-6B-Chat
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# CHE-72/Yi-1.5-6B-Chat-Q2_K-GGUF
This model was converted to GGUF format from [`01-ai/Yi-1.5-6B-Chat`](https://huggingface.co/01-ai/Yi-1.5-6B-Chat) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/01-ai/Yi-1.5-6B-Chat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CHE-72/Yi-1.5-6B-Chat-Q2_K-GGUF --hf-file yi-1.5-6b-chat-q2_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CHE-72/Yi-1.5-6B-Chat-Q2_K-GGUF --hf-file yi-1.5-6b-chat-q2_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CHE-72/Yi-1.5-6B-Chat-Q2_K-GGUF --hf-file yi-1.5-6b-chat-q2_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CHE-72/Yi-1.5-6B-Chat-Q2_K-GGUF --hf-file yi-1.5-6b-chat-q2_k.gguf -c 2048
```
|
Huffon/klue-roberta-base-nli
|
Huffon
| 2021-06-20T17:32:53Z | 376 | 5 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"nli",
"ko",
"dataset:klue",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
language: ko
tags:
- roberta
- nli
datasets:
- klue
---
|
bionlp/bluebert_pubmed_mimic_uncased_L-24_H-1024_A-16
|
bionlp
| 2021-09-24T07:46:34Z | 376 | 4 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"bert",
"bluebert",
"en",
"dataset:PubMed",
"dataset:MIMIC-III",
"license:cc0-1.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- bert
- bluebert
license: cc0-1.0
datasets:
- PubMed
- MIMIC-III
---
# BlueBert-Base, Uncased, PubMed and MIMIC-III
## Model description
A BERT model pre-trained on PubMed abstracts and clinical notes ([MIMIC-III](https://mimic.physionet.org/)).
## Intended uses & limitations
#### How to use
Please see https://github.com/ncbi-nlp/bluebert
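For a quick start with the Hugging Face `transformers` library, here is a minimal sketch (an illustration added here, not part of the original BlueBERT instructions) for extracting contextual embeddings:
```python
# Load the uncased BlueBERT-Large checkpoint and embed one sentence.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "bionlp/bluebert_pubmed_mimic_uncased_L-24_H-1024_A-16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("The patient was given 2 mg of lorazepam.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (1, sequence_length, 1024)
```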
## Training data
We provide [preprocessed PubMed texts](https://ftp.ncbi.nlm.nih.gov/pub/lu/Suppl/NCBI-BERT/pubmed_uncased_sentence_nltk.txt.tar.gz) that were used to pre-train the BlueBERT models.
The corpus contains ~4000M words extracted from the [PubMed ASCII code version](https://www.ncbi.nlm.nih.gov/research/bionlp/APIs/BioC-PubMed/).
Pre-trained model: https://huggingface.co/bert-large-uncased
## Training procedure
* lowercasing the text
* removing special characters, i.e. anything outside the ASCII range `\x00`-`\x7F`
* tokenizing the text using the [NLTK Treebank tokenizer](https://www.nltk.org/_modules/nltk/tokenize/treebank.html)
Below is a code snippet for more details.
```python
import re

from nltk.tokenize import TreebankWordTokenizer

# `value` holds one raw text string from the corpus
value = value.lower()                          # lowercase the text
value = re.sub(r'[\r\n]+', ' ', value)         # collapse line breaks into spaces
value = re.sub(r'[^\x00-\x7F]+', ' ', value)   # drop non-ASCII characters
tokenized = TreebankWordTokenizer().tokenize(value)
sentence = ' '.join(tokenized)
sentence = re.sub(r"\s's\b", "'s", sentence)   # re-attach possessive 's
```
### BibTeX entry and citation info
```bibtex
@InProceedings{peng2019transfer,
author = {Yifan Peng and Shankai Yan and Zhiyong Lu},
title = {Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets},
booktitle = {Proceedings of the 2019 Workshop on Biomedical Natural Language Processing (BioNLP 2019)},
year = {2019},
pages = {58--65},
}
```
### Acknowledgments
This work was supported by the Intramural Research Programs of the National Institutes of Health, National Library of
Medicine and Clinical Center. This work was supported by the National Library of Medicine of the National Institutes of Health under award number 4R00LM013001-01.
We are also grateful to the authors of BERT and ELMo to make the data and codes publicly available.
We would like to thank Dr Sun Kim for processing the PubMed texts.
### Disclaimer
This tool shows the results of research conducted in the Computational Biology Branch, NCBI. The information produced
on this website is not intended for direct diagnostic use or medical decision-making without review and oversight
by a clinical professional. Individuals should not change their health behavior solely on the basis of information
produced on this website. NIH does not independently verify the validity or utility of the information produced
by this tool. If you have questions about the information produced on this website, please see a health care
professional. More information about NCBI's disclaimer policy is available.
|
scottykwok/wav2vec2-large-xlsr-cantonese
|
scottykwok
| 2022-07-19T15:22:01Z | 376 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"zh",
"dataset:common_voice",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: zh
tags:
- automatic-speech-recognition
license: cc-by-sa-4.0
datasets:
- common_voice
metrics:
- cer
---
# Wav2vec2-large-xlsr-cantonese
This model is based on [wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53), fine-tuned using Common Voice/zh-HK/6.1.0.
The training code is similar to that of [user ctl](https://huggingface.co/ctl/wav2vec2-large-xlsr-cantonese), except that the number of training epochs was doubled to 80 and the fp16 backend was apex. The model was trained on a single RTX 3090 using the Docker image nvidia/cuda:11.1-cudnn8-devel.
The CER is 15.11% when evaluated against the Common Voice zh-HK test set.
# Result (CER)
15.11%
# Source Code
See this GitHub Repo [cantonese-selfish-project](https://github.com/scottykwok/cantonese-selfish-project/) and [demo video](https://youtu.be/k_9RQ-ilGEc).
# Usage
```python
import soundfile as sf
import torch
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# load pretrained model
processor = Wav2Vec2Processor.from_pretrained("scottykwok/wav2vec2-large-xlsr-cantonese")
model = Wav2Vec2ForCTC.from_pretrained("scottykwok/wav2vec2-large-xlsr-cantonese")
# load audio - must be 16kHz mono
audio_input, sample_rate = sf.read('audio.wav')
# pad input values and return pt tensor
input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values
# INFERENCE
# retrieve logits & take argmax
logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
# transcribe
transcription = processor.decode(predicted_ids[0])
print("-" *20)
print("Transcription:\n", transcription.lower())
print("-" *20)
```
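To sanity-check the reported CER on your own recordings, one option is the `evaluate` library. A sketch with placeholder strings (the 15.11% figure above was computed on the full Common Voice zh-HK test set):
```python
# Compute the character error rate between model transcriptions and references.
import evaluate

cer_metric = evaluate.load("cer")
score = cer_metric.compute(
    predictions=["transcribed text from the model"],
    references=["ground truth transcript"],
)
print(f"CER: {score:.2%}")
```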
|
microsoft/dit-large-finetuned-rvlcdip
|
microsoft
| 2023-02-27T17:57:43Z | 376 | 7 |
transformers
|
[
"transformers",
"pytorch",
"beit",
"image-classification",
"dit",
"dataset:rvl_cdip",
"arxiv:2203.02378",
"autotrain_compatible",
"region:us"
] |
image-classification
| 2022-03-07T21:02:12Z |
---
tags:
- dit
datasets:
- rvl_cdip
inference: false
---
# Document Image Transformer (large-sized model)
Document Image Transformer (DiT) model pre-trained on IIT-CDIP (Lewis et al., 2006), a dataset that includes 42 million document images, and fine-tuned on [RVL-CDIP](https://www.cs.cmu.edu/~aharley/rvl-cdip/), a dataset consisting of 400,000 grayscale images in 16 classes, with 25,000 images per class. It was introduced in the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Li et al. and first released in [this repository](https://github.com/microsoft/unilm/tree/master/dit). Note that DiT's architecture is identical to that of [BEiT](https://huggingface.co/docs/transformers/model_doc/beit).
Disclaimer: The team releasing DiT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Document Image Transformer (DiT) is a transformer encoder model (BERT-like) pre-trained on a large collection of images in a self-supervised fashion. The pre-training objective for the model is to predict visual tokens from the encoder of a discrete VAE (dVAE), based on masked patches.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled document images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder.
## Intended uses & limitations
You can use the raw model for encoding document images into a vector space, but it's mostly meant to be fine-tuned on tasks like document image classification, table detection or document layout analysis. See the [model hub](https://huggingface.co/models?search=microsoft/dit) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
import torch
from PIL import Image
image = Image.open('path_to_your_document_image').convert('RGB')
processor = AutoImageProcessor.from_pretrained("microsoft/dit-large-finetuned-rvlcdip")
model = AutoModelForImageClassification.from_pretrained("microsoft/dit-large-finetuned-rvlcdip")
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 16 RVL-CDIP classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
### BibTeX entry and citation info
```bibtex
@article{Lewis2006BuildingAT,
title={Building a test collection for complex document information processing},
author={David D. Lewis and Gady Agam and Shlomo Engelson Argamon and Ophir Frieder and David A. Grossman and Jefferson Heard},
journal={Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval},
year={2006}
}
```
|
macavaney/doc2query-t5-base-msmarco
|
macavaney
| 2023-05-15T20:14:14Z | 376 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"retrieval",
"document-expansion",
"translation",
"en",
"arxiv:2007.14271",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
translation
| 2022-10-27T18:07:35Z |
---
language:
- en
tags:
- retrieval
- document-expansion
- translation
widget:
- text: "The presence of communication amid scientific minds was equally important to the success of the Manhattan Project as scientific intellect was. The only cloud hanging over the impressive achievement of the atomic researchers and engineers is what their success truly meant; hundreds of thousands of innocent lives obliterated."
example_title: "msmarco-passage"
- text: "0-60 Times - 0-60 | 0 to 60 Times & 1/4 Mile Times | Zero to 60 Car Reviews."
example_title: "msmarco-passage-v2"
- text: "A small group of politicians believed strongly that the fact that Saddam Hussien remained in power after the first Gulf War was a signal of weakness to the rest of the world, one that invited attacks and terrorism. Shortly after taking power with George Bush in 2000 and after the attack on 9/11, they were able to use the terrorist attacks to justify war with Iraq on this basis and exaggerated threats of the development of weapons of mass destruction. The military strength of the U.S. and the brutality of Saddam's regime led them to imagine that the military and political victory would be relatively easy."
example_title: "antique"
datasets:
- irds:msmarco-passage
library_name: transformers
---
A Doc2Query model based on `t5-base` and trained on MS MARCO. This is a version of [the checkpoint released by the original authors](https://git.uwaterloo.ca/jimmylin/doc2query-data/raw/master/T5-passage/t5-base.zip), converted to pytorch format and ready for use in [`pyterrier_doc2query`](https://github.com/terrierteam/pyterrier_doc2query).
**Creating a transformer:**
```python
import pyterrier as pt
pt.init()
from pyterrier_doc2query import Doc2Query
doc2query = Doc2Query('macavaney/doc2query-t5-base-msmarco')
```
**Transforming documents**
```python
import pandas as pd
doc2query(pd.DataFrame([
{'docno': '0', 'text': 'Hello Terrier!'},
{'docno': '1', 'text': 'Doc2Query expands queries with potentially relevant queries.'},
]))
# docno text querygen
# 0 Hello Terrier! hello terrier what kind of dog is a terrier wh...
# 1 Doc2Query expands queries with potentially rel... can dodoc2query extend query query? what is do...
```
**Indexing transformed documents**
```python
doc2query.append = True # append querygen to text
indexer = pt.IterDictIndexer('./my_index', fields=['text'])
pipeline = doc2query >> indexer
pipeline.index([
{'docno': '0', 'text': 'Hello Terrier!'},
{'docno': '1', 'text': 'Doc2Query expands queries with potentially relevant queries.'},
])
```
**Expanding and indexing a dataset**
```python
dataset = pt.get_dataset('irds:vaswani')
pipeline.index(dataset.get_corpus_iter())
```
## References
- [Nogueira20]: Rodrigo Nogueira and Jimmy Lin. From doc2query to docTTTTTquery. https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf
- [Macdonald20]: Craig Macdonald and Nicola Tonellotto. Declarative Experimentation in Information Retrieval using PyTerrier. In Proceedings of ICTIR 2020. https://arxiv.org/abs/2007.14271
|